How to measure unit test quality?

To measure unit test quality, some projects use Test-Driven Development (TDD), most use code coverage, and some rely on internal peer reviews. Common tools include JUnit, Clover, Sonar, and Maven. Each metric, used alone, has flaws.
 
I’m not a huge fan of the code coverage metric. It’s rudimentary. Telling you that your tests run 54% of your code doesn’t tell you much. Sure, it means you’ve written a fair number of tests. But even reaching 100% doesn’t ensure you are testing the right things. You may have one test for an entire function or method that does exercise all of its code. But if that function contains a condition that the test happens to satisfy, you are never testing what happens when the condition is not met. I think code coverage is used because there really aren’t many other metrics that are as easy to compute.
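As a sketch of this pitfall (the class and method names here are hypothetical, just for illustration), consider a method where a single test reaches 100% line coverage yet one branch outcome is never checked:

```java
// Hypothetical example: one test executes every line of applyDiscount,
// so line coverage reports 100% -- but the no-discount path is untested.
public class Pricing {
    // Returns the order total; orders of 100 items or more get 10% off.
    public static double applyDiscount(double price, int quantity) {
        double total = price * quantity;
        if (quantity >= 100) {
            total *= 0.9; // discount branch
        }
        return total;
    }

    public static void main(String[] args) {
        // This single check satisfies the condition, so every line runs
        // and a coverage tool reports 100% line coverage...
        double covered = applyDiscount(2.0, 100);
        System.out.println(covered == 180.0 ? "pass" : "fail"); // prints "pass"
        // ...but nothing exercises quantity < 100, so a bug in that path
        // (say, accidentally discounting every order) would go unnoticed.
    }
}
```

Branch (or condition) coverage would flag the missing case here, which is one reason plain line coverage is such a blunt instrument.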
One metric that would be interesting to track is the number of exceptions your code produces per unit of time. Ultimately, the goal of unit testing is to catch such defects before they reach production, so I would watch that metric closely.
Another metric would be the ratio of time spent writing unit tests to time spent writing code. This could turn out to be a useless metric, but like code coverage, it at least tells you whether you’re in a sensible range. If you spend ten times as much time writing code as you do writing unit tests, something is probably off.
In the end, I’m not sure there are even one or two good metrics to look at. Unit testing is somewhat of an art form, so measuring it is difficult.

What are some good ways to write unit test cases with high quality?

Just follow the FIRST principle of unit-testing:
Fast: Test one and only one thing. This is extremely important: if a test covers more than one piece of logic, you won’t be able to immediately tell what broke when it fails. And because each test exercises only one thing, your unit tests should run very fast.
Isolated: Test cases should not depend on external components such as the file system, the network, or a database. Your tests should run in an environment isolated from those components. Dependency injection techniques are useful here, and most testing frameworks provide their own versions of mocks and stubs.
Repeatable: Run a test case once, ten times, or a thousand times, and the output must be the same. The logic you are testing must produce consistent output, and your test case should expect it.
Self-verifying: The result of a unit test must be unambiguous: it either succeeded or failed. A test case should have only two outcomes: it detects that the logic produced the expected output, or it does not.
Timely: While some might argue against it, writing test cases first is a Good Thing, or at least keeping testing in mind while implementing the logic. As a result, your code will be more modular, readable, testable, and overall better designed. And better-designed code is good for everyone’s peace of mind.
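To make the principles concrete, here is a dependency-free sketch in plain Java (in a real project, JUnit would supply the assertions and runner; all class names here are hypothetical). The production class depends on an interface rather than a real database, so the test injects an in-memory stub:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example applying FIRST without a framework.
public class GreeterTest {
    // Production code depends on an interface, not a concrete database,
    // so tests can inject a stub (Isolated).
    interface UserStore {
        String nameFor(int id);
    }

    static class Greeter {
        private final UserStore store;
        Greeter(UserStore store) { this.store = store; }
        String greet(int id) { return "Hello, " + store.nameFor(id) + "!"; }
    }

    // In-memory stub: no file system, network, or database involved.
    static class StubStore implements UserStore {
        private final Map<Integer, String> users = new HashMap<>();
        StubStore() { users.put(1, "Ada"); }
        public String nameFor(int id) { return users.get(id); }
    }

    // One test, one behavior (Fast); same input, same output on every
    // run (Repeatable); an unambiguous pass/fail result (Self-verifying).
    static boolean testGreetKnownUser() {
        Greeter greeter = new Greeter(new StubStore());
        return "Hello, Ada!".equals(greeter.greet(1));
    }

    public static void main(String[] args) {
        System.out.println(testGreetKnownUser() ? "PASS" : "FAIL"); // prints "PASS"
    }
}
```

Because the stub replaces the only external dependency, this test runs in milliseconds and produces the same result on any machine.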
Besides that, there are some best practices:
Do not expect to reach 100% code coverage unless you develop mission-critical software. Reaching that level can be very costly, and for most projects it will not be worth the effort.
Integrate unit tests with nightly builds and release builds.
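Since Maven is already in this toolchain, one common way to wire this up is the Surefire plugin, which runs unit tests automatically during the `test` phase of every build (the version number below is an assumption; use whatever your project pins):

```xml
<!-- pom.xml fragment: Surefire runs the test suite on every
     `mvn test` / `mvn install`, so nightly and release builds
     fail fast when a unit test breaks. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <!-- assumed version; check your project's plugin management -->
      <version>3.2.5</version>
    </plugin>
  </plugins>
</build>
```

With this in place, a nightly CI job only needs to invoke the build; no separate test-running step is required.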