In my last post, I promised that I would discuss problems with specific Quality Assurance metrics. Lately, I’ve been in a lot of situations where people have focused on the metric of 100% Code Coverage. There seems to be a trend of teams trying to write automated tests that cover 100% of their code.
100% Code Coverage – Panacea or False Sense of Security?
In a past post, I expressed some skepticism that the goal of 100% code coverage was a good use of resources because of the Pareto Principle, also known as the 80/20 rule. Still, I’ve spoken with many people who think that 100% Code Coverage is a great goal because all code is automatically tested with each build. Their thinking is that any time spent reaching 100% Code Coverage is quickly recouped in time saved during the testing process.
In my experience, I have found that relying solely on automated testing gives a team a false sense of security. I have tested many builds that passed extensive automated unit and integration tests but still had a lot of bugs. The team had the illusion that the code was solid because of the automated test results, and they were sorely disappointed when hundreds of bugs were found during manual testing.
What Went Wrong?
Although automated tests do catch a lot of problems early, they cannot catch all problems. Tests only catch the bugs they are written to catch. When a developer writes the automated tests, the tests will completely ignore any situations that the developer didn’t anticipate. Even if a tester writes the automated tests, it is difficult for her to anticipate every possible error.
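As a small illustration (the function and test below are hypothetical, not from any real project), a test suite can execute every line of a function and still miss the one case the author never anticipated:

```python
def average(values):
    """Compute the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def test_average():
    # The developer anticipated ordinary, non-empty lists...
    assert average([2, 4, 6]) == 4
    assert average([5]) == 5
    # ...but never an empty list. average([]) raises
    # ZeroDivisionError, yet coverage for average() is 100%
    # and every test passes.
```

Every line of `average` is covered, so the coverage report looks perfect, while the empty-list bug survives to production.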
The bottom line is that tests can easily be written that exercise code without actually testing anything. So, when a team sets a specific goal of 100% Code Coverage and has a tight deadline, it is likely that a large number of those tests will be low quality. Even if the project team is given plenty of time to write the tests, 100% code coverage will not truly cover every possible bug because many bugs cannot be anticipated even by the best development team.
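To make the point concrete, here is a hypothetical sketch of a test written under deadline pressure: it calls the code, so the coverage tool counts every line as exercised, but with no assertions it can never fail, even though the function contains an obvious bug:

```python
def apply_discount(price, rate):
    # Bug: the discount is added to the price instead of subtracted.
    return price + price * rate

def test_apply_discount():
    # Every line of apply_discount executes, so coverage reports 100%,
    # but with no assertion this test passes no matter what the
    # function returns.
    apply_discount(100.0, 0.1)
```

A team chasing a coverage number can accumulate many tests like this one; the metric goes up while the actual defect-detection power of the suite stays flat.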
Is Code Coverage a Worthless Metric?
No. Code Coverage is a useful metric, but like any metric, it needs to be treated as just one piece of information in the big picture. I have seen the Code Coverage metric used to identify parts of the code that were completely untested, and parts that were unused and should be removed altogether. Used that way, Code Coverage is a valuable piece of information.
The problems with this metric occur when people equate 100% Code Coverage with 100% correct code. Managing to a metric generally means you will meet the metric, but not your actual goals.
In my conversations, I’ve found that a lot of people insist that 100% Code Coverage is a worthy goal that guarantees quality code. I’m interested in hearing about your experiences related to Code Coverage. Have you tried to reach 100% Code Coverage? If so, did you reach the goal, and did it result in quality code?