In light of recent research, serious doubt has been cast on the usefulness of numerical targets such as 80% test coverage.
Modern behavioral research by writers such as John Seddon, Dan Ariely, and Dan Pink suggests that numerical targets actually harm the emotional well-being (or, if you prefer, reduce the ROI) of knowledge workers.

I have long been fascinated by the phenomenon of software teams pursuing a hard numerical target for code coverage. Therefore I have always made it a point to find out about this practice whenever I visit a software development shop.
Over the years it has always proved interesting to hear engineers’ responses to the following eleven questions:
(See RSA Animate, "Drive: The surprising truth about what motivates us," by Daniel Pink, on Vimeo.)
Eleven weird old questions that will reveal whether your code coverage efforts are useful or just well-intentioned.
I try to ask these questions of engineers whenever discussing a new or existing test coverage project.
- What is the specific, day-to-day benefit of covering every single line of code with a unit test?
- What would be the specific, day-to-day benefit of achieving 80% code coverage?
- How would a codebase (and a system) with 80% coverage behave differently than it does today?
- How much worse would it be to achieve, say, 60% coverage instead?
- What about 79% coverage?
- Why only 80% coverage as a goal — why not 90%?
- What are the factors that contribute to system determinism?
- Specifically how would increased test coverage contribute to system determinism?
- When you talk about “code coverage,” do you mean line coverage, branch coverage, statement coverage, or some combination of these?
- Does your current code coverage metric include files that have no tests at all?
- In other words, does your test coverage metric include all of your untested code, or do you only measure how well you have covered the code for which unit tests exist?
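The distinction between line and branch coverage is worth making concrete. Here is a minimal Python sketch (the function and test are illustrative, not from any real codebase) showing how a single test can yield 100% line coverage while exercising only half of the branches:

```python
def apply_discount(price, is_member):
    # If only the is_member=True case is tested, every line below executes,
    # so line/statement coverage reports 100%. But the False branch of the
    # `if` (skipping the discount) is never taken, so branch coverage
    # reports only one of two branches hit.
    if is_member:
        price = price * 0.9
    return price

# A single test like this touches every line:
assert apply_discount(100, True) == 90.0

# Only adding this second test exercises the remaining branch:
assert apply_discount(100, False) == 100
```

Similarly, whether untested files count at all is a tool-configuration question: with coverage.py, for example, a module that is never imported by any test contributes nothing to the report unless you pass `--source` (or set `source` in the config) so that unexecuted files are included at 0%.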
It is important to listen carefully to how these questions are answered. Does the team in fact have a reasoned answer for each of the eleven questions? Do the coverage metrics in use actually make sense from a business perspective? Has the team examined low-cost code quality strategies such as code review and static analysis? Is there an explicit mapping of test automation benefit to wider organizational benefit, at least within the engineering team? Are the problems the team is facing actually soluble by adding test coverage?
It is unfortunately very easy for humans to place undue faith in numerical targets. This is, as far as I can tell, a consequence of our psychology. That numerical targets are intrinsically deceptive is not a problem to be solved; rather, it is a serious limitation that must be considered when designing test infrastructure.
Cf. “Why most unit testing is waste” as well as “Stop Writing Automation.”

