From 835582a3f5ec10bff165fdc558bf2fb19550273f Mon Sep 17 00:00:00 2001
From: zooko <>
Date: Wed, 7 Aug 2013 17:57:28 +0000
Subject: [PATCH] update rationale for code coverage, and adjust formatting

[Imported from Trac: page HowToWriteTests, version 6]
---
 HowToWriteTests.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/HowToWriteTests.md b/HowToWriteTests.md
index 8d5e491..63363d2 100644
--- a/HowToWriteTests.md
+++ b/HowToWriteTests.md
@@ -46,11 +46,11 @@ That will product a directory named `htmlcov`. View its contents with a web brow
 
 # using code coverage results
 
-This is important: we do not treat code coverage numbers as a litmus test (like "aim to have 90% of lines covered"). We hardly even treat it as a scalar measurement of goodness -- 91% code coverage is not necessarily better than 90% code coverage. Maybe the alternative would have been to remove some (covered) lines of code that were not necessary, which would have resulted in a worse "code coverage" metric but a better codebase. Finally, note that even if you have 100% branch-level coverage of a codebase, that doesn't mean that your tests are exercising all possible ways that the codebase could be run! There could be data-dependent bugs, such as a divide-by-zero error, or a path which sets one variable to a setting which is inconsistent with a different variable. These sorts of bugs might not be getting exercised by the test code even though every line and every branch of the code is getting tested.
+This is important: we do not treat code coverage numbers as a litmus test (like "aim to have 90% of lines covered"). We hardly even treat it as a scalar measurement of goodness — 91% code coverage is not necessarily better than 90% code coverage. Maybe the alternative would have been to remove some (covered) lines of code that were not necessary, which would have resulted in a worse “code coverage” metric but a better codebase. Finally, note that even if you have 100% branch-level coverage of a codebase, that doesn't mean that your tests are exercising all possible ways that the codebase could be run! There could be data-dependent bugs, such as a divide-by-zero error, or a path which sets one variable to a setting which is inconsistent with a different variable. These sorts of bugs might not be getting exercised by the test code even though every line and every branch of the code is getting tested.
 
-So what do we use it for? It is a lens through which to view your code and your test code. You should look at the code coverage results and think about what it says about your tests. Think about "what could go wrong" in this function -- where bugs could be in this function or a future version of it -- and whether the current tests would catch those bugs. Both authors of patches and reviewers of patches should look at the code coverage results, and see if they indicate important holes in the tests.
+So what do we use it for? It is a lens through which to view your code and your test code. You should look at the code coverage results and think about what it says about your tests. Think about “what could go wrong” in this function — where bugs could be in this function or a future version of it — and whether the current tests would catch those bugs. Both authors of patches and reviewers of patches should look at the code coverage results, and see if they indicate important holes in the tests.
 
-Likewise, even if the code coverage shows maximal coverage, you should *still* think "Are there any kinds of bugs that could exist in this or a future version of this that *wouldn't* be caught by these tests?".
+Code coverage displays turn out to be very handy for showing you facts about your tests and your code that you didn't know.
 
 # further reading
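The patched page's point about data-dependent bugs can be sketched in a few lines of Python. This is an illustrative example, not code from the wiki page; the function and test names are hypothetical:

```python
def scale(values, total):
    """Divide each value by `total`."""
    # The single test below executes every line and every branch of
    # this function, so a coverage tool reports 100% coverage -- yet
    # scale([1], 0) still raises ZeroDivisionError. That is a
    # data-dependent bug that no coverage metric can reveal.
    return [v / total for v in values]

def test_scale():
    assert scale([2, 4], 2) == [1.0, 2.0]

test_scale()
```

Running a coverage tool over `test_scale` would show the function fully covered, even though the `total == 0` input was never exercised.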