When talking about source code quality, there are always voices telling you that metrics mean nothing and that plenty of projects have great metrics and poor quality! Let's look at one particular metric: code coverage by unit tests.
Evaluating the code coverage of an application means measuring the quantity of code that is executed, and therefore automatically tested, by your unit tests. So if you get 80% code coverage on your application, that's really good news, as you can refactor and maintain your code safely. It's like driving a car with your seat belt fastened. Ok, but imagine, even if it sounds a bit ridiculous to agile folks, that 80% of the code of a fairly big application is covered by fewer than 10 unit tests. Believe me, I've encountered this situation in real life with a batch application (8,000 lines of code) in charge of manipulating text files.
This raises two remarks:
- Is it good to have 80% of the code covered by unit tests? Definitely! If you've ever maintained an application you didn't write from scratch, you certainly agree that it's far better to have 10 unit tests covering 80% of the code than nothing.
- Ok, but when your seat belt is fastened, does that mean you're driving well? Unfortunately not; it only means your seat belt is fastened.
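To see how a handful of tests can produce impressive coverage numbers, here is a minimal sketch (hypothetical code, not the actual batch application) where one coarse, integration-style test executes nearly every line while asserting almost nothing:

```python
# Hypothetical text-file batch logic, standing in for the 8,000-line application.
def process_line(line):
    line = line.strip()
    if not line:
        return None            # skip blank lines
    if line.startswith("#"):
        return None            # skip comments
    fields = line.split(";")
    if len(fields) < 2:
        return {"raw": line, "valid": False}
    return {"key": fields[0], "value": fields[1], "valid": True}

def process_file(lines):
    results = []
    for line in lines:
        record = process_line(line)
        if record is not None:
            results.append(record)
    return results

# One broad test: its input walks through every branch above, so a coverage
# tool reports nearly 100% -- yet the only thing verified is a record count.
def test_process_file():
    lines = ["# header", "", "a;1", "broken", "b;2"]
    results = process_file(lines)
    assert len(results) == 3   # high coverage, weak verification

test_process_file()
```

The seat belt is fastened here, but nobody is checking how the car is driven: the contents of the records are never inspected, so most bugs in the parsing logic would slip through a "fully covered" test suite.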
So, what can you conclude from the code coverage metric?
In fact, let's start with what you cannot conclude: having a high percentage of code coverage does not mean (without extra information that I will not discuss here) that you are doing good Test-Driven Development (TDD).
Instead of looking at the code coverage, look at the non-coverage, and then you can conclude something: you know that at least XX% of your code is not covered by unit tests and that you need to do something about it. You've clearly identified a risk!
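This "what is not covered" view can be sketched with the standard library alone. The snippet below is a toy stand-in for a real coverage tool (coverage.py, JaCoCo, ...), and `measure_uncovered` is a hypothetical helper of my own: it traces one call to one function and reports the line numbers that never ran.

```python
import dis
import sys

def measure_uncovered(func, *args):
    """Run func once and report which of its lines never executed.

    A toy stand-in for a real coverage tool: it records executed
    line numbers for this single function only.
    """
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # Only record line events that happen inside func itself.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)

    # All line numbers that carry bytecode in the function.
    all_lines = {ln for _, ln in dis.findlinestarts(code) if ln is not None}
    return sorted(all_lines - executed)

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# A single test calling classify(5) leaves the "negative" branch
# unexecuted: that uncovered line is the clearly identified risk.
print("uncovered lines:", measure_uncovered(classify, 5))
```

The interesting output is not the percentage but the list of uncovered lines: each one is a place where a refactoring could break the behaviour without any test noticing.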