Setting automated test coverage as a goal is, at best, misguided. Automated test coverage is useful as a strategy or as a diagnostic metric, but using it as a goal is idiotic and will lead to waste and the wrong behaviour.
For any IT system, there are three options for testing:
- Automated tests
- Manual tests
- No tests
Let's pop the "why" stack on automated tests. Automated tests are faster and more reliable than manual tests. Automated and manual tests are normally safer than no testing at all. So the reasons for automated tests are:
- Reduced lead time.
- Reduced variability in lead time.
- Lower probability of a production incident.
Our goal should be to improve one of these metrics, normally reduce lead time. Lead time and automated test coverage are correlated. If you attempt to reduce lead time, one of the strategies you are likely to apply is to increase automated test coverage. As such automated test coverage is an excellent diagnostic metric to help the team identify ways to reduce their lead time.
There is not a causal relationship between automated test coverage and lead time. Increasing automated test coverage does not automatically reduce lead time. Many years ago I worked on a system with no automated test coverage. Management imposed a 100% test coverage goal for all systems. Everyone on the project stopped working on anything else and spent a few days writing tests. As the business analyst, I was given a list of classes and told how to write standard tests for each method so that the test coverage tool would register us as meeting our 100% target. We achieved 100% automated test coverage but no improvement in lead time, or anything else. The activity generated no benefit to the organisation; it was pure waste.
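To illustrate how coverage can be gamed in the way described above, here is a minimal, hypothetical sketch (the class and bug are invented for illustration): a test that executes every line of a class, so a coverage tool reports 100%, yet asserts nothing and so catches nothing.

```python
# Hypothetical example: a class with a bug, and a "coverage-gaming" test.
class Invoice:
    def __init__(self, amount):
        self.amount = amount

    def total_with_tax(self, rate):
        # Bug: tax should be added, not subtracted.
        return self.amount - self.amount * rate

def test_invoice_for_coverage():
    # Every line of Invoice is executed, so coverage reports 100%...
    invoice = Invoice(100)
    invoice.total_with_tax(0.2)
    # ...but there is no assertion, so the subtraction bug goes undetected.

test_invoice_for_coverage()
print("coverage achieved, bug not caught")
```

The coverage tool is satisfied, the organisation is not: the metric moved, the defect stayed.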
If you set reducing lead time as a goal, you will likely see an increase in automated test coverage. If you set increased automated test coverage as a goal, you may well see no benefit at all.