What is a reasonable Code Coverage % for unit tests (and why)?
A reasonable code coverage percentage for unit tests typically falls between 70% and 80%. This range is widely accepted in the software development community for several reasons:
Balance Between Coverage and Effort: Achieving 100% code coverage is often impractical and costly. It requires significant effort and resources, which might not be justified by the marginal benefits gained from covering the last few percent of code. Instead, aiming for around 80% strikes a balance between ensuring a substantial portion of the code is tested and the effort required to achieve it[2][6][8].
Diminishing Returns: Beyond a certain point, the effort to increase code coverage yields diminishing returns. The last 10-20% of code can be particularly challenging to cover, often involving edge cases or less critical parts of the application. This time and effort could be better spent on other development activities, such as adding new features or improving existing ones[2][6].
Quality Over Quantity: High code coverage does not necessarily equate to high-quality tests. Tests should be meaningful and ensure that the code behaves as expected under various conditions. Poorly written tests that simply increase coverage metrics without validating the correctness of the code can give a false sense of security[2][4][5].
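The "quality over quantity" point can be illustrated with a minimal sketch (the `apply_discount` function and both tests are hypothetical examples, not from any particular codebase). Both tests execute every line of the happy path, so both raise the coverage number, but only the second would catch a regression in the formula:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_executes_but_verifies_nothing():
    # Covers the function's lines, yet asserts nothing:
    # a bug in the formula would go unnoticed.
    apply_discount(200.0, 50.0)


def test_validates_behaviour():
    # Same line coverage, but the result is actually checked.
    assert apply_discount(200.0, 50.0) == 100.0
    # The error path is exercised *and* its contract is verified.
    try:
        apply_discount(200.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid percent")
```

A coverage report treats these two tests identically, which is exactly why a high percentage alone can give a false sense of security.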
Critical Code Focus: Not all code is equally important. Critical parts of the application, such as those handling financial transactions or personal data, should have higher coverage and more rigorous testing. Less critical code can have lower coverage without significantly impacting the overall quality of the application[2][6].
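One way to operationalize a coverage floor in CI is with a tool-level threshold. As a sketch using coverage.py's configuration (the 80% value is illustrative, chosen to match the range discussed above):

```ini
# .coveragerc -- illustrative configuration
[report]
# Make `coverage report` exit non-zero if total coverage drops below 80%,
# so the CI pipeline fails instead of silently regressing.
fail_under = 80
```

A stricter review bar for critical modules (payments, personal data) is then typically enforced through code review or a separate, narrower coverage check, rather than by a single global number.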
Industry Standards and Best Practices: Many industry experts and organizations recommend aiming for around 80% code coverage. For example, Google's testing guidance suggests 60% as "acceptable," 75% as "commendable," and 90% as "exemplary."