When to stop testing?

A software tester asked me: “When should we stop testing? How do we know that we have thoroughly tested the software to achieve zero defects?”

In theory, testers should not stop testing until all defects are found and fixed. Of course, that is impossible, because no one can know whether all defects have been removed. Therefore, testers must stop testing at some point to deliver the software to the customer.

Testing is about keeping a balance between cost, time, and quality. The decision to stop is often dictated by a management view on schedule: stop testing whenever the scheduled time or budgeted cost is reached. Project managers often order testing to stop based on the delivery schedule, regardless of whether the software still has defects.

Sometimes the decision is dictated by a management view on quality instead. Project managers order testing to stop when the software meets all requirements, or when the benefit of continued testing no longer justifies its cost. This usually requires several quality tests to evaluate the software: functional tests, reliability tests, performance tests, security tests, scalability tests, etc. Each test requires running the software and collecting data to ensure that no failures or bad data occurred. Achieving perfect software, or zero defects, would require far more testing and could take a very long time, so it does not make sense as a goal for most software development.
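To make the trade-off concrete, here is a minimal sketch in Python of how a team might encode such exit criteria as a simple check. The thresholds, field names, and the idea of comparing the cost of one more test cycle against the expected cost of escaped defects are illustrative assumptions, not rules from the original discussion.

    from dataclasses import dataclass

    @dataclass
    class TestCycleStatus:
        """Snapshot of one test cycle (all fields are illustrative assumptions)."""
        requirements_covered: float   # fraction of requirements with passing tests, 0.0 to 1.0
        new_defects_found: int        # defects discovered in the most recent cycle
        cost_per_cycle: float         # estimated cost of running one more test cycle
        escaped_defect_cost: float    # estimated cost of one defect reaching production
        schedule_days_left: int       # days remaining before the committed delivery date

    def should_stop_testing(status: TestCycleStatus) -> bool:
        """Apply simple exit criteria mirroring the two management views above."""
        # Schedule-driven stop: the scheduled time (or budget) is exhausted.
        if status.schedule_days_left <= 0:
            return True
        # Quality-driven stop: requirements are essentially covered and the expected
        # benefit of one more cycle (defects it would likely catch times the cost of
        # an escaped defect) no longer justifies the cost of running that cycle.
        expected_benefit = status.new_defects_found * status.escaped_defect_cost
        if status.requirements_covered >= 0.95 and expected_benefit < status.cost_per_cycle:
            return True
        return False

    # Example: coverage is high and the last cycle found no new defects,
    # so another cycle is unlikely to pay for itself.
    print(should_stop_testing(TestCycleStatus(
        requirements_covered=0.97,
        new_defects_found=0,
        cost_per_cycle=5000.0,
        escaped_defect_cost=20000.0,
        schedule_days_left=10,
    )))  # -> True

In practice the numbers would come from the project's own defect and cost data; the point of the sketch is only that "stop testing" can be stated as explicit schedule and cost-benefit criteria rather than a vague feeling of being done.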

From a practical view, the "zero defects" concept is an unattainable goal. To achieve it, every process and every step would have to be designed so perfectly that producing a defect is impossible. As far as I know, no software company has ever achieved zero defects.

Sources

  • Blogs of Prof. John Vu, Carnegie Mellon University