About Software Testing and Unit Testing
What Is Software Testing For?
A common goal of many software projects is to make some profit for someone. The usual way in which this goal is realized is directly, by selling the software via the App Store or licensing its use in some other way. Software destined for in-house use by the developer’s business often makes its money indirectly, by improving the efficiency of some business process and reducing the amount of time paid staff must spend attending to it. If the savings in process efficiency are greater than the cost of developing the software, the project is profitable. Developers of open source projects often sell support packages or use the software themselves: In these cases the preceding argument still applies.
So, economics 101: If the goal of a software project is to make a profit, whether the end product is to be sold to a customer or used internally, it must provide some value to the user greater than the cost of the software in order to meet that goal and be successful. I realize that this is not a groundbreaking statement, but it has important ramifications for software testing.
If testing (also known as Quality Assurance, or QA) is something we do to support our software projects, it must support the goal of making a profit. That’s important because it automatically sets some constraints on how a software product must be tested: If the testing will cost so much that you lose money, it isn’t appropriate to do. But testing software can show that the product works; that is, that the product contains the valuable features expected by your customers. If you can’t demonstrate that value, the customers may not buy the product.
Notice that the purpose of testing is to show that the product works, not to discover bugs. It’s Quality Assurance, not Quality Insertion. Finding bugs is usually bad. Why? Because it costs money to fix bugs, and that money is wasted because you were being paid to write the software without bugs in the first place. In an ideal world, you might think, developers would just write bug-free software, do some quick testing to demonstrate there are no bugs, and then upload to iTunes Connect and wait for the money to roll in. But hold on: Working like that might introduce the same cost problem in another way. How much longer would it take you to write software that you knew, before it was tested, would be 100% free of bugs? How much would that cost?
It seems, therefore, that appropriate software testing is a compromise: balancing the level of control exercised over development against the level of checking done, to provide some confidence that the software works without making the project costs unmanageable. How should you decide where to make that compromise? It should be based on reducing the risk associated with shipping the product to an acceptable level. So the most “risky” components (those most critical to the software’s operation, or those where you think the most bugs might be hiding) should be tested first, then the next most risky, and so on until you’re happy that the amount of risk remaining is not worth spending more time and money to address. The end goal should be that the customer can see that the software does what it ought, and is therefore worth paying for.