
Quality Assurance: Much More than Testing

by Stuart Feldman | February 16, 2005

Topic: Quality Assurance



Good QA is not only about technology, but also methods and approaches.


Quality assurance isn’t just testing, or analysis, or wishful thinking. Although it can be boring, difficult, and tedious, QA is nonetheless essential.

Ensuring that a system will work when delivered requires much planning and discipline. Convincing others that the system will function properly requires even more careful and thoughtful effort. QA is performed through all stages of the project, not just slapped on at the end. It is a way of life.


IEEE Standard 12207 defines QA this way: “The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specific requirements and adhere to their established plans.”

This sentence uses the word process three times, and that is a key aspect of QA: it is not a single technology but a method and an approach.

Another key point is that quality is not treated as a philosophical issue, but, rather, as measurably meeting expectations and conforming to requirements. The rigor of the process should be chosen to suit the needs of the product and organization.

Finally, QA is about providing assurance and credibility: the product should work right, and people should believe that it will work right.

What goes into QA? Testing, of course, is a key activity. There is, however, an adage that “you can’t test quality into a product.” A solid test plan should catch errors and give a measure of quality. A good QA plan ensures that the design is appropriate, the implementation is careful, and the product meets all requirements before release. An excellent QA plan in an advanced organization includes analysis of defects and continuous improvement. (This feedback loop is characteristic of mature organizations.)

For physical products, QA involves manufacturing process control, design reviews, test plans, statistical methods, and much more. In relaxed implementations, there is occasional monitoring of the production line, and a few pieces are examined at each stage. In extreme cases, every step is monitored and recorded, intermediate products are torture tested with stresses exceeding the specifications, and many final products are destroyed. (Crash testing isn’t always a metaphor.) Only a few outputs make it into the field.

For software products, there are many QA process choices, depending on the structure of the organization, the importance of the software, the risks and costs of failure, and available technologies. These should be conscious decisions that are recorded and revisited periodically.


In an ideal world, perfection would be the norm. In the real world, you must make trade-offs. Although some people claim that “quality is free,” that is rarely the case. After much trial and error, you may arrive at a well-honed process that delivers high quality reliably and efficiently. Until you achieve that stage, demonstrably higher quality usually involves a longer and more expensive process than simply pushing the product out the door.

There are many types of requirements to be QA’d. Some involve meeting basic functional specifications: the system or program does the right thing on expected (or unexpected) inputs. Some involve performance measures such as throughput, latency, reliability, and availability.
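Performance requirements of this kind are assured by measurement rather than inspection. As a minimal sketch (the timing samples and the nearest-rank percentile method are illustrative, not from the article), throughput and tail latency can be computed from recorded request timings:

```python
# Compute throughput and latency percentiles from recorded request timings.
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Ten recorded latencies from a hypothetical 2-second test run.
latencies = [0.010, 0.012, 0.011, 0.250, 0.013, 0.009, 0.012, 0.011, 0.010, 0.014]
elapsed = 2.0  # wall-clock seconds for the run

print("throughput:", len(latencies) / elapsed, "req/s")
print("p50:", percentile(latencies, 50))
print("p99:", percentile(latencies, 99))  # a single outlier dominates the tail
```

Note how the median looks healthy while the 99th percentile exposes the outlier; this is why tail latency, not the average, is usually what a requirement specifies.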

Other major considerations depend on the operating environment. If the users will have limited understanding or ability to repair problems, the system must be validated on novices. If the system must operate in many contexts, interoperability and environmental tolerance must be verified.

In certain applications, the costs of failure are so high that it is acceptable to delay until every imagined test and cross-check has been done. In others, repairs are acceptable or affordable, or misbehaviors are tolerated. Just as a bank runs different credit checks on people who want to borrow $1,000 and those who want $1 million, different QA processes are appropriate for spelling checkers and cardiac pacemakers. Much of the fundamental work on high-reliability systems was done for military, aerospace, and telecommunications applications that had extremely rigorous requirements (and large project budgets); telephone switches and mainframes rarely fail.

The spectrum of QA rigor covers a wide range:

Research and experimental software. Requirements for quality may be quite low, and the process may be little better than debugging and a few regression tests. Nonetheless, the risks of embarrassment from failed public demos and withdrawn papers suggest a greater investment.
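Even at this low end of the spectrum, "a few regression tests" need not be elaborate: a small harness that reruns the code on recorded inputs and compares outputs is enough to catch an embarrassing break before the demo. A minimal sketch in Python (the word_count function and the cases are invented for illustration):

```python
# A tiny regression harness: rerun a function on recorded inputs and
# compare each result to the recorded expected output.
def run_regressions(fn, cases):
    """cases maps input -> expected output; returns the failing inputs."""
    return [inp for inp, expected in cases.items() if fn(inp) != expected]

# A toy "product" under test.
def word_count(text):
    return len(text.split())

cases = {"one two three": 3, "": 0, "  spaced   out  ": 2}

failing = run_regressions(word_count, cases)
assert failing == []  # all recorded behaviors still hold
```

In practice the cases would live in files under version control, so that a change in behavior is visible in the same review as the change in code.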

Business, productivity, and entertainment tools. These are expected to work, but the occasional failure is (alas) no surprise. When the consequences of a crash or invalid result are acceptable, it may not be worthwhile to invest in a long QA cycle (or so many vendors say).

Business-critical tools. A much higher standard of planning and testing is required for key organizational software. Software that manages transactions for significant amounts of money, affects people directly, or is required for legal compliance needs to be credible, as well as functional. Errors can destroy an organization or its executives. Any development plan needs a significant investment in quality assurance, including careful record keeping and analysis.

Systems that are widely dispersed or difficult to repair. When it is difficult or expensive to access all the products, there is justification for extensive testing and design for remote repair. If 1 million copies of a game are sold with a highly visible flaw, the cost of upgrading and repairing could easily exceed the profit. A chip with a design flaw or an erroneous boot ROM can lead to the same unfortunate result. Heavy testing in a wide variety of environments is needed to build confidence, even if product launch is repeatedly delayed. In the extreme, it may be impossible to get to the product because it is embedded in equipment or extremely distant; if the mission is important, the products must be designed for remote repair and/or have unusually high quality standards. Examples include famous exploits for repairing space missions millions of miles from home.

Life- and mission-critical software. Failures of some systems can cause loss of life (braking systems, medical devices) or large-scale collapses (phone switching systems, lottery management systems). In such cases, elaborate QA is appropriate to avert disaster. It is not unusual for testing and other QA steps to absorb more than half of the elapsed development time and budget. Analysis must extend far beyond single components and functions—the behavior of the entire system must be assured.


Since QA is a process, it is natural to expect special roles and organizations to be assigned to it. In simple and undemanding projects, the designers and developers may also perform QA tasks, just as they do in traditional debugging and unit testing. Unfortunately, people are usually loath to spend a lot of time on assurance tasks; developing new features is much more exciting. Furthermore, the people who miss a special case during design will also be likely to miss it during testing.

Therefore, in larger organizations or for products with stringent requirements, QA is usually the responsibility of a separate group. Ideally, that group is independent of the development organization and has authority to require redevelopment and retesting when needed. The independent QA people are typically responsible for defining the process and monitoring the details of execution. Sadly, QA people rarely remain best friends with developers.

A separate organization is capable of the deep analysis that supports improvement of the process and the product. High levels of SEI CMM (Software Engineering Institute Capability Maturity Model) certification and ISO quality certification require significant levels of analysis and feedback; the QA organization is the natural home for those activities.

A QA organization need not be huge to be effective. Relatively small groups can do a good job, so long as they have independence, knowledge of the process, and understanding of the product. The QA staff also needs to be experienced in the many ways that products can be botched, processes can be short-circuited, and people can be careless. Assigning QA tasks to the most junior member of the team dooms the product and the staff.


Numerous textbooks and standards documents define the stages of QA. If you are going to be responsible for assuring quality, read them.

QA touches all stages of a software project. It typically requires careful capture and control of many artifacts, as well as strict version management. It is impossible to have a solid and replicable test plan without agreed-upon requirements and specifications.

In a traditional development process, the QA organization requires reviews at each stage, with careful records, verification, and signatures. Tests and release criteria are based on requirements, and release is based on test results. If there are product requirements for reliability and availability, you will need special testing environments and adequate amounts of time to acquire data.
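Release criteria of this kind can be made mechanical rather than left to judgment. A sketch of such a gate (the thresholds and figures below are invented for illustration, not drawn from any standard): compute measured availability from logged downtime and refuse release unless both the test pass rate and the availability figure meet the stated requirements.

```python
# Gate a release on measured results rather than judgment calls.
# Thresholds here are illustrative, not from any standard.

def availability(uptime_hours, downtime_hours):
    total = uptime_hours + downtime_hours
    return uptime_hours / total if total else 0.0

def release_ok(tests_passed, tests_total, uptime_h, downtime_h,
               min_pass_rate=1.0, min_availability=0.999):
    pass_rate = tests_passed / tests_total if tests_total else 0.0
    return (pass_rate >= min_pass_rate
            and availability(uptime_h, downtime_h) >= min_availability)

# 500 hours of monitored soak time with 20 minutes of downtime:
print(release_ok(tests_passed=412, tests_total=412,
                 uptime_h=500.0, downtime_h=20 / 60))  # prints True
```

The point of the soak time is the article's point about data: a 0.999 availability claim cannot be supported by a weekend of testing, so the schedule must include enough monitored hours to make the measurement meaningful.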

In an agile programming environment, where requirements may be updated every few weeks as customers examine drafts of the software, the QA process needs to be more flexible than in a traditional environment. Nonetheless, someone must be responsible for assuring testing of basic requirements, rapidly updating and recording regression tests, and ensuring progress reviews. (Extreme programming does not excuse curdled databases.)


People are accustomed to software having more bugs than hardware. There are many reasons for this:

  • The difficult, irregular, human-oriented parts of a system are left to the software.
  • Conversely, the hardware usually has replicated components and considerable regularity, so the logical complexity may be much lower than the size of the design suggests. On the other hand, analog and mechanical issues can introduce new dimensions to the problem.
  • Despite decades of experience, managers often plan the software after the hardware designs and tests have been completed. There is then neither time nor budget to support appropriate QA of the software.
  • Hardware engineers have a deeply ingrained respect for quality, both through their education and because they know how hard it is to change a badly designed physical object.

Software engineers can learn a lot from their hardware colleagues about rigorous planning, process, and testing. Hardware people, on the other hand, can learn a lot about usability, flexibility, and complexity.


The details of the QA process depend on the organization, staff, and expected use of the product. It can be difficult, tedious, odious, time-consuming, and expensive.

But it is also necessary. Learn to do QA well to minimize the pain and maximize the reward.

STUART FELDMAN is vice president of On Demand Business Transformation research for IBM. Before that, he was director of the IBM Institute for Advanced Commerce and head of computer science research, then vice president for Internet Technology. Prior to coming to IBM in 1995, Feldman spent 11 years at Bellcore and 10 years at Bell Labs. He was a member of the original Unix research team and is best known as the creator of the Make configuration management system, as well as the author of the first Fortran-77 compiler. Feldman received an A.B. in astrophysical sciences from Princeton University and a Ph.D. in applied mathematics from the Massachusetts Institute of Technology.


Originally published in Queue vol. 3, no. 1


