The fuzzer is for those edge cases that your testing didn't catch.
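The idea can be sketched as a minimal random-input harness (all names here are hypothetical; real fuzzers such as AFL or libFuzzer are coverage-guided and far more sophisticated):

```python
import random

def reciprocal(s):
    """Toy function under test (hypothetical): it validates its input
    with int(), but nobody thought to test s == "0"."""
    return 1 / int(s)

def fuzz(fn, trials=2000, seed=0):
    """Feed fn random short strings; inputs the function is expected
    to reject (ValueError) are fine, anything else is a finding."""
    rng = random.Random(seed)
    findings = []
    for _ in range(trials):
        s = "".join(rng.choice("0123456789+-")
                    for _ in range(rng.randrange(8)))
        try:
            fn(s)
        except ValueError:
            pass                      # expected rejection of bad input
        except Exception as exc:      # the edge case testing didn't catch
            findings.append((s, exc))
    return findings

# fuzz(reciprocal) typically turns up inputs like "0" or "-0" that
# raise ZeroDivisionError -- an edge case a handwritten suite missed.
```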
MBT (model-based testing) has positive effects on efficiency and effectiveness, even if it only partially fulfills high expectations.
A discussion with Michael Donat, Jafar Husain, and Terry Coatta
Merging the art and science of software development
Eliminating memory hogs
Embracing Failure to Improve Resilience and Maximize Availability
Failures happen, and resilience drills help organizations prepare for them.
A discussion with Jesse Robbins, Kripa Krishnan, John Allspaw, and Tom Limoncelli
Making the case for resilience testing
Avionics software safety certification is achieved through objective-based standards.
A discussion with Nico Kicillof, Wolfgang Grieskamp, and Bob Binder
Software maintenance is more than just bug fixes.
Have you ever worked with someone who is a complete jerk about measuring everything?
The Jeremiahs of the software world are out there lamenting, "Software is buggy and insecure!" Like the biblical prophet who bemoaned the wickedness of his people, these malcontents tell us we must repent and change our ways. But as someone involved in building commercial software, I'm thinking to myself, "I don't need to repent. I do care about software quality." Even so, I know that I have transgressed. I have shipped software that has bugs in it. Why did I do it? Why can't I ship perfect software all the time?
Networking and the Internet are encouraging increasing levels of interaction and collaboration between people and their software. Whether users are playing games or composing legal documents, their applications need to manage the complex interleaving of actions from multiple machines over potentially unreliable connections. As an example, Silicon Chalk is a distributed application designed to enhance the in-class experience of instructors and students. Its distributed nature requires that we test with multiple machines. Manual testing is too tedious, expensive, and inconsistent to be effective. While automating our testing, however, we have found it very labor intensive to maintain a set of scripts describing each machine's portion of a given test.
Thanks to modern SCM (software configuration management) systems, when developers work on a codeline they leave behind a trail of clues that can reveal what parts of the code have been modified, when, how, and by whom. From the perspective of QA (quality assurance) and test engineers, is this all just "data," or is there useful information that can improve the test coverage and overall quality of a product?
The increasing size and complexity of software, coupled with concurrency and distributed systems, have made apparent the ineffectiveness of using only handcrafted tests. The misuse of code coverage and the avoidance of random testing have exacerbated the problem. We must start again, beginning with good design (including dependency analysis), good static checking (including model property checking), and good unit testing (including good input selection). Code coverage can help select and prioritize tests to make you more efficient, as can the all-pairs technique for controlling the number of configurations.
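The all-pairs technique can be sketched as a greedy selection loop (a toy illustration under made-up parameter names, not a production covering-array generator):

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedy all-pairs selection (a sketch, not an optimal covering
    array): repeatedly pick the candidate test that covers the most
    still-uncovered value pairs until every pair appears somewhere."""
    candidates = list(product(*params.values()))

    def pairs(row):
        # every (position, value, position, value) pair the row exercises
        return {(i, row[i], j, row[j])
                for i, j in combinations(range(len(row)), 2)}

    uncovered = set().union(*(pairs(r) for r in candidates))
    tests = []
    while uncovered:
        best = max(candidates, key=lambda r: len(pairs(r) & uncovered))
        tests.append(best)
        uncovered -= pairs(best)
    return tests

# Hypothetical configuration matrix: 3 * 2 * 2 = 12 exhaustive
# combinations, but a pairwise-covering subset is roughly half that.
configs = {"os": ["linux", "mac", "win"],
           "browser": ["ff", "chrome"],
           "db": ["pg", "sqlite"]}
```

The payoff grows with dimensionality: adding a fourth or fifth parameter multiplies the exhaustive product, while the pairwise subset grows far more slowly.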
Quality assurance isn't just testing, or analysis, or wishful thinking. Although it can be boring, difficult, and tedious, QA is nonetheless essential.
It's all about what takes place at the boundary of an application.
Source code analysis is an emerging technology in the software industry that allows critical source code defects to be detected before a program runs.
Code diving through unfamiliar source bases is something we do far more often than writing new code from scratch, so make sure you have the right gear for the job.
Hard-to-track bugs can emerge when you can't guarantee sequential execution. The right tools and the right techniques can help.
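One classic instance is the lost update: two threads read a shared counter, both write back, and one increment vanishes. A minimal sketch (the sleep only widens the race window so the bug reproduces reliably):

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    # read-modify-write with no lock: another thread can interleave
    global counter
    v = counter
    time.sleep(0.05)        # widen the race window for demonstration
    counter = v + 1         # clobbers any update made since the read

def safe_increment():
    global counter
    with lock:              # read and write now happen as one unit
        v = counter
        time.sleep(0.05)
        counter = v + 1

def run(worker, n=10):
    """Reset the counter, run n concurrent workers, return the result."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# run(unsafe_increment) typically returns 1, not 10: updates are lost.
# run(safe_increment) always returns 10.
```

The unsafe version usually passes a quick single-threaded test, which is exactly why these bugs are hard to track down without tools that exercise concurrent interleavings.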