Quality Assurance




Originally published in Queue vol. 11, no. 6
see this item in the ACM Digital Library




Related:

Robert Guo - MongoDB's JavaScript Fuzzer
The fuzzer is for those edge cases that your testing didn't catch.


Robert V. Binder, Bruno Legeard, Anne Kramer - Model-based Testing: Where Does It Stand?
MBT has positive effects on efficiency and effectiveness, even if it only partially fulfills high expectations.


Terry Coatta, Michael Donat, Jafar Husain - Automated QA Testing at EA: Driven by Events
A discussion with Michael Donat, Jafar Husain, and Terry Coatta


James Roche - Adopting DevOps Practices in Quality Assurance
Merging the art and science of software development



Comments (newest first)

Bobby Lin | Wed, 21 Aug 2013 00:58:53 UTC

Software testers need to push the system in extreme ways to induce failures, not just apply the methods taught in school.


Frank Sowa | Mon, 22 Jul 2013 14:55:57 UTC

This was a well-written summary. As a consultant for 32 years, I'd just like to add a few pragmatic steps that would better ensure use of what you've written. First, create a "production-ready" alternate critical-element system, redundant to the running system, that can be fired up offline in the testing facility to deal with Simian Army elements that may enter the production system (or even be maliciously placed there by hackers). Second, if financially possible, also have a clean backup architecture (in-house or cloud-based) to switch over to if faults are corrupting a sector of your network. This two-step redundancy in design is often seen as an unnecessary cost burden, but given how often systems do have corruption issues, it is a low-cost benefit when trouble sets in.

Just as in a disease-control lab in healthcare, where viruses and infections are resolved offline and the immunization is applied after the fix, the "production-ready" elements provide that capability (and in a critical instance can be quickly swapped into the running production system to avoid catastrophe). The second, backup system in this IT-governance approach allows the system to keep running while components are removed from production for diagnosis and corrective action. Obviously, these are things IT professionals inherently understand and have been trained in. But before balking at what I've written, remember that the strategic drivers and decisions lie in the C-level offices and are usually made by financial people who may or may not have a background that intrinsically grasps the depth of these issues (until after the crises occur). So, my point: you may need to explain the importance of this "prepared approach" of proactively setting up the means to resolve these issues.
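The two-tier redundancy described above can be sketched as a simple health-checked failover. This is a minimal illustration with hypothetical names (`check_health`, `route_traffic`), not any real infrastructure; production failover would involve load balancers, DNS, or a service mesh:

```python
# Sketch of two-tier redundancy: serve from the primary system unless it
# is corrupted, then switch to the clean backup (hypothetical names).

def check_health(system):
    """Probe a system; here, health is just a recorded flag."""
    return system.get("healthy", False)

def route_traffic(primary, backup):
    """Return the name of the system that should serve traffic."""
    if check_health(primary):
        return primary["name"]
    if check_health(backup):
        return backup["name"]
    raise RuntimeError("no healthy system available")

primary = {"name": "production", "healthy": True}
backup = {"name": "clean-backup", "healthy": True}

assert route_traffic(primary, backup) == "production"
primary["healthy"] = False           # a fault corrupts production
assert route_traffic(primary, backup) == "clean-backup"
```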


rich | Fri, 28 Jun 2013 16:41:32 UTC

This is a great summary of how to induce the development of a resilient system. What it doesn't do, though, is speak to what architectural principles are necessary to capitalize on such an approach. For example, in the design phase, one principle that seems important is to make sure that the system can always make "progress." Even if progress is to stop fielding requests of a certain type while remedial action takes place, it should be designed to continue in a controlled manner no matter what conditions it encounters. Often performance or expedience concerns nullify this principle, particularly when delivery deadlines are tight.
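The "always make progress" principle above is commonly implemented as a circuit breaker: after repeated failures of a dependency, the system refuses that class of request in a controlled way instead of stalling on it. A minimal sketch, with hypothetical names not taken from the article:

```python
class CircuitBreaker:
    """Refuse calls after repeated failures so the system keeps making
    controlled progress instead of hanging on a broken dependency."""

    def __init__(self, threshold=3):
        self.failures = 0
        self.threshold = threshold

    def call(self, fn):
        if self.failures >= self.threshold:
            # Circuit is open: decline this request type in a controlled way.
            return "degraded: request type temporarily refused"
        try:
            result = fn()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return "error: dependency failed, retry later"

breaker = CircuitBreaker(threshold=2)
def broken():
    raise IOError("dependency down")

print(breaker.call(broken))   # error path, failures = 1
print(breaker.call(broken))   # error path, failures = 2
print(breaker.call(broken))   # circuit open: degraded but controlled response
```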

Also, the Armies are a great example of how to automate "negative QA" -- the ability to test that the system produces a predictable response to unforeseen runtime circumstances. One new simian that would be wonderful to meet would be the "Configuration Orangutan" -- an automated chaos monkey that misconfigures the system randomly and then runs it against non-chaotic workloads. Often the human element is the most unpredictable and pernicious.
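A "Configuration Orangutan" along these lines could be sketched as a harness that randomly corrupts one configuration entry and re-runs the workload, checking only that the system responds predictably (serves or rejects) rather than crashing. This is entirely hypothetical; no such tool exists in the Simian Army, and the `perturb`/`run_workload` names are invented for illustration:

```python
import random

def perturb(config, rng):
    """Randomly corrupt one configuration entry."""
    bad = dict(config)
    key = rng.choice(list(bad))
    bad[key] = rng.choice([None, -1, "", 10**9])
    return bad

def run_workload(config):
    """Toy system under test: validates its config before serving."""
    if not isinstance(config.get("timeout_s"), int) or config["timeout_s"] <= 0:
        return "rejected: invalid timeout_s"
    if not config.get("endpoint"):
        return "rejected: invalid endpoint"
    return "served"

rng = random.Random(42)
config = {"timeout_s": 30, "endpoint": "http://svc.local"}
for _ in range(5):
    outcome = run_workload(perturb(config, rng))
    # Negative QA: any outcome is acceptable as long as it is predictable,
    # i.e. the system rejects bad config rather than crashing.
    assert outcome.startswith(("served", "rejected"))
```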

Great article. Thanks for codifying the Netflix approach so neatly.



© 2018 ACM, Inc. All Rights Reserved.