


Quality Assurance



Originally published in Queue vol. 9, no. 6





Related:

Jesse Robbins, Kripa Krishnan, John Allspaw, Thomas A. Limoncelli - Resilience Engineering: Learning to Embrace Failure
A discussion with Jesse Robbins, Kripa Krishnan, John Allspaw, and Tom Limoncelli



Comments

(newest first)

Bob Binder | Wed, 20 Jul 2011 16:24:42 UTC

4) Speed Control. Response time or load was rarely part of the specification for any of the protocols under test (nearly all were "best effort"). Also, we did not do "on the fly" verification -- all of our expected results were pre-set in the test suite, so the time to evaluate a response was typically not significant. Marker messages were injected into the traffic using a protocol we developed for that purpose, to make it clear in the network message log which messages corresponded to which test case. For protocols with time-out requirements, the test suites would simply have to stay quiet and monitor for the expected response (if any) for the specified interval.
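
A minimal sketch of the marker-message and quiet-monitoring ideas described above, assuming hypothetical send_frame and capture helpers supplied by the harness (not the actual tooling used in the project):

    import time

    MARKER = 0xFF  # hypothetical message type reserved for test markers

    def run_case_with_markers(case_id, send_frame, run_case):
        # Bracket the test case's traffic so the network message log can
        # later be partitioned by test case.
        send_frame({"type": MARKER, "tag": "BEGIN %s" % case_id})
        run_case()
        send_frame({"type": MARKER, "tag": "END %s" % case_id})

    def monitor_quiet_interval(capture, timeout_s):
        # For time-out requirements: stay quiet and record whatever, if
        # anything, the endpoint sends during the specified interval.
        deadline = time.time() + timeout_s
        observed = []
        while time.time() < deadline:
            frame = capture(deadline - time.time())  # blocking read with timeout
            if frame is not None:
                observed.append(frame)
        return observed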


Bob Binder | Wed, 20 Jul 2011 16:10:44 UTC

3) Hellandizing. Of course, built-in test support like this is often valuable. I hadn't heard of this approach, and it sounds very useful, especially with the advanced concurrent processing in the NonStop OS. A similar strategy was used in the second-generation OS/400: http://portal.acm.org/citation.cfm?id=226253

But again, we were limited to testing the as-built protocol in a strictly black-box manner, so no such enhancements were possible.


Bob Binder | Wed, 20 Jul 2011 16:00:07 UTC

2) Visibility of state in the endpoint under test. We were limited to checking only "over the wire" data in a response -- the endpoint under test was a "black box." However, our Spec Explorer test models often included a representation of the state in the endpoint under test. This might be as simple as keeping track of message sequence numbers we expected the server to assign. In some cases we modeled the specified sequential constraints on server responses (must log in before accepting a query or logout, etc.), then generated test cases and checked that all observed traces were consistent with these constraints.
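
A minimal sketch of that idea (not a Spec Explorer model): encode the specified sequential constraint as a small state machine and check that an observed over-the-wire trace is consistent with it. Message and state names here are illustrative.

    ALLOWED = {
        "LOGGED_OUT": {"login": "LOGGED_IN"},
        "LOGGED_IN":  {"query": "LOGGED_IN", "logout": "LOGGED_OUT"},
    }

    def trace_is_consistent(trace, state="LOGGED_OUT"):
        for msg in trace:
            if msg not in ALLOWED[state]:
                return False  # observed message violates the modeled constraint
            state = ALLOWED[state][msg]
        return True

    # trace_is_consistent(["login", "query", "logout"])  -> True
    # trace_is_consistent(["query"])                     -> False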


Bob Binder | Wed, 20 Jul 2011 15:46:14 UTC

Great questions. I'll reply to each in separate comments.

1) Hard/Easy (testability). Owing to the interoperability goals of this project, we used only the "over-the-wire" response to a test-case message, compared against the published behavior and data specification. Sometimes a specification did not allow an unambiguous determination. There were two general causes of this: the result was not observable over the wire, or the specification was too broad.

For example, the spec stated "on receipt of message y, the server updates x and returns a success status." First, we'd treat this as two requirements. The return status (second requirement) was observable and testable. If variable x was not visible (reported) in a response message, we would mark that requirement as non-testable. If the protocol allowed for a query on the value of x, we would try to use the query as a partial verification of the requirement.

In the other case (spec too broad), we evolved a "derived requirement" strategy. For example, suppose the spec stated "the response to query q is an unsigned 64-bit integer." To fully test for compliance, we'd have to try all 2**64 values -- clearly infeasible and unnecessary. Here we would add a derived requirement, say, "the response to query q is 123456," and devise a test case for it. This led to a strategy for analysis and test design that proved very effective. When possible, we also tried to craft derived requirements so as to allow Spec Explorer to automatically generate feasible and non-trivial test cases. The details are explained in "Discretizing technical documentation for end-to-end traceability tests" (see Read Further).
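
Illustrative only: the as-stated requirement admits 2**64 values, so a derived requirement pins one concrete expected value that a single generated test case can check. The function names are made up for this sketch.

    def meets_as_stated(response):
        # as-stated requirement: the response fits in an unsigned 64-bit integer
        return isinstance(response, int) and 0 <= response < 2**64

    def meets_derived(response, expected=123456):
        # derived requirement: for this test's fixed inputs, the response is 123456
        return response == expected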

Coverage of requirements for every protocol was quantified (we aimed to cover at least 80% of total normative statements). We tracked total, testable, non-testable, and derived versus as-stated requirements. Although we didn't do it, it would have been easy to quantify testability as the proportion of testable to non-testable requirements.

Clearly, it is preferable (and usually trivially simple) to design and build testable protocols, but we had to play the cards as dealt.


Tom Van Vleck | Tue, 19 Jul 2011 23:28:45 UTC

Thanks for a nice article!

Some protocols are harder to test than others. It would be interesting to characterize this hardness. Could a protocol be designed so that it was easy to test?

It reminded me of working on testing the Secure Electronic Transaction protocol in the '90s. If any error occurred, the same error message was produced: there was no way to tell whether the crypto failed, the message did not parse, etc.

Seeing into the hidden state of the protocol parties is one issue. Slowing the protocol down so that it pauses between steps to let a test execute is another; it reminds me of Pat Helland's idea at Tandem: http://www.multicians.org/thvv/hellandizing.html

To make a protocol easy to test, without compromising security, might require some kind of testing mode that enabled extra messages, like "record your state and tag it with this number," and a parallel out-of-band channel between the tester and the participants.

regards, tom
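
A speculative sketch of the out-of-band test-mode idea raised in the comment above: a channel separate from the protocol under test asks each participant to record and tag its current state, so the tester can fetch and compare snapshots later. Everything here (oob_send, oob_fetch) is hypothetical.

    def record_states(oob_send, participants, tag):
        for p in participants:
            oob_send(p, {"cmd": "SNAPSHOT", "tag": tag})

    def collect_snapshots(oob_fetch, participants, tag):
        # The tester can then assert cross-participant invariants on the result.
        return {p: oob_fetch(p, tag) for p in participants}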








