


Originally published in Queue vol. 12, no. 5





(newest first)

Displaying 10 most recent comments. Read the full list here

Mike Bland | Tue, 20 May 2014 14:41:43 UTC

@Andrew Raybould: Regarding unit testing (not necessarily pure TDD), I was dismayed that reactions to these bugs did not contain a stronger argument to support the claim that unit testing could've found them, and that some actively dismissed unit testing because the code was "too hard to test". I guess in other parts of the world, there's the opposite problem, with some taking TDD to the extreme and testing things to death. Between this and the Fowler article, I hope to present an emphatic, yet non-extremist view that unit testing, done well, can go a long way toward preventing low-level errors such as these and should be a part of every developer's arsenal (along with static analysis et al.), which is as much a cultural argument as a technical one.

@Bob Binder: I think, in this one specific case of "abortion clauses"/resource cleanup, goto provides the least cumbersome solution in C. As I mention in the article, you could structure the code with nested ifs or redundant check/release/return statements bound to every step of the algorithm, but I think that potentially harms readability/maintainability more than applying goto to this one particular, well-understood case. But I was always a C++ programmer in my day, and never had an occasion to use goto in my career as best I can recall.
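The cleanup idiom described above can be sketched with a single exit point and state-tracked releases. This is a minimal illustration, not Apple's code: Resource, acquire, release, and do_work are all hypothetical stand-ins.

```c
/* Hypothetical resources standing in for SSL buffers/contexts. */
typedef struct { int open; } Resource;

static int acquire(Resource *r) { r->open = 1; return 0; }
static void release(Resource *r) { r->open = 0; }

/* Returns 0 on success. Every exit path funnels through the single
   cleanup block at "out", instead of repeating check/release/return
   at every step of the algorithm. */
int do_work(int fail_mid, Resource *a, Resource *b)
{
    int err;

    if ((err = acquire(a)) != 0)
        goto out;
    if ((err = acquire(b)) != 0)
        goto out;
    if (fail_mid)
        err = -1;   /* simulated mid-algorithm failure */

out:
    /* Release only what was actually acquired, in reverse order. */
    if (b->open) release(b);
    if (a->open) release(a);
    return err;
}
```

Whether the algorithm succeeds or aborts midway, the resources end up released, which is exactly the property the nested-if alternatives make harder to see.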

As far as "the Apple flaw" vs "the Heartbleed bug", I was avoiding the use of "bug" back-to-back in the same sentence. I could've used "defect" or "vulnerability" interchangeably as well.

Bob Binder | Mon, 19 May 2014 21:44:32 UTC


The rubric "Go To Considered Harmful" was the creation of Robert Ashenhurst, longtime editor of the CACM letters to the editor section, Professor of Computing at the University of Chicago, and a great friend. While it's true Dijkstra didn't write that, I think Ashenhurst succinctly stated the essence of his point.

It is always possible to structure code correctly without gotos - even in C. There's no point in rehashing that war, but using goto seems like pissing in the wind just to prove you can. I wonder if that mindset excluded code reviews and static analyzers as well.

As to whether unit testing should have revealed this bug, I cannot imagine how any minimally adequate test suite could have failed to execute the second goto, which presumably would have triggered an observable failure. By minimally adequate, I mean that each line of code has been executed at least once in test. Also known as statement or line coverage, this criterion is minimally adequate because if you do not execute a statement containing a bug, you have no chance of revealing that bug. Of course, this is a very weak criterion that will not reveal many, many kinds of bugs, but it is better than no chance at all.
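The coverage argument can be made concrete with a stripped-down sketch of the flaw's shape. check_a, check_b, and verify are hypothetical stand-ins, not the real SecureTransport code; the point is only the duplicated, unconditional goto:

```c
static int check_a(int ok) { return ok ? 0 : 1; }
static int check_b(int ok) { return ok ? 0 : 1; }

/* Reproduces the shape of the flaw: without braces, the second,
   duplicated goto is unconditional, so check_b is never reached. */
int verify(int a_ok, int b_ok)
{
    int err;
    if ((err = check_a(a_ok)) != 0)
        goto fail;
        goto fail;   /* the duplicated line: always taken */
    if ((err = check_b(b_ok)) != 0)
        goto fail;
fail:
    return err;
}
```

Here verify(1, 0) returns 0 even though the second check should fail; any test suite that drives execution to the second goto and asserts a nonzero result for a bad second check would have failed immediately, which is exactly the statement-coverage argument above.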

A question - why is it "The Apple flaw" and the "Heartbleed bug"?

Andrew Raybould | Sun, 18 May 2014 16:31:34 UTC

@Mike Bland: I take your point that you can only address one issue at a time.

I was surprised by your feeling that unit testing is under-appreciated - are you referring specifically to commentary on this bug? At least since the appearance of the phrase 'test-driven development', I have had the impression that testing, and unit testing in particular, has become the latest of the One True Ways. Perhaps the problem is that while everyone is talking about it, few are actually doing it.

It would be a good thing if increased unit testing resulted in a reduction in code duplication, but I wonder if developers will apply cut-and-paste to test generation with the same enthusiasm that they have applied it to application code generation.

I see a pervasive design problem that seems, in some ways, closely related to code duplication: the proliferation of unnecessary classes and methods that do little to advance the computation or provide any other benefit. Instead, they fragment algorithms, responsibilities and information, and scatter their representation across wide swaths of source code, making it difficult to figure out (and therefore analyze) what is going on (the Gertrude Stein problem: "there is no 'there' there.") Unit testing might be more effective in rooting out this practice than simple duplication, because cut-and-paste would not be as (mis-)applicable.

I agree that the continuing difficulties with software quality have a deep-rooted cultural component. A colleague once told me that in the medical equipment company where he once worked, testing for buffer overflows and invalid pointer use was not performed because if problems were found, they would have to fix them. If that is not a crime, it should be.

Mike Bland | Sun, 18 May 2014 15:44:16 UTC

@Steve Mattingly: No worries; I was able to keep the two "yous" separate. Regarding your tone, you should've seen the drafts of this and the Fowler article before I got folks to review them! ;-)

Also, I misspoke in my earlier comment: The next installment of the Fowler article will cover the costs and benefits of unit testing; the following installment will cover complementary tools and practices (which I've already updated to cover the "false warnings" issue).

Steve M | Sat, 17 May 2014 22:28:58 UTC


Thanks for the feedback; I think we're on the same page.

I apologize if, in my haste, my tone was too strident. (In particular, I hope that addressing "you" Mike Bland in the early part of the post did not give the impression that I was telling "you" Mike Bland to check your ego in the latter part.)

Steve Mattingly

Mike Bland | Sat, 17 May 2014 14:24:28 UTC

@Steve M: Forgot to mention: I actually sympathize with your feeling about the culture of "false warnings". In retrospect, I regret not being clearer about that; in my haste, I neglected to give that false notion its due. I'll add something to my Fowler article to challenge that very notion.

Mike Bland | Sat, 17 May 2014 14:18:43 UTC

@Steve M and Andrew Raybould: I agree static analysis/compiler warnings should be applied; but they've gotten a lot more publicity so far. I had nothing to add to that topic, and my objective here was to break the silence around unit testing. In my article on Martin Fowler's site (linked from this article), the next installment, to appear next week, will be devoted to brief highlights of other tools, and will remark on how such tools and unit testing are complementary; I was up against a word limit here.

I'm all for complementary tools/redundant safeguards, but I also assert that the tolerance of code duplication is evidence of a deeper cultural issue, and that unit testing alone amongst all other proposed solutions (except maybe code review) would exert the design pressure to avoid such duplication. Braces and analysis would've contained the potential damage in this instance, but would not have gone far to solve the deeper cultural problem that tolerates low-quality code that enables bugs like this to hide.

Also, the culpability isn't just with the team that produced the bug, or even the company: I'd like to see greater accountability across the industry. Every tool should be brought to bear to prevent such defects, and so far unit testing hasn't had very strong representation, and has on occasion been actively dismissed. I aim to fix that.

@Steve M: I agree the bug never should've made it to a test. The programmers never should've allowed the duplication that sheltered it, and unit testing would've (should've) encouraged only one copy of the algorithm. I feel reasonably sure the first copy didn't contain the bug, otherwise it would've appeared in the code more than once, since the presence of six copies strongly suggests cut-and-paste. That, and they should've had compiler warnings turned on full.
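The design pressure described above typically pushes the repeated sequence into one function that every handshake variant calls, so there is exactly one copy to test. A sketch under that assumption (run_steps and the stub steps are hypothetical, not the SecureTransport code):

```c
typedef int (*step_fn)(void);

/* The one shared copy of the step sequence: run each step in order,
   stop at the first failure, and return its error code. */
int run_steps(const step_fn steps[], int nsteps)
{
    for (int i = 0; i < nsteps; i++) {
        int err = steps[i]();
        if (err != 0)
            return err;
    }
    return 0;
}

/* Stub steps that let the single copy be unit-tested in isolation. */
static int step_ok(void)   { return 0; }
static int step_fail(void) { return 42; }
```

With six call sites sharing this one function, a cut-and-paste editing slip like the duplicated goto has nowhere to hide, and the early-exit behavior is tested once rather than trusted six times.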

We are of the same mind regarding your comment on professional ethics. I'm avoiding using legally-charged words, but I'm hoping folks will put two and two together for themselves before the lawyers do.

Thanks for your thoughtful comments. They are truly a cut above. :-)

Richard Tallent | Sat, 17 May 2014 07:42:56 UTC

I would argue that one of the key root causes is the overuse of branching to compute a result for "err" and act on it. Consider this alternative:

    err = SSLFreeBuffer(&hashCtx)
       || ReadyHash(&SSLHashSHA1, &hashCtx)
       || SSLHashSHA1.update(&hashCtx, &clientRandom)
       || SSLHashSHA1.update(&hashCtx, &serverRandom)
       || SSLHashSHA1.update(&hashCtx, &signedParams)
       || SSLHashSHA1.final(&hashCtx, &hashOut);
    if (err != 0)
        goto fail;

IMHO, this performs just as well (|| short-circuits, so later calls are skipped after the first failure), expresses the logic more clearly, and avoids relying on duplicated branching code and sidecar evaluation of assignments.
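One caveat with the chained form: || yields only 0 or 1, so err as written would lose the original status code. Wrapping each call in its own assignment keeps both the short-circuiting and the real code. A sketch, where step1 through step3 are hypothetical stand-ins for the SSLHashSHA1 calls:

```c
static int step1(void) { return 0; }
static int step2(void) { return -9808; }   /* simulated failure code */
static int step3(void) { return 0; }       /* skipped once step2 fails */

int chained(void)
{
    int err;
    /* Each assignment runs in order; || stops at the first nonzero
       result, leaving the first failing step's code in err. */
    (void)((err = step1())
        || (err = step2())
        || (err = step3()));
    return err;
}
```

The (void) cast only silences the unused-value warning; the side effects of the assignments carry the result.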

Andrew Raybould | Sat, 17 May 2014 03:56:36 UTC

In his enthusiasm to promote unit testing, which is a laudable goal, the author seems to me to be giving short shrift to static analysis. It is dismissed with the comment that its results might have been hidden in false positives, while on the other hand, it is assumed that unit testing would have found the bug.

In this particular case, that might be true, but we are not merely attempting to avoid a repeat of this particular bug, and the practical limitations of testing are well-known. In practice, analysis and testing tend to be complementary, as neither is perfect, and any serious attempt to improve code quality should take advantage of both.

The author states that the code could be refactored to make it more testable, but it could also be refactored to make its analysis more tractable. You may recall that this was Dijkstra's primary argument for deprecating the go-to.

Steve M | Fri, 16 May 2014 20:56:14 UTC

I think your emphasis on testing obscures the most important point here. This defect should have never made it to any sort of test.

Instead, it should have been identified and eliminated at compile time. You were very close to the target when you wrote "[c]ompiler and static-analysis warnings also could have detected the unreachable code...". (I would say "should have" rather than "could have".)

But the rest of that sentence illustrates some cultural issues that quickly snuff out this glimmer of insight: "... though false warnings might have drowned out the signal if such tools weren't already being used regularly."

I question the concept of "false warnings". If the compiler, lint tool, or similar is generating warnings that it should not (i.e., the alleged issue does not exist in the code), then you need to change tools. I have never seen this happen. But I have seen many developers and teams perpetuate "false warning" cultures because they don't (or can't be bothered to) understand what the warning is telling them about the code. As a result, the warnings are ignored or (more likely) hidden through compiler flags. I have seen this too many times to count, and this is not the first time I have seen it mask defects in systems with large implications for public welfare.

I advocate a culture that holds these truths to be self-evident: static analysis tools should be used to the greatest extent possible, with the strictest settings, and every warning message can and should be resolved appropriately. "Appropriately" here means, in descending order of preference:

1. Change the code to eliminate the issue identified by the warning. If this seems unnecessary, check your ego. If your expertise and judgment actually exceed those of all the people who produced the compiler or analysis tool, consider changing tools.

2. Change the code to suppress the warning for a specific occurrence. This can usually be done with some sort of annotation in the code, and should include a comment with the rationale for this step. The rationale should be a group consensus, not an individual decision.

3. Suppress the warning for all occurrences, using the analysis tool's configuration options. This also should be by group consensus, and should be a rare step.
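The first option is often a one-line change. A hedged sketch, using the common unused-parameter warning (on_event and its callback signature are hypothetical): rather than silencing the diagnostic tool-wide, the code itself is changed so the warning no longer applies.

```c
/* A hypothetical callback-registration API forces the ctx parameter
   on us even when a particular handler has no use for it. */
int on_event(int event, void *ctx)
{
    (void)ctx;   /* option 1: change the code; ctx is deliberately unused */
    return event * 2;
}
```

The (void) cast documents the intent at the exact site, survives refactoring, and leaves the warning enabled everywhere else it might catch a real mistake.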

This is low-hanging fruit. Professional ethics demand that we advance toward a culture where ignorance of these practices is considered incompetence, and willful failure to apply them is considered malpractice.


© 2019 ACM, Inc. All Rights Reserved.