(newest first)

  • Keith | Tue, 27 Oct 2015 05:13:20 UTC

    It is refreshing to see that a reputable software engineer has openly criticized OpenSSL. OpenSSL is a terrible piece of software. I started using it in 2001.
    The API makes no sense at all. The functions that you call, the way they're called and the sequence the functions are called in cannot be determined from reading the documentation. You have to rely on examples and look at the source code of accompanying programs.
    Function names are made up from macros. This means you cannot easily find them in the source code. These names can change from one release to the next.
    Upgrading to a newer version of OpenSSL will almost always break application code that uses the API. I have concluded that only the authors of OpenSSL can know how to use it. The only portable and reliable way to use OpenSSL is to invoke the openssl command-line program with the appropriate arguments.
  • Alex | Wed, 06 Aug 2014 02:44:12 UTC

    1. What is the difference between the heartbleed bug and other major problems/bugs with openssl and those of proprietary SSL implementations?
    If you flip to the back of the book, you'll find answers to all odd numbered questions:
    1. The bug was found, reported, and corrected.
    I find it humorous that things like this are highlighted when these problems exist in ALL software, not just open source software, and certainly not just openssl. When is the last time any proprietary SSL implementation was audited by a third party? If never is your guess, you're probably right. What makes you think any of their code is any better? Do you just trust that their code is more secure? I don't. I'm only a weekend hacker (and I wouldn't even call myself that), and I would say that ~50% of the professional devs I meet are some combination of under-educated, under-skilled, and extremely lacking in critical thinking/problem-solving skills, and there are a lot of people in the industry who would agree with me. Surely some of those people must be working on SSL implementations, or other security-centric software.
  • Dan Cross | Fri, 09 May 2014 16:27:23 UTC

    tal: it evaluates to neither.  :-)  It's 2^17 (arithmetic shift has lower precedence than addition).
    Something I learned long ago is that if one is to successfully program in some language, one must write idiomatic code in that language.  Unfortunately, I think that one of the problems with writing C code is that there is a lack of idiom around things like parenthesizing expressions that can lead to surprising problems if one is not familiar with the precedence rules of the language.
  • Martin Leiser | Fri, 25 Apr 2014 21:19:11 UTC

    hmmm... I think it is all about architecture.
    Software has bugs. Period.
    On the other side:
    We have known for 50 years how an MMU works.
    Hardware has bugs too, but far fewer.
    Why are the web server, openssl, and the private key all in one address space?
    Use stunnel with PKCS#11 to offload the private key.
    And run the web server in a different process.
    Two simple rules:
    Divide and conquer.
    Least privilege.
    Add a third rule: CPUs are cheap; switch off optimizations for security-relevant code.
    But all that is against the trend of today, which is multithreading and throwing 100,000,000 lines of code into one application server...
    Don't blame the openssl programmers: it is their bug,
    but our fault.
  • Sebastiaan | Thu, 17 Apr 2014 22:45:53 UTC

    Must be because English is not my native language, but in my book these commits don't fall in the category 'typo':
    ;a=commitdiff;h=79dabcc1373be17283d736901ddf9194609ba701
    ;a=commitdiff;h=82f42a1d2e9271359b60d16249c26baadae788db
    ;a=commitdiff;h=24f599af21dfd8e1de693fd84fb612269c28de0c
  • Sebastiaan | Thu, 17 Apr 2014 22:21:07 UTC

    @willy just picked two random commits and yup, it's scary!
    A new "goto fail" in the making? I'm no C programmer, but these two commits removed curly brackets and subsequently converted a single line of code inside an 'if' statement to two lines... Sure, this code is correct, but *visually* it's a lot harder to spot?
    ;a=commitdiff;h=34e43b909f02de444678d937ed3dc347ce13ba1a
    ;a=commitdiff;h=833a896681b3287e5ab9c01f4f0234691f4076a8
  • Morten Jensen | Wed, 16 Apr 2014 20:37:41 UTC

    Great article, was referred here from the debate:
    > /* 
    > * The aim of right-shifting md_size is so that the compiler 
    > * doesn't figure out that it can remove div_spoiler as that 
    > * would require it to prove that md_size is always even, 
    > * which I hope is beyond it. 
    > */ 
    > div_spoiler = md_size >> 1;
    > rotate_offset =
    >    (div_spoiler + mac_start - scan_start) % md_size;
    > Be honest: would you have considered a proven, correct, compiler improvement a security risk before I showed you that?
    > Of course you wouldn't! It was proven correct, wasn't it?
    I do embedded development, and I sometimes find that I have to qualify a variable volatile to get the compiler to stop optimizing the code in ways that spoil the intended behavior.
    I would imagine the OpenSSL code could do the operation on a volatile-qualified variable and achieve the same effect without having to resort to nasty tricks?
    From my K&R on the volatile qualifier: "The purpose of volatile is to force an implementation to suppress optimization that could otherwise occur."
  • Peter Kriens | Wed, 16 Apr 2014 14:39:18 UTC

    Isn't this also a reason not to write this kind of code in C(++), and to use a managed language instead?
  • Kevin L | Wed, 16 Apr 2014 11:32:12 UTC

    As an experienced engineer in both the process industries (MS in ChemE) and software (BS in CompSci), I think that data breaches at the SSL/TLS layer need to be treated in much the same way as pressure breaches in physical systems: you use multiple independent systems to verify each other **while online**, and if any one of those systems sees a problem you interrupt the flow (terminate the connection).
    Applied to this case, that would be two or more SSL libraries executing in separate processes that would each perform the SSL/TLS protocol independently, with the HTTP process comparing their outputs and only sending bytes to the other side when the outputs fully agree.  If they disagree, the HTTP daemon would log an error and terminate the connection.  In this fashion each library would expose the bugs in the other, while protecting the users.  Eventually both libraries would get better, only failing simultaneously for errors in the protocol design or the underlying crypto math.  (This is also the approach used in aviation: 3 computers, 2 to check for hardware faults and a third in an independent implementation to check for logic faults.)
    Outside of generating nonces and random numbers that would have to be passed between the libraries and the HTTP process, this doesn't seem like it would be too hard to do.
  • Javier | Wed, 16 Apr 2014 04:01:52 UTC

    Very nice article.
    Aside from DANE, Convergence is another alternative to CAs.
    OpenBSD is cleaning up OpenSSL.
  • Tom Limoncelli | Wed, 16 Apr 2014 01:41:35 UTC

    The patch is a good example of how to write bad C code:
    Statements like this:
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
    More clear and "defensive" would be:
    if ((1 + 2 + payload + 16) > s->s3->rrec.length)
    As published, the next person to modify the code is likely to mess it up.  Don't write code for yourself, write it for the next person that has to read it.
    If you think "but that's a waste of parentheses" then tell me what this code does:
    x = 1 << 16 + 1
    Does it evaluate to 17 or 32?  The answer will surprise you.
  • Benni | Tue, 15 Apr 2014 18:58:03 UTC

    @Willy: Well, if you do contractor work, and your client is the BND, then certainly they expect a different quality of code. For example, readable code is, in the eyes of the BND, certainly not "good code" where an open source project is concerned whose objective is to "introduce features" that deliberately weaken the security. If the code were written in a documented and understandable manner, then you could not introduce the "features" that the BND likes without it getting noticed. Either the people on openssl are just plainly incompetent, or their main client is in Pullach, who wants such code styles.
  • Elessar | Tue, 15 Apr 2014 17:03:16 UTC

    There is an alternative to the CA mafia: DANE. Not perfect, but still better.
  • Willy Tarreau | Tue, 15 Apr 2014 13:00:30 UTC

    @Benni: who are the sponsors? I don't care a dime, provided that the code is understandable and auditable. That's the principle of open source. Anyone can contribute, and someone will eventually find a pitfall in what was contributed. It's the same for openssl BTW, except that its level of obfuscation is far beyond average and nobody understands that code, probably not even always the maintainers themselves, judging by some comments and/or commit messages...
    *This* is the problem that needs a fix.
  • Willy Tarreau | Tue, 15 Apr 2014 12:53:49 UTC

    Hi Poul Henning,
    What I find even worse with this project is that there is no hope that it will ever evolve until the whole team is replaced. There is no culture of fixing bugs there. Please simply take a look at the usefulness of the commit messages present in the git repository:
    Hint: every time you see "typo" or "fix warning" or "remove duplicate statement", you should carefully check the change, because sometimes it *silently* fixes a scary bug. Sometimes, you'll even discover a CVE in the detailed commit message but not in the summary. In fact, if you want to remain safe, you MUST NOT use the stable version but only backport from it into yours whatever looks like a *real* bug after carefully reading *all* the changes. A significant number of changes happening in -stable are even accidental backports of features which are not completely reverted. This version management is a total failure and can only lead to the situation we know today.
    Maybe if someone takes over the project and clearly separates maintenance from experimentation, he could save the project, but I really doubt it considering the amount of obfuscation that went into this project over the years. And sadly, most of the internet's security relies on this.
  • Jason Gulldge | Tue, 15 Apr 2014 09:22:27 UTC

    Love the article, loathe what it means for security online, and I long for something better. 
  • Somebody on the internet | Tue, 15 Apr 2014 09:18:51 UTC

    Apparently, the implementation of PolarSSL is far more auditable, and thus more maintainable.
    For some people, switching might be a good idea.
  • Benni | Tue, 15 Apr 2014 08:14:31 UTC

    Some of the changing managers of Crypto AG had worked for Siemens before. Rumors that the German secret service BND was hiding behind this engagement were strongly denied by Crypto AG.
    But on the other hand, it appeared the German service had a suspiciously great interest in the prosperity of the Swiss company. In October 1970, a secret meeting of the BND discussed "how the Swiss company Graettner could be guided nearer to the Crypto AG or could even be incorporated into the Crypto AG." Additionally, the service considered how "the Swedish company Ericsson could be influenced through Siemens to terminate its own cryptographic business."
    The secret services obviously have a great interest in directing the trade in encryption devices onto orderly tracks. Ernst Polzer*, a former employee of Crypto AG, reported that he had to coordinate his developments with "people from Bad Godesberg". This was the residence of the "central office for encryption affairs" of the BND, and the service instructed Crypto AG what algorithms to use to create the codes. (* name changed by the editor)
    Members of the American secret service, the National Security Agency (NSA), also visited Crypto AG often. The memorandum of a secret workshop of Crypto AG in August 1975, on the occasion of the demonstration of a new prototype of an encryption device, mentions as a participant the NSA cryptographer Nora Mackebee.
    Bob Newman, an engineer at the chip producer Motorola, which cooperated with Crypto AG in the seventies to develop a new generation of electronic encryption machines, knows Mackebee. She was introduced to him as a "counselor".
    Depending on the intended usage area, the manipulation of the cryptographic devices was more or less subtle, said Polzer. Some buyers only got simplified code technology, according to the motto "for these customers that is sufficient, they don't need such good stuff."
    In more delicate cases the specialists reached deeper into the cryptographic trick box: machines prepared in this way enriched the encrypted text with "auxiliary information" that allowed anyone who knew of this addition to reconstruct the original key. The result was the same: what looked like impenetrable secret code to the users of the Crypto machines, who acted in good faith, was readable with no more than a finger exercise for the informed listener.
    "In the industry everybody knows how such affairs are dealt with," said Polzer, a former colleague of Buehler. "Of course such devices protect against interception by unauthorized third parties, as stated in the prospectus. But the interesting question is: who is the authorized fourth?"
  • Benni | Tue, 15 Apr 2014 08:10:53 UTC

    "Please note that we ask permission to identify sponsors and that some sponsors we consider eligible for inclusion here have requested to remain anonymous."
    They disclose only 3 sponsors. But why should a sponsor of a security library want to be anonymous? Well, in the case of RSA, the sponsor NSA certainly had an interest in being anonymous.
    The openssl foundation writes:
    "Does your company use the OpenSSL toolkit and need some help porting it to a new platform? Do you need a new feature added? Are you developing new cryptographic functionality for your product?"
    To every secret service, this must sound like music in their ears. They can anonymously donate and get "features" into openssl, similar to the situation with RSA.
    The developers are mainly Germans. A lead developer of openssl works near Munich (Dachau). If he gets on a suburban train, he is at the headquarters of the German secret service BND in Pullach in 20 minutes. Given that the BND is, according to Spiegel, a major shareholder of Crypto AG, and apparently has decades of experience in weakening crypto hardware, it would be naive to assume the BND would not even try to influence openssl.
    So, who are the anonymous sponsors of openssl?
    Is the German secret service BND in the list of sponsors?
    To exclude this, the openssl foundation should at least declare that it does not take any money from any intelligence agency whatsoever.
    That the BND has extensive experience in weakening crypto devices can be seen in this Spiegel article:
  • Poul-Henning Kamp | Tue, 15 Apr 2014 07:47:04 UTC

    I mean that our identities are social constructs.
    I'm "Poul-Henning Kamp" only because my parents named me thus, I could as easily have been named anything else, and there is nothing but social constructs which ties me to that name.
    When you connect to a stranger, and somebody authenticates that stranger as "Poul-Henning Kamp", you still don't know who or what that stranger is, only that other people know him as "Poul-Henning Kamp".
    In particular, you do not know what other identities that stranger may have, such as "dad", "husband" or "identity thief".
    Identities are only labels, authentication tells you what's on those labels, not what the labels are attached to.
  • Muhammad Haider | Tue, 15 Apr 2014 07:15:49 UTC

    "We have never found a simpler way for two parties with no prior contact, to establish a secure telecommunications channel. Until the development of asymmetric cryptography (1973-1977) it was thought to be impossible. And if you want to know who is at the other end, the only way is to trust some third party who claims to know. No mathematical or cryptographic breakthrough will ever change that, given that our identities are social constructs, not physical realities."
    What do you mean by identity here?
    Doesn't an IP address suffice to function as an identity in online communications?
    Is it mathematically impossible to affirm identity in a sufficiently complicated language?
  • Poul-Henning Kamp | Tue, 15 Apr 2014 06:49:30 UTC

    @Senthil:  Wrong logic.  The attacker just needs to find one hole; the defender must close them all.
    @Mansour:  Yes, restraint is often something programmers only learn with experience.
    @bascule:  As a veteran of the OSI wars, I'd say anything involving X.[45][0-9][0-9] is a mistake, and X.509 is not one of the smaller ones.
    @Latj: Nobody who issues false certs should be trusted, no matter what language they speak.  (I have several Turkish friends whom I trust.)
  • Senthil Kumaran | Tue, 15 Apr 2014 05:12:08 UTC

    The code is complex not just for the maintainer, but for the attacker too.
    So we are level there, and I think who gets the upper hand is in play right now.
    I enjoyed reading this piece/rant. Support a competing, backwards-compatible, multi-platform, community-accepted reimplementation!
  • | Tue, 15 Apr 2014 05:04:15 UTC

    Great, you wouldn't have goto in FreeBSD, but that doesn't actually substantively say why. You make it seem like goto is a total fail in all cases, which is obviously not true, for at least the reason I cited, which you did not refute.
    You also pointed out that one of the "problems" with openssl is "inline assembly code"... but you provide no references to where you found this. I am just speculating here, but I guess that the problem isn't really the inline part, but the assembly?
    > As for gnutls, their API is not an obvious improvement over OpenSSL.
    Have you used certtool? Nobody in their right mind can argue that certtool alone is not an obvious improvement over the openssl CLI.
    Besides, API compatibility is actually a good thing, so improving that isn't exactly what needs to be done there to make it notable.
  • Aditya | Tue, 15 Apr 2014 04:45:10 UTC

    "Now, finally, we can securely transmit a picture of a cat from one computer to another." You got me good at this part.
  • Chris Samuel | Tue, 15 Apr 2014 00:47:43 UTC

    Have you had a look at Dan Bernstein et al.'s crypto library NaCl?
    There is also a minimalist variant called TweetNaCl which fits into 100 tweets. The human-readable version is less than 900 lines long. It was apparently designed with auditability and good defaults as requirements.  There is a review as part of LWN's security section back in January here:
    Of course, a shout-out to the US government for the export regulations that triggered the crypto wars in the '80s and '90s and the rise of SSLeay and hence OpenSSL.  Thanks for nothing...
  • No | Tue, 15 Apr 2014 00:29:55 UTC

    Hold your disappointment Lewis Burns. The column this article appears under is called 'The Bikeshed' for a reason.
  • Mansour | Mon, 14 Apr 2014 23:52:08 UTC

    I would add that the exceptionally-skilled programmers must also be exceptionally-skilled hackers, since to write secure code one must know the many things not to do.
    Not long ago I wrote a static analysis tool for C. I tested it with the source code of many open-source projects, and I remember OpenSSL in particular. One true positive would lead to another, and it didn't stop... I believe no amount of auditing or patching will make that code secure.
  • Lewis Burns | Mon, 14 Apr 2014 23:45:35 UTC

    I'm disappointed to read this type of writing on ACM's web site, which I assumed was reserved for academic writing; this paper is not that. It should go in a newspaper column. Things like the following:
    "We need to ensure that the compiler correctly translates the high-level language to machine instructions. And only exceptionally skilled compiler programmers will do!"
    Basically, most claims are the author's subjective opinion, with no evidence. ACM, please make your writers back what they think with evidence.
  • Greg Buchholz | Mon, 14 Apr 2014 23:16:37 UTC

    If only we didn't need so many lines of code...
  • Carpii | Mon, 14 Apr 2014 22:59:04 UTC

    This is nothing more than a rant.
    You aren't actually offering a solution, just pointing out what's inherently difficult about software development. Well, we knew this already.
    Perhaps you could elaborate on how we can get to this Utopian scenario where no software ever has any bugs?
  • bascule | Mon, 14 Apr 2014 22:08:49 UTC

    A similar idea: perhaps TLS is the root problem:
  • Latj | Mon, 14 Apr 2014 22:01:11 UTC

    I agree. Anyone who writes or speaks Turkish cannot be trusted.
  • Rob Fielding | Mon, 14 Apr 2014 21:17:43 UTC

    Given that moving to a more abstract language may be no real option, what about tweaks/enhancements to C with different defaults, ones that force painful overrides at every point in the code that hinders compiler-checked memory-safety proofs?
    In theory, a language like Rust is memory safe, and possibly also safe from race conditions, because of the variety of pointer types it has.  These are not concepts that are unique to Rust, either.
    The implementation language does not matter provided that the language can reliably prove important properties of what it compiles.  In practice, this has often meant garbage collected languages (for trivial memory safety proofs).  
    But the real problem seems to be the intractability of completely unrestrained pointer arithmetic.  Types are supposed to be proofs, and everything is messed up if we forget that!  Ex:
    "int* x;" should mean "x is proven to be a pointer to an int".  This is clearly nonsense.  "int* x = y" is correct if y is proven to point to an int, and nonsense otherwise.  A nullable pointer is a *different* type, and it is not dereferenceable; where you can only assign it to a pointer if it in fact has a value.  For bounds checks, it's actually the same phenomenon: "*(x+5)=10" is only compiled when it is proven that that memory location points to an int in the same array as x.  With this thinking, x[-5] is just fine if the compiler can prove it.  A richer type system is required for these proofs to happen.  Any overrides of the compiler's judgement should be done at every point by adding an explicit assumption that has actually been found by other means.
  • Jan Bruun Andersen | Mon, 14 Apr 2014 20:05:17 UTC

    @PHK - I took good care of Gimli.
  • Poul-Henning Kamp | Mon, 14 Apr 2014 19:22:39 UTC

    If government funding were done at credible "arms-length" it would indeed be a good investment in citizen privacy, but it's not my impression that is a government goal anywhere/anymore.
    As for multiple implementations, you said it yourself:  it's good to have mod_ssl/mod_gnutls.  But that only solves the problem for apache.
    Having a good API and multiple implementations would do the same for other OpenSSL uses.
  • Kyle Bader | Mon, 14 Apr 2014 18:52:07 UTC

    I do think that, given the degree to which governments depend on OpenSSL, it would be in their best interest to fund programmers and cryptographers to improve the project. Creating a new crypto library is going to take time, and then even more time to build trust, port applications, and work all those changes into distribution channels. I'm on the fence as to whether a single canonical implementation is a desirable situation; after all, OpenSSL provides a plurality of ciphers, and that has proven beneficial as different threats are discovered. As an operator, being able to toggle between mod_ssl and mod_gnutls helps with "too big to fail" and is something I would love to see in more applications. Is supporting something like this in applications difficult because the APIs are so different?
  • Poul-Henning Kamp | Mon, 14 Apr 2014 18:30:28 UTC

    If I were in charge, there would be no goto in FreeBSD's kernel, and I don't think there should be any in OpenSSL either.
    As for gnutls, their API is not an obvious improvement over OpenSSL.
  • Rich | Mon, 14 Apr 2014 18:22:12 UTC

    You have a lot of valid points; personally, I think the time has come to put weight behind gnutls.
    However, the goto thing seems like a cheap shot. Using goto for error handling with one return statement makes a lot of sense; centralized error handling really helps. Using goto is not an indicator of a problem.
    Also, since you are a FreeBSD guy, check out how many goto statements are used in the FreeBSD kernel: nearly 20k.
  • Poul-Henning Kamp | Mon, 14 Apr 2014 18:15:16 UTC

    That's a very interesting angle, from the comments above it could look like people consider OpenSSL "too big to fail".
    Not sure about the government bailout though, I'd prefer somebody trustworthy.
  • Kyle Bader | Mon, 14 Apr 2014 18:10:22 UTC

    Too big to fail, surely it needs a government bailout.
  • Matt Weeks | Mon, 14 Apr 2014 17:56:08 UTC

    While OpenSSL hacks are sad, I do not believe that is the primary issue. Memory safety is the primary issue. For example, look at the browsers, all of which have been developed under some company or other organization, some of them under strict security requirements. And yet they have all been found to have innumerable vulnerabilities. Organization has not saved browsers. Multiple implementations have not saved browsers. A standards body promoting standardized APIs and formats for HTML and Javascript has not saved browsers. We've seen the NSA boast that it can easily compromise all of them. Yet virtually every browser vulnerability that could lead to code execution, found and fixed in the past 3 years, has been a memory safety issue, which could have been eliminated if the authors had written their code in a memory-safe language. It would be nice to have better memory-safe languages, e.g. more compiled languages, but there are already a number of good memory-safe languages we could use.
  • Brad | Mon, 14 Apr 2014 17:45:10 UTC

    It's easy to complain that the code is bad; why haven't you started writing a replacement? Oh, that's for *other* people to do, I forgot. Conceptually OpenSSL should be simple, but when you consider the enormous number of platforms, processor architectures, libraries, and deep-down complexity that it has, it turns out to be not easy. Why is the code a mess? Well, it was written by a large team of people over an even larger period of time. Why? Because nobody cares to write a replacement, just like you. 
    Your replacement (which, let's face it, won't happen any time soon), which will almost certainly have far more lines of code, will also have far more bugs, because not only is it more lines of code, it hasn't been available for scrutiny nearly as long. 
    Unless you're paying for it, don't stand on the sidelines and bitch about the quality of somebody else's work. Put up or shut up. 
  • Ted Myers | Mon, 14 Apr 2014 17:40:54 UTC

    Easy to criticize. Hard to provide a solution. Hot air.
  • Poul-Henning Kamp | Mon, 14 Apr 2014 17:29:55 UTC

    @Arul:  I thought the irony was inescapable.
    @John:  Starting from scratch by designing a usable API is my proposal for a solution.
  • john hanson | Mon, 14 Apr 2014 17:10:20 UTC

    I don't see you proposing any solutions.  It's easy for you to sit there in hindsight and lay blame without any meaningful ideas of your own...
  • Arul Prakash | Mon, 14 Apr 2014 16:27:05 UTC

    64 KB is enough to get a server's private key (proof: the CloudFlare challenge won by Fedor Indutny). I think it's high time that we start with a blank slate.