
The Bike Shed



The Most Expensive One-byte Mistake

Did Ken, Dennis, and Brian choose wrong with NUL-terminated text strings?

Poul-Henning Kamp

IT both drives and implements the modern Western-style economy. Thus, we regularly see headlines about staggeringly large amounts of money connected with IT mistakes. Which IT or CS decision has resulted in the most expensive mistake?

Not long ago, a fair number of pundits were doing a lot of hand waving about the financial implications of Sony's troubles with its PlayStation Network, but an event like that does not count here. In my school days, I talked with an inspector from The Guinness Book of World Records who explained that for something to be "a true record," it could not be a mere accident; there had to be direct causation starting with human intent (i.e., we stuffed 26 high school students into our music teacher's Volkswagen Beetle and closed the doors).

Sony (probably) did not intend to see how big a mess it could make with the least attention to security, so this and other such examples of false economy will not qualify. Another candidate could be IBM's choice of Bill Gates over Gary Kildall to supply the operating system for its personal computer. The damage from this decision is still accumulating at breakneck speed, with Stuxnet and the OOXML perversion of the ISO standardization process being exemplary bookends for how far and wide the damage spreads. But that was not really an IT or CS decision. It was a business decision that, as far as history has been able to uncover, centered on Kildall's decision not to accept IBM's nondisclosure demands.

A better example would be the decision for MS-DOS to invent its own directory/filename separator, using the backslash (\) rather than the forward slash (/) that Unix used or the period that DEC used in its operating systems. Apart from the actual damage being relatively modest, however, this does not qualify as a good example either, because it was not a real decision selecting a true preference. IBM had decided to use the slash for command flags, eliminating Unix as a precedent, and the period was used between filename and filename extension, making it impossible to follow DEC's example.

Space exploration history offers a pool of well-publicized and expensive mistakes, but interestingly, I didn't find any valid candidates there. Fortran syntax errors and space shuttle computer synchronization mistakes do not qualify for lack of intent. Running one part of a project in imperial units and the other in metric is a "random act of management" that has nothing to do with CS or IT.

The best candidate I have been able to come up with is the C/Unix/Posix use of NUL-terminated text strings. The choice was really simple: Should the C language represent strings as an address + length tuple or just as the address with a magic character (NUL) marking the end? This is a decision that the dynamic trio of Ken Thompson, Dennis Ritchie, and Brian Kernighan must have made one day in the early 1970s, and they had full freedom to choose either way. I have not found any record of the decision, which I admit is a weak point in its candidacy: I do not have proof that it was a conscious decision.

As far as I can determine from my research, however, the address + length format was preferred by the majority of programming languages at the time, whereas the address + magic_marker format was used mostly in assembly programs. As the C language was a development from assembly to a portable high-level language, I have a hard time believing that Ken, Dennis, and Brian gave it no thought at all.

Using an address + length format would cost one more byte of overhead than an address + magic_marker format, and their PDP computer had limited core memory. In other words, this could have been a perfectly typical and rational IT or CS decision, like the many similar decisions we all make every day; but this one had quite atypical economic consequences.

Hardware Development Costs

Initially, Unix had little impact on hardware and instruction set design. The CPUs that offered string manipulation instructions—for example, Z-80 and DEC VAX—did so in terms of the far more widespread adr+len model. Once Unix and C gained traction, however, the terminated string appeared on the radar as an optimization target, and CPU designers started to add instructions to deal with them. One example is the Logical String Assist instructions IBM added to the ES/9000 520-based processors in 1992.1

Adding instructions to a CPU is not cheap, and it happens only when there are tangible and quantifiable monetary reasons to do so.

Performance Costs

IBM added instructions to operate on NUL-terminated strings because its customers spent expensive CPU cycles handling such strings. That bit of information, however, does not tell us if fewer CPU cycles would have been required if a ptr+len format had been used.

Thinking a bit about virtual memory systems settles that question for us. Optimizing the movement of a known-length string of bytes can take advantage of the full width of memory buses and cache lines, without ever touching a memory location that is not part of the source or destination string.

One example is FreeBSD's libc, where the bcopy(3)/memcpy(3) implementation will move as much data as possible in chunks of "unsigned long," typically 32 or 64 bits, and then "mop up any trailing bytes" as the comment describes it, with byte-wide operations.2

If the source string is NUL terminated, however, attempting to access it in units larger than bytes risks attempting to read characters after the NUL. If the NUL character is the last byte of a VM (virtual memory) page and the next VM page is not defined, this would cause the process to die from an unwarranted "page not present" fault.

Of course, it is possible to write code to detect that corner case before engaging the optimized code path, but this adds a relatively high fixed cost to all string moves just to catch this unlikely corner case—not a profitable tradeoff by any means.

If we have out-of-band knowledge of the strings, things are different.

Compiler Development Cost

One thing a compiler often knows about a string is its length, particularly if it is a constant string. This allows the compiler to emit a call to the faster memcpy(3) even though the programmer used strcpy(3) in the source code.

Deeper code inspection by the compiler allows more advanced optimizations, some of them very clever, but only if somebody has written the code for the compiler to do it. The development of compiler optimizations has historically been neither easy nor cheap, but obviously Apple is hoping this will change with LLVM (Low-level Virtual Machine), where optimizers seem to come en gros.

The downside of heavy-duty compiler optimization—in particular, optimizations that take holistic views of the source code and rearrange it in large-scale operations—is that the programmer has to be really careful that the source code specifies his or her complete intention precisely. A programmer who worked with the compilers on the Convex C3800 series supercomputers related his experience as "having to program as if the compiler was my ex-wife's lawyer."

Security Costs

Even if your compiler does not have hostile intent, source code should be written to hold up to attack, and the NUL-terminated string has a dismal record in this respect. Utter security disasters such as gets(3), which "assume the buffer will be large enough," are a problem "we have relatively under control."3

Getting it under control, however, takes additions to compilers that would complain if the gets(3) function were called. Despite 15 years of attention, over- and under-running string buffers is still a preferred attack vector for criminals, and far too often it pays off.

Mitigation of these risks has been added at all levels. Long-missed no-execute bits have been added to CPUs' memory management hardware; operating systems and compilers have added address-space randomization, often at high costs in performance; and static and dynamic analyses of programs have soaked up countless hours, trying to find out if the byzantine diagnostics were real bugs or clever programming.

Yet, absolutely nobody would be surprised if Sony's troubles were revealed to start with a buffer overflow or false NUL-termination assumption.

Slashdot Sensation Prevention Section

We learn from our mistakes, so let me say for the record, before somebody comes up with a catchy but totally misleading Internet headline for this article, that there is absolutely no way Ken, Dennis, and Brian could have foreseen the full consequences of their choice some 30 years ago, and they disclaimed all warranties back then. For all I know, it took at least 15 years before anybody realized why this subtle decision was a bad idea, and few, if any, of my own IT decisions have stood up that long.

In other words, Ken, Dennis, and Brian did the right thing.

But That Doesn't Solve the Problem

To a lot of people, C is a dead language, and ${lang} is the language of the future, for ever-changing transient values of ${lang}. The reality of the situation is that all other languages today directly or indirectly sit on top of the Posix API and the NUL-terminated string of C.

When your Java, Python, Ruby, or Haskell program opens a file, its runtime environment passes the filename as a NUL-terminated string to open(2), and when it resolves queue.acm.org to an IP number, it passes the host name as a NUL-terminated string to getaddrinfo(3). As long as you keep doing that, you retain all the advantages when running your programs on a PDP-11, and all of the disadvantages if you run them on anything else.

I could write a straw man API proposal here, suggest representations, operations, and error-handling strategies, and I am quite certain that it would be a perfectly good waste of a nice afternoon. Experience shows that such proposals go nowhere because backwards compatibility with the PDP-11 and the finite number of programs already written are much more important than the ability to write the potentially infinite number of future programs in an efficient and secure way.

Thus, the costs of the Ken, Dennis, and Brian decision will keep accumulating, like the dust that over the centuries has almost buried the monuments of ancient Rome.


1. Computer Business Review. 1992. Partitioning and Escon enhancements for top-end ES/9000s; http://www.cbronline.com/news/ibm_announcements_71.

2. ViewVC. 2007. Contents of /head/lib/libc/string/bcopy.c; http://svnweb.freebsd.org/base/head/lib/libc/string/bcopy.c?view=markup.

3. Wikipedia. 2011. Lifeboat sketch; http://en.wikipedia.org/wiki/Lifeboat_sketch.



Poul-Henning Kamp ([email protected]) has programmed computers for 26 years and is the inspiration behind bikeshed.org. His software has been widely adopted as "under the hood" building blocks in both open source and commercial products. His most recent project is the Varnish HTTP accelerator, which is used to speed up large Web sites such as Facebook.

© 2011 ACM 1542-7730/11/0700 $10.00


Originally published in Queue vol. 9, no. 7




Comments (newest first)

krater | Tue, 16 Jan 2018 19:29:53 UTC

I imagine a today in which Ritchie & Co. had decided to use address + length to represent strings.

We would now have 1 to 8 different types of strings, with 1-, 2-, 4-, or 8-byte length fields, all incompatible with each other. Additionally, we would have a pile of overhead converting string type 1 to string type 2. We would have overhead accessing 2-byte-length strings on a 32-bit machine, because 16-bit access is slower than 32-bit access. We would have security issues because someone would have invented a string type that automatically recognizes the length of the size field, and some programs would handle these strings wrong, or some developers would use them wrong.

I don't see any difference at all...

cousteau | Fri, 12 Feb 2016 08:56:50 UTC

What about CRLF for newlines?

Certain OSes and many transfer protocols rely on the double-byte CRLF as a line terminator. This, as far as I know, is a historical remnant from teleprinters that relied on two separate "instructions" to move the printing head to the beginning of the line and feed the paper one line. Needless to say, nowadays there is no point in having 2 separate characters for this, as they have a single meaning together and no meaning separately.

The consequences of this might be expensive: a stream-processing mechanism must make the conversions on the fly. There are 2 different, incompatible ways to open a file. The "line terminator" cannot be found by a simple character search, but instead requires a substring match (which is even more complicated to do in hardware). Also, the cases of unmatched CR or LF will usually need to be treated specially. Plus, on a file with an average of 50 characters per line, this implies 2% extra storage size or transmission time. I don't know if this has great economic repercussions, but it's definitely not free.

Another candidate is UTF-16 (a rather inconvenient encoding whose only point is backwards compatibility with UCS-2), and Unicode adapting its size to it rather than the other way around. Because of it, every other Unicode encoding is required to treat the range D800-DFFF specially; plus it resulted in limiting Unicode to a bit more than one million code points, while UTF-8 had no problem going up to 2 billion. Now those extra code points must be treated as invalid, which complicates UTF-8 validity checking.

Robert | Tue, 01 Sep 2015 16:22:42 UTC

Another thing to consider is the sequential-access model used by "everything is a file" Unix, where typical programming idioms (such as those in the K&R book) revolve around reading 'until' some sentinel value. A file typically ends up terminated by EOF (represented as a 0 too), and you don't typically know where it's going to come unless you perform a length calculation first. Such a calculation is expensive on magnetic media, especially tapes. You could work around this again with your run-length byte at the start of a string, but then whoever is writing the data in the first place has to calculate the length beforehand, or rewind to the start of the string and put in the length after the write.

rh- | Sat, 24 Jan 2015 04:13:55 UTC

"Using an address + length format would cost one more byte of overhead than an address + magic_marker format" Huh? There I stopped reading. At the time nobody was talking about strings longer than 256 chars, as not many toys had much memory space to waste, and the screens to print them on were only 40 or 80 chars wide, and either adding a null terminator or adding a size byte (as in Pascal or Ada) would add a byte. Talking about longer strings (which would waste two bytes for the size) came much later, and even that was not true for long: when Unicode strings came, the null terminator would also waste two bytes. Lack of security? The same idiot who forgets to add the terminator would also forget to set the proper length. Also, when playing with strings, it would be a pain in the ass to always compute the right lengths; the backslash-zero solution is easier and more elegant.

Terry Davis | Tue, 30 Sep 2014 02:12:49 UTC

I had a teacher who was convinced bigendian was correct. Just goes to show how stupid people can be. I like NULL terminated. I am God's chosen.

Keith Thompson | Fri, 11 Apr 2014 19:34:11 UTC

> Should the C language represent strings as an address + length tuple or just as the address with a magic character (NUL) marking the end?

A C string isn't represented as address + marker. A C string is by definition "a contiguous sequence of characters terminated by and including the first null character". You can have a pointer to (equivalently, the address of) a string, but the string itself is not an address; it's a sequence of characters stored in an array. (And no, arrays are not pointers; see section 6 of the comp.lang.c FAQ, http://www.c-faq.com/, if you think they are.)

The point is that the proposed "address + length" representation would store information about the string in two different places.

An alternative might be Pascal-style counted strings; in C terms, you could store the length in the first (sizeof (size_t)) bytes of the char array.

Paul | Fri, 11 Apr 2014 16:18:09 UTC

How is the Therac-25 incident not the most expensive (also one-byte) bug?

theraot | Sat, 28 Jan 2012 13:35:28 UTC

Want to know Why?

This is the best explanation I have so far...

When B was designed there was no standard for the size of a byte or the size of a word. Sizes went from 5 bits to 22 bits, and it got worse later on; for example, the Cray-1 used 60 bits and the DEC Alpha 64 bits. And B was meant to be compilable for all these machines. We are talking about B here, so Brian Kernighan is innocent. As for Kenneth Lane Thompson and Dennis MacAlistair Ritchie... they decided that it was easier to have a null termination (a 0 of whatever number of bits it takes) than to manage two words, one for a pointer and one for the size.

Aside from that problem, which could have been solved by the compiler and virtual machine* anyway, they opted for immutable strings. Mutable strings seem like a good idea: for example, if you want a substring, that is easy with pointer and size. But it means you will have different pointers into the same area of memory, so you have to wait until you have stopped using all the substrings before freeing the memory of the main string. That means complications in keeping track of references, which is expensive when programming. Instead it is easier to make a copy of the string; the new string is stored independently and doesn't impose any limitation on releasing the memory of the former. This makes the life of the programmer easier, so why not use immutable strings and make life easier for the compiler too? It is known that null-terminated strings are less efficient than pointer-and-length strings, because the string must be copied on each concatenation or substring operation, although developing the compiler was easier (because of the byte-size problem I mentioned earlier).

* Remember that virtual machines were a new thing that appeared with BCPL, in which Ritchie participated (although BCPL didn't have strings per se); note that the development of B started in 1971 and was based on BCPL.

C inherited it from B, C++ inherited it from C, and Java, C#... inherited it from C++.

Today when we want to manipulate strings, unless we are doing a single operation, it is better to use a solution based on linked lists or arrays while we are crafting the string, and then retrieve the result as an immutable string.

Now, it is possible to make immutable strings with pointer and size, copying the string anyway so that keeping track of references stays simple. In that situation having the length doesn't bring any benefit, and as mentioned earlier it was harder because of the disparity of architectures among the computers of the time. Still, I know that it is much better to use pointer and length to display the string.

Today the situation is different: we all have 8-bit bytes, virtual machines are common, and we have the power to use garbage collectors. Is it time to develop a new mechanism for strings? I don't know; just keep in mind that immutable strings are good for thread safety, and computers with multiple cores are common too.

Maybe I should mention that BASIC did use something similar to pointer and length: the first data at the destination of the pointer was the length of the string. There is lost content from MSDN (I saw it in the version for Visual Studio '98) explaining that Microsoft decided to change the implementation of strings for BASIC (I think when they started to call it Visual Basic) because it increased performance when calling C++ libraries. [In fact, the old MSDN had a full discussion of the pros and cons of different ways to handle strings, including storing pointers to both the start and the end, having a pointer after each constant-size block of characters to allow non-contiguous strings, and so on... sadly those articles seem to be lost forever.]

Dennis Ritchie, rest in peace.

Dana | Tue, 03 Jan 2012 19:24:48 UTC

"...Using an address + length format would cost one more byte of overhead than an address + magic_marker format..."

Not really true. The null-terminated string also adds one byte at the end of the string - the null character. Address + length also limits the size of the string to 256 characters unless more than one byte is used as the length designator.

Bruno Vo_qui | Fri, 30 Dec 2011 15:18:28 UTC

May I say, the top post and comments are talking about 'in-band' string delimiters. But memory may be marked with 'out-of-band' properties, for instance ecc-read-write-execute and the like. What about an 'isSameString' out-of-band property?
