
The Bike Shed


My Compiler Does Not Understand Me

Until our programming languages catch up, code will be full of horrors

Poul-Henning Kamp

Only lately—and after a long wait—have a lot of smart people found audiences for making sound points about what and how we code. Various colleagues have been beating drums and heads together for ages trying to make certain that wise insights about programming stick to neurons. Articles on coding style in this and other publications have provided further examples of such advocacy.

As with many other educational efforts, examples that are used to make certain points are, for the most part, good examples: clear, illustrative, and easy to understand. Unfortunately, the flame kindled by an article read over the weekend often lasts only until Monday morning rolls around when real-world code appears on the screen with a bug report that just doesn’t make sense—as in, “This can’t even happen.”

When I began writing the Varnish HTTP accelerator, one of my design decisions—and I think one of my best decisions—was to upgrade my OCD to CDO, the more severe variant, where you insist letters be sorted alphabetically. As an experiment, I pulled together a number of tricks and practices I had picked up over the years and turned them all up to 11 in the Varnish source code. One of these tricks has been called the red-haired stepchild of good software engineering and is widely shunned by most programmers for entirely wrong and outdated reasons. So let me try to legitimize it with an example.

Here is a surprisingly hard programming problem: What do you do when close(2) fails?

Yes, close(2) does in fact return an error code, and most programmers ignore it, figuring that either: (a) it cannot fail; or (b) if it does, you are screwed anyway, because obviously the kernel must be buggy. I don’t think it is OK just to ignore it, since a program should always do something sensible with reported errors. Ignoring errors means that you have to deduce what went wrong based on the debris it causes down the road, or worse, that some criminal will exploit your code later on. The one true ideal might appear to be, “Keep consistent and carry on,” but in the real world of connected and interacting programs, you must make a careful determination as to whether it is better to abort the program right away or to soldier on through adversity, only to meet certain ruin later.
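By way of illustration, here is a hedged sketch of one such determination. The names and the policy are mine, not Varnish's actual code:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Illustrative policy, not Varnish's actual code: any failing
 * close(2) is treated as a program error.  EBADF means the
 * descriptor was never open or was closed twice - a logic bug;
 * other errors (e.g. EIO from a flushing filesystem) mean data may
 * already be lost.  Reporting here beats debugging the debris. */
static void
checked_close(int fd)
{
	if (close(fd) == 0)
		return;
	fprintf(stderr, "close(%d) failed: %s\n", fd, strerror(errno));
	abort();
}
```

Whether abort() is the right reaction is exactly the careful determination the text describes; the point is that the decision is made once, visibly, instead of being ignored everywhere.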

Realizing that “I have only a very small head and must live with it,”1 sensible compromises must be made—for example, a tradeoff between the probability of the failure and the effort of writing code to deal with it. There is also a real and valid concern about code readability—handling unlikely exceptions should not dominate the source code.

In Varnish the resulting compromise typically looks like this:

        AZ(close(fd));

AN is a macro that means Assert Nonzero and AZ means Assert Zero, and if the condition does not hold, the program core-dumps right then and there.
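A minimal sketch of what such macros can look like — simplified; the real Varnish versions record considerably more state before dumping core:

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-ins for the AZ/AN idea: evaluate the expression
 * exactly once, and core-dump with a message naming the expression
 * and its location if the condition does not hold.  The do/while(0)
 * wrapper makes each macro behave as a single statement. */
#define AZ(e)                                                   \
	do {                                                    \
		if ((e) != 0) {                                 \
			fprintf(stderr,                         \
			    "Assert error: %s, %s:%d\n",        \
			    #e, __FILE__, __LINE__);            \
			abort();                                \
		}                                               \
	} while (0)

#define AN(e)                                                   \
	do {                                                    \
		if ((e) == 0) {                                 \
			fprintf(stderr,                         \
			    "Assert error: %s, %s:%d\n",        \
			    #e, __FILE__, __LINE__);            \
			abort();                                \
		}                                               \
	} while (0)
```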

Yes, the red-haired stepchild I want to sell you is the good old assert, which I feel should be used a lot more in today’s complicated programs. Where I judge that the probability of failure is relevant, I use two other variants of those macros, XXXAN and XXXAZ, to signal, “This can actually happen, and if it happens too much, we should handle it better.”

        XXXAN(retval = strdup(of));

This distinction is also made in the dump message, which for AZ() is “Assert error” vs. XXXAZ()’s “Missing error-handling code.”

Where I want to ignore a return value explicitly, I explicitly do so:

        (void)close(fd);

Of course, I also use “naked” asserts to make sure there are no buffer overruns:

        assert(size < sma->sz);

or to document important assumptions in the code:

        assert(sizeof (unsigned short) == 2);

But we are not done yet. One very typical issue in C programs is messed-up lifetime control of allocated memory, typically accessing a struct after it has been freed back to the memory pool.

Passing objects through void* pointers, as one is forced to do when simulating object-oriented programming in C, opens another can of worms. Here is my brute-force approach to these problems:

        struct lru {
                unsigned                         magic;
                #define LRU_MAGIC         0x3fec7bb0
                /* ... */
        };

        struct lru *l;
        ALLOC_OBJ(l, LRU_MAGIC);

The ALLOC_OBJ and FREE_OBJ macros ensure that the MAGIC field is set to the randomly chosen nonce when that piece of memory contains a struct lru and is set to zero when it does not.
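A simplified sketch of the mechanism — not verbatim Varnish code, the real macros differ in detail:

```c
#include <stdlib.h>

/* Simplified sketch: stamp the magic nonce on allocation, wipe it
 * on free, so a stale pointer no longer carries a valid nonce and
 * any later magic check on it will fire instead of silently
 * reading garbage. */
#define ALLOC_OBJ(to, type_magic)                       \
	do {                                            \
		(to) = calloc(1, sizeof *(to));         \
		if ((to) != NULL)                       \
			(to)->magic = (type_magic);     \
	} while (0)

#define FREE_OBJ(to)                                    \
	do {                                            \
		(to)->magic = 0;                        \
		free(to);                               \
		(to) = NULL;                            \
	} while (0)
```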

In code that gets called with an lru pointer, another macro checks that the pointer points to what we think it does:

        foo(struct lru *l)
        {
                CHECK_OBJ_NOTNULL(l, LRU_MAGIC);
                /* ... */
        }

If the pointer comes in as a void *, then a macro casts it to the desired type and asserts its validity:

        static void *
        vwp_main(void *priv)
        {
                struct vwp *vwp;

                CAST_OBJ_NOTNULL(vwp, priv, VWP_MAGIC);
                /* ... */
        }
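Sketches of these two macros, again simplified — the Varnish originals fail through the same core-dumping machinery as AZ/AN rather than plain assert(3):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified sketches: verify that a pointer is non-NULL and that
 * the memory it points at still carries the expected magic nonce,
 * i.e. that it really is a live object of the claimed type. */
#define CHECK_OBJ_NOTNULL(ptr, type_magic)              \
	do {                                            \
		assert((ptr) != NULL);                  \
		assert((ptr)->magic == (type_magic));   \
	} while (0)

/* Convert an incoming void * to the typed pointer and validate it
 * in one step. */
#define CAST_OBJ_NOTNULL(to, from, type_magic)          \
	do {                                            \
		(to) = (from);                          \
		CHECK_OBJ_NOTNULL(to, type_magic);      \
	} while (0)
```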

In terms of numbers, 10 percent of the non-comment source lines in Varnish are protected with one of the asserts just shown, and that is not counting what gets instantiated via macros and inline functions.


All this checking is theoretically redundant, particularly the cases where function A will check a pointer before calling function B with it, only to have function B check it again.

Though it may look like madness, there is reason for it: these asserts also document the assumptions of the code. Traditionally, that documentation appears in comments: “Must be called with a valid pointer to a foobar larger than 16 frobozz” and so on. The problem with comments is that the compiler ignores them and doesn’t complain when they disagree with the code; therefore, experienced programmers don’t trust them either. Documenting assumptions so that the compiler pays attention to them is a much better strategy. All this “pointless checking” drives a certain kind of performance aficionado up the wall, and more than one has tried stripping Varnish of all this “fat.”

If you try that using the standardized -DNDEBUG mechanism, Varnish does not work at all. If you do it a little bit smarter, then you will find no relevant difference and often not even a statistically significant difference in performance.
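The reason the standardized mechanism fails outright: -DNDEBUG makes assert() expand to nothing, so any side effect placed inside an assert disappears with it. A contrived illustration of my own, not taken from Varnish:

```c
#include <assert.h>
#include <stdlib.h>

/* Contrived illustration: with NDEBUG undefined, this allocates
 * and checks in one line.  Compiled with -DNDEBUG the whole assert
 * expression - malloc() included - vanishes, p stays NULL, and the
 * store below crashes.  Code with side effects inside assert()
 * therefore cannot simply be rebuilt with -DNDEBUG. */
static char
alloc_and_tag(void)
{
	char *p = NULL;
	char c;

	assert((p = malloc(10)) != NULL);  /* side effect in assert! */
	p[0] = 'x';
	c = p[0];
	free(p);
	return (c);
}
```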

Asserts are much cheaper than they used to be for three reasons:

• Compilers have become a lot smarter, and their static analysis and optimization code will happily remove a very large fraction of my asserts, having concluded that they can never trigger. That’s good, as it means that I know how my code works.

• The next reason is the same, only the other way around: the asserts put constraints on the code, which the static analysis and optimizer can exploit to produce better code. That’s particularly good, because it means my asserts actively help the compiler produce better code.

• Finally, the sad fact is that today’s CPUs spend an awful lot of time waiting for stuff to come in from memory—and performing a check on data already in the cache in the meantime is free. I do not claim that asserts are totally free—if nothing else, they do waste a few nanojoules of electricity—but they are not nearly as expensive as most people assume, and they offer a very good bang-for-the-buck in program quality.


In the long term, you should not need to use asserts, at least not as much as I do in Varnish, because at the end of the day, they are just hacks used to paper over deficiencies in programming languages. The holy grail of programming is “intentional programming,” where the programmer expresses his or her exact and complete intention, and the compiler understands it. Looking at today’s programming languages, I see that we still have a long way to go; these days we are stuck not on compilers, but on languages.

Compilers today know things about your code that you probably never realize, because they apply a chess-grandmaster-like analysis to it. Programming languages, however, do not become better vehicles for expressing intent; quite the contrary, in fact.

It used to be that you picked a width for your integer variable from whatever register sizes your computer had: char, short, int, or long. But how could you choose between a short and a long if you didn’t know their actual sizes?

The answer is that you couldn’t, so everybody made assumptions about the sizes, picked variable types, and hoped for the best. I don’t know how this particular mistake happened. We would have been in much better shape if the fundamental types had been int8, int16, int32, and int64 from the start, because then programmers could state their intentions and leave the optimization to the compiler, rather than trying to outguess the compiler.

Some languages—Ada, for example—have done it differently, by allowing range constraints as part of variable declarations:

        Month : Integer range 1..12;

This could be a pretty smooth and easy upgrade to languages such as C and C++ and would provide much-needed constraints to modern compiler analysis. One particularly strong aspect of this format is that you can save space and speed without losing clarity:

       Door_Height: Integer range 150..400;

This fits comfortably in eight bits, and the compiler can apply the required offset where needed, without the programmer even knowing about it.
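C has no such construct, but the transformation an Ada compiler performs invisibly can be spelled out by hand. A sketch, with names of my own invention:

```c
#include <assert.h>
#include <stdint.h>

/* Hand-rolled "Integer range 150..400": store the value biased by
 * the lower bound, so the 251 possible values fit in one byte, and
 * assert the range on every conversion.  An Ada compiler does all
 * of this behind the programmer's back. */
enum { DOOR_MIN = 150, DOOR_MAX = 400 };

typedef uint8_t door_height_t;        /* holds height - DOOR_MIN */

static door_height_t
door_encode(int height)
{
	assert(height >= DOOR_MIN && height <= DOOR_MAX);
	return ((door_height_t)(height - DOOR_MIN));
}

static int
door_decode(door_height_t d)
{
	return ((int)d + DOOR_MIN);
}
```

Note how much ceremony it takes to state, in C, an intention that the Ada declaration expresses in one line.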

Instead of such increased granularity of intention, however, 22-plus years of international standardization have yielded <stdint.h> with its uint_least16_t, to which <inttypes.h> contributes PRIuLEAST16, and on the other side <limits.h> with UCHAR_MAX, UINT_MAX, ULONG_MAX, but, inexplicably, USHRT_MAX, which confused even the person who wrote od(1) for The Open Group.

This approach has so many things wrong with it that I barely know where to start. If you feel like exploring it, try to find out how to portably sprintf(3) a pid_t right-aligned into an eight-character string.
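For the record, the least-bad answer I know of leans on C99's widest integer type — a workaround, not a solution, since nothing guarantees a prettier way:

```c
#include <inttypes.h>
#include <stdio.h>
#include <sys/types.h>

/* pid_t is some signed integer type of unspecified width, so the
 * portable escape hatch is to widen it to intmax_t and print with
 * the %jd conversion - here right-aligned in eight characters. */
static void
format_pid(char *buf, size_t len, pid_t pid)
{
	snprintf(buf, len, "%8jd", (intmax_t)pid);
}
```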

The last time I looked, we had not even found a way to specify the exact layout of a protocol packet and the byte-endianness of its fields. But, hey, it’s not like CPUs have instructions for byte swapping or that we ever use packed protocol fields anyway, is it?
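In the meantime, serializing even a 16-bit field remains a manual affair. Here is a sketch of the shift-based idiom, which at least stays correct regardless of host endianness:

```c
#include <stdint.h>

/* The by-hand idiom: the shifts define network byte order
 * explicitly, so the code works on either host endianness -
 * exactly the kind of intent a language should let us declare
 * in the struct definition instead. */
static void
put_be16(uint8_t *p, uint16_t v)
{
	p[0] = (uint8_t)(v >> 8);
	p[1] = (uint8_t)(v & 0xff);
}

static uint16_t
get_be16(const uint8_t *p)
{
	return ((uint16_t)((p[0] << 8) | p[1]));
}
```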

Until programming languages catch up, you will find me putting horrors such as the following in my source code, to try to make my compiler understand me:

       #define CTASSERT(x, z)        _CTASSERT(x, __LINE__, z)
       #define _CTASSERT(x, y, z)    __CTASSERT(x, y, z)
       #define __CTASSERT(x, y, z)   \
               typedef char __ct_assert ## y ## __ ## z [(x) ? 1 : -1]

       CTASSERT(sizeof(struct wfrtc_proto) == 32,
               wfrtc_proto_size);


1. Dijkstra, E. W. 2010. Programming considered as a human activity.



POUL-HENNING KAMP ([email protected]) has programmed computers for 26 years and is the inspiration behind bikeshed.org. His software has been widely adopted as “under-the-hood” building blocks in both open source and commercial products. His most recent project is the Varnish HTTP accelerator, which is used to speed up large Web sites such as Facebook.

© 2012 ACM 1542-7730/12/0500 $10.00


Originally published in Queue vol. 10, no. 5



Nicole Forsgren, Mik Kersten - DevOps Metrics
Your biggest mistake might be collecting the wrong data.

Alvaro Videla - Metaphors We Compute By
Code is a story that explains how to solve a particular problem.

Ivar Jacobson, Ian Spence, Pan-Wei Ng - Is There a Single Method for the Internet of Things?
Essence can keep software development for the IoT from becoming unwieldy.

Ivar Jacobson, Ian Spence, Ed Seidewitz - Industrial Scale Agile - from Craft to Engineering
Essence is instrumental in moving software development toward a true engineering discipline.


Comments (newest first)

Jon W | Sun, 03 Dec 2017 21:45:28 UTC

"specify the exact layout of a protocol packet and the byte-endianess of its fields" <-- even at the time of writing, the Erlang binary term syntax allowed you to do just that, even including functional pattern matching for components. So perhaps the problem is that you're looking in the wrong place?

Fredrik Skeel Løkke | Sun, 27 May 2012 09:25:28 UTC

Code contracts are expressed in the language itself; they are code artifacts. They are asserted at runtime or, in some cases, such as in the .net toolchain, it is possible to statically verify them, which means that contractual breaches can be caught at compile time. I wouldn't recommend this for most projects since it's time-consuming. Instead I would rely on exhaustive testing to verify my contracts, preferably via a tool such as Pex that dynamically analyses my code, then generates tests for all paths while trying to break my contracts.

Range constraints do not equal dependent types ;)

Poul-Henning Kamp | Sat, 26 May 2012 16:39:16 UTC

I'm not sure I have ever fully understood the buzz behind "Contract Based Programming."

On one side, declaring a variable to be integer is a contract with the compiler about the use we plan to put that variable to vs. how the compiler will have to treat it. I'm fine with that, but don't see what the CBP metaphor brings to the table.

On the other side, writing lengthy comments about what a function must, should, will & won't do is, at best, like real-world contracts: a document you can use to appeal to some higher power that the other guy shafted you, and, like real-world contracts, you won't actually know what it means until that higher power has interpreted it. Since nobody but humans can or will interpret these comments, I don't see their relevance to programming as a man-machine activity, only to programming as a manage-many-programmers activity.

If we were able to express the contracts comprehensively inside the language, and the compiler would refuse to compile contractual breaches ("You're attempting to call FroBozz() but you are not on the north side of the Great Flood Control Dam #5"), then I would buy into the idea, would call it a step towards intentional programming, and would lose the "contract" metaphor :-)

It is my impression that we have neither attained the ability to express our intentions at that level, nor found out how to make compilers understand and validate them. (See also: machine-generated proofs.)

Range constraints are not mine, I believe they originate in PASCAL or Ada and they have certainly been out of academia for many more years than I have been programming.

That other languages, like C, have not picked up that idea, precisely because it comes from the PASCAL/Ada end of the world, is testament to how far Computer Science is from actual Science.

Fredrik Skeel Løkke | Sat, 26 May 2012 09:25:14 UTC

Your assertions are reminiscent of contract based programming. Here assertions are used to specify the preconditions and postconditions of a function, the 'contract'. Contracts are seen as the specification and the function body as its implementation. I'm personally trying to promote this style in the projects I'm working on. Some of the benefits of contracts: living documentation, no need for defensive programming, a systematic approach to error handling (only throw when you can't fulfill your contract) and, last but not least, contracts as a design tool.

Your range constraints are expressible with dependent types. But they haven't made it out of academia yet..

Poul-Henning Kamp | Thu, 24 May 2012 23:39:42 UTC

First, let me say that you only see a very small part of the picture above; feel free to check the actual Varnish source code to learn more.

Second, if anybody tries to modify Varnish based on assumptions rather than doing their homework, they're probably in for an interesting ride. Varnish is written to be high-performance on modern hardware, and contains a lot of tricky stuff, including a compiler for a domain-specific language.

Thirdly, the Varnish assert functions do go out of their way to make sure usable debugging information is recorded so we can diagnose and reproduce, including dumping relevant state and data structures and a backtrace.

(Which brings me to another rant topic: WTF does ISO-C not provide a portable way to get the best estimate of a backtrace the compiler can provide?)

Brooks Moses | Thu, 24 May 2012 06:27:40 UTC

I strongly agree with most of this article, but I do disagree on one point: the assert() macro should never be used in a way that will cause the program to work incorrectly if compiled with -DNDEBUG, as Varnish reportedly does. The assert() macro has an established meaning that includes the idea that its argument has no side effects -- and other programmers who are working with your code will likely rely on this, in ways beyond simply compiling with NDEBUG defined. Misusing it this way is also simply a bad habit to get into because people do define NDEBUG in other codebases, and there this habit will bite you with bugs that only appear in the "release" build.

In cases where you do want to make an assertion about an expression with necessary side effects, the correct solution is to write your own error-handling code that is not affected by NDEBUG, so that your intention is correctly aligned with the language semantics.

Defining your own always-on assertion handler also has a secondary benefit: it encourages you to think for a moment about how the failure should be handled in the deployed code if it occurs. Perhaps the error will be lost if printed to stderr and should be logged instead or displayed in a dialog box; perhaps some sort of attempt to save the user's unsaved work should be made before aborting. If the assertion handler handles errors in a user-appropriate way, that's one less argument against leaving it turned on.


