

Programming Languages



Originally published in Queue vol. 16, no. 2
see this item in the ACM Digital Library



Tobias Lauinger, Abdelberi Chaabane, Christo Wilson - Thou Shalt Not Depend on Me
A look at JavaScript libraries in the wild

Robert C. Seacord - Uninitialized Reads
Understanding the proposed revisions to the C language

Carlos Baquero, Nuno Preguiça - Why Logical Clocks are Easy
Sometimes all you need is the right language.

Erik Meijer, Kevin Millikin, Gilad Bracha - Spicing Up Dart with Side Effects
A set of extensions to the Dart programming language, designed to support asynchrony and generator functions


(newest first)

Displaying 10 most recent comments. Read the full list here

Juneyoung Lee | Sat, 27 Oct 2018 17:01:43 UTC

For the unspecified value thing - you may want to add PLDI'17 to your reference? :) ( )

For the provenance thing - this OOPSLA'18 paper is exactly about the issue! ( )

Roberto Maurizzi | Tue, 16 Oct 2018 02:33:17 UTC

C doesn't guarantee anything about memory protection: the CPU (or better, its MMU) does, as anyone who wrote C programs on MMU-less processors can tell you. On those systems (typically single-task, but not always; see the Commodore Amiga) a program had full read and write access to the processor's entire address space, and a 'lost pointer' could easily fill all of memory with garbage and crash the OS. What Intel and friends did is actually even worse: for the sake of compatibility they hid the real internal structure of the processor from the 'external' assembly language and architecture, and then they didn't emulate this memory-protection architecture, probably for the sake of speed. They allow things that would be illegal for the processor they're emulating because they wanted speed AND backward compatibility, because Microsoft back in the day refused to even think about porting Windows to different architectures.

Tom Sobota | Thu, 12 Jul 2018 13:51:16 UTC

Years ago I programmed a lot on the PDP-11, be it in assembler, Fortran, or C. So I find that the author is right when he says that C is a low-level language on those machines. It is, or better, it was. When I started to program for the x86 architecture, back in the eighties, I couldn't help but notice that the code generated from C was not so low-level anymore, since PDP-11 instructions with pre- or post-increment weren't there, and the addressing modes were different.
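
The PDP-11 correspondence mentioned above shows up in a classic idiom: C's *p++ mapped one-for-one onto the PDP-11's autoincrement addressing mode, while modern ISAs need separate load, store, and increment instructions. A minimal sketch:

```c
/* Copy a NUL-terminated string. On the PDP-11, the body of this loop
 * compiled to a single MOVB instruction using autoincrement addressing
 * on both operands; on x86 the compiler emits separate load, store,
 * and pointer-increment instructions instead. */
char *copy_str(char *dst, const char *src) {
    char *start = dst;
    while ((*dst++ = *src++) != '\0')
        ;                          /* all the work happens in the condition */
    return start;
}
```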

I have nothing against C/C++; I still use them frequently. But I wonder if some new language that could give us that sensation of control over program execution wouldn't be welcome. Ditto for a processor architecture with execution parallelism controllable from the language. RISC-V or something?

Blue | Fri, 06 Jul 2018 11:37:57 UTC

The author simply assumes the x86 platform, then? A good part of the C code in existence isn't written for desktop and server applications anyway, but for sequentially working MCUs, which are a lot closer to the original 8086 chip. Also, "For example, in C, processing a large amount of data means writing a loop that processes each element sequentially" isn't always true, depending on the use case (for example, binary search or jump search don't need to iterate through each element of ordered data).
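
The binary-search point is easy to make concrete: over sorted data, only O(log n) elements are ever probed, not each element in turn. A minimal sketch:

```c
#include <stddef.h>

/* Binary search over a sorted int array. Unlike a sequential loop,
 * it inspects O(log n) elements. Returns the index of key, or -1. */
ptrdiff_t find_sorted(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;                 /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* midpoint without overflow */
        if (a[mid] < key)
            lo = mid + 1;
        else if (a[mid] > key)
            hi = mid;
        else
            return (ptrdiff_t)mid;
    }
    return -1;                             /* not present */
}
```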

Anon | Thu, 24 May 2018 03:22:29 UTC

I can only assume the author is a big fan of Intel's EPIC efforts?

mlwmohawk | Mon, 14 May 2018 21:36:07 UTC

This is an excellent troll and strawman argument. The "C" programming language enforces nothing in the way of a memory model, threads, or processor layout. The language itself can be applied to almost any hypothetical processor system; all that is required is some sort of sane operational characteristic that can be programmed. I will grant that most developers and generic libraries assume things like malloc, stacks, caches, and serial execution - but not C. C takes an expression and translates it to a set of instructions, nothing less and nothing more.

What would be really interesting is if you could describe a programming model that would be better than C *and* not be implemented in C.

Eric S. Raymond | Thu, 10 May 2018 14:22:21 UTC

I have blogged a detailed response at

In brief, I think Chisnall's critique is thought-provoking but his prescription mistaken; there are simply too many key algorithms that are what I call SICK ("Serial, Intrinsically; Cope, Kiddo") for his ideas about processor design to play well with real workloads.
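
A concrete instance of such an intrinsically serial algorithm is any loop with a carried dependency, where iteration i+1 cannot start until iteration i has produced its result, so no amount of ILP or vectorization helps. A minimal sketch (the logistic map is an editorial illustration, not Raymond's own example):

```c
/* Iterate the logistic map x -> r*x*(1-x). Each iteration consumes
 * the previous iteration's result, so the loop is intrinsically
 * serial: the hardware cannot overlap or reorder the multiplies
 * across iterations. */
double iterate_logistic(double x, int n) {
    for (int i = 0; i < n; i++)
        x = 3.9 * x * (1.0 - x);   /* x[i+1] depends on x[i] */
    return x;
}
```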

John Payson | Wed, 09 May 2018 20:58:19 UTC

Perhaps the simplest rebuttal to the author's primary point is to quote from the introduction to the published Rationale for C99:

"C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a high-level assembler: the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program."

To be sure, the C Standard does not require that implementations be suitable for low-level programming. On the other hand, it does not require that they be suitable for *any* particular purpose. The C89 Rationale notes, in, "While a deficient implementation could probably contrive a program that meets this requirement, yet still succeed in being useless, the Committee felt that such ingenuity would probably require more work than making something useful."

While the C Standard itself makes no reference to "quality", except with regard to the "randomness" produced by rand() and random(), the Rationale uses the phrase "quality of implementation" a fair number of times. From Section 3 of the C99 Rationale: "The goal of adopting this categorization is to allow a certain variety among implementations which permits quality of implementation to be an active force in the marketplace as well as to allow certain popular extensions, without removing the cachet of conformance to the Standard."

For some reason, it has become fashionable to view the "ingenuity" alluded to in the C89 Rationale as a good thing, but the text makes it clear that it isn't. The only reason the authors didn't explicitly discourage it is that they thought the effort required would be an adequate deterrent. Alas, they were mistaken.
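
The conforming/strictly-conforming distinction the quoted Rationale draws can be made concrete. Code that relies on implementation-defined behavior - for example, the result of right-shifting a negative signed value (C99 6.5.7) - is conforming but not strictly conforming. The sketch below assumes the two's-complement arithmetic shift that virtually every current implementation provides:

```c
/* Implementation-defined: the C Standard leaves the result of >> on
 * a negative signed operand to the implementation. On the common
 * two's-complement machines with arithmetic shift, halve(-8) is -4.
 * A program depending on that is "conforming" machine-specific C,
 * but not "strictly conforming". */
int halve(int x) {
    return x >> 1;
}
```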

Anon | Sun, 06 May 2018 10:33:54 UTC

Holy mother of god. Let's put all the blame on C, of all things, and just pardon incompetence on all sides.

Hans Jorgensen | Sat, 05 May 2018 17:46:38 UTC

I think that most of these problems actually have to do with x86 and Unix/Windows, not C. The problems you mention - instruction-level parallelism, invisible caches, paging behavior - stem from the process model, which puts programs into monolithic units that think they have the flat memory space to themselves. GPU code, even with much better parallelism and memory usage, is still remarkably C-like (or even explicitly C in the case of Nvidia's CUDA), so I imagine that C (which is still a high-level language) could be adapted to use these alternate paradigms.

I imagine that an architecture giving better control of such a processor would do the following:

- Explicit caching. Basically, instead of using the SRAM memory banks as caches that are invisible to the architecture, expose them as actual memory banks with separate pointer spaces and allocation endpoints (e.g. sbrk_L1(), sbrk_L2(), sbrk_L3(), sbrk_main(), or slightly more abstract and portable names) and let programs use them as they please.
- Explicit access to the individual execution units, including cheap threads.
- No preemptive multitasking. Since we have so many parallel execution units, we can have a few kernel threads always running and watchdogging the other threads, and kill them if they're being onerous. Preemptive multitasking was needed when there was only one processor in the system and a single bad program could bring the whole thing down.
- Instead, if a program needs an execution unit, the architecture can just give it one. It can run as long as it wants and yield to the scheduler either to be considerate or to wait for user input, a lock, or a condition variable. Most execution units, however, will just terminate their programs quickly, meaning that the expense of running the scheduler is not often incurred (and if it is, it doesn't need to save as much state, since it can expect the program to save its own).
- To avoid triggering OS syscalls too often, the OS could avoid raising a fault on the "new execution unit" instruction unless the execution unit is not allowed to do this or no more units are available.
- Bonus thought: you could even ask for an execution unit on every function call - there is no stack! The program-stack idea is instead implemented as a wait/spawn chain, and all functions run asynchronously with a set contract for returning values (such as writing data to a malloc_L1() pointer).
- No attempt on the processor's part at ILP - the architecture will just suck it up and run each instruction in order on the same unit. A high-level language might do execution-unit scheduling on its own in the compiler phase, but the assembly will tell you exactly how each execution unit is used.
- An ability to bypass the paging system, if we don't completely throw it out.
- If the architecture had the ability to read its own instruction pointer, paging would be much less useful, because we could use relative addressing for everything: just transfer the instruction pointer into the accumulator and calculate any necessary jump points and memory lookups.
- Paging is still useful for memory protection, though, so we could still use it even if we expressed it in terms of physical pages without doing any address translation.
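
The explicit-caching idea above can be sketched as an allocator interface. The sbrk_L1()-style names are the commenter's hypothetical API; since no mainstream hardware exposes its cache banks this way, the sketch simulates each level with an ordinary static array and a bump pointer:

```c
#include <stddef.h>

/* Hypothetical explicit-cache allocator in the spirit of the proposed
 * sbrk_L1()/sbrk_L2() endpoints. Each "bank" stands in for a physical
 * SRAM level; allocation is a simple bump pointer, and exhaustion
 * returns NULL so the caller can spill to the next level. */
enum { L1_BYTES = 32 * 1024, L2_BYTES = 256 * 1024 };

static unsigned char l1_bank[L1_BYTES], l2_bank[L2_BYTES];
static size_t l1_used, l2_used;

static void *bump(unsigned char *bank, size_t cap, size_t *used, size_t n) {
    if (cap - *used < n)
        return NULL;               /* bank exhausted: spill or fail */
    void *p = bank + *used;
    *used += n;
    return p;
}

void *sbrk_L1(size_t n) { return bump(l1_bank, L1_BYTES, &l1_used, n); }
void *sbrk_L2(size_t n) { return bump(l2_bank, L2_BYTES, &l2_used, n); }
```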

As you said, old code would not run well on such an architecture, but that would be because it's old x86 and Unix/Windows code and not strictly because it's old C code. C has been adapted to lots of programming models before, and it could potentially be adapted to one like this, too.


© 2018 ACM, Inc. All Rights Reserved.