Originally published in Queue vol. 16, no. 2 — see this item in the ACM Digital Library
Tobias Lauinger, Abdelberi Chaabane, Christo Wilson - Thou Shalt Not Depend on Me
A look at JavaScript libraries in the wild
Robert C. Seacord - Uninitialized Reads
Understanding the proposed revisions to the C language
Carlos Baquero, Nuno Preguiça - Why Logical Clocks are Easy
Sometimes all you need is the right language.
Erik Meijer, Kevin Millikin, Gilad Bracha - Spicing Up Dart with Side Effects
A set of extensions to the Dart programming language, designed to support asynchrony and generator functions
(newest first)
Displaying 10 most recent comments. Read the full list here

I can only assume the author is a big fan of Intel's EPIC efforts?
What would be really interesting is if you could describe a programming model that would be better than C *and* not be implemented in C.
http://esr.ibiblio.org/?p=7979
In brief, I think Chisnall's critique is thought-provoking but his prescription mistaken; there are simply too many key algorithms that are what I call SICK ("Serial, Intrinsically; Cope, Kiddo") for his ideas about processor design to play well with real workloads.
"C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a high-level assembler: the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program."
To be sure, the C Standard does not require that implementations be suitable for low-level programming. On the other hand, it does not require that they be suitable for *any* particular purpose. The C89 Rationale notes, in 2.4.4.1, "While a deficient implementation could probably contrive a program that meets this requirement, yet still succeed in being useless, the Committee felt that such ingenuity would probably require more work than making something useful."
While the C Standard itself makes no reference to "quality", except with regard to the "randomness" produced by rand() and random(), the rationale uses the phrase "quality of implementation" a fair number of times. From Section 3 of the C99 Rationale: "The goal of adopting this categorization is to allow a certain variety among implementations which permits quality of implementation to be an active force in the marketplace as well as to allow certain popular extensions, without removing the cachet of conformance to the Standard."
For some reason, it has become fashionable to view the "ingenuity" alluded to in 2.4.4.1 of the C89 Rationale as a good thing, but the text makes it clear it isn't. The only reason the authors didn't explicitly discourage it is that they thought the effort required would be an adequate deterrent. Alas, they were mistaken.
I imagine that an architecture with better control of such a fast-designed processor would do the following:

- Explicit caching. Basically, instead of using the SRAM memory banks as caches that are invisible to the architecture, expose them as actual memory banks with separate pointer spaces and allocation endpoints (e.g. sbrk_L1(), sbrk_L2(), sbrk_L3(), sbrk_main(), or slightly more abstract and portable names) and let programs use them as they please.
- Explicit access to the individual execution units, including cheap threads.
- No preemptive multitasking - since we have so many parallel execution units, we can have a few kernel threads always running and watchdogging the other threads, and kill them if they're being onerous. Preemptive multitasking was needed when there was only one processor in the system and a single bad program could bring the whole thing down.
- Instead, if a program needs an execution unit, the architecture can just give it one. It can run as long as it wants and yield to the scheduler either to be considerate or to wait for user input or for a lock or condition variable. Most execution units, however, will just terminate their programs quickly, meaning that the expense of running the scheduler is not often incurred (and if it is, it doesn't need to save as much since it can expect the program to save state).
- To avoid triggering OS syscalls too much, the OS could avoid triggering a fault on the "new execution unit" instruction unless the execution unit is not allowed to do this or if no more units are available.
- Bonus thought: You could even ask for an execution unit on every function call - there is no stack space! The program stack idea is instead implemented as a wait/spawn chain, and all functions run asynchronously with a set contract for returning values (such as writing data to a malloc_L1() pointer).
- No attempt on the processor's part at ILP - the architecture will just suck it up and run each instruction in order on the same unit. A high-level language might do execution unit scheduling on its own in the compiler phase, but the assembly will tell you exactly how each execution unit is used.
- An ability to bypass the paging system, if we don't completely throw it out.
- If the architecture had the ability to actually read its own instruction pointer, paging would be much less useful because we could use relative addressing for everything - just transfer the instruction pointer into the accumulator and calculate any necessary jump points and memory lookups.
- Paging is still useful for memory protection, though, so we could still use it even if we expressed it in terms of physical pages without doing any address translation.
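A minimal sketch of what the "explicit caching" and cheap-execution-unit ideas above might look like from C. The names sbrk_L1() and spawn_unit() are hypothetical (modeled on the comment, not any real platform); they are stubbed here with ordinary memory and a synchronous call so the sketch compiles and runs.

```c
/* Hypothetical sketch only: no real platform exposes cache banks or
   execution units like this.  The "L1 bank" is a static buffer and the
   spawned unit just runs synchronously, standing in for real hardware. */
#include <stdio.h>
#include <stddef.h>
#include <stdalign.h>

static alignas(max_align_t) unsigned char l1_bank[32 * 1024]; /* pretend L1 SRAM */
static size_t l1_used;

static void *sbrk_L1(size_t n) {          /* bump-allocate out of the "L1" bank */
    if (l1_used + n > sizeof l1_bank) return NULL;
    void *p = l1_bank + l1_used;
    l1_used += n;
    return p;
}

static int spawn_unit(void (*fn)(void *), void *arg) {
    fn(arg);     /* stand-in: a real unit would run this asynchronously */
    return 0;
}

static void scale_chunk(void *arg) {      /* worker sized to stay in "L1" */
    float *v = arg;
    for (int i = 0; i < 256; i++) v[i] *= 2.0f;
}

int main(void) {
    float *hot = sbrk_L1(256 * sizeof *hot);  /* hot data placed explicitly */
    if (!hot) return 1;
    for (int i = 0; i < 256; i++) hot[i] = (float)i;

    spawn_unit(scale_chunk, hot);             /* claim a unit, no preemption */

    printf("hot[10] = %g\n", hot[10]);        /* prints 20 */
    return 0;
}
```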
As you said, old code would not run well on such an architecture, but that would be because it's old x86 and Unix/Windows code and not strictly because it's old C code. C has been adapted to lots of programming models before, and it could potentially be adapted to one like this, too.
This can never be true: an OS cannot change a page after it has been accessed; that would be completely broken. Lazy recycling in an OS means that your memory allocation is validated immediately, but the page is not actually mapped into your address space until you make the first access, whether that is a read or a write. The OS then gets an invalid-page exception, checks whether the read or write was to a memory address you actually own, and if so assigns a zeroed page to that offset (or, if the page had been swapped out, reads it back in), maps it into your memory map, and returns control to the program. In no OS that actually works will the data in memory change from under your feet, unless you have hardware errors, the writer of the OS got something really wrong, or you are reading a hardware register; but this was about memory returned from malloc, so that does not apply.
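A small demonstration of the lazy mapping being described, assuming Linux-style demand paging and using mmap() directly rather than malloc() so the allocation size is predictable: resident memory stays small right after the mapping is created and only grows once the pages are first touched and the kernel services the resulting faults with zero-filled pages.

```c
/* POSIX-specific sketch; ru_maxrss is reported in kilobytes on Linux. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long max_rss_kb(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_maxrss;            /* peak resident set size so far */
}

int main(void) {
    size_t len = 256UL * 1024 * 1024;   /* 256 MiB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("after mmap:   ~%ld kB resident\n", max_rss_kb());

    memset(p, 1, len);                  /* first touch: faults map real pages */
    printf("after memset: ~%ld kB resident\n", max_rss_kb());

    munmap(p, len);
    return 0;
}
```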
...that still would have been much better than having compiler writers interpret that 6.5p7 (or its predecessor) was intended to apply even in cases where an lvalue had an otherwise-obvious association with an object of the correct type, with the sole exception of those that even the most obtuse compiler writer would have to admit would otherwise be absurd [e.g. accessing s.x directly using the member-access operator].
For example, given "struct S { int m; } s;", evaluation of "s.m" will invoke Undefined Behavior because it accesses an object of type "struct S" with an lvalue of type "int", and "int" is not one of the types via which N1570 p6.5p7 would allow an object of type "struct S" to be accessed. The equivalent text in C89 had the same problem. Obviously there must be some circumstances where an lvalue can access an object of otherwise-incompatible type, but the Standard fails to say what those are, and nearly all confusion surrounding aliasing is a result of different people trying to figure out when that rule does or does not apply.
This problem could have been remedied with Defect Report #028 if the authors had noted that the rule was meant to require that the lvalue used for access must have an active association with an lvalue of a proper type. While that would be a bit vague in the absence of a definition of "active association",