
The Bike Shed

Programming Languages



Originally published in Queue vol. 8, no. 10



Robert C. Seacord - Uninitialized Reads
Understanding the proposed revisions to the C language

Carlos Baquero, Nuno Preguiça - Why Logical Clocks are Easy
Sometimes all you need is the right language.

Erik Meijer, Kevin Millikin, Gilad Bracha - Spicing Up Dart with Side Effects
A set of extensions to the Dart programming language, designed to support asynchrony and generator functions

Dave Long - META II: Digital Vellum in the Digital Scriptorium
Revisiting Schorre's 1962 compiler-compiler


(newest first)

Displaying the 10 most recent comments.

John Mark Isaac Madison | Mon, 04 Sep 2017 05:16:52 UTC

Can you guess what a HeatTreatedMilledOatsWaterAndSweetCrystallineSubstanceMixture is?

It's a revolutionary program that made "cake" before people knew what cake, ovens, flour, and sugar were.

Or, you could just call that variable ß. ß == "Meal" in Japanese. You can then document in the class what ß is composed of.

People might say this "isn't clean code". I disagree. It's very concise and what it is can be described in the class file comments.

The only thing naming it HeatTreatedMilledOatsWaterAndSweetCrystallineSubstanceMixture will do for you is: 1. Take up too much column space. 2. Allow people to make misguided assumptions about what the class is, which could lead to coding without reading the source.

Zorba | Mon, 23 Jan 2012 23:27:34 UTC

Nuts. I'd be happy if we could get rid of ALGOL's stupid syntax altogether, with its trailing semicolons and (often) case sensitivity. Of all the languages that had to spawn the modern crop, why did it have to be ALGOL? While you're at it, it should be a law that zeros are slashed in all character sets, under_scores are outlawed, and keyboards have the Control key next to the 'A' key where it belongs!

As for ASCII, why not code in Baudot?

clive | Sat, 19 Feb 2011 17:54:15 UTC

Poul-Henning, very compelling article, and very amusing in parts, especially the bit about "the world's second write-only programming language, after APL". It would be great to use all the variety of expression allowed by Unicode, but what you're forgetting is the keyboard. Until we speak into our computers to program them, the keyboard is king. And on a keyboard with typically 102 keys it's not possible to do much beyond ASCII. (At least I don't want to be holding down 4 keys at once!)

Mikel | Sat, 06 Nov 2010 05:10:50 UTC

Turns out Dragon NaturallySpeaking doesn't support Unicode, so voice recognition is no solution to OmegaZero yet either.

Christophe de Dinechin | Fri, 05 Nov 2010 08:25:22 UTC

I've looked through this entire page, and the word "semantics" does not appear once. I've stopped counting "syntax". But the problem is not the syntax; it's the semantics that make a language more or less expressive.

The steps forward in programming happened when functions, or objects, or distributed programming, or operator overloading, or exceptions, or concurrency became usable by programmers. It doesn't really change much whether you describe "task" with a Chinese glyph or the four ASCII characters t-a-s-k; what matters is what it means, not how it looks.

In my own programming language, XL, the syntax is all ASCII, because there's a single "built-in" operator, ->. But you can use it to invent your own notation, as in:

if true then X:code else Y:code -> X
if false then X:code else Y:code -> Y

See http://xlr.sourceforge.net/Concept%20Programming%20Presentation.pdf for a more in-depth discussion of this "concepts come first" approach, which I called, obviously enough, "concept programming".

KB | Tue, 02 Nov 2010 19:59:23 UTC

"It could have used just or and bitor, but | and || saved one and three characters, which on an ASR-33 teletype amounts to 1/10 and 3/10 second, respectively." You reversed the tokens for or and bitor, voiding much of your "speed" argument -- you do not gain anything by using "||" over "or".

fotis | Tue, 02 Nov 2010 19:26:16 UTC

Here was some good punching at the deficiencies of "modern" computer languages. After having used nearly every form of programming environment (imperative, OO, functional, "logical", RPN breeds included), the one thing that strikes me is how inclined I still am to use humble shell scripting to prototype first ideas, mostly because of the freely available pipelining mechanism, which has certain performance advantages, despite the difficulty of maintaining shell code. Computer languages as we know them need major overhauls, both on the front-ends and on the back-ends. IMHO, Go is not too bad in its front-end aspects, though as a developer I would have liked to be able to trade the brackets-vs-indentation business (less LOC => faster r/w).

Hans | Tue, 02 Nov 2010 15:26:30 UTC

Why mix together the use of characters in a programming language with pure editor features like coloring certain regions of code or floating other regions above/beside the next?

How exactly would Unicode characters improve a language's syntax? We have () <> [] {} already, right? How many more open/close characters can we introduce before they start being too similar and give rise to bugs like the ones related to || that you describe in the article? And what would be the gain? We might "free up" {} or whatever for use in some other part of the language. But, let's see, you seem to be the only one who thinks that we are running out of characters and that it might be a problem. You argue for using "bitor" over || and at the same time you claim that more characters would improve anything. It's quite ridiculous.

But please, if you would like to submit patches for Eclipse, NetBeans, or Qt Creator that color private variables, BE MY GUEST. It probably takes as much time as writing this whole pointless article, seeing as how the syntax coloring infrastructure is already in there.

KPG | Tue, 02 Nov 2010 01:45:56 UTC

Amusement. The system gonked my previous post. Both line breaks AND the several Unicode 'A'-like symbols got mangled.

KPG | Tue, 02 Nov 2010 01:42:34 UTC

I spent years dealing with EBCDIC and mapping ASCII to/from EBCDIC. Anyone reading this who experienced Waterloo C trigraphs or had to output ASCII via an IBM 1403 print chain knows.

I like ASCII. Like EBCDIC, it's compact (it fits in a byte) and small enough to memorize. But better, it's contiguous.

ASCII shines brightest in its ordinal arrangement. The simple rules '0' < '9', 'A' < 'Z', and 'a' < 'z' make describing and implementing collations trivial. Implementing a comparison is far more straightforward with while (*string1++ == *string2++);

Adding more symbols makes a mess. In what order shall we put 'A', 'Á', 'À', 'Â', 'Ã', 'Ä', 'Å', 'Ā', 'Ă', 'Ą', 'Æ', and 'Ǻ'? Is it language dependent? In which countries? In what dialects?

Once, in pre-Unicode days, I had to manage CJK card catalog entries in an ASCII-only text engine. The solution mapped CJK glyphs onto four-character strings during input/output. Using 0-9 and A-Z, the code space was 1.6MM, large enough. It worked. Perhaps P-H K would prefer this more general solution?


© 2018 ACM, Inc. All Rights Reserved.