
Sir, Please Step Away from the ASR-33!

by Poul-Henning Kamp | October 25, 2010

Topic: Programming Languages


To move forward with programming languages we need to break free from the tyranny of ASCII.


One of the naughty details of my Varnish software is that the configuration is written in a domain-specific language that is converted into C source code, compiled into a shared library, and executed at hardware speed. That obviously makes me a programming language syntax designer, and just as obviously I have started to think more about how we express ourselves in these syntaxes.

Rob Pike recently said some very pointed words about the Java programming language which, if you think about it, sounded a lot like the pointed words James Gosling had for C++, and remarkably similar to what Bjarne Stroustrup said about good ol' C.

I have always admired Pike. He was already a giant in the field when I started, and his ability to foretell the future has been remarkably consistent [1]. In front of me I have a tough row to hoe, but I will attempt to argue that this time Pike is merely rearranging the deckchairs of the Titanic and that he missed the next big thing by a wide margin.

Pike got fed up with C++ and Java and did what any self-respecting hacker would do: he created his own language—better than Java, better than C++, better than C—and he called it Go.

But did he go far enough?

      package main

      import "fmt"

      func main() {
            fmt.Printf("Hello, World\n")
      }

This does not look substantially different from any of the other programming languages. Fiddle a couple of glyphs here and there and you have C, C++, Java, Python, Tcl, or whatever.

Programmers are a picky bunch when it comes to syntax, and it is a sobering thought that one of the most rapidly adopted programming languages of all time, Perl, barely had one for the longest time. The funny thing is, what syntax designers are really fighting about is not so much the proper and best syntax for the expression of ideas in a machine-understandable programming language as it is the proper and most efficient use of the ASCII table real estate.

IT'S ALL ASCII TO ME...

There used to be a programming language called ALGOL, the lingua franca of computer science back in its heyday. ALGOL was standardized around 1960 and dictated about a dozen mathematical glyphs such as ×, ÷, ¬, and the very readable subscripted 10 symbol, for use in what today we call scientific notation. Back then computers were built by hand and had one-digit serial numbers. Having a teletypewriter customized for your programming language was the least of your worries.

A couple of years later came the APL programming language, which included an extended character set containing a lot of math symbols. I am told that APL still survives in certain obscure corners of insurance and economics modeling.

Then ASCII happened around 1963, and ever since, programming languages have been trying to fit into it. (Wikipedia claims that ASCII grew the backslash [\] specifically to support ALGOL's /\ and \/ Boolean operators. No source is provided for the claim.)

The trouble probably started for real with the C programming language's need for two kinds of and and or operators. It could have used just or and bitor, but | and || saved one and three characters, which on an ASR-33 teletype amounts to 1/10 and 3/10 second, respectively.

It was certainly a fair tradeoff—just think about how fast you type yourself—but the price for this temporal frugality was a whole new class of hard-to-spot bugs in C code.
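
A minimal sketch of the bug class (using the and-flavored operators, where a one-keystroke slip actually flips the result; note that C++ has the spelled-out forms built in as alternative tokens, and a 1995 amendment retrofitted them onto C as macros in <iso646.h>):

      #include <cstdio>

      int main()
      {
          int a = 1, b = 2;

          // One keystroke apart, one truth value apart:
          if (a && b)
              std::printf("a && b is true\n");  // prints: 1 and 2 are both nonzero
          if (a & b)
              std::printf("a & b is true\n");   // never prints: 1 & 2 == 0

          // The spelled-out operator the teletype budget vetoed:
          if (a or b)
              std::printf("a or b is true\n");
      }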

Niklaus Wirth tried to undo some of the damage in Pascal, and the bickering over begin and end would no } take.

C++ is probably the language that milks the ASCII table most by allowing templates and operator overloading. Until you have inspected your data types, you have absolutely no idea what + might do to them (which is probably why there never was enough interest to stage an International Obfuscated C++ Code Contest, parallel to the IOCCC for the C language).

C++ stops short of allowing the programmer to create new operators. You cannot define :-: as an operator; you have to stick to the predefined set. If Bjarne Stroustrup had been more ambitious on this aspect, C++ could have beaten Perl by 10 years to become the world's second write-only programming language, after APL.
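
A minimal sketch of the kind of surprise an overloaded + permits; the Money type and its rounding choice are invented here purely for illustration:

      #include <iostream>

      // A perfectly legal, perfectly misleading overload: this +
      // rounds each operand down to whole currency units before
      // adding. Nothing at the call site hints at that.
      struct Money {
          long cents;
      };

      Money operator+(Money a, Money b)
      {
          return Money{ (a.cents / 100 + b.cents / 100) * 100 };
      }

      int main()
      {
          Money a{150}, b{175};                // $1.50 and $1.75
          std::cout << (a + b).cents << "\n";  // prints 200, not 325
      }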

How desperate the hunt for glyphs is in syntax design is exemplified by how Guido van Rossum did away with the canonical scope delimiters in Python, relying instead on indentation for this purpose. What could possibly be of such high value that a syntax designer would brave the controversy this caused? Freeing up a high-value pair of matching glyphs, { and }, for other uses in his syntax. (This decision also made it impossible to write Fortran programs in Python, a laudable achievement in its own right.)

The best example of what happens if you do the opposite is John Ousterhout's Tcl programming language. Despite all its desirable properties—such as being created as a language to be embedded in tools—it has been widely spurned, often with arguments about excessive use of, or difficult-to-figure-out placement of, {} and [].

My disappointment with Rob Pike's Go language is that the rest of the world has moved on from ASCII, but he did not. Why keep trying to cram an expressive syntax into the straitjacket of the 95 glyphs of ASCII when Unicode has been the new black for most of the past decade?

Unicode has the entire gamut of Greek letters, mathematical and technical symbols, brackets, brockets, sprockets, and weird and wonderful glyphs such as "Dentistry symbol light down and horizontal with wave" (U+23C7). Why do we still have to name variables OmegaZero when our computers now know how to render U+03A9 followed by U+2080 properly?
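
Some of this is in fact already within reach; a minimal sketch, assuming a compiler that accepts UTF-8 in identifiers (GCC 10 or newer, or a current Clang):

      #include <iostream>

      int main()
      {
          // The Greek capital Omega (U+03A9) is a valid identifier
          // character to such compilers. The subscript zero (U+2080)
          // is a dicier proposition across compilers and standard
          // revisions, so the zero here stays ASCII.
          double Ω0 = 1.0;
          std::cout << "Ω0 = " << Ω0 << "\n";
      }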

The most recent programming-language syntax development that had anything to do with character sets other than ASCII was when the ISO C standard committee adopted trigraphs to make it possible to enter C source code on computers that do not even have ASCII's 95 glyphs available—a bold and decisive step in the wrong direction.
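
For readers who never met one: a trigraph is a three-character ??-sequence that the compiler replaces in the very first translation phase, before anything else happens. A minimal sketch (trigraphs were removed in C++17, so this needs an older mode, e.g. g++ -std=c++14 -trigraphs):

      ??=include <cstdio>          // ??= stands for #

      int main()
      ??<                          // ??< and ??> stand for { and }
          std::printf("Hello, trigraphs!??/n");  // ??/ stands for \, so this prints a newline
      ??>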

While we are at it, have you noticed that screens are getting wider and wider these days, and that today's text processing programs have absolutely no problem with multiple columns, insert displays, and hanging enclosures being placed in that space?

But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

And need I remind anybody that you cannot buy a monochrome screen anymore? Syntax-coloring editors are the default. Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

For some reason computer people are so conservative that we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.

And, yes, me too: I wrote this in vi(1), which is why the article does not have all the fancy Unicode glyphs in the first place.

Reference

1. Pike, R. 2000. Systems software research is irrelevant; http://herpolhode.com/rob/utah2000.pdf.



Poul-Henning Kamp (phk@FreeBSD.org) has programmed computers for 26 years and is the inspiration behind bikeshed.org. His software has been widely adopted as "under the hood" building blocks in both open source and commercial products. His most recent project is the Varnish HTTP accelerator, which is used to speed up large Web sites such as Facebook.

© 2010 ACM 1542-7730/10/1000 $10.00


Originally published in Queue vol. 8, no. 10


  • POUL-HENNING KAMP (phk@FreeBSD.org) is one of the primary developers of the FreeBSD operating system, which he has worked on from the very beginning. He is widely unknown for his MD5-based password scrambler, which protects the passwords on Cisco routers, Juniper routers, and Linux and BSD systems. Some people have noticed that he wrote a memory allocator, a device file system, and a disk encryption method that is actually usable. Kamp lives in Denmark with his wife, his son, his daughter, about a dozen FreeBSD computers, and one of the world's most precise NTP (Network Time Protocol) clocks. He makes a living as an independent contractor doing all sorts of stuff with computers and networks.


Comments

  • Hans Kruse | Tue, 02 Nov 2010 00:17:45 UTC

    Please let us break the terror of the typewriter. In daily life we use symbols in many places because plain text does not work.

    Symbols are more succinct (less verbose) in expressing numbers (0, 1, 2, ..., 9), math, music, chemistry, religion, army ranks, marketing, smilies, and many other things. They allow you to see the forest for the trees, without cluttering your work with common knowledge that can easily be expressed using a symbol, e.g. using the ForEach or Sigma symbol instead of spelling out ForEach and Sigma over and over again in a formula.

    Road signs exist for a reason. In traffic you need to evaluate them in a short amount of time. Road signs mainly express a message using shape, size, colour, orientation, and location.

    Imagine replacing all road signs overnight with plain square white signs bearing four-word black texts. People would no longer be able to instantly spot the more important signs. As a side effect the economy would grind to a halt because people would drive slower due to all the sign reading... And what if the road sign texts were changed from English to Chinese?

    In software engineering the use of design patterns is (becoming) common knowledge. Patterns such as Iterator do not need to be explained or defined over and over again because software engineers are trained in using them.

    Using ASCII to express symbols is often very inefficient, e.g. ManagerSatisfactionBonusRateFactorReasonerEnum. When literally translated to Chinese using a translation service, this gets reduced to fifteen (15) characters. A lot of people in the world speak Chinese. Maybe for programming in general it would be useful to learn Chinese, because it is more expressive character-wise.

    Once taught, the symbols in the examples mentioned above allow for more efficient expression in the domains they are meant for. The question should be: what can be gained in expressiveness using symbols, versus the effort of teaching people and adapting tools to deal with (Unicode) symbols?

    Choosing identifier names wisely, with regard to common practice and the problem domain, comes first. Unicode may then be a useful tool for reducing clutter, once education and tooling are taken care of.
  • KPG | Tue, 02 Nov 2010 01:42:34 UTC

    I spent years dealing with EBCDIC and mapping ASCII to/from EBCDIC. Anyone reading this who experienced Waterloo C trigraphs or had to output ASCII via an IBM 1403 print chain knows.
    
    I like ASCII.  Like EBCDIC, it's compact (< 1 byte) and small enough to memorize.  But better, it's contiguous.
    
    ASCII shines brightest in its ordinal arrangement. The simple rules '0' < '9', 'A' < 'Z', and 'a' < 'z' make describing and implementing collations trivial.  Implementing a comparison is far more straightforward with while (*string1++ == *string2++);
    
    Adding more symbols makes a mess. In what order shall we put 'A', 'Á', 'Â', 'À', 'Ä', 'Å', 'Ã', and 'Æ'? Is it language dependent? In which countries? In what dialects?
    
    Once, in pre-Unicode days, I had to manage CJK card catalog entries in an ASCII-only text engine. The solution mapped CJK glyphs onto four-character strings during input/output. Using 0-9 and A-Z, the code space was 1.6MM, large enough. It worked. Perhaps P-H K would prefer this more general solution?
    
    
  • KPG | Tue, 02 Nov 2010 01:45:56 UTC

    Amusement. The system gonked my previous post. Both line breaks AND the several Unicode 'A'-like symbols got mangled.
  • Hans | Tue, 02 Nov 2010 15:26:30 UTC

    Why mix together the use of characters in a programming language with pure editor features like coloring certain regions of code or floating other regions above or beside the rest?

    How exactly would Unicode characters improve a language's syntax? We have () <> [] {} already, right? How many more open/close characters can we actually introduce before they start being too similar and give rise to bugs like the ones related to || that you describe in the article? And what would be the gain? We might "free up" {} or whatever for use in some other part of the language. But, um, let's see: you seem to be the only one who thinks that we are running out of characters and that it might be a problem. You argue for using "bitor" over || and at the same time you claim that more characters would improve anything. It's quite ridiculous.

    But please, if you would like to submit patches for Eclipse, NetBeans, or Qt Creator that color private variables, BE MY GUEST. It probably takes as much time as writing this whole pointless article, seeing as how the syntax-coloring infrastructure is already in there.
  • fotis | Tue, 02 Nov 2010 19:26:16 UTC

    Here was some good punching at "modern" computer languages' deficiencies. After having used nearly every form of programming environment (imperative, OO, functional, "logical", RPN breeds included), the one thing that strikes me is how inclined I still am to use humble shell scripting to prototype first ideas, mostly due to the freely available pipelining mechanism, which has certain performance advantages, despite the difficulty of maintaining shell code. Computer languages as we know them need major overhauls, both on the front ends and on the back ends. IMHO, Go is not too bad in its front-end aspects, though as a developer I would have liked to be able to trade the brackets vs. indentation business (less LOC => faster r/w).
  • KB | Tue, 02 Nov 2010 19:59:23 UTC

    "It could have used just or and bitor, but | and || saved one and three characters, which on an ASR-33 teletype amounts to 1/10 and 3/10 second, respectively."
    You reversed the tokens for or and bitor voiding much of your "speed" argument -- you do not gain anything by using "||" over "or". 
    
  • Christophe de Dinechin | Fri, 05 Nov 2010 08:25:22 UTC

    I've looked through this entire page, and the word "semantics" is not written once. I've stopped counting "syntax". But the problem is not the syntax; it's the semantics that makes a language more or less expressive.

    The steps forward in programming happened when functions, or objects, or distributed programming, or operator overloading, or exceptions, or concurrency became usable by programmers. It doesn't really change much whether you describe "task" with a Chinese glyph or the four ASCII characters t-a-s-k; what matters is what it means, not how it looks.
    
    In my own programming language, XL, the syntax is all ASCII, because there's a single "built-in" operator, ->. But you can use it to invent your own notation, as in:
    
        if true then X:code else Y:code  -> X
        if false then X:code else Y:code -> Y
    
    See http://xlr.sourceforge.net/Concept%20Programming%20Presentation.pdf for a more in-depth discussion of this "concepts come first" approach, which I called, obviously enough, "concept programming".
    
  • Mikel | Sat, 06 Nov 2010 05:10:50 UTC

    Turns out Dragon NaturallySpeaking doesn't support Unicode, so voice recognition is no solution to OmegaZero yet either.
  • clive | Sat, 19 Feb 2011 17:54:15 UTC

    Poul-Henning - very compelling article. And very amusing in parts, especially the bit about "the world's second write-only programming language, after APL". It would be great to use all the variety of expression allowed by Unicode, but what you're forgetting is the keyboard. Until we speak into our computers to program them, the keyboard is king. And on a keyboard with typically 102 keys it's not possible to do much beyond ASCII. (At least I don't want to be holding down 4 keys at once!!)
  • Zorba | Mon, 23 Jan 2012 23:27:34 UTC

    Nuts. I'd be happy if we could get rid of ALGOL's stupid syntax altogether, with its trailing semicolons and (often) case sensitivity. Of all the languages that had to spawn the modern crop, why did it have to be ALGOL? While we're at it: it should be a law that zeros are slashed in all character sets, under_scores are outlawed, and keyboards have the control key next to the 'A' key where it belongs!
    
    As for ASCII, why not code in Baudot?