
Call That Gibberish?

Stan Kelly-Bootle

The Ninth World Multiconference SCI (Systemics, Cybernetics, and Informatics) 2005 has attracted more attention than its vaporific title usually merits by accepting a spoof paper from three MIT graduate students. The Times (of London, by default, of course) ran the eye-catching headline, “How gibberish put scientists to shame” (April 6, 2005). One of the students, Jeremy Stribling, explains how they had developed a computer program to generate random sequences of technobabble in order to confirm their suspicions that papers of dubious academicity were bypassing serious, or indeed, any scrutiny. In fact, the students claim ulterior, financial motives behind this lack of proper peer review. The SCI organizers, it is suggested, solicit submissions via substantial e-mail shots and usually charge a fee for accepted papers to be presented at their conference.

There are many precedents both for computer-generated nonsense and for fooling various academic entities with manually contrived gobbledygook. The former were originally for fun, such as the simple Datamation Jargon Generator, which randomly concatenated several fashionable buzzwords from a standard list. If the object was to shame the marketeers, it has clearly failed. Some linguists suggest that the word system has become grammaticalized1 as a delimiting tag to signal the end of these random strings of jargon. Thus: X is a stable, seamless, portable, multiparadigmatic, polymorphic, user-centric, OS-neutral... executive control language system. The tag persists in other domains, as when “my beliefs” are elevated to “my belief system.”
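The Datamation-style generator described above takes only a few lines to reconstruct. This is a sketch in its spirit only: the buzzword list is my own invention, not the original Datamation inventory, with the grammaticalized system tag dutifully appended.

```python
import random

# Illustrative buzzwords -- my own assumptions, not Datamation's standard list.
BUZZWORDS = [
    "stable", "seamless", "portable", "multiparadigmatic",
    "polymorphic", "user-centric", "OS-neutral", "scalable",
    "mission-critical", "fault-tolerant",
]

def jargon(n=5):
    """Randomly concatenate n buzzwords, closing with the delimiting tag."""
    words = random.sample(BUZZWORDS, n)
    return "a " + ", ".join(words) + " executive control language system"

print(jargon())
```

A typical run yields something like “a seamless, polymorphic, OS-neutral, scalable, stable executive control language system,” which would not look out of place in a press release.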

Of course, there’s also plenty of serious computer-generated nonsense in the form of the source-code and HTML pages emitted peristaltically2 by expensive high-level tools and frameworks. Human programmers are suitably affrighted when inspecting the resulting code glut, yet encouraged that their skills are still in demand to trim the fat! The outrageous chatspace offers living examples of code that invites the exclamation “WTF!?” (An expansion of WTF and its siblings FUBAR and RTFM can be mailed to you in a plain brown envelope upon request.)

Human-crafted scientific lampoonery forms a well-attested and enduring genre, exemplified by the JIR (Journal of Irreproducible Results), founded by Alexander Kohn and Harry J. Lipkin in Israel back in 1955, and still flourishing under Norman Sperling’s editorship from JIR’s HQ in San Mateo, California. The Web site lacked an up-to-date, secure subscription entry form when I resubscribed recently. You can either print and snail-mail your order or e-mail two “half-applications” using two separate target addresses. You enter half of your credit-card data in one e-mail, and the remaining half in the other. Now there’s a challenge for those who answered the Gospel call: “Come, I will make you Phishers of Men...” (Mark 1:17). In the case of my credit rating, I can echo Iago: “Who steals my identity, steals trash.”

An institution that parallels and often converges with the JIR is the Ig Nobel Awards Committee, although this extends the genre more from deliberate parody to authentic research that has all the hallmarks of spoof and grandiose silliness.

If you’ve witnessed an Ig Nobel Award ceremony, it’s clear that most of the winners who turn up are by no means disgraced by their “anti-prizes.” This, I suppose, is not only a byproduct of our celebrity culture (being famous for being infamous, or vice versa) but also a tribute to the uncruel, teasing humor underlying the whole affair. One might even suspect that some researchers, unlikely ever to get the nod from Stockholm, consciously inflate their prose with half-an-eye on the Ig Nobel. One of my favorites was the 2001 Ig Nobel Peace Prize awarded to the Lithuanian mushroom-soup millionaire Viliumas Malinauskas for his Disney-style amusement theme park called Stalin World. Many of the toppled statues of Lenin and Stalin have found a new home, and visitors can enjoy a “Day in the Life of the Gulag,” sipping cold, weak borscht while the tannoys crackle with Soviet songs and propaganda. Cynics have compared this project favorably with Henry Kissinger’s real Nobel Peace Prize in 1973.

The very fact that “real” and “fake” dissertations are becoming difficult to distinguish is a direct result of ever-growing specialization. Mathematician D. E. Littlewood3 repeats the old saw that specialists get to know more and more about less and less, extrapolated until they know everything about epsilon. After retelling the Old Testament Babel myth, Littlewood observes: “The story is not without relevance to the science of today, which aspires in some respects beyond the heavens. The curse of the confusion of tongues is no less apt. What scientist can read with interest a technical paper in a different branch of science from his own? What mathematician can read with profit research papers on a topic on which he has not specialized knowledge?”

Ironically, one of Littlewood’s pioneering papers on algebraic groups was rejected by a leading mathematical journal on the grounds that it was incomprehensible. This reminds us that the word gibberish and its diverse cognates reveal, etymologically at least, elements of subjective and potentially invalid judgments. Thus, babble is what babes and inarticulate brooks do. Both gibberish and gobbledygook are meaningless turkey-talk, and the Greeks, hearing only sheepish “baa-baas” from their non-Helleniphone neighbors, dubbed them all barbarians. Even jargon started life as the French for “the twittering of birds,” although a less derogatory secondary meaning has evolved: the specialized lexicon of a particular trade or domain.

With this in mind, we return to the duping of the SCI conference reviewers. One of the hoax papers accepted was “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy.” The introduction asserts, “Certainly, the usual methods for the emulation of Smalltalk that paved the way for the investigation of rasterization do not apply here.” This is a wonderful parody for those of us who know the jargon. The pun on root/router is par for MIT-graduate humor, and at least one occurrence of methodology is mandatory. Perhaps, however, the rules of grammar are too closely observed, and a cynic might take this as a hint that the authors are not real scientists! Yet, again, some sort of information is being presented, and it could be that the domain of discourse happens to fall just outside the reader’s knowledge base. After all, Smalltalk and GUIs came from Xerox PARC where, no doubt, rasterizations of all kinds were investigated, implemented, and (nudge-nudge, say-no-more) stolen by Apple or IBM?
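The students’ generator works by recursively expanding a context-free grammar, which is precisely why its output observes the rules of syntax while meaning nothing at all. A toy sketch of the technique follows; the grammar below is my own hand-rolled invention, not the actual SCIgen rule set.

```python
import random

# A tiny context-free grammar in the SCIgen manner. Every production is
# syntactically sound, so the output always parses -- and never signifies.
GRAMMAR = {
    "SENTENCE": [["Certainly,", "NP", "VP", "."]],
    "NP": [["the", "ADJ", "NOUN"], ["the", "NOUN", "of", "NOUN"]],
    "VP": [["does", "not", "apply", "here"],
           ["paves", "the", "way", "for", "NP"]],
    "ADJ": [["usual"], ["typical"], ["redundant"]],
    "NOUN": [["emulation"], ["rasterization"], ["methodology"],
             ["unification"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol until only terminals remain."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

sentence = " ".join(expand("SENTENCE")).replace(" .", ".")
print(sentence)
```

One run might produce “Certainly, the usual methodology does not apply here.” Grammatically impeccable, semantically void: the Rooter recipe in miniature.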

One’s puzzlement is increased when the Times quotes the following excerpt, clearly expecting us to roll over in disbelief that such gibberish could slip through the widest of editorial sieves: “We compared throughput on the Microsoft Windows Longhorn, Ultrix, and Microsoft Windows 2000 operating systems.”

The sentence qua sentence is a tad bizarre but quite sound technically. The three named products are indeed real or pending operating systems (Longhorn being the public in-house moniker for Microsoft’s next big thingy) for which throughput can be (and often is) meaningfully compared.

I suppose the conclusion is that a reliable gibberish filter requires a careful holistic review by several peer domain experts. Each word and each sentence may well prove individually impeccable yet nonsense in toto,4 which probably rules out, for many years to come, a computerized filter for both human- and computer-generated hoaxes.

The D. E. Littlewood problem will remain: a paper so advanced that its author is, at the time, the only expert able to understand it. How well I know his angst, shared by all whose submissions have ever been spiked. Perhaps we can live with a hanging Judge Jeffreys’ agenda: Better that a hundred good papers are rejected than that one hoaxer or crank be published.


  1. Here’s a relevant example of how a well-defined technical term, familiar within the trade, can come across to the unversed as an abomination, a gross mangling of decent English, a Bushism even, or at least the product of some damned Yankee. Technically, we have grammatical words and concrete words. Grammatical words, such as then or or, perform (possibly) vital structural and semantic duties without “naming” particular things or concepts. In this context, love and phlogiston are rated as concrete. Amazingly (to outsiders!), words can, over time, shift between (or maybe straddle) these two categories. The more common transition is when a concrete word becomes grammatical. What better than to call this process grammaticalization? John McWhorter (The Story of Human Language, The Teaching Company, 2004) is fond of citing the French pas (concrete step), grammaticalizing into not. An open question: Were all grammatical words, including prefixes and suffixes, originally concrete?
  2. I thought some of you might need help with this one. I learned it from the Cambridge philosopher Simon Blackburn (author of Lust and, out soon, Truth: A Guide for the Perplexed), although peristalsis is a medical euphemism: successive waves of involuntary contractions passing along the walls of a hollow muscular structure, esp. the intestine, and forcing the contents onward. In three words: uncontrolled bowel movement.
  3. Littlewood, D. E. 1949. The Skeleton Key of Mathematics—A Simple Account of Complex Algebraic Theories. Hutchinson & Co. Republished 2002, Dover Publications, Mineola, NY. Dover Publications continues its good works: making earlier classics available at reasonable prices.
  4. A disconcerting example hit me last night, listening to an audio CD of Jane Austen’s Pride and Prejudice with my player inadvertently left in shuffle (random) mode. Fortunately, I know the plot well enough to notice that Elizabeth and Darcy seemed to have shacked up naughtily in Pemberley before he had successfully proposed. But I plan to shuffle through William S. Burroughs’ Naked Lunch as soon as possible.

STAN KELLY-BOOTLE, born in Liverpool, England, read pure mathematics at Cambridge in the 1950s before tackling the impurities of computer science on the pioneering EDSAC I. His many books include The Devil’s DP Dictionary (McGraw-Hill, 1981) and Understanding Unix (Sybex, 1994). Software Development Magazine has named him as the first recipient of the new annual Stan Kelly-Bootle ElecTech Award for his “lifetime achievements in technology and letters.” Neither Nobel nor Turing achieved such prized eponymous recognition. Under his nom-de-folk, Stan Kelly, he has enjoyed a parallel career as a singer and songwriter.

© 2005 ACM 1542-7730/05/0700 $5.00


Originally published in Queue vol. 3, no. 6