Curmudgeon


Words Fail Them

Dedesignating and other linguistic hazards

Stan Kelly-Bootle, Author

A recent announcement on the closing of an English nudist beach (have I captured your attention so early?) concluded with an apology to “all the naturalists” affected. This upset the “bird watchers,” both naturalists and naturists (nudge, nudge), as well as those “word watchers” devoted to gooder English. Miffed and bemused letters appeared in Sally Baker’s London Times Feedback column, the traditional sounding board for disgruntled pop grammarians.

Adding to the amateur wordsmiths’ outrage was the official notification that the nude beach has now been “dedesignated.” This is a perfectly valid construction blessed by what Webster II lists as self-defining compounds. English has its agglutinative moments, even though these are much less flamboyant than in Turkish or Finnish. Prefixes such as un-, non-, a-, dis-, and de- and suffixes such as -less and -free can reverse meanings without fuss.

Ready When You Are

A euphemistic, sneaky variation is the suffix -ready. Thus, early PCs were advertised as “keyboard-ready” and “monitor-ready,” turning the absence of these essential peripherals into a positive sales advantage. What next, we pondered: “CPU-ready”? My TV is labeled “HD-ready,” but I’ve no precise idea what’s missing. The TV FM (fine manual!) has a page titled Jargon Buster, a rather annoying synonym for glossary. It confirms that HD means high definition, not hard disk. I’m quite happy with the quality of my image (not to be confused with my self-image) and wonder whether more and sharper pixels will change my life. Are my eyes and wallet HD-ready? I suppose you could sum up the ideal consumer as being gullible and “next-big-thing-ready.” Here, big is sales-speak for “hyped tiny increment.”

We’ve already met the linguistic hazards of labels designating quantitative improvement. When does “extra special” run out of steam to become “bog standard”? The early supercomputers quickly lost their superiority. You may recall that the IBM Stretch became known as Twang. (Yes, that was Univac jealousy, although its own LARC was descending at that time.) I make an exception for the name Colossus, applied with a touch of Brit humor to the Lorenz (Tunny) code-breaking computers developed at the “Newmanry” at Bletchley Park.

Bletchley Park’s achievements were truly “colossal” in helping the Allied victory in World War II. Some say that without that crazy mix of mathematicians, cryptologists, linguists, engineers, and chess players, “We would all be speaking German now.” This is highly unlikely. Hitler’s Aryan policies would have maintained the English language, howbeit after eliminating a vast number of dissident English voices.

The Newmanry was named for Max Newman; Alan Turing, meanwhile, worked his wonders over in Hut 8. Newman was briefly my supervisor (algebraic topology) at Warwick University in the late 1960s, having been wooed from semi-retirement by Christopher Zeeman. Even then, some 20 years after the war, all lips were sealed under a draconian Official Secrets Act, strictly enforced on pain of death. In retrospect, this secrecy damaged the UK’s post-war computer industry. The Colossi themselves were stupidly dismantled and destroyed, by direct edict from Churchill, along with most of the documentation. (To be fair, Churchill had his reasons.) Happily, a painstaking reconstruction led by Tony Sale was completed in 2007. If you are ever near Bletchley Park, Milton Keynes, pop into The National Museum of Computing and see how our fair trade started.

I must mention two linguistic oddities. You probably know that Turing’s name acquired a spurious umlaut in some reviews of German translations of his pioneering papers on computable functions, presumably by those who were certain that the author must be German. Newman received the same treatment, being called Max Neumann in, of all places, the British Computer Weekly. I have somewhere my letter of protest, which Computer Weekly blushingly printed. But here’s the double take: Max was indeed born Max Neumann, but his family later anglicized the name for what were then sound self-protective motives.

And when, we ask, does a massive array earn that bloated title? What follows massively parallel? We do know that a hyper collider is mooted to replace the now-sluggish super collider. Then what? The rot began when two CPUs in a single cabinet heralded the dawn of multiprocessing. Even today, the cynic’s definition of multi- is “two, maybe more.” With VLSI we progressed from large-scale to very-large-scale integration, threatening us with V... VLSI until Planck limits are reached or the components rebel against overcrowding. A world court of component rights is not far away if you’ve been following progress in animal and robotic ethics.

In Einstein’s monist pantheism all entities deserve legal protection, and the onus is, etymologically, “on us!” As the only species, so far, to feel the pain of others (your mileage may vary) and, further, to have developed the appropriate litigational schemata, HomSap must act now. We must see the swine flu pandemic from the H1N1 virus’s point of view. We must avoid Titanic-centricity and ask, “What happened to the poor iceberg?” Incidentally, it seems that the H1N1 virus quickly evolves less-virulent variants, which is why the devastating 1918-20 flu pandemic, killing 50 to 100 million, abated as suddenly and mysteriously as it began. The virus is smart enough not to over-kill its host. It needs our “coughs and sneezes to spread diseases.” Similarly, it’s possible that smart malware such as Conficker will know when to reduce its infection rate. Perhaps George Ledin’s malware courses at Sonoma State University (see this column, January/February 2008) will explore the possibility of inducing Conficker mutations in this direction of affable, balanced coexistence.

Balance, that elusive but essential concept at the heart of all moralities, must be our goal. Kathryn Shevelow’s For the Love of Animals: The Rise of the Animal Protection Movement (Henry Holt, 2008) chronicles the ongoing ethical dilemmas facing the pharmaceutical industry. We have, since the key Ill-Treatment of Cattle Act (Martin’s Act) passed by the UK Parliament in 1822, gradually discredited wanton cruelty to animals in both farming and sport. Yet some painful experiments on animals can be defended by sane and sincere scientists on the grounds that human lives can be saved by such tests. The ancient Greeks argued that animals, lacking a rational or psychic “pneuma,” don’t really “feel pain,” and even Descartes argued that the screams of tortured animals were just “the squeaks of a machine.” Still persistent in some quarters is the theological argument, as expounded by Aquinas, that “by divine providence, animals are intended for man’s use... either by killing or in any other way whatever.”

My search for balance leads me, via the notion that Man is also Animal, to demand an Ill-Treatment of Programmers Act, comparable to that of 1822 protecting cattle. A Martian anthropologist would immediately note the similarities of the abuses inflicted. Programmers are herded into cubicles, deprived of sleep, and force-fed with unhealthy diets of cold pizza and Jolt cola. They are set tasks and deadlines that are provably impossible. Turing showed that the set of computable functions is countable, whereas the set of all functions, such as those presented in customer-requirement specifications and subsequently garbled by systems analysts, is uncountable. The probability of success is notionally (aleph-0/aleph-1) zero (although this does not preclude the occasional lucky solution). QED. Protests are cruelly dismissed on the grounds that programmers lack pneuma and don’t feel pain. Yes, you hear them whine a lot, but so do the disk drives. I plan to recruit the divine Joanna Lumley, who has proved an unbeatable advocate in promoting Gurkha rights in Britain (see http://www.hellomagazine.com/celebrities/2009/04/30/lumley-ghurka-win/).
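For readers who like their jokes annotated, here is a minimal sketch of the underlying cardinality argument, in LaTeX (with the caveats that “dividing” alephs is itself part of the gag, and that 2^aleph-0 equals aleph-1 only if you grant the continuum hypothesis):

% Programs are finite strings over a finite alphabet, so there are
% only countably many computable functions:
|\{\, f : \mathbb{N} \to \mathbb{N} \mid f \text{ computable} \,\}| = \aleph_0
% Cantor's diagonal argument: the set of all such functions is strictly larger:
|\{\, f : \mathbb{N} \to \mathbb{N} \,\}| = 2^{\aleph_0} > \aleph_0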

Returning to my opening theme of semantic reversal, I believe the choice of prefixes is arbitrary, yet some folk worry unduly if they meet unusual combinations such as dedesignated. What Man has designated or listed can be undesignated or delisted. Whatsoever the Lord predicateth, He can taketh away. Gruntled and couth spring to mind as examples where the negative forms have outlived the original positives. Of course, care and dictionary assistance are sometimes needed when the reversed semantics imperfectly match the precision of the Boolean NOT. In literary-theoretic circles, construction and deconstruction present an odd, not-quite-opposing, couple beyond immediate comprehension, and it’s far from clear which came first.

A catchy example where the negative word preceded its positive arose when IBM introduced unbundled software in the 1960s. (I was there, I think! As they say, if you remember the 1960s you probably weren’t there.) Univac and others were forced to use the word bundled in this new context. Previously, software came with the computer and there was no need for such a term. Fish have no word for water. Banksters have no word for sorry, but the analogy is drifting away.

With object-oriented CLs (computer languages), there’s less ambiguity between constructors, which must first create (and optionally initialize) objects, and destructors, which destroy them and free their memory allocations. Whether you need to invoke this “destruction” or leave the garbage collection to a janitorial Java is subject to semi-theological mootation (I’ve submitted this portmanteau-pun to the Urban Dictionary, so don’t tell me there’s “no such word”).
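By way of illustration, a minimal C++ sketch (the class and its chatter are mine, purely illustrative): the destructor fires deterministically when the object leaves scope, whereas Java’s janitor tidies up whenever it pleases.

#include <cstdio>

class Widget {
public:
    Widget()  { std::puts("constructor: object created and initialized"); }
    ~Widget() { std::puts("destructor: resources freed, memory reclaimed"); }
};

int main() {
    Widget w;     // constructor invoked here
    return 0;     // destructor invoked here, deterministically, at end of scope
}                 // in Java, by contrast, a garbage collector collects at its leisure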

Of course, there’s no logical reason why the terms naturalist and naturist should have diverged in meaning to produce a risible ambiguity.  NLs (natural languages) don’t “do logic,” at least not in the consistent way we expect our CLs to behave. Asking the hearer, “Do you understand?” is a far cry from submitting statements to a standards-conforming compiler. NLs don’t even evolve in any Darwinian sense to provide more efficient, precise, or elegant instruments for discourse.

Living NLs do change, of course, by definition, in spite of doomed prescriptivist attempts to maintain some imagined earlier “purity” and “correctness.” As Harvard Professor Eileen Cheng-yin Chow observes, “The utopian impulses behind standardization and simplification of a living language are always understandable. Increased literacy, administrative efficiency, and ease of communication are laudable goals. But those impulses can also strip a language of its wit, whimsy, and play, not to mention its capacity to accommodate new concepts and usages.”

These irresistible NL changes show no general pattern in the direction of “fitness for purpose.” The modern linguist’s spin (e.g., John McWhorter’s) is that all NLs are more or less equally complicated. Yet, paradoxically, also equally simple when you consider that children worldwide acquire their native tongues from mother’s knee at about the same speed, and without the need for grammar books. There’s usually a balance (that word again) of grammatical complexities between one language family and another, not always apparent to native speakers.

Native speakers are remarkably blind to the subtleties of their own language. I’ve heard anglophones claiming that “English doesn’t really have a grammar!” Basques can say the same, not being bothered by dozens of noun inflections, although they may struggle to master what seems like a crazy jungle of ambiguous English prepositions. Phonetically, too, the sounds always seem more challenging on the other side. Yet, again balancing the areas of complexity, those languages with limited ranges of consonants and vowels tend to acquire tonal systems to tax the outsider.

Early hominid hunters learning to cooperate would surely want to avoid the time-wasting challenge of dictionary surfing and contextual disambiguation. Each member of array grunt(i) must have distinct, mutually agreed semantics! Polysemy (identifier overloading) costs lives if, say, grunt(23) invokes a spear-poised debate over which definition applies. “Did you mean zxchroo in the sense of ‘hungry saber-toothed tiger approaching rapidly on your left’ or as in the idiom for ‘You’ve let the fire go out again’?” A more realistic (but possibly apocryphal) example is the Papuan mwamba, meaning small snake. The size of the snake is indicated by reduplication, so that mwamba-mwamba is a medium-size snake, and so on. The obvious snag was that by the time you shouted, “I’m being attacked by a mwamba-mwamba-mwamba-...,” it was probably too late for a rescue. One can imagine the villagers asking each other, “How big did he say that snake was?”
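Returning for a moment to grunt(23): for the C++-inclined, a hedged sketch of the hazard (the grunt overloads are mine, purely illustrative). Give one call site two equally plausible meanings and the compiler, like our spear-carriers, refuses to proceed.

#include <iostream>

// Two "meanings" of grunt: overload resolution is the compiler's
// disambiguation ritual.
void grunt(long code)   { std::cout << "tiger approaching rapidly on your left!\n"; }
void grunt(double code) { std::cout << "you've let the fire go out again\n"; }

int main() {
    grunt(23L);    // unambiguous: exact match for grunt(long)
    grunt(2.3);    // unambiguous: exact match for grunt(double)
    // grunt(23);  // will not compile: an int converts equally well to long
                   // or to double, and the debate turns spear-poised
    return 0;
}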

In the other direction,

if (strlen(diminutive(x)) > strlen(x)) use(x);   /* prefer the shorter base form */

In other words, many diminutives are longer than their standard forms (Vanya to Vanyushka). Logical languages would have evolved shortcuts such as “mwamba-squared,” saving a precious syllable. Older readers may recall the delightful counter-example: the 1960s David Frost TV satire “That Was The Week That Was” was abbreviated to TWTWTW until it was noticed that the latter required 12 spoken syllables compared with the former’s six. We settled for the five-syllable TW-cubed.

These small savings are not to be sneezed at. (In fact, with the aforementioned H1N1 swine flu virus, sneezing at anything is discouraged.) We know that each printed and screen-displayed character has a measurable, negative impact on the environment, which is why Al Gore has urged a ban on all climate-change publications and Web sites, not to mention an end to those jet-set international conferences [Is this true?—Editor]. Less well known is that spoken syllables expel between 14.15 and 23.67 percent more CO2 molecules than silent exhalations. The range depends on the vowelic-consonantal mix, with Xhosa clicks, Bronx diphthongs, and Liverpool-Scouse fricatives being the worst culprits. Culling the offenders may appear drastic, but global warming calls for stern measures. Studies by unnamed experts at an undisclosed West Coast university (I can reveal only that there’s a San in the name) indicate that sign language is not the answer. Signers use up even more energy.

It’s becoming more difficult daily to follow these conflicting carbon-footprint calculations.  Flying refrigerated lamb from New Zealand is “greener” because European farming methods are so inefficient energy-wise. Better take the car on short shopping trips since walking requires more calories. The new mandatory energy-saving light bulbs are a costly myth when you factor in the whole manufacturing cycle. Electric cars? Windmills? Forget them. And leave your computers running. The off-on-off cycles are killing the planet.

I’ve been discussing the subtleties of disagreement with Professor Stephen Blackwell, whose book on Nabokov the scientist is due from The Ohio State University Press in September. He monitors a chat list on Nabokovian topics where arguments can flare up over relatively simple matters. The problem is common to most Internet forums, where disputatious language has long been known as flaming. There are blunt ways of telling people they are wrong, and these invariably trigger ruder and ruder exchanges. List monitors must intervene to preserve decorum. They can reject angry submissions with a warning and offer guidelines as to which lines of propriety must not be crossed. In extreme cases, the sender is blacklisted.

So, how can you point out sins (errors) without unduly upsetting the sinner? We do find the avoid-libel-at-all-costs tactic with phrases such as “Stalin’s alleged crimes.” There’s also a variety of mealy-mouthed, weaselly euphemisms that I dub the “seems” option. We progress from “Only an idiot could assert X” to “the assertion X does not take into account evidence showing that Y.” We then weasel this up a few notches with “I may be wrong, but the assertion X seems not to take into account plausible evidence that seems to show that Y.” To which Hamlet might well say, “Seems, madam! Nay, it is; I know not ‘seems.’” I also like the disarming: “In X’s otherwise brilliant essay, it seems churlish to pick on what must surely be a typesetter’s slip. It was, of course, Bjarne Stroustrup, and not Bertrand Meyer, who invented C++.”

I conclude with a wonderful example of analogical reasoning from “The Emperor’s New Tools,” by Gary North (http://www.lewrockwell.com/north/north708.html). The point here is that nothing is proved, but if you already believe the conclusion, it seems to validate your belief beyond further discussion.

“With conventional monetary policy having reached its limit, any further policy stimulus requires a different set of tools.” —Ben Bernanke, April 3, 2009

“I have this mental image of Bernanke and a dozen other Ph.D.-holding economists laboring over a car. Its hood is up. It is stalled at the side of the road. It is about an hour from Yuma, Arizona, the fan-belt capital of the world. Bernanke has a tool kit next to him. It is filled with brand-new metric tools. He is working on a used Plymouth.

The problem is, the car they are working on is not their car. It’s ours.” Q

LOVE IT, HATE IT? LET US KNOW
[email protected]

STAN KELLY-BOOTLE (http://www.feniks.com/skb/; http://www.sarcheck.com), born in Liverpool, England, read pure mathematics at Cambridge in the 1950s before tackling the impurities of computer science on the pioneering EDSAC I. His many books include The Devil’s DP Dictionary (McGraw-Hill, 1981), Understanding Unix (Sybex, 1994), and the recent e-book Computer Language—The Stan Kelly-Bootle Reader. Software Development Magazine has named him as the first recipient of the new annual Stan Kelly-Bootle Eclectech Award for his “lifetime achievements in technology and letters.” Neither Nobel nor Turing achieved such prized eponymous recognition. Under his nom-de-folk, Stan Kelly, he has enjoyed a parallel career as a singer and songwriter. He can be reached at [email protected].

© 2009 ACM 1542-7730/09/0700 $10.00


Originally published in Queue vol. 7, no. 6




