
Research for Practice: Vigorous Public Debates in Academic Computer Science

Expert-curated Guides to the Best of CS Research

John Regehr

Research for Practice combines the resources of the ACM Digital Library, the largest collection of computer science research in the world, with the expertise of the ACM membership. In every RfP column experts share a short curated selection of papers on a concentrated, practically oriented topic.

This installment of Research for Practice features a special curated selection from John Regehr, who takes us on a tour of great debates in academic computer science research. In case you thought flame wars were reserved for Usenet mailing lists and Twitter, think again: the academic literature is full of dramatic, spectacular, and vigorous debates spanning file systems, operating system kernel design, and formal verification. Please enjoy! - Peter Bailis

Science is often portrayed as a world of cold logic and hard facts, but as human beings we have great difficulty keeping emotion and prejudice out of the picture—whether the issue at hand is inconsequential ("Is Pluto a planet?") or threatening to our existence ("Can we stop global warming?"). Historically, as Galileo's encounters with the Roman Inquisition showed, unorthodoxy could have very serious consequences. Recent scientific debates have tended to be calmer, but being on the wrong side of an issue can still have career-altering consequences (http://www.mcfarlandbooks.com/book-2.php?id=978-0-7864-6301-5). Computer science, perhaps because it is young and well-funded, seems to have been relatively free of real schisms, but it still inspires energetic debates.

Computer science doesn't have a culture of retraction, and in any case many of these debates aren't about the kinds of mistakes that lead to retractions. The useful debates, the ones we can learn from, are those where both sides are publicly available in writing. These are a valuable instructional resource, and I sometimes assign them as reading to my students. They show an important part of science that often gets swept under the rug—students enjoy seeing that things aren't as cut-and-dried as teachers often try to make them sound. In some cases it's useful to try to figure out who's right and who's wrong; other debates are more about personalities, and still others feature researchers talking past each other based on their differing assumptions and perspectives.

Considered harmful

An early debate, which feels a bit quaint now, began when Edsger Dijkstra published Go To Statement Considered Harmful (1968), which argued against unstructured control flow in programming languages. Follow-ups included Goto Considered Harmful Considered Harmful and Goto Considered Harmful Considered Harmful Considered Harmful (1987).
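
For readers who have never maintained goto-heavy code, here is a deliberately contrived C sketch (mine, not Dijkstra's) contrasting the two styles; in the unstructured version the loop exists only implicitly, as a pattern of jumps.

    #include <stdio.h>

    /* Unstructured control flow: the loop is visible only as a pattern of jumps. */
    int sum_to_n_goto(int n) {
        int i = 0, sum = 0;
    loop:
        if (i > n) goto done;
        sum += i;
        i++;
        goto loop;
    done:
        return sum;
    }

    /* Structured control flow: the loop's shape is explicit at a glance. */
    int sum_to_n_structured(int n) {
        int sum = 0;
        for (int i = 0; i <= n; i++)
            sum += i;
        return sum;
    }

    int main(void) {
        printf("%d %d\n", sum_to_n_goto(10), sum_to_n_structured(10));
        return 0;
    }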

Multiple versions of the facts

One of my favorite public debates concerns N-version programming: a software-development method in which several implementations of a specification are run in parallel and voting is used to determine the correct result. If independent implementations have independent defects, this method would lead to a substantial increase in software reliability. John C. Knight and Nancy G. Leveson wrote a paper (1986) showing that the assumption of independent faults is suspect. This finding did not sit well with the proponents of N-version programming, and while I cannot find online copies of their rebuttals, Knight and Leveson's reply to the criticisms includes plenty of quotes. This is great reading, a classic of the genre.
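
To make the method concrete, here is a minimal sketch of the voting idea in C; the three integer-square-root "versions" and the 2-out-of-3 vote are illustrative stand-ins, not code from any of the papers in this debate.

    #include <stdio.h>

    /* Three hypothetical, independently written implementations of the
     * same specification (integer square root). */
    static int isqrt_v1(int x) {
        int r = 0;
        while ((r + 1) * (r + 1) <= x) r++;
        return r;
    }

    static int isqrt_v2(int x) {
        int r = x;
        while (r > 0 && r * r > x) r--;
        return r;
    }

    static int isqrt_v3(int x) {
        int lo = 0, hi = x;
        while (lo < hi) {
            int mid = lo + (hi - lo + 1) / 2;
            if (mid <= x / mid) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    /* 2-out-of-3 majority vote; -1 signals that no two versions agree. */
    static int vote3(int a, int b, int c) {
        if (a == b || a == c) return a;
        if (b == c) return b;
        return -1;
    }

    int main(void) {
        int x = 150;
        int result = vote3(isqrt_v1(x), isqrt_v2(x), isqrt_v3(x));
        printf("voted isqrt(%d) = %d\n", x, result);  /* prints 12 */
        return 0;
    }

Knight and Leveson's point was that, in practice, independently written versions tend to fail on the same difficult inputs, so the vote buys less reliability than the independence assumption predicts.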

Can we at least agree that CATOCS is a great acronym?

Starting in the late 1980s, Ken Birman's group was advocating causal and totally ordered multicast: a primitive for building distributed systems that provides strong ordering guarantees for communication between nodes. David Cheriton and Dale Skeen were less than impressed and wrote 15 pages to that effect (1993). Birman wrote a long response to the criticisms. Also see Neha Narula's later take on the debate (2013).
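
As a rough illustration of what "causally ordered" delivery means, here is a minimal C sketch using a fixed process count and hand-rolled vector clocks; this is the textbook delivery rule, not Isis's actual implementation. A receiver holds back any message whose vector timestamp shows that it depends on a message not yet delivered locally.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPROC 3  /* illustrative fixed number of processes */

    /* A message stamped with the sender's vector clock at send time. */
    struct msg {
        int sender;
        int vc[NPROC];
    };

    /* Causal-delivery rule: deliver m from process j once every message that
     * causally precedes it has been delivered, i.e.
     *   m.vc[j] == local_vc[j] + 1  and  m.vc[k] <= local_vc[k] for k != j. */
    static bool can_deliver(const int local_vc[NPROC], const struct msg *m) {
        for (int k = 0; k < NPROC; k++) {
            if (k == m->sender) {
                if (m->vc[k] != local_vc[k] + 1) return false;
            } else {
                if (m->vc[k] > local_vc[k]) return false;
            }
        }
        return true;
    }

    static void deliver(int local_vc[NPROC], const struct msg *m) {
        local_vc[m->sender] = m->vc[m->sender];
    }

    int main(void) {
        int local_vc[NPROC] = {0, 0, 0};

        /* m2 causally depends on m1 (its timestamp shows process 0 had already
         * sent a message), so it must wait even if it arrives first. */
        struct msg m1 = {0, {1, 0, 0}};
        struct msg m2 = {1, {1, 1, 0}};

        printf("deliver m2 first? %s\n", can_deliver(local_vc, &m2) ? "yes" : "no"); /* no */
        if (can_deliver(local_vc, &m1)) deliver(local_vc, &m1);
        printf("deliver m2 now?   %s\n", can_deliver(local_vc, &m2) ? "yes" : "no"); /* yes */
        return 0;
    }

Cheriton and Skeen's critique argued, among other things, that this kind of transport-level ordering is costly and still cannot capture application-level (semantic) ordering.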

File system performance evaluation is hard

A 1991 paper introduced log-structured file systems, which increase the performance of writes by reducing the number of seeks needed to perform a file update. In 1993 Margo Seltzer et al. published a paper describing and evaluating an implementation of a log-structured file system, with a follow-up in 1995. John Ousterhout, one of the authors of the original paper, disagreed with the evaluation. Seltzer and her coauthors rebutted his critique, and Ousterhout had, as far as I know, the last word.
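
To give a flavor of the idea (a toy sketch of my own, not code from the papers): rather than seeking to each file's blocks and rewriting them in place, a log-structured file system turns every update, regardless of which file it touches, into a sequential append to a single log, leaving a cleaner to reclaim obsolete records later.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK   16
    #define LOG_CAP 4096

    /* One log record: which file, which block within it, and the new data. */
    struct record {
        int  file_id;
        int  block_no;
        char data[BLOCK];
    };

    static char   log_area[LOG_CAP];
    static size_t log_head = 0;   /* next free byte: writes are purely sequential */

    /* In-place update would require a seek per file touched; here every
     * update, whichever file it belongs to, is one append to the log. */
    static int log_write(int file_id, int block_no, const char *data) {
        if (log_head + sizeof(struct record) > LOG_CAP) return -1;  /* a cleaner would reclaim space */
        struct record r = { file_id, block_no, {0} };
        memcpy(r.data, data, BLOCK);
        memcpy(log_area + log_head, &r, sizeof r);
        log_head += sizeof r;
        return 0;
    }

    int main(void) {
        log_write(1, 0, "hello, file one ");
        log_write(2, 7, "hello, file two ");
        log_write(1, 3, "more of file one");
        printf("log occupies %zu bytes after three updates\n", log_head);
        return 0;
    }

The later argument was less about this basic idea than about how to evaluate it fairly, in particular how cleaning costs show up under realistic workloads.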

You would not get a high grade for such a design

Another classic debate, Torvalds vs. Tanenbaum (1992), was about how operating systems should be structured: as a monolithic collection of code running in kernel mode, or instead as a group of independent subsystems isolated by the memory management unit. Also see some (one-sided) comments on a reincarnation of the debate. Related to this discussion, in 2005, Steven Hand et al. published Are Virtual Machine Monitors Microkernels Done Right? In response, Gernot Heiser et al. wrote a paper with the same title in 2006 but coming to the opposite conclusion.

A very obnoxious paper?

"Social Processes and Proofs of Theorems and Programs" is a provocative opinion piece written in 1979 by Richard De Millo et al. about the role of formal methods in software development. Dijkstra called it "a very obnoxious paper" (see page 14 of a transcript of an interview with Dijkstra from 2001) and wrote a response called "A Political Pamphlet from the Middle Ages." De Millo et al. replied: "We must begin by refusing to concede that our confidence in a piece of real software has ever been increased by a proof of its correctness..." See also the letters to the editor of CACM responding to this article, Victor Yodaiken's take on the debate, and three more shots fired in 2010 —two by Moshe Vardi and one by the original paper's authors.

This guy's arrogance takes your breath away

Dijkstra and John Backus had an (only partially public) spat in the late '70s, written up here and here.

SWATT or be SWATTed

The computer security research community has an especially strong tradition of refuting published results. For example, SWATT (software-based attestation) offers a protocol for checking that a remote system has the memory image it is supposed to have. A 2009 paper called "On the Difficulty of Software-based Attestation of Embedded Devices" presents concrete attacks on SWATT. SWATT authors Adrian Perrig and Leendert van Doorn did not agree that the attacks were valid, and, finally, the attack paper's authors, Aurélien Francillon et al., responded to the refutation.
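
Here is a greatly simplified sketch of the software-based-attestation idea; the checksum, PRNG, and parameters below are illustrative, not SWATT's actual construction. The verifier issues a random nonce, the device checksums its memory in a nonce-determined pseudorandom order, and the verifier checks both the returned value and, in the real protocol, how long the computation took.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MEM_SIZE 1024
    #define ROUNDS   (16 * MEM_SIZE)  /* enough iterations to cover memory with high probability */

    static uint8_t device_mem[MEM_SIZE];  /* stand-in for the device's firmware image */

    /* Tiny xorshift PRNG; the verifier's nonce seeds an unpredictable
     * traversal order so the device cannot precompute the answer. */
    static uint32_t prng(uint32_t *s) {
        *s ^= *s << 13;
        *s ^= *s >> 17;
        *s ^= *s << 5;
        return *s;
    }

    /* Checksum memory in pseudorandom order. A real protocol also relies on
     * tight timing: redirecting reads to a pristine copy of the image adds
     * indirection and measurably slows the response. */
    static uint32_t attest(const uint8_t *mem, uint32_t nonce) {
        uint32_t state = nonce, sum = 0;
        for (int i = 0; i < ROUNDS; i++) {
            uint32_t addr = prng(&state) % MEM_SIZE;
            sum = (sum << 1 | sum >> 31) ^ (mem[addr] + addr);
        }
        return sum;
    }

    int main(void) {
        memset(device_mem, 0xAB, MEM_SIZE);             /* the "expected" image */
        uint32_t nonce = 0x1234567u;
        uint32_t expected = attest(device_mem, nonce);  /* verifier's precomputed answer */

        device_mem[100] ^= 0xFF;                        /* simulate tampered firmware */
        uint32_t reported = attest(device_mem, nonce);
        printf("checksums match: %s\n", expected == reported ? "yes" : "no");
        return 0;
    }

The dispute is essentially over whether a compromised device can still return the right checksum in the right amount of time while hiding malicious code.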

A matter of integrity

CPI (code-pointer integrity) is a technique for avoiding control-flow hijacking caused by memory safety errors in C or C++ code. Missing the Point(er) (2015) presents attacks against CPI, while Getting the Point(er) (2015) argues in favor of the security of CPI.
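
As a very rough sketch of CPI's central idea (my illustration, not the mechanism from either paper): code pointers are moved into a separate protected region that ordinary, possibly buggy, pointer arithmetic cannot reach, and regular code holds only a checked handle.

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*code_ptr)(void);

    #define SAFE_CAP 16
    /* In real CPI the safe region is hidden or protected by hardware
     * (e.g., segmentation); a plain static array is only an illustration. */
    static code_ptr safe_region[SAFE_CAP];
    static int safe_used = 0;

    /* Register a code pointer; callers keep only the returned handle. */
    static int cpi_store(code_ptr fn) {
        if (safe_used >= SAFE_CAP) abort();
        safe_region[safe_used] = fn;
        return safe_used++;
    }

    /* Every indirect call goes through a checked load from the safe region. */
    static void cpi_call(int handle) {
        if (handle < 0 || handle >= safe_used) abort();  /* corrupted handle: fail stop */
        safe_region[handle]();
    }

    static void greet(void) { puts("hello from a protected code pointer"); }

    int main(void) {
        struct { char buf[8]; int handle; } h;  /* ordinary data a buggy program might corrupt */
        h.handle = cpi_store(greet);

        /* Even if an overflow scribbles over h.handle, the attacker can only
         * select another registered target or trigger abort(); they cannot
         * make the program jump to an arbitrary address. */
        cpi_call(h.handle);
        return 0;
    }

Much of the subsequent argument is about whether that protected region can really be kept out of an attacker's reach on commodity hardware.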

Acknowledgments

I'd like to thank many blog readers and Twitter users for providing feedback on the original blog post from which this article was derived.

John Regehr is a computer science professor at the University of Utah, USA. He likes to create software tools for making software better.

Copyright © 2017 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 15, no. 3

Comments


Nick P | Tue, 22 Aug 2017 16:14:47 UTC

I'd love to see regular debates like these in the various subfields, as each has stuff worth debating.

I was doing knock-offs of Tanenbaum vs. Torvalds pushing microkernel-based systems (esp. separation kernels). I get Linus's points since there's definitely sense in them. I advocated the Nizza Architecture specifically to restrict how much stuff is done like a distributed system. Yet timing-related errors kept cropping up in the Linux kernel (e.g., between syscalls), showing even the monolith had similar distributed problems. Whereas, on the other side, I found Tanenbaum et al. were correct in that most opponents didn't know anything about microkernels past Mach. The field-proven benefits of QNX, BeOS, KeyKOS, and later Minix 3 were unknown to them. Shows their opposition to microkernels had no technical merit so much as following a crowd somewhere.

Another was N-version programming. I pushed this as a counter to both failure and subversion. The first problem I noticed was that different solutions to the same simple protocol often looked pretty similar. I hypothesized that we're taught to use arrays, control flow, stacks, GCs, and so on in similar ways more often than not. My enhancement was to require the tools (esp. languages or VMs) to have as different an internal structure as possible, plus different coding styles. Only the input-to-output mapping should be the same. On the hardware side, different suppliers using different sub-components and process nodes if possible. Same for power supplies and maybe cabling. As far as subversion, I proposed having ideologically opposed people work together, with, say, Israelis coding and Palestinians reviewing, trading places occasionally. I still think N-version can help so long as each submission is high quality but internally different, as I describe.

Also, a shout-out to @vyodaiken for making the list. Your countering the formal methodist with NonStop was a perfect reply! :) Those systems just run and run and run despite failures at every level. Through formal verification? No, just systematic use of mitigations in every layer, developed from at least a thousand instances of failure that customers reported in their systems. An empirical method of adapting a solution with a combo of informal reasoning and field data. Shows one doesn't need formal methods to make something work nearly perfectly in practice.

That said, formal verification of protocols for things similar to NonStop always found errors, esp. corner cases humans and tests missed. It's more accurate to say that combining the empirical and formal methods can increase correctness over using just one of them. Due to cost, apply it only to the most important parts, like replication or failover in NonStop. Four teams applied it to CPUs of varying sizes, with no errors found during FPGA or silicon testing. That didn't happen for non-formally-verified chips, which also were rarely first-pass silicon. So that's a further argument that formal verification can improve something even as good as NonStop.

As far as Code-Pointer Integrity, that was a great piece of research that I pushed as an interim solution in a few places. They wisely used segments like NaCl and security kernels did before them. Lacking a security-focused CPU, one should use any hardware-accelerated forms of isolation available to the best of their ability. The CPI technique used them much more efficiently than prior work. The attack paper focused on the watered-down version instead of the strong version, of course, to find faults. As usual, we'd have been much better off if attackers instead threw their brainpower at attacking the strongest stuff. So the CPI people rightly countered, saying so.


