In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim’s best works in the months to come. I’m embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim’s seminal work on databases—showing only that I (unlike everyone else on the Queue editorial board) never knew Jim. In an attempt to learn more about both his work and Jim himself, I attended the tribute held for him at UC Berkeley in May.
I came away impressed not only that Jim was a big, adventurous thinker, but also that he was one who remained moored to reality. This delicate balance pervades his life’s work: The IBM System R folks talked about how in a year he not only wrote nine papers, but also cut 10,000 lines of code. Database researchers from the University of Wisconsin described how Jim, who had pioneered so much in transaction processing, also pioneered database benchmarking, developing in a classic Datamation paper the precursor of what would become the Transaction Processing Performance Council’s benchmarks. His Tandem colleagues talked about how he reached beyond research and development to engage with the field and with customers to figure out (and publish!) why systems actually failed—which was quite a reach for a company that famously dubbed every product “NonStop”! And Jim’s Microsoft coworkers talked about his driving vision to put on the Web a large database that people would actually use, leading him to develop TerraServer, and his ability to inspire and assist subsequent projects such as the Sloan Digital Sky Survey and Microsoft Research’s WorldWide Telescope—systems that solved hard, abstract problems and delivered appreciable concrete results.
Given that Jim insisted on designing and implementing actual systems in the present while also focusing on big, future-looking ideas, it is no surprise that he and Queue had a natural affinity for one another. While listening to the presentations at Jim’s tribute, I began asking myself a question that I imagine was also occurring to others: if Jim were here and if I could have but one conversation with him, what would it be about? For me, the answer was clear: I would want to talk with him about the coming revolution in flash-based storage, the focus of this month’s issue of Queue. This has been an exciting issue to put together, because as Adam Leventhal’s article, “Flash Storage Today,” discusses, the economics of flash have promoted it from sideshow to main event: flash is growing from mere curiosity to a new tier in the storage hierarchy—perhaps the first such tier since the introduction of the IBM RAMAC in 1956!
This development is exactly the kind of change that Jim clearly savored—so much so that in listening to his tribute, I began to wonder if he had thought of or published anything on flash-based storage prior to his disappearance. With the rapidly changing economics of flash, Jim would have had to be particularly insightful (at the time of his disappearance, flash was more than twice as expensive as it is now), but it seemed within the realm of possibility. His colleagues at Microsoft didn’t know of anything specific he had written on the topic, but directed me to his Web site for his list of papers. Thinking the published work was likely too old and that I had thus probably hit a dead end, I nonetheless went to the site—and what I saw there almost took my breath away: the second-most-recent entry was “A Radical View of Flash Disks,” with links to both a document and a talk. Thanks to the support of Jim’s family, we have been able to publish that paper, “Flash Disk Opportunity for Server Applications,” written with his colleague Bob Fitzgerald, in this issue. It is raw, and the numbers are now out of date—but Jim clearly sees the future in front of him, and it is a singular pleasure for the reader to be hoisted onto his shoulders.
Finding Jim’s work inspired us to add two other papers to this issue. First, since the time that Jim worked on this problem, one important impediment has been removed: the device-level issues that he describes—issues that have plagued consumer-grade flash-based SSDs (solid-state drives)—have been largely dealt with in the new class of enterprise-grade flash-based SSDs. It is important for practitioners to understand these problems and their solutions; STEC’s Pat Wilkison and Mark Moshayedi explain the innards of these important new devices in their article in this issue.
Second, in looking for other work that Jim might have done on flash, we ran across the work of Goetz Graefe, a Hewlett-Packard Fellow who had revisited Jim’s “five-minute rule” (the Jim Gray classic that we pointed to online in the May/June issue), wondering if flash had changed the equation. Goetz, too, saw Jim in this work (indeed, he dedicated it to Jim), and it seems especially fitting that Goetz’s update of Jim’s rule now lives alongside Jim’s radical view of a flash-based future.
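For readers who haven’t encountered the rule, its heart is a simple break-even calculation; the sketch below follows the general form of Gray and Putzolu’s original argument, with prices and access rates left symbolic since they shift with every technology generation (the variable labels here are mine, not Jim’s):

\[
T_{\text{break-even}} \;=\; \frac{\text{pages per MB of RAM}}{\text{accesses per second per disk}} \times \frac{\text{price of one disk drive}}{\text{price per MB of RAM}}
\]

With 1987-era prices and access rates this worked out to roughly five minutes for small pages: a page re-referenced more often than that was cheaper to keep in RAM than to re-read from disk. Goetz’s article asks what the same arithmetic yields when flash sits between RAM and disk.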
Enjoy this collection of articles—and the coming revolution in the storage hierarchy that they describe—and take a moment to mourn the loss of a great computer scientist who would have (once again) been in the thick of it all!
BRYAN CANTRILL is a Distinguished Engineer at Sun Microsystems, where he has spent more than a decade working on system software, from the guts of the kernel to client code on the browser and much in between. Along with colleagues Mike Shapiro and Adam Leventhal, Cantrill designed and implemented DTrace, a facility for dynamic instrumentation of production systems that won the Wall Street Journal’s top Technology Innovation Award in 2006. In 2005, Cantrill was named by MIT’s Technology Review as one of the top 35 technologists under the age of 35, and by InfoWorld as one of its Innovators of the Year. He received an Sc.B. magna cum laude with honors in computer science from Brown University.
Originally published in Queue vol. 6, no. 4.