Enterprise Flash Storage

Vol. 6 No. 4 – July/August 2008

Articles

Flash Storage Today

Can flash memory become the foundation for a new tier in the storage hierarchy?

Adam Leventhal, Sun Microsystems

The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded and continues to expand—especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.

One area where flash has seen only limited success, however, is in the primary-storage market. As the price trend for flash became clear in recent years, the industry anticipated its ubiquity for primary storage, with some so bold as to predict the impending demise of rotating media (undeterred, apparently, by the obduracy of magnetic tape). Flash has not lived up to these high expectations, however. The brunt of the effort to bring flash to primary storage has taken the form of SSDs (solid-state disks), flash memory packaged in hard-drive form factors and designed to supplant conventional drives. This technique is alluring because it requires no changes to software or other hardware components, but the cost of flash per gigabyte, while falling quickly, is still far higher than that of hard drives. Only a small number of applications have performance needs that justify the expense.
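The tradeoff described above can be made concrete by comparing cost per gigabyte against cost per I/O operation per second. The sketch below uses entirely hypothetical, 2008-era-flavored prices and IOPS figures (not numbers from the article) to show why flash loses on capacity cost but can win for I/O-bound workloads:

```python
def cost_metrics(price, capacity_gb, iops):
    """Return dollars per gigabyte and dollars per IOPS for a drive."""
    return {"per_gb": price / capacity_gb, "per_iops": price / iops}

# Illustrative figures only: both devices are hypothetical.
ssd = cost_metrics(price=1000.0, capacity_gb=32, iops=5000)
hdd = cost_metrics(price=200.0, capacity_gb=500, iops=150)

# The SSD costs far more per gigabyte, but far less per random I/O,
# which is why only I/O-hungry applications justify the expense.
```

Under these assumed figures, the SSD is roughly two orders of magnitude more expensive per gigabyte, yet several times cheaper per random I/O per second.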

by Adam Leventhal

The Five-Minute Rule: 20 Years Later and How Flash Memory Changes the Rules

The old rule continues to evolve, while flash memory adds two new rules.

Goetz Graefe, Hewlett-Packard Laboratories

In 1987, Jim Gray and Gianfranco Putzolu published their now-famous five-minute rule1 for trading off memory and I/O capacity. Their calculation compares the cost of holding a record (or page) permanently in memory with the cost of performing disk I/O each time the record (or page) is accessed, using appropriate fractions of prices for RAM chips and disk drives. The name of their rule refers to the break-even interval between accesses. If a record (or page) is accessed more often, it should be kept in memory; otherwise, it should remain on disk and read when needed.

Based on then-current prices and performance characteristics of Tandem equipment, Gray and Putzolu found that the price of RAM to hold a 1-KB record was about equal to the (fractional) price of a disk drive required to access such a record every 400 seconds, which they rounded to five minutes. The break-even interval is about inversely proportional to the record size. Gray and Putzolu gave one hour for 100-byte records and two minutes for 4-KB pages.
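The break-even calculation above follows Gray and Putzolu's formula: the interval equals (pages per megabyte of RAM divided by accesses per second per disk) multiplied by (price per disk drive divided by price per megabyte of RAM). A minimal sketch, using made-up prices rather than the 1987 Tandem figures:

```python
def break_even_interval(page_size_kb, ram_price_per_mb, disk_price,
                        disk_accesses_per_sec):
    """Gray & Putzolu's break-even interval in seconds:
    (pages per MB of RAM / accesses per second per disk)
    * (price per disk drive / price per MB of RAM).
    Access a page more often than this and RAM is cheaper;
    less often and disk is cheaper."""
    pages_per_mb = 1024 / page_size_kb
    return (pages_per_mb / disk_accesses_per_sec) * (disk_price / ram_price_per_mb)

# Hypothetical inputs (not historical prices): 4-KB pages, RAM at
# $100/MB, a $1,000 disk sustaining 100 accesses per second.
interval = break_even_interval(4, 100.0, 1000.0, 100)  # 25.6 seconds
```

Note how the interval is inversely proportional to the page size, which is exactly why Gray and Putzolu's numbers ranged from one hour for 100-byte records down to two minutes for 4-KB pages.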

by Goetz Graefe

Interviews

A Conversation with Steve Bourne, Eric Allman, and Bryan Cantrill

In part one of a two-part series, three Queue editorial board members discuss the practice of software engineering.

In their quest to solve the next big computing problem or develop the next disruptive technology, software engineers rarely take the time to look back at the history of their profession. What's changed? What hasn't changed? In an effort to shed light on these questions, we invited three members of ACM Queue's editorial advisory board to sit down and offer their perspectives on the continuously evolving practice of software engineering. We framed the discussion around the bread and butter of every developer's life: tools and technologies, and how the process of software development has changed (or not changed) with the rise of certain popular development methodologies such as Agile and open source. This is part one of their conversation, with part two to follow in a subsequent issue of Queue.

Steve Bourne is chairman of Queue's editorial advisory board. Bourne's name will be familiar to anyone who uses Unix (or its descendants), as he developed the legendary Bourne shell command-line interface. While a member of the Seventh Edition Unix team at Bell Labs in the 1970s, Bourne also developed the Unix debugger adb. Beyond his contributions to Unix and its associated tooling, Bourne has held senior engineering and management positions at Cisco, Sun Microsystems, Digital Equipment, and Silicon Graphics. He is also past president of ACM, where he continues to be active in various advisory roles. In his current position as CTO of Eldorado Ventures, he evaluates new technologies and advises the firm on technology investments.

Articles

A Pioneer's Flash of Insight

Jim Gray's vision of flash-based storage anchors this issue's theme.

BRYAN CANTRILL, SUN MICROSYSTEMS

In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim's best works in the months to come. I'm embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim's seminal work on databases, showing only that I (unlike everyone else on the Queue editorial board) never knew Jim. In an attempt to learn more about both his work and Jim himself, I attended the tribute held for him at UC Berkeley in May.

I came away impressed not only that Jim was a big, adventurous thinker, but also that he was one who remained moored to reality. This delicate balance pervades his life's work: The IBM System R folks talked about how in a year he not only wrote nine papers, but also cut 10,000 lines of code. Database researchers from the University of Wisconsin described how Jim, who had pioneered so much in transaction processing, also pioneered the database benchmarks, developing the precursor for what would become the Transaction Processing Council in a classic Datamation paper. His Tandem colleagues talked about how he reached beyond research and development to engage with the field and with customers to figure out (and publish!) why systems actually failed, which was quite a reach for a company that famously dubbed every product NonStop! And Jim's Microsoft coworkers talked about his driving vision to put on the Web a large database that people would actually use, leading him to develop TerraServer, and his ability to inspire and assist subsequent projects such as the Sloan Digital Sky Survey and Microsoft Research's WorldWide Telescope, systems that solved hard, abstract problems and delivered appreciable concrete results.

by Bryan Cantrill

Enterprise SSDs

Solid-state drives are finally ready for the enterprise. But beware, not all SSDs are created alike.

Mark Moshayedi and Patrick Wilkison, STEC

For designers of enterprise systems, ensuring that hardware performance keeps pace with application demands is a mind-boggling exercise. The most troubling performance challenge is storage I/O. Spinning media, while exceptional in scaling areal density, will unfortunately never keep pace with I/O requirements. The most cost-effective way to break through these storage I/O limitations is by incorporating high-performance SSDs (solid-state drives) into the systems.

While we often read in the press that SSDs will soon banish HDD (hard-disk drive) technology to the realm of tape storage, the fact is that SSD technology has only recently become ready for the enterprise. Not all SSDs are alike, and very few are appropriate for use as primary storage devices in enterprise computing systems. Using flash storage in media players is fundamentally different from deploying the technology in 24/7 mission-critical operations.

by Mark Moshayedi, Patrick Wilkison

Flash Disk Opportunity for Server Applications

Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks.

JIM GRAY AND BOB FITZGERALD

NAND flash densities have been doubling each year since 1996. Samsung announced that its 32-gigabit NAND flash chips would be available in 2007. This is consistent with Chang-gyu Hwang's flash memory growth model1 that NAND flash densities will double each year until 2010. Hwang recently extended that 2003 prediction to 2012, suggesting 64 times the current density, 250 GB per chip. This is hard to credit, but Hwang and Samsung have delivered 16 times since his 2003 article when 2-GB chips were just emerging. So, we should be prepared for the day when a flash drive is a terabyte(!). As Hwang points out in his article, mobile and consumer applications, rather than the PC ecosystem, are pushing this technology.
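Hwang's doubling model compounds the same way as Moore's law, so the projections above follow from simple exponentiation. A sketch, taking as an assumption a 4-GB-per-chip baseline around 2006 (interpolated from the 2-GB chips of 2003 and yearly doubling; the baseline year is not stated in the article):

```python
def projected_density_gb(base_gb, base_year, year):
    """Project per-chip capacity under Hwang's model:
    density doubles every year."""
    return base_gb * 2 ** (year - base_year)

# Assumed 4-GB chips circa 2006; six doublings to 2012 gives the
# "64 times the current density" the article mentions (~256 GB,
# close to the quoted 250 GB per chip).
projection = projected_density_gb(4, 2006, 2012)
```

The same arithmetic explains the "16 times since his 2003 article" observation: four yearly doublings take 2-GB chips to 32 GB.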

Several of these chips can be packaged as a disk replacement. Samsung has a 32-GB flash disk (NSSD-NAND solid-state disk) that is PATA (parallel advanced technology attachment) now and SATA (serial ATA) soon. It comes in standard 1.8-inch and 2.5-inch disk form factors that plug into a PC as a standard small-form-factor disk. Several other vendors offer similar products. Ritek has announced a 16-GB flash disk for $170 and a 32-GB disk later, and SanDisk (which bought msystems, a longtime manufacturer of flash disks for the military) has a 32-GB disk for about $1,000.

by Jim Gray, Bob Fitzgerald

Kode Vicious

Sizing Your System

A koder with attitude, KV answers your questions. Miss Manners he ain't.

Dear KV,

I'm working on a network server that gets into the situation you called livelock in a previous response to a letter (Queue May/June 2008). Our problem is that our system has only a fixed amount of memory to receive network data, but the system is frequently overwhelmed and can't make progress. When I ask our application engineers about how much data they expect, the only answer I get is "a lot," which isn't much help. How can I figure out how to size our systems appropriately?
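KV's full answer is not excerpted here, but the back-of-envelope arithmetic behind any such sizing question can be sketched: during a burst, the receive buffer must absorb the difference between the arrival rate and the rate at which the application drains data. All rates and durations below are hypothetical placeholders, not figures from the letter:

```python
def min_buffer_bytes(arrival_bps, drain_bps, burst_seconds):
    """Worst-case backlog during a burst: data arrives faster than the
    application drains it, so the buffer must hold the excess.
    Rates are in bytes per second."""
    excess = max(arrival_bps - drain_bps, 0)
    return excess * burst_seconds

# Hypothetical example: line rate of a gigabit link (~125 MB/s) against
# an application that drains 10 MB/s, for a 2-second burst.
needed = min_buffer_bytes(125e6, 10e6, 2)  # 230 MB of backlog
```

The exercise makes the real question visible: "a lot" is useless, but an arrival rate, a drain rate, and a maximum burst duration pin down the memory requirement, and if the excess rate is sustained indefinitely, no finite buffer is sufficient.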

by George Neville-Neil

Curmudgeon

The Fabrication of Reality

Is there an "out there" out there?

Stan Kelly-Bootle, Author

There are always anniversaries, real or concocted, to loosen the columnist's writer's block and/or justify the intake of alcohol. I'll drink to that, to the fact that we are blessed with a reasonably regular solar system providing a timeline of annual increments against which we can enumerate and toast past events. Hic semper hic. When the drinking occurs in sporadic and excessive bursts, it becomes known, disapprovingly, as "bingeing." I'm tempted to claim that this colorful Lincolnshire dialect word binge, meaning soak, was first used in the boozing-bout sense exactly 200 years ago. And that, shurely, calls for a schelebration.1 When I was lecturing (briefly) in the Soviet Union2 pre-perestroika, the anniversary-induced tipple was as richly refined as the Stoli (Stolichnaya) vodka. You might call it the microbrewed anniversary: Exactly 43 years, 2 months, and 6 days ago, Vladimir Ilyitch took delivery of the People's Blue Rolls-Royce!

I can't be as precise, but I feel that a significant point in my own inscrutable timeline is struggling to assert itself. Therefore, let us celebrate my first encounter with David Deutsch's FOR (The Fabric of Reality), published almost exactly 10 years ago, give or take a few Min Planck units.3 At that first scan, I had formed distinctly mixed feelings about my dear old Deutsch. While agreeing with his pro-Karl Popperism and the central importance of the Turing principle and virtual-reality computers, I was annoyed by his confused and confusing views on The Nature of Mathematics (chapter 10). An unexpected package in the mail from Bob Toxen last month, containing a slightly foxed copy of FOR, gave me the opportunity to reread and rejudge.

by Stan Kelly-Bootle