File Systems and Storage

A File System All Its Own

Flash memory has come a long way. Now it's time for software to catch up.

by Adam H. Leventhal | April 13, 2013

This article appears in print in Communications of the ACM, Volume 56, Issue 5.

Anatomy of a Solid-state Drive

While the ubiquitous SSD shares many features with the hard-disk drive, under the surface they are completely different.

by Michael Cornwell | October 17, 2012

This article appears in print in Communications of the ACM, Volume 55, Issue 12.

Disks from the Perspective of a File System

Disks lie. And the controllers that run them are partners in crime.

by Marshall Kirk McKusick | September 6, 2012

This article appears in print in Communications of the ACM, Volume 55, Issue 11.

Keeping Bits Safe:
How Hard Can It Be?

As storage systems grow larger and larger, protecting their data for long-term storage is becoming more and more challenging.

by David S. H. Rosenthal | October 1, 2010

This article appears in print in Communications of the ACM, Volume 53, Issue 11.

Triple-Parity RAID and Beyond

As hard-drive capacities continue to outpace their throughput, the time has come for a new level of RAID.

by Adam Leventhal | December 17, 2009
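
The capacity-versus-throughput gap is easy to quantify: a rebuild must read every surviving drive end to end, so the minimum rebuild time is roughly capacity divided by sequential throughput. A back-of-envelope sketch in Python (the drive figures below are illustrative guesses, not numbers from the article):

    def rebuild_hours(capacity_tb, seq_mb_per_s):
        """Hours to stream an entire drive at full sequential speed."""
        return capacity_tb * 1e6 / seq_mb_per_s / 3600

    print(rebuild_hours(0.5, 60))   # ~2.3 hours for a mid-2000s drive
    print(rebuild_hours(2.0, 100))  # ~5.6 hours for a 2009-era drive

The longer a degraded array spends rebuilding, the greater the odds of an additional failure before redundancy is restored, which is the argument for another level of parity.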

This article appears in print in Communications of the ACM, Volume 53, Issue 1.

GFS: Evolution on Fast-forward

A discussion between Kirk McKusick and Sean Quinlan about the origin and evolution of the Google File System.

by Marshall Kirk McKusick, Sean Quinlan | August 7, 2009

CTO Roundtable:
Storage Part II

Leaders in the storage industry ponder upcoming technologies and trends.

by Mache Creeger | January 8, 2009

This article appears in print in Communications of the ACM, Volume 51, Issue 9.

CTO Roundtable:
Storage Part I

Leaders in the storage world offer valuable advice for making more effective architecture and technology decisions.

by Mache Creeger | December 4, 2008

This article appears in print in Communications of the ACM, Volume 51, Issue 8.

The Five-Minute Rule 20 Years Later:
and How Flash Memory Changes the Rules

The old rule continues to evolve, while flash memory adds two new rules. In 1987, Jim Gray and Gianfranco Putzolu published their now-famous five-minute rule for trading off memory and I/O capacity. Their calculation compares the cost of holding a record (or page) permanently in memory with the cost of performing disk I/O each time the record (or page) is accessed, using appropriate fractions of prices for RAM chips and disk drives. The name of their rule refers to the break-even interval between accesses.

by Goetz Graefe | September 24, 2008
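
The rule itself is simple arithmetic. Here is a minimal sketch of the Gray/Putzolu break-even calculation; the input constants are rough 1987-flavored illustrations, not figures taken from the article:

    def break_even_seconds(pages_per_mb_ram, accesses_per_sec_per_disk,
                           price_per_disk, price_per_mb_ram):
        """Interval between accesses at which keeping a page in RAM
        costs the same as doing a disk I/O on every access."""
        return ((pages_per_mb_ram * price_per_disk)
                / (accesses_per_sec_per_disk * price_per_mb_ram))

    # Rough 1987-flavored inputs: 1 KB pages (1,000 per MB), 15-IOPS disks,
    # $2,000 per drive, $400 per MB of RAM (illustrative only).
    print(break_even_seconds(1000, 15, 2000, 400))  # ~333 s, about five minutes

If a page is touched more often than the break-even interval, RAM is the cheaper home for it; less often, and disk wins.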

Enterprise SSDs

Solid-state drives are finally ready for the enterprise. But beware, not all SSDs are created alike. For designers of enterprise systems, ensuring that hardware performance keeps pace with application demands is a mind-boggling exercise. The most troubling performance challenge is storage I/O. Spinning media, while exceptional in scaling areal density, will unfortunately never keep pace with I/O requirements. The most cost-effective way to break through these storage I/O limitations is by incorporating high-performance SSDs (solid-state drives) into the systems.

by Mark Moshayedi, Patrick Wilkison | September 24, 2008

Flash Storage Today

Can flash memory become the foundation for a new tier in the storage hierarchy? The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded, and continues to expand, especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.

by Adam Leventhal | September 24, 2008

Flash Disk Opportunity for Server Applications

Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks. NAND flash densities have been doubling each year since 1996. Samsung announced that its 32-gigabit NAND flash chips would be available in 2007. This is consistent with Chang-gyu Hwang's flash-memory growth model, which predicts that NAND flash densities will double each year until 2010. Hwang recently extended that 2003 prediction to 2012, suggesting 64 times the current density, 250 GB per chip. This is hard to credit, but Hwang and Samsung have delivered 16 times since his 2003 article, when 2-Gbit chips were just emerging.

by Jim Gray, Bob Fitzgerald | September 24, 2008
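
Hwang's model is plain compound doubling, so the blurb's numbers are easy to check. A small sketch (the function is illustrative; the values come from the text above):

    def projected_gbit(base_gbit, base_year, year):
        """NAND density if it doubles every year after base_year."""
        return base_gbit * 2 ** (year - base_year)

    print(projected_gbit(2, 2003, 2007))  # 32 Gbit: the part Samsung announced for 2007
    print(64 * 32 / 8)                    # 64x a 32-Gbit chip is 256 GB, Hwang's ~250 GB for 2012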

A Pioneer's Flash of Insight

Jim Gray's vision of flash-based storage anchors this issue's theme. In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim's best works in the months to come. I'm embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim's seminal work on databases, showing only that I (unlike everyone else on the Queue editorial board) never knew Jim.

by Bryan Cantrill | September 24, 2008

BASE: An Acid Alternative

Web applications have grown in popularity over the past decade. Whether you are building an application for end users or application developers (i.e., services), your hope is most likely that your application will find broad adoption and with broad adoption will come transactional growth. If your application relies upon persistence, then data storage will probably become your bottleneck.

by Dan Pritchett | July 28, 2008

The Emergence of iSCSI

When most IT pros think of SCSI, images of fat cables with many fragile pins come to mind. Certainly, that's one manifestation - the oldest one. But modern SCSI, as defined by the SCSI-3 Architecture Model, or SAM, really considers the cable and physical interconnections to storage as only one level in a larger hierarchy. By separating the instructions or commands sent to and from devices from the physical layers and their protocols, you arrive at a more generic approach to storage communication.

by Jeffrey S. Goldner | July 14, 2008
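
That separation is what makes iSCSI possible: the command set stays the same while the transport changes underneath it. A toy sketch of the idea in Python (an illustration of the layering, not the SAM specification):

    from abc import ABC, abstractmethod

    class ScsiTransport(ABC):
        """Anything that can carry a command descriptor block to a device."""
        @abstractmethod
        def send(self, cdb: bytes) -> bytes: ...

    class ParallelBusTransport(ScsiTransport):
        def send(self, cdb: bytes) -> bytes:
            raise NotImplementedError  # drive the legacy cable and pins

    class IScsiTransport(ScsiTransport):
        def send(self, cdb: bytes) -> bytes:
            raise NotImplementedError  # wrap the CDB in an iSCSI PDU over TCP/IP

    def read_capacity(transport: ScsiTransport) -> bytes:
        cdb = bytes([0x25]) + bytes(9)  # READ CAPACITY (10), a standard 10-byte CDB
        return transport.send(cdb)      # same command, any transport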

DAFS:
A New High-Performance Networked File System

This emerging file-access protocol dramatically enhances the flow of data over a network, making life easier in the data center.

by Steve Kleiman | July 14, 2008

Storage Virtualization Gets Smart

Over the past 20 years we have seen the transformation of storage from a dumb resource with fixed reliability, performance, and capacity to a much smarter resource that can actually play a role in how data is managed. In spite of the increasing capabilities of storage systems, however, traditional storage management models have made it hard to leverage these data management capabilities effectively. The net result has been overprovisioning and underutilization. In short, although the promise was that smart shared storage would simplify data management, the reality has been different.

by Kostadis Roussos | November 15, 2007

Hard Disk Drives:
The Good, the Bad and the Ugly!

HDDs (hard-disk drives) are like the bread in a peanut butter and jelly sandwich—sort of an unexciting piece of hardware necessary to hold the “software.” They are simply a means to an end. HDD reliability, however, has always been a significant weak link, perhaps the weak link, in data storage. In the late 1980s people recognized that HDD reliability was inadequate for large data storage systems so redundancy was added at the system level with some brilliant software algorithms, and RAID (redundant array of inexpensive disks) became a reality. RAID moved the reliability requirements from the HDD itself to the system of data disks.

by Jon Elerath | November 15, 2007
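
The system-level redundancy the article credits RAID with fits in a few lines. A minimal single-parity illustration in the spirit of RAID 5 (not code from the article): XOR the data blocks to get parity, then XOR the survivors with parity to rebuild a lost block:

    from functools import reduce

    def parity(blocks):
        """XOR blocks together byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b'\x11\x22', b'\x33\x44', b'\x55\x66']
    p = parity(data)                        # stored on the parity disk
    rebuilt = parity([data[0], data[2], p]) # data[1]'s disk has failed
    assert rebuilt == data[1]               # survivors + parity recover it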

Standardizing Storage Clusters

Data-intensive applications such as data mining, movie animation, oil and gas exploration, and weather modeling generate and process huge amounts of data. File-data access throughput is critical for good performance. To scale well, these HPC (high-performance computing) applications distribute their computation among numerous client machines. HPC clusters can range from hundreds to thousands of clients with aggregate I/O demands ranging into the tens of gigabytes per second.

by Garth Goodson, Sai Susharla, Rahul Iyer | November 15, 2007

A Conversation with Jeff Bonwick and Bill Moore

This month ACM Queue speaks with two Sun engineers who are bringing file systems into the 21st century. Jeff Bonwick, CTO for storage at Sun, led development of the ZFS file system, which is now part of Solaris. Bonwick and his co-lead, Sun Distinguished Engineer Bill Moore, developed ZFS to address many of the problems they saw with current file systems, such as data integrity, scalability, and administration. In our discussion this month, Bonwick and Moore elaborate on these points and what makes ZFS such a big leap forward.

by John Stanik | November 15, 2007

A Conversation with Jim Gray

Sit down, turn off your cellphone, and prepare to be fascinated. Clear your schedule, because once you've started reading this interview, you won't be able to put it down until you've finished it.

July 31, 2003

Big Storage: Make or Buy?

We hear it all the time. The cost of disk space is plummeting.

by Josh Coates | July 31, 2003

Storage Systems:
Not Just a Bunch of Disks Anymore

The concept of a storage device has changed dramatically, from the first magnetic disk drive, introduced with the IBM RAMAC in 1956, to today's server rooms with detached and fully networked storage servers. Storage has expanded in both large and small directions - up to multi-terabyte server appliances and down to multi-gigabyte MP3 players that fit in a pocket. All use the same underlying technology - the rotating magnetic disk drive - but they quickly diverge from there.

by Erik Riedel | July 31, 2003

You Don't Know Jack about Disks

Magnetic disk drives have been at the heart of computer systems since the early 1960s. They brought not only a significant advantage in processing performance, but also a new level of complexity for programmers. The three-dimensional geometry of a disk drive replaced the simple, linear address space of the tape-based programming model.

by Dave Anderson | July 31, 2003
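
The geometry in question is the classic cylinder/head/sector triple, which modern drives flatten into the linear block addresses programmers see today. A hedged sketch of the standard mapping (the 16-head, 63-sector layout is just a common historical example):

    def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
        """Standard CHS-to-LBA conversion; sectors are numbered from 1."""
        return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

    print(chs_to_lba(0, 0, 1, 16, 63))  # 0: the very first block
    print(chs_to_lba(1, 0, 1, 16, 63))  # 1008: one full cylinder (16 * 63 sectors) later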
