Implications of the Datacenter's Shifting Center
Rethinking the Fundamental Abstractions of the File System
Flash memory has come a long way. Now it's time for software to catch up.
While the ubiquitous SSD shares many features with the hard-disk drive, under the surface the two are completely different.
Disks lie. And the controllers that run them are partners in crime.
As storage systems grow ever larger, protecting their data over the long term becomes ever more challenging.
As hard-drive capacities continue to outpace their throughput, the time has come for a new level of RAID.
A discussion between Kirk McKusick and Sean Quinlan about the origin and evolution of the Google File System.
Leaders in the storage industry ponder upcoming technologies and trends.
Leaders in the storage world offer valuable advice for making more effective architecture and technology decisions.
The old rule continues to evolve, while flash memory adds two new rules.
Solid-state drives are finally ready for the enterprise. But beware: not all SSDs are created equal.
Can flash memory become the foundation for a new tier in the storage hierarchy?
Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks.
Jim Gray's vision of flash-based storage anchors this issue's theme.
In partitioned databases, trading some consistency for availability can lead to dramatic improvements in scalability.
When most IT pros think of SCSI, images of fat cables with many fragile pins come to mind. Certainly, that's one manifestation - the oldest one. But modern SCSI, as defined by the SCSI-3 Architecture Model, or SAM, treats the cable and physical interconnections to storage as only one level in a larger hierarchy. By separating the commands sent to and from devices from the physical layers and protocols that carry them, you arrive at a more generic approach to storage communication.
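To make SAM's layering concrete, here is a minimal C sketch (an illustration, not code from the article) that builds a READ(10) command descriptor block. The ten-byte layout comes from the SCSI command set; which transport then carries those bytes - parallel SCSI, Fibre Channel, SAS, or iSCSI - is a separate, lower level of the hierarchy.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Build a READ(10) CDB for a given logical block address and
     * transfer length. Field layout follows the SCSI Block Commands
     * specification. */
    static void build_read10(uint8_t cdb[10], uint32_t lba, uint16_t nblocks)
    {
        memset(cdb, 0, 10);
        cdb[0] = 0x28;                    /* READ(10) operation code */
        cdb[2] = (uint8_t)(lba >> 24);    /* LBA, big-endian, bytes 2-5 */
        cdb[3] = (uint8_t)(lba >> 16);
        cdb[4] = (uint8_t)(lba >> 8);
        cdb[5] = (uint8_t)lba;
        cdb[7] = (uint8_t)(nblocks >> 8); /* transfer length, bytes 7-8 */
        cdb[8] = (uint8_t)nblocks;
    }

    int main(void)
    {
        uint8_t cdb[10];
        build_read10(cdb, 0x12345678, 16);
        /* The same ten bytes could be handed to any transport below
         * the command layer; SAM leaves delivery to that layer. */
        for (int i = 0; i < 10; i++)
            printf("%02x ", cdb[i]);
        printf("\n");
        return 0;
    }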
This emerging file-access protocol dramatically enhances the flow of data over a network, making life easier in the data center.
Over the past 20 years we have seen the transformation of storage from a dumb resource with fixed reliability, performance, and capacity to a much smarter resource that can actually play a role in how data is managed. In spite of the increasing capabilities of storage systems, traditional storage management models have made it hard to leverage those capabilities effectively. The net result has been overprovisioning and underutilization. In short, although the promise was that smart shared storage would simplify data management, the reality has been different.
HDDs (hard-disk drives) are like the bread in a peanut butter and jelly sandwich—sort of an unexciting piece of hardware necessary to hold the “software.” They are simply a means to an end. HDD reliability, however, has always been a significant weak link, perhaps the weak link, in data storage. In the late 1980s people recognized that HDD reliability was inadequate for large data storage systems, so redundancy was added at the system level with some brilliant software algorithms, and RAID (redundant array of inexpensive disks) became a reality. RAID moved the reliability requirements from the HDD itself to the system of data disks.
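The core of that system-level redundancy is easy to see in miniature. The C sketch below (with an illustrative block size and disk count, not any particular RAID implementation) computes single parity as the XOR of the data blocks, then rebuilds a lost disk from the survivors:

    #include <stdio.h>
    #include <stdint.h>

    #define NDISKS 4   /* data disks in the stripe */
    #define BLOCK  8   /* bytes per block, tiny for the example */

    int main(void)
    {
        uint8_t data[NDISKS][BLOCK] = {
            "disk 0.", "disk 1.", "disk 2.", "disk 3."
        };
        uint8_t parity[BLOCK] = {0};

        /* Parity is the XOR of every data block in the stripe. */
        for (int d = 0; d < NDISKS; d++)
            for (int i = 0; i < BLOCK; i++)
                parity[i] ^= data[d][i];

        /* Pretend disk 2 failed; rebuild its block by XORing the
         * parity block with the surviving data blocks. */
        uint8_t rebuilt[BLOCK];
        for (int i = 0; i < BLOCK; i++) {
            rebuilt[i] = parity[i];
            for (int d = 0; d < NDISKS; d++)
                if (d != 2)
                    rebuilt[i] ^= data[d][i];
        }
        printf("recovered: %.8s\n", (const char *)rebuilt); /* "disk 2." */
        return 0;
    }

Because any single lost block can be recovered this way, the array as a whole tolerates the failure of any one data disk.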
Data-intensive applications such as data mining, movie animation, oil and gas exploration, and weather modeling generate and process huge amounts of data. File-data access throughput is critical for good performance. To scale well, these HPC (high-performance computing) applications distribute their computation among numerous client machines. HPC clusters can range from hundreds to thousands of clients with aggregate I/O demands ranging into the tens of gigabytes per second.
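The arithmetic behind that last claim is straightforward; the short sketch below uses assumed figures (2,000 clients at 10 MB/s each), purely for illustration, to show how modest per-client demands aggregate:

    #include <stdio.h>

    int main(void)
    {
        int clients = 2000;           /* assumed cluster size */
        double mb_per_client = 10.0;  /* assumed per-client demand, MB/s */
        printf("aggregate demand: %.1f GB/s\n",
               clients * mb_per_client / 1000.0);  /* prints 20.0 GB/s */
        return 0;
    }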
This month ACM Queue speaks with two Sun engineers who are bringing file systems into the 21st century. Jeff Bonwick, CTO for storage at Sun, led development of the ZFS file system, which is now part of Solaris. Bonwick and his co-lead, Sun Distinguished Engineer Bill Moore, developed ZFS to address many of the problems they saw with current file systems, such as data integrity, scalability, and administration. In our discussion this month, Bonwick and Moore elaborate on these points and what makes ZFS such a big leap forward.
Sit down, turn off your cellphone, and prepare to be fascinated. Clear your schedule, because once you've started reading this interview, you won't be able to put it down until you've finished it.
We hear it all the time. The cost of disk space is plummeting.
The concept of a storage device has changed dramatically from the first magnetic disk drive introduced by the IBM RAMAC in 1956 to today's server rooms with detached and fully networked storage servers. Storage has expanded in both large and small directions - up to multi-terabyte server appliances and down to multi-gigabyte MP3 players that fit in a pocket. All use the same underlying technology - the rotating magnetic disk drive - but they quickly diverge from there.
Magnetic disk drives have been at the heart of computer systems since the early 1960s. They brought not only a significant advantage in processing performance, but also a new level of complexity for programmers. The three-dimensional geometry of a disk drive replaced the simple, linear address space of the tape-based programming model.
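That three-dimensional model is easiest to picture as a (cylinder, head, sector) coordinate. The sketch below shows the classic mapping that later flattened it back into a single linear block address; the drive geometry used is illustrative:

    #include <stdio.h>

    /* Standard CHS-to-LBA mapping; sector numbers are 1-based
     * by convention. */
    static unsigned long chs_to_lba(unsigned c, unsigned h, unsigned s,
                                    unsigned heads,
                                    unsigned sectors_per_track)
    {
        return ((unsigned long)c * heads + h) * sectors_per_track + (s - 1);
    }

    int main(void)
    {
        /* e.g., a drive with 16 heads and 63 sectors per track */
        printf("LBA = %lu\n", chs_to_lba(2, 5, 10, 16, 63)); /* 2340 */
        return 0;
    }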