File Systems and Storage





Originally published in Queue vol. 10, no. 9



Pat Helland - Mind Your State for Your State of Mind
The interactions between storage and applications can be complex and subtle.

Alex Petrov - Algorithms Behind Modern Storage Systems
Different uses for read-optimized B-trees and write-optimized LSM-trees

Mihir Nanavati, Malte Schwarzkopf, Jake Wires, Andrew Warfield - Non-volatile Storage
Implications of the Datacenter's Shifting Center

Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau - Crash Consistency
Rethinking the Fundamental Abstractions of the File System


Comments (newest first)

Robert Thompson | Tue, 15 Jan 2013 21:15:55 UTC

In terms of desktop drives and NCQ (and often sync-write as well), it's not uncommon to see desktop-market drives that break the spec by reporting a write as successfully completed as soon as it hits the cache layer, rather than delaying until after it has hit persistent media. I once got bitten badly by a Samsung Spinpoint that did this...

Many of the older hard drives were conceptualized more as a "fast-seek tape drive" than as the sector-oriented disk store we have come to expect. In many cases, the hard drive option was an (expensive) upgrade to tape storage and needed to be drop-in compatible with software that expected normal tape-drive behavior. I have seen a few old references to certain drives having a specified "N feet of tape, instant-seek" equivalent capacity.

Robert Young | Sun, 25 Nov 2012 00:16:11 UTC

Well, the IBM mainframe standard is CKD (Count-Key-Data), dating from at least the System/370, if not the System/360. Such drives have no hard sectors, only tracks. From what I've read, IBM uses firmware to emulate CKD storage on the commodity hard-sectored "PC" drives it now uses.

Tom Gardner | Sun, 18 Nov 2012 00:40:35 UTC

The article is incorrect in stating that "From the time of their first availability in the 1950s until about 2010, the sector size on disks has been 512 bytes." The first disk drive, the RAMAC 350, had a fixed sector size of 100 six-bit characters. IBM mainframe disks supported variable sector (i.e., record) sizes from 1964 into the early 1990s. DEC supported a variety of sector sizes into the 1980s, only some of which were 512 bytes. The 512-byte sector became a de facto standard in the 1990s, driven by the confluence of the IDE interface's success with its 512-byte sector and the change to sampled-data servos.

earli | Wed, 26 Sep 2012 10:39:45 UTC

> In the real world, many of the drives targeted to the desktop market do not implement the NCQ specification.

What exactly do you mean by that? It could mean that they build SATA disks without even considering that feature. It could also mean that the SATA disks that advertise that feature do not comply with it properly.

For example: I got a standard hard disk with my cheap desktop PC last year. The disk manufacturer tells me: [1]

> Since late 2004, most new SATA drive families have supported NCQ.

The specification sheet for my disk also mentions NCQ as a feature. Does it comply or not?


ChadF | Wed, 12 Sep 2012 07:05:43 UTC

You left out a whole chapter (well, a section) on how even older drives lied about their head/track/cylinder layout. Before there was LBA mode, file systems would tune their access patterns to optimize rotational timing, which would have been wrong on the "newer" drives of the time.
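For reference, the geometry-to-linear mapping those old file systems tuned against can be sketched as follows (function name my own; assumes the classic convention that sectors are numbered from 1 while cylinders and heads start at 0). A drive reporting a fake geometry makes every rotational-timing optimization built on this mapping meaningless.

```python
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Translate a CHS address to a logical block address.

    Sectors are 1-based in CHS addressing; cylinders and heads
    are 0-based, so the first sector of the disk maps to LBA 0.
    """
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))
```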

adrian | Mon, 10 Sep 2012 11:26:52 UTC

Disks may lie, but the marketing people are worse: they have been lying about storage capacities since the appearance of the gigabyte (2^30 = 1,073,741,824 bytes, or is it 10^9?), and it only gets worse with the terabyte (2^40 = 1,099,511,627,776 bytes versus 10^12).
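The gap the comment above complains about is easy to quantify (a small sketch; function name my own): the decimal unit falls short of the binary unit by about 6.9% at the gigabyte and about 9.1% at the terabyte, and the shortfall grows with each prefix.

```python
def marketing_shortfall(prefix_power: int) -> float:
    """Fraction lost when the decimal unit 10^(3k) is compared
    against the binary unit 2^(10k), e.g. k=3 for giga, k=4 for tera."""
    decimal = 10 ** (3 * prefix_power)
    binary = 2 ** (10 * prefix_power)
    return 1 - decimal / binary
```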

Kurt Lidl | Sun, 09 Sep 2012 02:56:02 UTC

@John: Both LSI and Dell have announced disk controllers that use MRAM as the non-volatile storage area for the cache. MRAM doesn't need a battery backup; it retains state in the spin of the magnetic cells. It also doesn't degrade the way flash memory degrades over time, due to the destructive nature of the block-erase operation in flash. The downside to MRAM is the relatively small size of the parts available today.

There's a press release from last year here that gives a vague indication of the design wins from the MRAM manufacturer:

Igor | Sun, 09 Sep 2012 02:32:53 UTC

Very interesting article, thanks! How can the bit responsible for correct behavior of SATA drives with NCQ be set to ensure correct behavior at the disk-drive level in case of a power loss (in Linux 2.6)? And how can I check that the driver is actually using this bit correctly (and what it's set to for a particular drive)?

Marshall Kirk McKusick | Sat, 08 Sep 2012 16:59:31 UTC

@John: "could you give me some examples of sata disks or controllers using the method you stated?"

Nonvolatile memory is mostly found in high-end products such as SAN storage arrays, though I have come across one RAID controller by Adaptec that had battery-backed memory.

I do consider the use of super-capacitors to keep the memory stable long enough to get it written out to be a legitimate form of non-volatile memory. I have only seen this approach used in flash-memory-based disks, probably because it is not practical to store enough energy to keep a traditional disk spinning long enough to get its cache written to it.

Emmanuel Florac | Sat, 08 Sep 2012 14:02:55 UTC

About SandForce SSDs: note that they may be cacheless, but they also implement block deduplication (called "DuraWrite" in marketing speak). Therefore the actual failure of a block may impact many different files.


© 2018 ACM, Inc. All Rights Reserved.