(newest first)

  • Robert Thompson | Tue, 15 Jan 2013 21:15:55 UTC

    In terms of desktop drives and NCQ (and often sync-write as well), it's not uncommon to see desktop-market drives that break the spec by reporting a write as successfully completed as soon as it hits the cache layer, rather than waiting until it has hit persistent media. I once got bitten badly by a Samsung Spinpoint that did this...
    Many of the older hard drives were conceptualized more as a "fast-seek tape drive" than as the sector-oriented disk store we have come to expect. In many cases, the hard drive option was an (expensive) upgrade to tape-array storage and needed to be drop-in compatible with software that expected normal tape-drive behavior. I have seen a few old references to certain drives having a specified "N feet of tape, instant-seek" equivalent capacity.
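    [Ed.: The failure mode described above, a drive acknowledging a write while the data sits only in its volatile cache, undermines exactly the guarantee applications ask for. A minimal Python sketch of the write-then-flush sequence, with hypothetical file contents; note that a successful fsync() is the strongest request an application can make, yet a "lying" drive can still defeat it:]

```python
import os
import tempfile

# Write a block and ask the OS to push it all the way to the device.
# If the drive acknowledges the transfer while the data is still only
# in its volatile cache, fsync() returns success even though a power
# loss at this instant could still lose the data.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"critical metadata block")
    os.fsync(fd)  # flushes OS buffers; a spec-breaking drive may still hold the data in cache
finally:
    os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
```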
  • Robert Young | Sun, 25 Nov 2012 00:16:11 UTC

    Well, the IBM mainframe standard is CKD (Count-Key-Data), from at least the 370 if not the 360. Such drives have no hard sectors, only tracks. From what I've read, IBM has firmware to emulate CKD storage on the commodity hard-sectored "PC" drives they now use.
  • Tom Gardner | Sun, 18 Nov 2012 00:40:35 UTC

    The article is incorrect in stating that, "From the time of their first availability in the 1950s until about 2010, the sector size on disks has been 512 bytes." The first disk drive, the RAMAC 350, had a fixed sector size of 100 six-bit characters. IBM mainframe disks supported variable sector (i.e., record) size from 1964 into the early 1990s. DEC supported a variety of sector sizes into the 1980s, only some of which were 512 bytes. The 512-byte sector became a de facto standard in the 1990s, driven by the confluence of the IDE interface's success with its 512-byte sector and the change to sampled-data servos.
  • earli | Wed, 26 Sep 2012 10:39:45 UTC

    > In the real world, many of the drives targeted to the desktop market do not implement the NCQ specification.
    What exactly do you mean by that? It could mean that they build SATA disks without even considering that feature. It could also mean that SATA disks which advertise the feature do not comply with it properly.
    For example: I got a standard hard disk with my cheap desktop PC last year. The disk manufacturer tells me: [1]
    > Since late 2004, most new SATA drive families have supported NCQ.
    The specification sheet for my disk also mentions NCQ as a feature. Does it comply or not?
  • ChadF | Wed, 12 Sep 2012 07:05:43 UTC

    You left out a whole chapter (well, section) on how even older drives lied about their head/track/cylinder layout before there was LBA mode; filesystems would tune their access patterns to optimize for rotational timing, which would have been wrong on the "newer" drives of the time.
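    [Ed.: For readers who never met the geometry games ChadF mentions, the classic CHS-to-LBA translation can be sketched as follows. The 16-head, 63-sectors-per-track geometry is a hypothetical example of the kind of logical geometry translated drives reported, not any particular drive's real layout:]

```python
def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    """Classic CHS -> LBA translation (sector numbers are 1-based)."""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# With a reported 16-head, 63-sector logical geometry:
print(chs_to_lba(0, 0, 1, 16, 63))   # 0: first sector of the disk
print(chs_to_lba(1, 0, 1, 16, 63))   # 1008: one full cylinder (16 * 63) later
```

A filesystem that tuned its layout around such numbers was optimizing for a geometry the drive had invented.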
  • adrian | Mon, 10 Sep 2012 11:26:52 UTC

    Disks may lie, but the marketing people are worse: they have been lying about storage capacities since the appearance of the gigabyte (is that 2^30 = 1,073,741,824 bytes, or 10^9?), and it only gets worse with the terabyte (2^40 = 1,099,511,627,776 bytes versus 10^12).
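    [Ed.: The size of that discrepancy is easy to quantify:]

```python
# Marketing gigabyte (10^9 bytes) vs. binary gibibyte (2^30 bytes).
GB = 10**9
GiB = 2**30
print(GiB - GB)   # 73741824 bytes "missing" from every marketed gigabyte
print(GB / GiB)   # ~0.9313: a "500 GB" drive shows up as ~465 GiB

# The gap widens with each prefix:
TB = 10**12
TiB = 2**40
print(TB / TiB)   # ~0.9095: a "1 TB" drive is only ~0.91 TiB
```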
  • Kurt Lidl | Sun, 09 Sep 2012 02:56:02 UTC

    Both LSI and Dell have announced disk controllers that use MRAM as the nonvolatile storage area for the cache. MRAM doesn't need a battery backup; it retains state in the spin of the magnetic cells. It also doesn't degrade the way flash memory degrades over time due to the destructive nature of flash's block-erase operation. The downside to MRAM is the relatively small size of the parts available today.
    There's a press release from last year here that gives a vague indication of the design wins from the MRAM manufacturer:
  • Igor | Sun, 09 Sep 2012 02:32:53 UTC

    Very interesting article - thanks!
    How can the bit responsible for correct NCQ behavior on SATA drives be set to ensure correct behavior at the disk-drive level in case of a power loss (in Linux 2.6)?
    And how can I check that the driver is actually using this bit correctly (and what it is set to for a particular drive)?
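    [Ed.: One commonly used diagnostic, offered here as an assumption rather than as the author's answer, is hdparm, which works on most ATA/SATA drives; exact behavior depends on the drive and kernel:]

```shell
# Query the drive's volatile write-cache setting (requires root):
hdparm -W /dev/sda

# Disable the write cache so that command completion implies the data
# is on media (at a cost in throughput), or re-enable it:
hdparm -W0 /dev/sda
hdparm -W1 /dev/sda

# On 2.6 kernels, ext3 can also be told to issue cache-flush barriers:
mount -o remount,barrier=1 /
```

These are hardware-dependent administrative commands; /dev/sda and the mount point are placeholders for your own devices.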
  • Marshall Kirk McKusick | Sat, 08 Sep 2012 16:59:31 UTC

    "Could you give me some examples of SATA disks or controllers using the method you stated?"
    Nonvolatile memory is mostly found in high-end products such as SAN storage arrays, though I have come across one Adaptec RAID controller that had battery-backed memory.
    I do consider the use of supercapacitors, to keep the memory stable long enough to get it written out, a legitimate form of nonvolatile memory. I have only seen this approach used for flash-memory-based disks, probably because it is not practical to store enough energy to keep a traditional disk spinning long enough to get its cache written to it.
  • Emmanuel Florac | Sat, 08 Sep 2012 14:02:55 UTC

    About SandForce SSDs: note that they may be cacheless, but they also implement block deduplication (called "DuraWrite" in marketing speak). Therefore the failure of a single physical block may impact many different files.
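    [Ed.: A toy content-addressed store illustrates why deduplication widens the blast radius of a bad block. This structure is an editorial simplification, not SandForce's actual design:]

```python
import hashlib

store = {}   # digest -> block data (the single physical copy)
files = {}   # filename -> list of digests referencing stored blocks

def write_file(name, blocks):
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # dedup: identical blocks stored once
        refs.append(digest)
    files[name] = refs

# Two files share one block of content:
write_file("a.txt", [b"common header", b"payload A"])
write_file("b.txt", [b"common header", b"payload B"])

print(len(store))   # 3 physical blocks back 4 logical blocks

# Losing the shared physical block damages every file referencing it:
shared = hashlib.sha256(b"common header").hexdigest()
print(sum(shared in refs for refs in files.values()))   # 2 files affected
```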
  • Keith McCready | Sat, 08 Sep 2012 02:25:26 UTC

    @ John,
    buffer = "small" cache.
    Even SandForce uses the method you mention in their eval unit:
    "Power failure protection:  Polymer capacitor or super-capacitor circuit"
    SmartMedia is/was a type of "controller-less" flash with no buffers (other than the current page write/read buffer), but all NAND flash memories now have at least a one-page buffer (and that size has grown over the years). NOR flash you could write one byte at a time; I used to do that back in 1991, when flash cost about $120/MB.
  • John | Fri, 07 Sep 2012 22:00:43 UTC

    SSDs using the SandForce controllers are cacheless. Most of the big names in SSDs have been using these controllers, which means a huge portion of SSDs are cacheless. SandForce has its own data-integrity quirks, but cache isn't one of them.
    Calling flash more of a liar is a joke. Most who understand it know that the SATA and even SCSI/SAS protocols were never developed with that tech in mind. It has to emulate a disk, or "lie," or you wouldn't be able to use it. No other option allowed the tech to get to market as fast. Trying to get Microsoft to rewrite Windows' storage stack would be a pain, and trying to get motherboard makers and BIOS/firmware programmers to create a new spec and make it widely compatible would also have taken forever, and most likely given you bigger headaches than we have with SSDs on SATA today.
  • John | Fri, 07 Sep 2012 21:51:50 UTC

    "Some vendors eliminate this problem by using nonvolatile memory for the track cache": it is interesting that you state this as a common solution. IMHO, most cache protection is implemented via an "optional" backup-battery add-on for midrange and high-end HBA/RAID controllers (even in the SATA world).
    Some SSDs and HDDs have also used supercapacitors to allow cache retention for a finite period in the event of power loss. I guess you could call that using nonvolatile memory, but it isn't the standard definition.
    Could you give me some examples of SATA disks or controllers using the method you stated?
  • Keith McCready | Fri, 07 Sep 2012 21:50:45 UTC

    "...eschew those lying disks and switch to using flash-memory technology, unless, of course, the flash storage starts using the same cost-cutting tricks."
    Flash-based SD cards (byte addressable) and SDHC (block addressable) have likewise been lying to us for some time now. Moreover, writable "page sizes" are generally much smaller than erasable "block sizes" (so you can write in smaller increments than you can rewrite in). SD(HC) cards and SSDs come with built-in controllers and buffers (cache) that are even more of a "black box" than any hard drive. IMHO, if disks are liars, flash is an even better liar. See for example NAND block size:
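    [Ed.: The page-versus-block asymmetry described above can be modeled in a few lines. The sizes below, 4 KiB pages and 64-page erase blocks, are hypothetical but representative:]

```python
PAGE = 4096           # writable unit
PAGES_PER_BLOCK = 64  # erasable unit: 64 pages = 256 KiB

class NandBlock:
    """Toy erase block: pages can only be programmed when erased."""
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None means erased

    def program(self, i, data):
        # Real NAND cannot rewrite a programmed page in place.
        if self.pages[i] is not None:
            raise ValueError("page must be erased before programming")
        self.pages[i] = data

    def erase(self):
        # Erasure happens only at whole-block granularity.
        self.pages = [None] * PAGES_PER_BLOCK

blk = NandBlock()
blk.program(0, b"v1")
try:
    blk.program(0, b"v2")   # in-place update: forbidden
except ValueError as e:
    print(e)
blk.erase()                  # must wipe all 64 pages...
blk.program(0, b"v2")        # ...just to rewrite one of them
```

The controller hides this read-modify-erase-write cycle behind a 512-byte-sector interface, which is exactly the "lying" being discussed.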