


File Systems and Storage



Originally published in Queue vol. 7, no. 11
See this item in the ACM Digital Library





Related:

Pat Helland - Mind Your State for Your State of Mind
The interactions between storage and applications can be complex and subtle.


Alex Petrov - Algorithms Behind Modern Storage Systems
Different uses for read-optimized B-trees and write-optimized LSM-trees


Mihir Nanavati, Malte Schwarzkopf, Jake Wires, Andrew Warfield - Non-volatile Storage
Implications of the Datacenter's Shifting Center


Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau - Crash Consistency
Rethinking the Fundamental Abstractions of the File System



Comments

(newest first)

alex thomasian | Sun, 05 Nov 2017 16:10:19 UTC

Hi Adam, is there a RAID-7 array? Does it use a Reed-Solomon code or something else? Is there a paper describing it? Thanks, Alex
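For context on the parity math behind such schemes: dual-parity RAID-6 already uses a Reed-Solomon-style construction over GF(2^8), and triple-parity designs extend it with a third syndrome. Below is a minimal sketch of the P/Q computation; the function names and toy data are mine, and real implementations such as Linux md use lookup tables and SIMD rather than bitwise loops.

```python
# Sketch of RAID-6 style P/Q parity over GF(2^8) -- the Reed-Solomon-like
# dual-parity scheme. Illustrative only, not production code.

def gf_mul(a, b, poly=0x11D):
    """Multiply in GF(2^8), reducing by the polynomial used by Linux md."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def pq_parity(stripe):
    """P is plain XOR (as in RAID-5); Q weights byte i by the generator 2**i."""
    p = q = 0
    g = 1  # 2**i in GF(2^8)
    for d in stripe:
        p ^= d
        q ^= gf_mul(d, g)
        g = gf_mul(g, 2)
    return p, q

# Losing any single data byte is recoverable from P alone; losing two
# requires solving the P/Q pair (omitted here for brevity).
stripe = [0x12, 0x34, 0x56]
p, q = pq_parity(stripe)
print(hex(p ^ stripe[0] ^ stripe[2]))  # recovers stripe[1]: 0x34
```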


Michael Casavant | Tue, 20 May 2014 13:50:11 UTC

"Consider how strange this is in the context of RAID-10 (which is absent from the analysis above). By the time we get to triple parity we are using almost as much capacity for redundancy as RAID-10, but RAID-10 rebuilds orders of magnitude faster, which short-cuts the problem entirely!"

I'm sorry, but RAID-6 still trumps RAID-10. In RAID-6 you can lose *ANY* 2 drives and still be OK. With RAID-10, if you lose both drives of the same mirror pair you have total data loss.

The real question when choosing between the two is write speed.
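The trade-off in this comment can be checked with a toy enumeration (the 8-drive layout is my illustrative choice, not from the article): RAID-6 survives every possible two-drive failure, while RAID-10 dies on exactly those pairs that fall in the same mirror.

```python
# Enumerate all two-drive failures for an 8-drive RAID-10
# (4 mirror pairs) and count the fatal ones. RAID-6 on the
# same drives survives any two failures, so its count is 0.
from itertools import combinations

drives = range(8)
mirror_pair = lambda d: d // 2  # drives (0,1), (2,3), ... are mirrors

raid10_fatal = sum(1 for a, b in combinations(drives, 2)
                   if mirror_pair(a) == mirror_pair(b))
total = len(list(combinations(drives, 2)))

print(raid10_fatal, total)  # 4 of 28 two-drive combinations are fatal
```

So roughly 1 in 7 random double failures kills this RAID-10 layout, which is the risk the commenter is weighing against RAID-10's much shorter rebuild window.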


Alex Gerulaitis | Fri, 24 May 2013 02:53:50 UTC

Mr. Leventhal, if we were to summarize the analysis of the reliability of RAID-6, would it be fair to say it's based purely on the NetApp paper, which was "not specific about the bit error rates of the devices tested, the reliability of the drives themselves, or the length of the period over which the probability of data loss is calculated"? In your opinion, how trustworthy is this comparison, then?

Thank you.


TomK | Fri, 31 Aug 2012 04:31:37 UTC

Something I put together that didn't really agree with many of the common themes on the web about RAID. In fact, I expected much worse, so I was more than surprised. Of course, time will tell, but curiosity nonetheless:

http://www.microdevsys.com/WordPress/2012/04/02/linux-htpc-home-backup-mdadm-raid6-lvm-xfs-cifs-and-nfs/

Cheers,


justin | Thu, 02 Aug 2012 17:36:14 UTC

"Typically the RAID stripe width (the number of disks within a single RAID group) for RAID-6 is double that of a RAID-5 equivalent; thus, the number of data disks remains the same."

"Beyond RAID-5 and -6, what are the implications for RAID-1, simple two-way mirroring? RAID-1 can be viewed as a degenerate form of RAID-5, so even if bit error rates improve at the same rate as hard-drive capacities, the time to repair for RAID-1 could become debilitating. How secure would an administrator be running without redundancy for a week-long scrub?"

Did you even READ the whole article?
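The stripe-width claim quoted in this comment is simple arithmetic to verify (the 7+1 and 14+2 group widths below are my illustrative choices, not figures from the article): doubling the RAID-6 group width keeps the usable-capacity fraction of the RAID-5 group while adding the second parity drive.

```python
# Doubling the group width when moving RAID-5 -> RAID-6
# keeps the data fraction the same (illustrative widths).
raid5_data, raid5_total = 7, 8      # a 7+1 RAID-5 group
raid6_data, raid6_total = 14, 16    # a 14+2 RAID-6 group, double the width

assert raid5_data / raid5_total == raid6_data / raid6_total
print(raid5_data / raid5_total)  # 0.875 usable in both cases
```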


Clock$peedy | Tue, 17 Jul 2012 00:17:50 UTC

It would be interesting if the author had included RAID-1/10 in his analysis. Parity-based RAID is on a death-march. The case for ever increasing parity depths falls apart when compared to RAID-10.

Consider the argument in favor of the death-march:

"RAID-5 takes too long to rebuild; the chance of a second drive failure increases with rebuild time, which increases with drive capacity. Therefore we must have double parity (RAID-6), but then drive capacities increase further, and now we need triple parity..."

Consider how strange this is in the context of RAID-10 (which is absent from the analysis above). By the time we get to triple parity we are using almost as much capacity for redundancy as RAID-10, but RAID-10 rebuilds orders of magnitude faster, which short-cuts the problem entirely!

Parity-based RAID is an exercise in tail-chasing.
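The rebuild-time gap this comment leans on can be sketched with back-of-envelope numbers (the capacities and throughputs below are my assumptions, purely illustrative): a mirror rebuild is a single-drive sequential copy, while a parity rebuild must read every surviving drive in the group and is often throttled to protect foreground I/O.

```python
# Back-of-envelope rebuild times (assumed, illustrative figures).
TB = 10**12

def rebuild_hours(capacity_bytes, throughput_mb_s):
    """Hours to rewrite a replacement drive at the given sustained rate."""
    return capacity_bytes / (throughput_mb_s * 10**6) / 3600

# 16 TB drive: full-speed mirror copy vs a parity rebuild
# throttled to 20 MB/s by foreground load.
print(round(rebuild_hours(16 * TB, 200), 1))  # mirror copy: ~22.2 h
print(round(rebuild_hours(16 * TB, 20), 1))   # throttled parity rebuild: ~222.2 h
```

A tenfold throughput difference means a tenfold longer window of reduced redundancy, which is the "orders of magnitude" gap the comment above appeals to.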









© 2018 ACM, Inc. All Rights Reserved.