Sizing Your System:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.
Dear KV, I’m working on a network server that gets into the situation you called livelock in a previous response to a letter (Queue May/June 2008). Our problem is that our system has only a fixed amount of memory to receive network data, but the system is frequently overwhelmed and can’t make progress. When I ask our application engineers about how much data they expect, the only answer I get is "a lot," which isn’t much help. How can I figure out how to size our systems appropriately?
Enterprise SSDs:
Solid-state drives are finally ready for the enterprise. But beware, not all SSDs are created alike.
For designers of enterprise systems, ensuring that hardware performance keeps pace with application demands is a mind-boggling exercise. The most troubling performance challenge is storage I/O. Spinning media, while exceptional in scaling areal density, will unfortunately never keep pace with I/O requirements. The most cost-effective way to break through these storage I/O limitations is by incorporating high-performance SSDs (solid-state drives) into the systems.
The Five-Minute Rule 20 Years Later: and How Flash Memory Changes the Rules:
The old rule continues to evolve, while flash memory adds two new rules.
In 1987, Jim Gray and Gianfranco Putzolu published their now-famous five-minute rule for trading off memory and I/O capacity. Their calculation compares the cost of holding a record (or page) permanently in memory with the cost of performing disk I/O each time the record (or page) is accessed, using appropriate fractions of prices for RAM chips and disk drives. The name of their rule refers to the break-even interval between accesses. If a record (or page) is accessed more often, it should be kept in memory; otherwise, it should remain on disk and read when needed.
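The break-even interval described above can be sketched as a one-line calculation. The figures below are illustrative assumptions chosen to land near the rule's original ballpark, not Gray and Putzolu's exact 1987 prices:

```python
def break_even_interval_s(pages_per_mb_ram, price_per_disk,
                          accesses_per_s_per_disk, price_per_mb_ram):
    """Seconds between accesses at which keeping a page in RAM
    costs the same as re-reading it from disk on each access."""
    cost_per_access_per_s = price_per_disk / accesses_per_s_per_disk
    cost_per_page_in_ram = price_per_mb_ram / pages_per_mb_ram
    return cost_per_access_per_s / cost_per_page_in_ram

# Illustrative 1987-style inputs (assumptions, not the paper's figures):
# 1-KB pages (1024 per MB), a $15,000 drive doing 15 accesses/s,
# and RAM at $5,120 per MB.
interval = break_even_interval_s(1024, 15_000, 15, 5_120)
print(f"{interval:.0f} seconds")  # on the order of minutes
```

A page accessed more often than once per `interval` seconds is cheaper to pin in memory; less often, and disk I/O wins.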
A Conversation with Steve Bourne, Eric Allman, and Bryan Cantrill:
In part one of a two-part series, three Queue editorial board members discuss the practice of software engineering.
In their quest to solve the next big computing problem or develop the next disruptive technology, software engineers rarely take the time to look back at the history of their profession. What’s changed? What hasn’t changed? In an effort to shed light on these questions, we invited three members of ACM Queue’s editorial advisory board to sit down and offer their perspectives on the continuously evolving practice of software engineering.
A Pioneer’s Flash of Insight:
Jim Gray’s vision of flash-based storage anchors this issue’s theme.
In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim’s best works in the months to come. I’m embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim’s seminal work on databases, showing only that I (unlike everyone else on the Queue editorial board) never knew Jim. In an attempt to learn more about both his work and Jim himself, I attended the tribute held for him at UC Berkeley in May.
Flash Disk Opportunity for Server Applications:
Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks.
NAND flash densities have been doubling each year since 1996. Samsung announced that its 32-gigabit NAND flash chips would be available in 2007. This is consistent with Chang-gyu Hwang’s flash memory growth model [1], which holds that NAND flash densities will double each year until 2010. Hwang recently extended that 2003 prediction to 2012, suggesting 64 times the current density: 250 GB per chip. This is hard to credit, but Hwang and Samsung have delivered 16 times since his 2003 article, when 2-GB chips were just emerging. So, we should be prepared for the day when a flash drive is a terabyte(!). As Hwang points out in his article, mobile and consumer applications, rather than the PC ecosystem, are pushing this technology.
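Hwang's one-doubling-per-year model is simple compound growth. A minimal sketch, assuming a roughly 4-GB chip as the baseline (consistent with the "64 times" figure above, since 2^6 = 64 over six years):

```python
def projected_chip_gb(base_gb, base_year, target_year):
    """Projected flash chip capacity under a doubling-per-year model."""
    doublings = target_year - base_year
    return base_gb * 2 ** doublings

# A hypothetical 4-GB chip in 2006, projected out to 2012:
print(projected_chip_gb(4, 2006, 2012))  # 256 GB, near the ~250 GB cited
```

The exponent is what makes the prediction "hard to credit": each extra year of the model doubles the target capacity again.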
Flash Storage Today:
Can flash memory become the foundation for a new tier in the storage hierarchy?
The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded, and continues to expand, especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.
The Fabrication of Reality:
Is there an "out there" out there?
There are always anniversaries, real or concocted, to loosen the columnist’s writer’s block and/or justify the intake of alcohol. I’ll drink to that, and to the fact that we are blessed with a reasonably regular solar system providing a timeline of annual increments against which we can enumerate and toast past events. Hic semper hic. When the drinking occurs in sporadic and excessive bursts, it becomes known, disapprovingly, as "bingeing." I’m tempted to claim that this colorful Lincolnshire dialect word binge, meaning soak, was first used in the boozing-bout sense exactly 200 years ago. And that, shurely, calls for a schelebration.