Computer Architecture

Vol. 4 No. 10 – December-January 2006-2007

Better, Faster, More Secure:
Who’s in charge of the Internet’s future?

Since I started a stint as chair of the IETF in March 2005, I have frequently been asked, “What’s coming next?” but I have usually declined to answer. Nobody is in charge of the Internet, which is a good thing, but it makes predictions difficult. The reason the lack of central control is a good thing is that it has allowed the Internet to be a laboratory for innovation throughout its life—and it’s a rare thing for a major operational system to serve as its own development lab. As the old metaphor goes, we frequently change some of the Internet’s engines in flight.

by Brian Carpenter

Peerless P2P:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’ve just started on a project working with P2P software, and I have a few questions. Now, I know what you’re thinking, and no, this isn’t some copyright-violating piece of kowboy kode. It’s a respectable corporate application that people will use to exchange data such as documents, presentations, and other work-related information. My biggest issue with this project is security: for example, accidentally exposing our users’ data or leaving them open to viruses. There must be more things to worry about, but those are the top two. So, I have to ask: “What would KV do?”

by George Neville-Neil

The Virtualization Reality:
Are hypervisors the new foundation for system software?

A number of important challenges are associated with the deployment and configuration of contemporary computing infrastructure. Given the variety of operating systems and their many versions—including the often-specific configurations required to accommodate the wide range of popular applications—it has become quite a conundrum to establish and manage such systems.

by Simon Crosby, David Brown

Unlocking Concurrency:
Multicore programming with transactional memory

Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, “The concurrency revolution is primarily a software revolution. The difficult problem is not building multicore hardware, but programming it in a way that lets mainstream applications benefit from the continued exponential growth in CPU performance.” In this new multicore world, developers must write explicitly parallel applications that can take advantage of the increasing number of cores that each successive multicore generation will provide.

by Ali-Reza Adl-Tabatabai, Christos Kozyrakis, Bratin Saha
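
The article explores programming with atomic transactions rather than locks. As one concrete taste of that style (a minimal sketch using Haskell’s stm library, chosen for this note; the article itself is language-agnostic and this is not the authors’ own example):

    -- Minimal software-transactional-memory sketch using GHC's stm package.
    import Control.Concurrent.STM

    -- Move 'amount' between two shared counters. The whole block commits
    -- atomically; transactions that conflict are transparently retried.
    transfer :: TVar Int -> TVar Int -> Int -> IO ()
    transfer from to amount = atomically $ do
      balance <- readTVar from
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      transfer a b 30
      final <- atomically ((,) <$> readTVar a <*> readTVar b)
      print final  -- prints (70,30)

Unlike lock-based code, two such transfers composed inside a single atomically block remain one indivisible transaction, which is the composability argument the article develops.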

Will the Real Bots Stand Up?:
From EDSAC to iPod—predictions elude us

When asked which advances in computing technology have most dazzled me since I first coaxed the Cambridge EDSAC into fitful leaps of calculation in the 1950s, I must admit that Apple’s iPod sums up the many unforeseen miracles in one amazing, iconic gadget. Unlike those electrical nose-hair clippers and salt ’n’ pepper mills that gather dust after a few shakes, my iPod lives literally near my heart, on and off the road, in and out of bed like a versatile lover—except when it’s recharging and downloading in the piracy of my own home.

by Stan Kelly-Bootle

A Conversation with John Hennessy and David Patterson:
They wrote the book on computing.

As authors of the seminal textbook “Computer Architecture: A Quantitative Approach,” John Hennessy and David Patterson need no introduction. You’ve probably read their book in college or, if you were lucky, even attended one of their classes. Since rethinking, and then rewriting, the way computer architecture is taught, both have remained committed to educating a new generation of engineers with the skills to tackle today’s tough problems in computer architecture: Patterson as a professor at Berkeley, and Hennessy as a professor, dean, and now president of Stanford University.