Computer Architecture

Vol. 4 No. 10 – December-January 2006-2007

Articles

Better, Faster, More Secure

Who’s in charge of the Internet’s future?

BRIAN CARPENTER, IBM and INTERNET ENGINEERING TASK FORCE

Since I started a stint as chair of the IETF (Internet Engineering Task Force) in March 2005, I have frequently been asked, “What’s coming next?” but I have usually declined to answer. Nobody is in charge of the Internet, which is a good thing, but it makes predictions difficult (and explains why this article starts with a disclaimer: It represents my views alone and not those of my colleagues at either IBM or the IETF).

The reason the lack of central control is a good thing is that it has allowed the Internet to be a laboratory for innovation throughout its life—and it’s a rare thing for a major operational system to serve as its own development lab. As the old metaphor goes, we frequently change some of the Internet’s engines in flight.

Kode Vicious

Peerless P2P

A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Peer-to-peer networking (better known as P2P) has two faces: the illegal file-sharing face and the legitimate group-collaboration face. While the former, illegal use is still quite prevalent, it gets an undue amount of attention, often hiding the fact that there are developers out there trying to write secure, legitimate P2P applications that provide genuine value in the workplace. While KV probably has a lot to say about file sharing’s dark side, it is to the legal, less controversial incarnation of P2P that he turns his attention this month. Take it away, Vicious…

Dear KV, I’ve just started on a project working with P2P software, and I have a few questions. Now, I know what you’re thinking, and no, this isn’t some copyright-violating piece of kowboy kode. It’s a respectable corporate application that people can use to exchange data such as documents, presentations, and other work-related information. My biggest issue with this project is security: for example, accidentally exposing our users’ data or leaving them open to viruses. There must be more things to worry about, but those are the top two. So I want to ask: “What would KV do?”

by George Neville-Neil

Articles

The Virtualization Reality

Are hypervisors the new foundation for system software?

SIMON CROSBY, XENSOURCE and DAVID BROWN, SUN MICROSYSTEMS

A number of important challenges are associated with the deployment and configuration of contemporary computing infrastructure. Given the variety of operating systems and their many versions—including the often-specific configurations required to accommodate the wide range of popular applications—it has become quite a conundrum to establish and manage such systems.

Significantly motivated by these challenges, but also owing to several other important opportunities it offers, virtualization has recently become a principal focus for computer systems software. It enables a single computer to host multiple operating system stacks, decreasing server count and reducing overall system complexity. EMC’s VMware, an early entrant, is the most visible presence in this space, but more recently XenSource, Parallels, and Microsoft have introduced virtualization solutions. Many of the major systems vendors, such as IBM, Sun, and Microsoft, have efforts under way to exploit virtualization. Virtualization appears to be far more than just another ephemeral marketplace trend. It is poised to deliver profound changes to the way that both enterprises and consumers use computer systems.
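To make this concrete: a hypervisor exposes each guest operating system as a domain that management tooling can enumerate and control. The sketch below is ours, not the authors’; it uses the open source libvirt Python bindings, which manage Xen among other hypervisors, and it assumes libvirt is installed and a local Xen host is running (the connection URI is likewise an assumption).

    import libvirt  # open source virtualization API; assumed installed

    # Minimal sketch: list the guest domains running on one physical host.
    # "xen:///system" is an assumed URI for a local Xen hypervisor.
    conn = libvirt.open("xen:///system")
    try:
        for dom_id in conn.listDomainsID():      # IDs of running domains
            dom = conn.lookupByID(dom_id)
            state, max_mem, mem_kb, vcpus, cpu_time = dom.info()
            print("%s: %d vCPUs, %d MB in use" % (dom.name(), vcpus, mem_kb // 1024))
    finally:
        conn.close()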

Unlocking Concurrency

Multicore programming with transactional memory

ALI-REZA ADL-TABATABAI, INTEL; CHRISTOS KOZYRAKIS, STANFORD UNIVERSITY; BRATIN SAHA, INTEL

Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, “The concurrency revolution is primarily a software revolution. The difficult problem is not building multicore hardware, but programming it in a way that lets mainstream applications benefit from the continued exponential growth in CPU performance.” In this new multicore world, developers must write explicitly parallel applications that can take advantage of the increasing number of cores that each successive multicore generation will provide.

Parallel programming poses many new challenges to the developer, one of which is synchronizing concurrent access to shared memory by multiple threads. Programmers have traditionally used locks for synchronization, but lock-based synchronization has well-known pitfalls. Simplistic coarse-grained locking does not scale well, while more sophisticated fine-grained locking risks introducing deadlocks and data races. Furthermore, scalable libraries written using fine-grained locks cannot be easily composed in a way that retains scalability and avoids deadlock and data races.
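To see the composability problem concretely, consider a sketch (ours, not the authors’) in which each account is individually thread-safe, yet composing two of them in a transfer can deadlock unless every caller agrees on a global lock order:

    import threading

    class Account:
        def __init__(self, balance):
            self.balance = balance
            self.lock = threading.Lock()   # fine-grained: one lock per account

    def transfer(src, dst, amount):
        # Naively taking src.lock then dst.lock deadlocks when two threads
        # transfer in opposite directions at once. The conventional fix is a
        # global acquisition order (here, by object id), which every caller
        # must know about and honor.
        first, second = sorted((src, dst), key=id)
        with first.lock, second.lock:
            src.balance -= amount
            dst.balance += amount

Transactional memory, the subject of this article, aims to remove that burden by letting the programmer mark the whole transfer as atomic and leaving the synchronization strategy to the underlying system.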

Curmudgeon

Will the Real Bots Stand Up?

From EDSAC to iPod—predictions elude us

Stan Kelly-Bootle, Author

When asked which advances in computing technology have most dazzled me since I first coaxed the Cambridge EDSAC 1 into fitful leaps of calculation in the 1950s, I must admit that Apple’s iPod sums up the many unforeseen miracles in one amazing, iconic gadget. Unlike those electrical nose-hair clippers and salt ’n’ pepper mills (batteries not included) that gather dust after a few shakes, my iPod lives literally near my heart, on and off the road, in and out of bed like a versatile lover—except when it’s recharging and downloading in the piracy of my own home.

I was an early iPod convert and remain staggered by the fact that I can pop 40 GB of mobile plug-and-play music and words in my shirt pocket. I don’t really mind if the newer models are 80 GB or slightly thinner or can play movies; 40 GB copes easily with my music and e-lecture needs. Podcasts add a touch of potluck and serendipity-doo-dah. Broadcasts from the American public radio stations that I’ve missed since moving back to England now reach my iPod automatically via free subscriptions and Apple’s iTunes software. I’ve learned to live with that pandemic of “i-catching” prefixes to the point where I’ve renamed Robert Graves’s masterwork “iClaudius,” but I digress.

Interviews

A Conversation with John Hennessy and David Patterson

As authors of the seminal textbook Computer Architecture: A Quantitative Approach (4th edition, Morgan Kaufmann, 2006), John Hennessy and David Patterson probably don’t need an introduction. You’ve probably read them in college or, if you were lucky enough, even attended one of their classes. Since rethinking, and then rewriting, the way computer architecture is taught, both have remained committed to educating a new generation of engineers with the skills to tackle today’s tough problems in computer architecture, Patterson as a professor at Berkeley and Hennessy as a professor, dean, and now president of Stanford University. In addition to teaching, both have made significant contributions to computer architecture research, most notably in the area of RISC (reduced instruction set computing). Patterson pioneered the RISC project at Berkeley, which produced research on which Sun’s SPARC processors (and many others) would later be based. Meanwhile, Hennessy ran a similar RISC project at Stanford in the early 1980s called MIPS. He later commercialized this research, founding MIPS Computer Systems, whose RISC designs eventually made their way into the popular game consoles of Sony and Nintendo.