Multiprocessors

Vol. 3 No. 7 – September 2005

Interviews

A Conversation with Roger Sessions and Terry Coatta

In the December/January 2004-2005 issue of Queue, Roger Sessions set off some fireworks with his article about objects, components, and Web services, and when each should be used (“Fuzzy Boundaries,” 40-47). Sessions is on the board of directors of the International Association of Software Architects, the author of six books, writes the Architect Technology Advisory, and is CEO of ObjectWatch. He has a very object-oriented viewpoint, not necessarily shared by Queue editorial board member Terry Coatta, who disagreed with much of what Sessions had to say in his article. Coatta is an active developer who has worked extensively with component frameworks. He is vice president of products and strategy at Silicon Chalk, a startup software company in Vancouver, British Columbia. Silicon Chalk makes extensive use of Microsoft COM for building its application. Coatta previously worked at Open Text, where he architected CORBA-based infrastructures to support the company’s enterprise products.

We decided to let these two battle it out in a forum that might prove useful to all of our readers. We enlisted another Queue editorial board member, Eric Allman, CTO of Sendmail Inc., to moderate what we expected to be quite a provocative discussion. Our expectations were dead on.

Articles

Extreme Software Scaling

Chip multiprocessors have introduced a new dimension in scaling for application developers, operating system designers, and deployment specialists.

RICHARD MCDOUGALL, SUN MICROSYSTEMS

The advent of SMP (symmetric multiprocessing) added a new degree of scalability to computer systems. Rather than deriving additional performance from an incrementally faster microprocessor, an SMP system leverages multiple processors to obtain large gains in total system performance. Parallelism in software allows multiple jobs to execute concurrently on the system, increasing system throughput accordingly. Given sufficient software parallelism, these systems have proved to scale to several hundred processors.

More recently, a similar phenomenon is occurring at the chip level. Rather than pursue diminishing returns by increasing individual processor performance, manufacturers are producing chips with multiple processor cores on a single die. (See “The Future of Microprocessors,” by Kunle Olukotun and Lance Hammond, in this issue.) For example, the AMD Opteron1 processor now uses two entire processor cores per die, providing almost double the performance of a single-core chip. The Sun Niagara2 processor, shown in figure 1, uses eight cores per die, each of which is further multiplexed across four hardware threads.
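To the operating system, each hardware thread on such a chip appears as a schedulable logical CPU, so it is the thread count, not the core count, that software sees (a Niagara-style part with 8 cores and 4 threads per core presents 32 logical CPUs). A minimal sketch of how a program might discover this, using Python's standard library (our illustration, not from the article):

```python
import os

# Each hardware thread is presented to the scheduler as one logical CPU.
# On an 8-core, 4-thread-per-core chip this would report 32.
# os.cpu_count() can return None on exotic platforms, so default to 1.
logical_cpus = os.cpu_count() or 1
print(f"Logical CPUs visible to the scheduler: {logical_cpus}")
```

Note that this reports logical CPUs only; distinguishing physical cores from hardware threads requires platform-specific queries.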

by Richard McDougall

Curmudgeon

Multicore CPUs for the Masses

Mache Creeger, Emergent Technology Associates

Multicore is the new hot topic in the latest round of CPUs from Intel, AMD, Sun, etc. With clock speed increases becoming more and more difficult to achieve, vendors have turned to multicore CPUs as the best way to gain additional performance. Customers are excited about the promise of more performance through parallel processors for the same real estate investment.

For a handful of popular server-based enterprise applications, that may be true, but for desktop applications I wouldn’t depend on that promise being fulfilled anytime soon. The expectation for multicore CPUs on the desktop is that all our desktop applications will fully use all the processor cores on the chip, with each application gracefully increasing its performance as more processors become available. Just as past increases in clock speed produced across-the-board gains, increasing the number of processor cores should produce similar performance enhancements. It works for the popular enterprise applications, so why not for desktop applications? Sounds reasonable, right? Don’t count on it.
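One standard way to make that skepticism concrete is Amdahl's law, which the column does not cite but which governs this tradeoff: if only a fraction of a program can run in parallel, extra cores quickly stop helping. A quick sketch:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of a program can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A hypothetical desktop app with 30% parallelizable work barely benefits:
for n in (1, 2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.3, n), 2))
# No matter how many cores are added, speedup stays below 1/0.7 ~= 1.43.
```

Server workloads dodge this limit because they run many independent requests at once; a single interactive desktop application usually cannot.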

by Mache Creeger

Articles

The Future of Microprocessors

Chip multiprocessors’ promise of huge performance gains is now a reality.

KUNLE OLUKOTUN AND LANCE HAMMOND, STANFORD UNIVERSITY

The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law,1 and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict,2 because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software. This is depicted in figure 1 for Intel’s processors.

by Kunle Olukotun, Lance Hammond

The Price of Performance

An Economic Case for Chip Multiprocessing

LUIZ ANDRÉ BARROSO, GOOGLE

In the late 1990s, our research group at DEC was one of a growing number of teams advocating the CMP (chip multiprocessor) as an alternative to highly complex single-threaded CPUs. We were designing the Piranha system,1 which was a radical point in the CMP design space in that we used very simple cores (similar to the early RISC designs of the late ’80s) to provide a higher level of thread-level parallelism. Our main goal was to achieve the best commercial workload performance for a given silicon budget.

Today, in developing Google’s computing infrastructure, our focus is broader than performance alone. The merits of a particular architecture are measured by answering the following question: Are you able to afford the computational capacity you need? The high computational demands inherent in most of Google’s services have led us to develop a deep understanding of the overall cost of computing and to look continually for hardware/software designs that optimize performance per unit of cost.

by Luiz André Barroso

Kode Vicious

KV the Konqueror

It’s been a couple of months, and Kode Vicious has finally returned from his summer vacation. We asked him about his travels and the only response we got was this: “The South Pole during winter ain’t all it’s cracked up to be!” Fortunately, he made it back in one piece and is embracing the (Northern hemisphere’s) late summer balminess with a fresh installment of koding kwestions. This month, KV follows up on a security question from a previous column and then revisits one of koding’s most divisive issues: language choice. Welcome back!

Dear KV,
Suppose I’m a customer of Sincere-and-Authentic’s (“Kode Vicious Battles On,” April 2005: 15-17), and suppose the sysadmin at my ISP is an unscrupulous, albeit music-loving, geek. He figured out that I have an account with Sincere-and-Authentic. He put in a filter in the access router to log all packets belonging to a session between me and S&A. He would later mine the logs and retrieve the music, without paying for it. I know this is a far-fetched scenario, but if S&A wants his business secured as watertight as possible, shouldn’t he be contemplating addressing it, too? Yes, of course, S&A will have to weigh the risk against the cost of mitigating it, and he may well decide to live with the risk. But I think your correspondent’s suggestion is at least worthy of a summary debate, not something that should draw disgusted looks! There is, in fact, another advantage to encrypting the payload, assuming that IPsec (Internet Protocol security) isn’t being used: decryption will require special clients, and that will protect S&A that much more against the theft of merchandise.

by George Neville-Neil

Articles

Software and the Concurrency Revolution

Leveraging the full power of multicore processors demands new tools and new thinking from the software industry.

Concurrency has long been touted as the “next big thing” and “the way of the future,” but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software. The introductory article in this issue (“The Future of Microprocessors” by Kunle Olukotun and Lance Hammond) describes the hardware imperatives behind this shift in computer architecture from uniprocessors to multicore processors, also known as CMPs (chip multiprocessors). (For related analysis, see “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software.”)
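The shift the authors describe starts with decomposing work into independent units that can run concurrently. That pattern can be sketched with Python's standard thread pool (a toy illustration of ours, not the authors' code; the data and chunk size are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # Stand-in for real per-chunk work (parsing, compression, etc.).
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

# A serial loop uses one core no matter how many exist; handing
# independent chunks to a pool lets them be scheduled across cores.
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(process, chunks))
```

One caveat: in CPython, threads will not speed up CPU-bound pure-Python work because of the global interpreter lock; swapping in ProcessPoolExecutor (with module-level worker functions) is the usual route to real multicore gains, but the decomposition is the same.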

by Herb Sutter, James Larus