Multiprocessors

Vol. 3 No. 7 – September 2005


A Conversation with Roger Sessions and Terry Coatta:
The difference between objects and components? That’s debatable.

In the December/January 2004-2005 issue of Queue, Roger Sessions set off some fireworks with his article about objects, components, and Web services, and when each should be used (“Fuzzy Boundaries,” 40-47). Sessions is on the board of directors of the International Association of Software Architects, has written six books, writes the Architect Technology Advisory, and is CEO of ObjectWatch. He has a very object-oriented viewpoint, not necessarily shared by Queue editorial board member Terry Coatta, who disagreed with much of what Sessions had to say in his article. Coatta is an active developer who has worked extensively with component frameworks. He is vice president of products and strategy at Silicon Chalk, a startup software company in Vancouver, British Columbia. Silicon Chalk makes extensive use of Microsoft COM in building its application. Coatta previously worked at Open Text, where he architected CORBA-based infrastructures to support the company’s enterprise products.

Extreme Software Scaling:
Chip multiprocessors have introduced a new dimension in scaling for application developers, operating system designers, and deployment specialists.

The advent of SMP (symmetric multiprocessing) added a new degree of scalability to computer systems. Rather than deriving additional performance from an incrementally faster microprocessor, an SMP system leverages multiple processors to obtain large gains in total system performance. Parallelism in software allows multiple jobs to execute concurrently on the system, increasing system throughput accordingly. Given sufficient software parallelism, these systems have proved to scale to several hundred processors.

by Richard McDougall
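
To make the idea concrete, here is a minimal sketch (not taken from McDougall's article) of job-level parallelism: each thread below stands in for an independent job, and on an SMP or CMP system the operating system can schedule the threads on separate processors, so total throughput grows with the processor count until shared resources become the bottleneck.

// Hypothetical sketch: independent "jobs" run as threads; on an SMP system the
// OS can schedule each thread on a different processor, raising total throughput.
public class SmpThroughputSketch {
    // A stand-in for one independent job (e.g., serving one request).
    static void runJob(int jobId) {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i;                     // purely CPU-bound busy work
        }
        System.out.println("job " + jobId + " done (" + sum + ")");
    }

    public static void main(String[] args) throws InterruptedException {
        int jobs = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[jobs];
        for (int j = 0; j < jobs; j++) {
            final int id = j;
            workers[j] = new Thread(() -> runJob(id));
            workers[j].start();           // jobs execute concurrently
        }
        for (Thread w : workers) {
            w.join();                     // wait for all jobs to finish
        }
    }
}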

Multicore CPUs for the Masses:
Will increased CPU bandwidth translate into usable desktop performance?

Multicore is the new hot topic in the latest round of CPUs from Intel, AMD, Sun, etc. With clock speed increases becoming more and more difficult to achieve, vendors have turned to multicore CPUs as the best way to gain additional performance. Customers are excited about the promise of more performance through parallel processors for the same real estate investment.

by Mache Creeger

The Future of Microprocessors:
Chip multiprocessors’ promise of huge performance gains is now a reality.

The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law, and this directly affects the performance of processors built with those transistors. Second, actual processor performance has increased faster than Moore’s law would predict, because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software.

by Kunle Olukotun, Lance Hammond
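
As a back-of-the-envelope illustration of that exponential course, a transistor budget that doubles roughly every two years compounds quickly; the starting count and dates below are illustrative assumptions, not figures from Olukotun and Hammond's article.

// Hypothetical sketch: transistor count doubling roughly every two years
// (the classic Moore's-law cadence). The starting budget and year range
// are invented for illustration only.
public class MooresLawSketch {
    public static void main(String[] args) {
        double transistors = 42_000_000;      // assumed starting budget, circa 2000
        for (int year = 2000; year <= 2010; year += 2) {
            System.out.printf("%d: ~%.0f million transistors%n",
                              year, transistors / 1_000_000);
            transistors *= 2;                 // one doubling per two-year step
        }
    }
}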

The Price of Performance:
An Economic Case for Chip Multiprocessing

In the late 1990s, our research group at DEC was one of a growing number of teams advocating the CMP (chip multiprocessor) as an alternative to highly complex single-threaded CPUs. We were designing the Piranha system,1 which was a radical point in the CMP design space in that we used very simple cores (similar to the early RISC designs of the late ’80s) to provide a higher level of thread-level parallelism. Our main goal was to achieve the best commercial workload performance for a given silicon budget. Today, in developing Google’s computing infrastructure, our focus is broader than performance alone. The merits of a particular architecture are measured by answering the following question: Are you able to afford the computational capacity you need? The high computational demands inherent in most of Google’s services have led us to develop a deep understanding of the overall cost of computing and to look continually for hardware/software designs that optimize performance per unit of cost.

by Luiz André Barroso
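
A toy version of that merit calculation, with invented numbers rather than Google's: divide sustained throughput by the total cost of acquiring and operating a system, and compare candidate designs on that ratio.

// Hypothetical sketch of a "performance per unit of cost" comparison.
// All throughput and cost figures are made up for illustration only.
public class PerfPerDollarSketch {
    static double perfPerDollar(double requestsPerSec,
                                double hardwareCost,
                                double yearlyPowerAndAdminCost,
                                double yearsOfService) {
        double totalCost = hardwareCost + yearlyPowerAndAdminCost * yearsOfService;
        return requestsPerSec / totalCost;    // higher is better
    }

    public static void main(String[] args) {
        // A complex, high-clock single-core server vs. a simpler CMP-based server.
        double bigCore = perfPerDollar(900, 6000, 900, 3);
        double cmp     = perfPerDollar(1200, 4500, 700, 3);
        System.out.printf("big core: %.4f req/s per dollar%n", bigCore);
        System.out.printf("CMP:      %.4f req/s per dollar%n", cmp);
    }
}

The point of the ratio is that a design with lower peak performance can still win if it is cheap enough to buy and to power.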

KV the Konqueror:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Suppose I’m a customer of Sincere-and-Authentic’s (“Kode Vicious Battles On,” April 2005: 15-17), and suppose the sysadmin at my ISP is an unscrupulous, albeit music-loving, geek. He figured out that I have an account with Sincere-and-Authentic. He installed a filter in the access router to log all packets belonging to a session between me and S&A. He would later mine the logs and retrieve the music—without paying for it.

by George Neville-Neil

Software and the Concurrency Revolution:
Leveraging the full power of multicore processors demands new tools and new thinking from the software industry.

Concurrency has long been touted as the “next big thing” and “the way of the future,” but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software. The introductory article in this issue describes the hardware imperatives behind this shift in computer architecture from uniprocessors to multicore processors, also known as CMPs.

by Herb Sutter, James Larus
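
One small, hedged example of the shift Sutter and Larus describe (the code and its structure are illustrative, not theirs): a computation that a uniprocessor-era program would express as a single loop has to be decomposed explicitly before it can spread across cores.

// Hypothetical sketch: splitting one computation across cores, the kind of
// explicit decomposition that multicore hardware pushes onto software developers.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelSumSketch {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> partials = new ArrayList<>();

        int chunk = data.length / cores;
        for (int c = 0; c < cores; c++) {
            final int from = c * chunk;
            final int to = (c == cores - 1) ? data.length : from + chunk;
            partials.add(pool.submit(() -> {       // each chunk summed on its own core
                long s = 0;
                for (int i = from; i < to; i++) s += data[i];
                return s;
            }));
        }

        long total = 0;
        for (Future<Long> p : partials) total += p.get();  // combine partial results
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}

Even this simple decomposition surfaces the questions the authors raise: how to partition the work, how to combine partial results, and how to avoid shared mutable state.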