
CPUs with 2,000 MIPS per Watt, Anyone?
Mike MacFaden, Queue Advisory Board Member

Making a living as an IT professional, I always get a terrible sinking feeling right in the midsection when that background hum of automation suddenly becomes quiet. The recent events with the Eastern power grid provide an ominous reminder of our huge dependence on electrical power.

Electrical power breathes life into every display pixel, CPU, and disk drive—and soon will do the same to Ethernet ports delivering power to devices along with data packets. Power demands our consideration in just about every implicit or explicit decision we make; and when we get it wrong, computer systems overheat and fail.

For a large population of software engineers and IT professionals, however, power is often an afterthought.

Suddenly, old software design issues, such as minimizing the number of disk accesses and avoiding busy-wait loops, may be worth revisiting. It is with great pleasure, therefore, that Randy Harr, I, and the rest of the ACM Queue Advisory Board present the following collection of articles on power.

We begin with a seminal conversation between chip designer Dave Ditzel, chief technical officer of Transmeta, and Daniel Dobberpuhl, legendary chip designer and recipient of the 2003 IEEE Solid-State Circuits Technical Field Award. Dobberpuhl warns us that power dissipation has begun to threaten the steady rise in CPU performance. He describes what might be required to achieve better performance with less power-hungry system and chip designs, and he asks Ditzel about Transmeta’s unique approach to low-power x86 architecture chip design.

Next, Shekhar Borkar from Intel digs deeper into the problems Dobberpuhl and Ditzel describe in “Getting Gigascale Chips: Challenges and Opportunities in Continuing Moore’s Law.” The phrase I once heard Dobberpuhl use, “We have hit the wall,” sums up the situation, which is plainly visible in Borkar’s data. He points out that we won’t be doing architectures, systems, and application software the same way when it comes to delivering trillions-of-instructions-per-second (TIPS) systems. The laws of physics just get in the way.

In “The Inevitability of Reconfigurable Systems,” Nick Tredennick and Brion Shimamoto argue that future computer hardware design should emphasize cost-performance-per-watt over raw cost-performance. Their thinking is rather controversial among fellow chip designers, and Mark Horowitz takes critical aim at their predictions.

Three articles shift focus from CPUs to systems. Marc Viredaz looks at energy management in handheld devices and describes the overall power management problem that system designers confront today. From the opposite standpoint, Wu-chun Feng makes a case for energy-efficient supercomputing, describing alternative metrics for evaluating the total cost of ownership for large computing facilities—factoring in the increasingly significant amount of energy usage per CPU, as well as the cooling and space needed to house hot-running systems. In the last article of this group, Andrew Grover of Intel describes Advanced Configuration and Power Interface (ACPI) technology, which makes it possible for operating systems such as Linux and Microsoft Windows to control physical characteristics of a computer system, including power and thermal behavior, in a safe and consistent way across many different systems.

Electrical power topics are not all that should concern software developers these days. This issue of Queue also contains Caspar Boekhoudt’s reflections on several possible outcomes for the ever-expanding universe of IDEs. And, lest we forget the basics, we have included a down-to-earth discussion by Diomidis Spinellis on the fundamental importance of reading and writing code. When all is said and done, it is coding that must withstand the test of time.

Enjoy.

MIKE MacFADEN has spent the past 15 years programming professionally. A graduate of California Polytechnic State University, San Luis Obispo, he is presently a co-chairperson of the ACM Membership Committee and an ACM Queue Editorial Board Member. His interests include systems management/command and control software, and renewable energy systems.


Originally published in Queue vol. 1, no. 7





More related articles:

Andy Woods - Cooling the Data Center
Power generation accounts for about 40 to 45 percent of the primary energy supply in the US and the UK, and a good fraction is used to heat, cool, and ventilate buildings. A new and growing challenge in this sector concerns computer data centers and other equipment used to cool computer data systems. On the order of 6 billion kilowatt hours of power was used in data centers in 2006 in the US, representing about 1.5 percent of the country’s electricity consumption.


David J. Brown, Charles Reams - Toward Energy-Efficient Computing
By now, most everyone is aware of the energy problem at its highest level: our primary sources of energy are running out, while the demand for energy in both commercial and domestic environments is increasing, and the side effects of energy use have important global environmental considerations. The emission of greenhouse gases such as CO2, now seen by most climatologists to be linked to global warming, is only one issue.


Eric Saxe - Power-Efficient Software
The rate at which power-management features have evolved is nothing short of amazing. Today almost every size and class of computer system, from the smallest sensors and handheld devices to the "big iron" servers in data centers, offers a myriad of features for reducing, metering, and capping power consumption. Without these features, fan noise would dominate the office ambience, and untethered laptops would remain usable for only a few short hours (and then only if one could handle the heat), while data-center power and cooling costs and capacity would become unmanageable.


Alexandra Fedorova, Juan Carlos Saez, Daniel Shelepov, Manuel Prieto - Maximizing Power Efficiency with Asymmetric Multicore Systems
In computing systems, a CPU is usually one of the largest consumers of energy. For this reason, reducing CPU power consumption has been a hot topic in the past few years in both the academic community and the industry. In the quest to create more power-efficient CPUs, several researchers have proposed an asymmetric multicore architecture that promises to save a significant amount of power while delivering similar performance to conventional symmetric multicore processors.





© ACM, Inc. All Rights Reserved.