Power Management

Vol. 1, No. 7 – October 1, 2003

Interviews

A Conversation with Dan Dobberpuhl

The computer industry has always been about power. The development of the microprocessors that power computers has been a relentless search for more power, higher speed, and better performance, usually in smaller and smaller packages. But when is enough enough?

Two veteran microprocessor designers discuss chip power and the direction microprocessor design is going. Dan Dobberpuhl is responsible for the design of many high-performance microprocessors, including the PDP-11, uVax, Alpha, and StrongARM. He worked at Digital Equipment Corporation as one of five senior corporate consulting engineers, Digital’s highest technical positions, directing the company’s Palo Alto Design Center. After leaving Digital, Dobberpuhl founded SiByte Inc., later acquired by Broadcom. In an October 1998 article, EE Times named him one of “40 forces that will shape the semiconductor industry of tomorrow.” He has written numerous technical papers and is coauthor of the text, Design and Analysis of VLSI Circuits, well known to a generation of electrical engineering students. Dobberpuhl is the named inventor on nine issued U.S. patents and has several more pending patent applications in various areas of circuit design.

Articles

Energy Management on Handheld Devices
MARC A. VIREDAZ, LAWRENCE S. BRAKMO,
and WILLIAM R. HAMBURGEN, HEWLETT-PACKARD LABORATORIES

Whatever their origin, all handheld devices share the same Achilles’ heel: the battery.

Handheld devices are becoming ubiquitous and as their capabilities increase, they are starting to displace laptop computers—much as laptop computers have displaced desktop computers in many roles. Handheld devices are evolving from today’s PDAs, organizers, cellular phones, and game machines into a variety of new forms. Although partially offset by improvements in low-power electronics, this increased functionality carries a corresponding increase in energy consumption. Second, as a consequence of displacing other pieces of equipment, handheld devices are seeing more use between battery charges. Finally, battery technology is not improving at the same pace as the energy requirements of handheld electronics. Therefore, energy management, once in the realm of desired features, has become an important design requirement and one of the greatest challenges in portable computing, and it will remain so for a long time to come.

Among today’s rechargeable batteries, lithium-ion cells offer the highest capacity. Since their commercial introduction by Sony in 1991, their capacity has improved by about 10 percent per year in recent years [1]. This rate of improvement is leveling off, however, and even with alternative materials and novel cell structures, major future improvement in rechargeable batteries is unlikely.
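
As a rough sense of scale, the compounding sketch below shows what a sustained 10 percent annual improvement amounts to; the rate is the article's figure, and the time horizons are arbitrary illustrations.

base_capacity = 1.0          # today's cell capacity, normalized
annual_improvement = 0.10    # ~10 percent per year, the figure cited above

for years in (1, 5, 10):
    capacity = base_capacity * (1 + annual_improvement) ** years
    print(f"after {years:2d} years: {capacity:.2f}x today's capacity")

# Even after a decade this compounds to only about 2.6x, which is why the
# authors argue batteries cannot keep pace with growing energy demand.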

by Marc A. Viredaz, Lawrence S. Brakmo, and William R. Hamburgen

Getting Gigascale Chips: Challenges and Opportunities in Continuing Moore’s Law
SHEKHAR BORKAR, INTEL

TIPS-level performance will be delivered only if engineers and developers learn to exploit emerging paradigm shifts.

Processor performance has increased by five orders of magnitude in the last three decades, made possible by following Moore’s law—that is, continued technology scaling, improved transistor performance to increase frequency, additional integration capacity to realize complex architectures, and reduced energy consumed per logic operation to keep power dissipation within limits. Advances in software technology, such as rich multimedia applications and runtime systems, exploited this performance explosion, delivering to end users higher productivity, seamless Internet connectivity, and even multimedia and entertainment.

The “technology treadmill” will continue, providing integration capacity of billions of transistors; however, several fundamental physics issues will pose barriers. In this article, we will examine these barriers, describe how they are changing the landscape, discuss ways to get around them, and predict how future advances in software technology could help continue the technology treadmill.

by Shekhar Borkar

Making a Case for Efficient Supercomputing
WU-CHUN FENG, LOS ALAMOS NATIONAL LABORATORY

It’s time for the computing community to use alternative metrics for evaluating performance.

A supercomputer evokes images of “big iron” and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.

Over the past few decades, the supercomputing industry has focused on and continues to focus on performance in terms of speed and horsepower, as evidenced by the annual Gordon Bell Awards for performance at Supercomputing (SC). Such a view is akin to deciding to purchase an automobile based primarily on its top speed and horsepower. Although this narrow view is useful in the context of achieving “performance at any cost,” it is not necessarily the view that one should use to purchase a vehicle. The frugal consumer might consider fuel efficiency, reliability, and acquisition cost. Translation: Buy a Honda Civic, not a Formula 1 racecar. The outdoor adventurer would likely consider off-road prowess (or off-road efficiency). Translation: Buy a Ford Explorer sport-utility vehicle, not a Formula 1 racecar. Correspondingly, I believe that the supercomputing (or more generally, computing) community ought to have alternative metrics to evaluate supercomputers—specifically metrics that relate to efficiency, reliability, and availability, such as the total cost of ownership (TCO), performance/power ratio, performance/space ratio, failure rate, and uptime.
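
As a minimal sketch of the kind of metrics Feng has in mind, the fragment below computes performance/power and performance/space ratios; the two machines and all of their numbers are made-up placeholders, not measurements of real systems.

systems = {
    # name: peak GFLOPS, power draw in watts, floor space in square feet
    "BigIron":  {"gflops": 10000.0, "watts": 1000000.0, "sq_ft": 10000.0},
    "LowPower": {"gflops":  2000.0, "watts":  100000.0, "sq_ft":   500.0},
}

for name, s in systems.items():
    perf_per_watt = s["gflops"] / s["watts"]   # GFLOPS per watt
    perf_per_sqft = s["gflops"] / s["sq_ft"]   # GFLOPS per square foot
    print(f"{name}: {perf_per_watt:.3f} GFLOPS/W, {perf_per_sqft:.1f} GFLOPS/sq ft")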

by Wu-chun Feng

Modern System Power Management
ANDREW GROVER, INTEL’S MOBILE PRODUCTS GROUP

Increasing demands for more power and increased efficiency are pressuring software and hardware developers to ask questions and look for answers.

The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS (operating system) architecture changes. The effort that ACPI’s definition and implementation has entailed is worth the trouble because of how much flexibility it gives to the OS (and ultimately the user) to control power management policy and implementation.

ACPI AND ITS PRECURSORS

Of course, power management wasn’t initially part of the PC platform at all. Early mobile computers were mobile only in the loosest sense. Some didn’t even have batteries. If they did, they could be operated only briefly away from an AC outlet. From a software perspective, DOS, the PC’s first operating system, was generally unaware that it was running on a mobile PC at all. Very early on, manufacturers added value to their systems by implementing support for “suspend-to-RAM” functionality and LCD screen blanking. Early CPUs with mobile features, starting with the Intel 80386SL, added a special system management mode (SMM), in which the firmware (also referred to as the BIOS) could more easily perform power management and other functions without requiring OS support.
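
As one small illustration of the kind of information ACPI ultimately surfaces to the OS and the user, the sketch below reads battery state on a Linux machine; it assumes a battery exposed through the sysfs power_supply class at BAT0, and both the path and the available attributes vary by platform and kernel version.

from pathlib import Path

BATTERY = Path("/sys/class/power_supply/BAT0")   # assumed path; varies by system

def read_attr(name: str) -> str:
    """Return one battery attribute as text, or 'unknown' if it is absent."""
    try:
        return (BATTERY / name).read_text().strip()
    except OSError:
        return "unknown"

print("status:  ", read_attr("status"))        # e.g. Charging or Discharging
print("capacity:", read_attr("capacity"), "%") # remaining charge in percent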

by Andrew Grover

Reading, Writing, and Code
DIOMIDIS SPINELLIS, ATHENS UNIVERSITY OF ECONOMICS AND BUSINESS

The key to writing readable code is developing good coding style.

Forty years ago, when computer programming was an individual experience, the need for easily readable code wasn’t on any priority list. Today, however, programming usually is a team-based activity, and writing code that others can easily decipher has become a necessity. Creating and developing readable code is not as easy as it sounds.
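
As a small, hypothetical illustration of the point (not an example from the article), both functions below compute the same result; only the second tells the next reader what it is for.

def f(a):
    # terse version: quick to write, slow to decipher later
    return sum(x for x in a if x % 2 == 0) / max(len([x for x in a if x % 2 == 0]), 1)

def mean_of_even_values(values):
    """Return the average of the even numbers in values, or 0.0 if there are none."""
    evens = [v for v in values if v % 2 == 0]
    return sum(evens) / len(evens) if evens else 0.0

print(f([1, 2, 3, 4]), mean_of_even_values([1, 2, 3, 4]))   # both print 3.0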

EASIER WRITTEN THAN READ

There’s a theory explaining why computer code that is sometimes so easy to write is so hard to read, and it goes this way:

by Diomidis Spinellis

Reconfigurable Future

The ability to produce cheaper, more compact chips is a double-edged sword.

Mark Horowitz, Stanford University

Predicting the future is notoriously hard. Sometimes I feel that the only real guarantee is that the future will happen, and that someone will point out how it's not like what was predicted. Nevertheless, we seem intent on trying to figure out what will happen, and worse yet, recording these views so they can be later used against us. So here I go...

Scaling has been driving the whole electronics industry, allowing it to produce chips with more transistors at a lower cost. But this trend is a double-edged sword: We not only need to figure out more complex devices that people want, but we also must determine which complex devices lots of people want, as we have to sell many, many chips to amortize the significant design cost.

by Mark Horowitz

Opinion

Stand and Deliver: Why I Hate Stand-Up Meetings
Phillip A. Laplante, Penn State University

Stand-up meetings are an important component of the “whole team,” which is one of the fundamental practices of extreme programming (XP).

According to the Extreme Programming Web site, the stand-up meeting is one part of the rules and practices of extreme programming: “Communication among the entire team is the purpose of the stand-up meeting. They should take place every morning in order to communicate problems, solutions, and promote team focus. The idea is that everyone stands up in a circle in order to avoid long discussions. It is more efficient to have one short meeting that everyone is required to attend than many meetings with a few developers each.” [1]

by Phillip A. Laplante

Articles

The Big Bang Theory of IDEs
CASPAR BOEKHOUDT, INFORMATION METHODOLOGIES

Pondering the vastness of the ever-expanding universe of IDEs, you might wonder, “Is a usable IDE too much to ask for?”

Remember the halcyon days when development required only a text editor, a compiler, and some sort of debugger (in cases where the odd printf() or two alone didn’t serve)? During the early days of computing, these were independent tools used iteratively in development’s golden circle. Somewhere along the way we realized that a closer integration of these tools could expedite the development process. Thus was born the integrated development environment (IDE), a framework and user environment for software development that’s actually a toolkit of instruments essential to software creation. At first, IDEs simply connected the big three (editor, compiler, and debugger), but nowadays most go well beyond those minimum requirements. In fact, in recent years, we have witnessed an explosion in the constituent functionality of IDEs.

Doesn’t this make you speculate on where this is all leading? I’ve wondered whether it’s perhaps analogous to the Big Bang. That theory postulates that the universe began with a fiery explosion that hurled matter into space, resulting in the ongoing expansion of the universe we now observe. But what of its future? There are many theories: Some believe that it will continue expanding without end; others believe that the expansion will slow and eventually stop, reaching its equilibrium; yet another group believes in an oscillatory behavior in which the universe will begin collapsing again (sometimes called the Big Crunch) after reaching a point of maximum expansion. Important and profound additions to the mix are the considerations of energy, entropy, and chaos—each of which is all too apparent in the developers’ world of today.

by Caspar Boekhoudt

The Inevitability of Reconfigurable Systems
NICK TREDENNICK, GILDER TECHNOLOGY REPORT
BRION SHIMAMOTO, INDEPENDENT CONSULTANT

The transition from instruction-based to reconfigurable circuits won’t be easy, but has its time come?

The introduction of the microprocessor in 1971 marked the beginning of a 30-year stall in design methods for electronic systems. The industry is coming out of the stall by shifting from programmed to reconfigurable systems. In programmed systems, a linear sequence of configuration bits, organized into blocks called instructions, configures fixed hardware to mimic custom hardware. In reconfigurable systems, the physical connections among logic elements change with time to mimic custom hardware. The transition to reconfigurable systems will be wrenching, but this is inevitable as the design emphasis shifts from cost performance to cost performance per watt. Here’s the story.
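
The toy sketch below is only a conceptual analogy, not anything from the article: the same one-bit function is produced once by fixed logic stepping through an instruction stream and once by a two-input lookup table whose configuration bits effectively are the wiring.

# "Programmed": fixed hardware mimics the circuit by executing instructions.
def run_program(a: int, b: int) -> int:
    program = [("xor", "a", "b")]      # an instruction is a block of configuration bits
    regs = {"a": a, "b": b, "out": 0}
    for op, x, y in program:
        if op == "xor":
            regs["out"] = regs[x] ^ regs[y]
    return regs["out"]

# "Reconfigurable": a 2-input lookup table; changing its 4 bits rewires the logic.
XOR_LUT = [0, 1, 1, 0]                 # truth table for a ^ b

def run_lut(lut, a: int, b: int) -> int:
    return lut[(a << 1) | b]

for a in (0, 1):
    for b in (0, 1):
        assert run_program(a, b) == run_lut(XOR_LUT, a, b)
print("instruction-driven and LUT-configured versions agree")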

SCRATCH-BUILT CIRCUITS

Until the 1940s, solving problems meant building hardware. The engineer selected the algorithm and the hardware components, and embedded the algorithm in the hardware to suit one application: fixed hardware resources and fixed algorithms. The range of applications amenable to hardware solutions depended on the cost and performance of hardware components.

by Nick Tredennick and Brion Shimamoto

Curmudgeon

Wireless Networking Considered Flaky
Eric Allman, Sendmail

You know what bugs me about wireless networking? Everyone thinks it’s so cool and never talks about the bad side of things. Oh sure, I can get on the ’net from anywhere at Usenix or the IETF (Internet Engineering Task Force), but those are hostile nets. Hell, all wireless nets are hostile. By their very nature, you don’t know who’s sharing the ether with you. But people go on doing their stuff, confident that they are OK because they’re behind the firewall.

Let’s face it: WEP (Wired Equivalent Privacy) is a joke. There’s no privacy on a wireless net. When you type your password, it’s there for the world to see—and take, and abuse. A lot of places don’t even bother with WEP, even behind firewalls. You want free ’net access? Drive into a random parking lot in Silicon Valley and pull up next to one of those big, two-story “ranch house” style buildings that seem to be ubiquitous there. You’ll have a shockingly good chance of being on the ’net. But not just the Internet: their internal network. And if you sniff that network you might just get a password or two. Or maybe several dozen. You’ll probably even trip over some root passwords.

by Eric Allman