Power Management

Power to the People:
Reducing datacenter carbon footprints

Rack-level architectures can deliver large power-efficiency gains over conventional servers: power supplies can be sized closer to actual load, consolidated, and made redundant at the rack rather than per server. While the hyperscalers have already captured these gains, most of the industry is still waiting. The Open Compute Project was started to let other companies running datacenters benefit from these power efficiencies as well. If more organizations adopt rack-scale architectures in their datacenters, the carbon emissions wasted by conventional servers can be reduced.
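A rough back-of-the-envelope sketch of the PSU-consolidation argument. The efficiency curve and all wattages below are assumed purely for illustration (real supply curves vary by model); the point is that one shared supply running near its efficiency sweet spot beats many oversized per-server supplies idling at light load.

```python
# Toy model: PSU conversion efficiency is poor at light load and
# flattens near its peak. Curve and wattages are assumed, not measured.

def psu_efficiency(load_fraction: float) -> float:
    """Assumed curve: climbs steeply below 20% load, then flattens at 94%."""
    if load_fraction < 0.2:
        return 0.70 + 1.2 * load_fraction
    return 0.94

def wall_power(dc_watts: float, psu_rating_watts: float) -> float:
    """AC watts drawn from the wall to deliver dc_watts through one PSU."""
    load = dc_watts / psu_rating_watts
    return dc_watts / psu_efficiency(load)

servers, idle_draw = 40, 100                        # 40 servers idling at 100 W DC each
per_server = servers * wall_power(idle_draw, 750)   # one oversized 750 W PSU per server
rack = wall_power(servers * idle_draw, 6000)        # one shared 6 kW rack power shelf

print(f"per-server PSUs: {per_server:.0f} W from the wall")
print(f"rack-level PSU:  {rack:.0f} W from the wall")
```

Under these assumed numbers the per-server supplies sit at 13 percent load (86 percent efficient) while the shared shelf sits at 67 percent load (94 percent efficient), so the rack draws several hundred fewer watts for the same useful work.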

by Jessie Frazelle | May 23, 2020


Cooling the Data Center:
What can be done to make cooling systems in data centers more energy efficient?

Power generation accounts for about 40 to 45 percent of the primary energy supply in the US and the UK, and a good fraction of it is used to heat, cool, and ventilate buildings. A new and growing challenge in this sector concerns computer data centers and the equipment used to cool them. On the order of 61 billion kilowatt-hours of power was used in data centers in 2006 in the US, representing about 1.5 percent of the country's electricity consumption.

by Andy Woods | March 10, 2010


Toward Energy-Efficient Computing:
What will it take to make server-side computing more energy efficient?

By now, most everyone is aware of the energy problem at its highest level: our primary sources of energy are running out, while the demand for energy in both commercial and domestic environments is increasing, and the side effects of energy use have important global environmental considerations. The emission of greenhouse gases such as CO2, now seen by most climatologists to be linked to global warming, is only one issue.

by David J. Brown, Charles Reams | February 17, 2010


A Conversation with Steve Furber:
The designer of the ARM chip shares lessons on energy-efficient computing.

If you were looking for lessons on energy-efficient computing, one person you would want to speak with would be Steve Furber, principal designer of the highly successful ARM (Acorn RISC Machine) processor. Currently running in billions of cellphones around the world, the ARM is a prime example of a chip that is simple, low power, and low cost. Furber led development of the ARM in the 1980s while at Acorn, the British PC company also known for the BBC Microcomputer, which Furber played a major role in developing.

February 1, 2010


Power-Efficient Software:
Power-manageable hardware can help save energy, but what can software developers do to address the problem?

The rate at which power-management features have evolved is nothing short of amazing. Today almost every size and class of computer system, from the smallest sensors and handheld devices to the "big iron" servers in data centers, offers a myriad of features for reducing, metering, and capping power consumption. Without these features, fan noise would dominate the office ambience, and untethered laptops would remain usable for only a few short hours (and then only if one could handle the heat), while data-center power and cooling costs and capacity would become unmanageable.

by Eric Saxe | January 8, 2010


Maximizing Power Efficiency with Asymmetric Multicore Systems:
Asymmetric multicore systems promise to use far less energy than conventional symmetric processors. How can we develop software that makes the most of this potential?

In computing systems, a CPU is usually one of the largest consumers of energy. For this reason, reducing CPU power consumption has been a hot topic in the past few years in both the academic community and the industry. In the quest to create more power-efficient CPUs, several researchers have proposed an asymmetric multicore architecture that promises to save a significant amount of power while delivering similar performance to conventional symmetric multicore processors.
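The core idea behind software support for asymmetric multicores is thread placement: give the scarce fast cores to the threads that benefit most from them. Below is a toy heuristic sketching that idea, ranking threads by a hypothetical instructions-per-cycle measurement; it is a simplified illustration, not the authors' algorithm, and the thread names and IPC values are made up.

```python
# Toy placement heuristic for an asymmetric multicore: threads that
# retire the most instructions per cycle (and so benefit most from a
# fast core) go on the few fast cores; the rest go on slow cores.

def place_threads(threads, fast_cores, slow_cores):
    """threads: list of (name, measured_ipc) pairs; returns {core: name}.

    If there are more threads than cores, the lowest-IPC extras are
    simply left unplaced in this toy version.
    """
    ranked = sorted(threads, key=lambda t: t[1], reverse=True)
    placement = {}
    for (name, _ipc), core in zip(ranked, fast_cores + slow_cores):
        placement[core] = name
    return placement

# Hypothetical workload: one compute-heavy thread, several light ones.
threads = [("render", 2.1), ("io_wait", 0.3), ("compress", 1.8), ("logger", 0.4)]
placement = place_threads(threads, fast_cores=["F0"], slow_cores=["S0", "S1", "S2"])
print(placement)  # the high-IPC "render" thread lands on fast core F0
```

A real scheduler would remeasure IPC periodically and migrate threads as their phases change, but the ranking-and-binning step is the essence of the placement problem.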

by Alexandra Fedorova, Juan Carlos Saez, Daniel Shelepov, Manuel Prieto | November 20, 2009


Powering Down:
Smart power management is all about doing more with the resources we have.

Power management is a topic of interest to everyone. In the beginning there was the desktop computer. It ran at a fixed speed and consumed less power than the monitor it was plugged into. Where computers were portable, their sheer size and weight meant that you were more likely to be limited by physical strength than battery life. It was not a great time for power management. Now consider the present. Laptops have increased in speed by more than 5,000 times. Battery capacity, sadly, has not. With hardware becoming increasingly mobile, however, users are demanding that battery life start matching the way they work.

by Matthew Garrett | January 17, 2008


Modern System Power Management:
Increasing demands for more power and increased efficiency are pressuring software and hardware developers to ask questions and look for answers.

The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS architecture changes. The effort that ACPI’s definition and implementation has entailed is worth the trouble because of how much flexibility it gives to the OS (and ultimately the user) to control power management policy and implementation.

by Andrew Grover | December 5, 2003


Making a Case for Efficient Supercomputing:
It is time for the computing community to use alternative metrics for evaluating performance.

A supercomputer evokes images of “big iron” and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.

by Wu-chun Feng | December 5, 2003


Energy Management on Handheld Devices:
Whatever their origin, all handheld devices share the same Achilles heel: the battery.

Handheld devices are becoming ubiquitous and, as their capabilities increase, they are starting to displace laptop computers, much as laptop computers have displaced desktop computers in many roles. Handheld devices are evolving from today's PDAs, organizers, cellular phones, and game machines into a variety of new forms. First, although partially offset by improvements in low-power electronics, this increased functionality carries a corresponding increase in energy consumption. Second, as a consequence of displacing other pieces of equipment, handheld devices are seeing more use between battery charges. Finally, battery technology is not improving at the same pace as the energy requirements of handheld electronics.

by Marc A. Viredaz, Lawrence S. Brakmo, William R. Hamburgen | December 5, 2003


The Inevitability of Reconfigurable Systems:
The transition from instruction-based to reconfigurable circuits will not be easy, but has its time come?

The introduction of the microprocessor in 1971 marked the beginning of a 30-year stall in design methods for electronic systems. The industry is coming out of the stall by shifting from programmed to reconfigurable systems. In programmed systems, a linear sequence of configuration bits, organized into blocks called instructions, configures fixed hardware to mimic custom hardware. In reconfigurable systems, the physical connections among logic elements change with time to mimic custom hardware. The transition to reconfigurable systems will be wrenching, but this is inevitable as the design emphasis shifts from cost performance to cost performance per watt. Here’s the story.

by Nick Tredennick, Brion Shimamoto | December 5, 2003


A Conversation with Dan Dobberpuhl:
The computer industry has always been about power.

The development of the microprocessors that power computers has been a relentless search for more power, higher speed, and better performance, usually in smaller and smaller packages. But when is enough enough?

by David Ditzel | December 5, 2003


CPUs with 2,000 MIPS per Watt, Anyone?:
The recent failure of the Eastern power grid provides an ominous reminder of our huge dependence on electrical power.

Making a living as an IT professional, I always get a terrible sinking feeling right in the midsection when that background hum of automation suddenly goes quiet. Electrical power breathes life into every display pixel, CPU, and disk drive—and soon will do the same for Ethernet ports delivering power to devices along with data packets. Power demands our consideration in just about every implicit or explicit decision we make, and when we get it wrong, computer systems overheat and fail.
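The metric in the title, MIPS per watt, is just throughput divided by power draw. A quick sketch with assumed example design points (not measurements of real chips) shows why embedded-class parts sit much closer to the 2,000 MIPS/W mark than desktop-class ones:

```python
# MIPS per watt: instruction throughput per unit of electrical power.
# The design points below are assumed for illustration only.

def mips_per_watt(mips: float, watts: float) -> float:
    return mips / watts

designs = {
    "desktop-class":  (10_000, 80.0),  # high throughput, high power
    "embedded-class": (400, 0.25),     # modest throughput, tiny power
}
for name, (mips, watts) in designs.items():
    print(f"{name}: {mips_per_watt(mips, watts):,.0f} MIPS/W")
```

Under these assumed numbers the desktop-class part delivers 125 MIPS/W while the embedded-class part delivers 1,600 MIPS/W, which is why low-power designs dominate this metric.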

by Mike MacFaden | December 5, 2003