
Modern System Power Management
ANDREW GROVER, INTEL’S MOBILE PRODUCTS GROUP

Increasing demands for more performance and greater efficiency are pressuring software and hardware developers to ask questions and look for answers.

The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS (operating system) architecture changes. The effort that ACPI’s definition and implementation have entailed is worth the trouble, however, because of how much flexibility it gives the OS (and ultimately the user) to control power management policy and implementation.

ACPI AND ITS PRECURSORS

Of course, power management wasn’t initially part of the PC platform at all. Early mobile computers were mobile only in the loosest sense. Some didn’t even have batteries. If they did, they could be operated only briefly away from an AC outlet. From a software perspective, DOS, the PC’s first operating system, was generally unaware that it was running on a mobile PC at all. Very early on, manufacturers added value to their systems by implementing support for “suspend-to-RAM” functionality and LCD screen blanking. Early CPUs with mobile features, starting with the Intel 80386SL, added a special system management mode (SMM), in which the firmware (also referred to as the BIOS) could more easily perform power management and other functions without requiring OS support.

The Advanced Power Management (APM) interface, created by Intel and Microsoft in 1992, marked the first time the firmware exposed power management interfaces to the operating system in a well-defined manner. This allowed the OS to do user-friendly things such as displaying battery-remaining information in the user interface, and gave it control over some aspects of power management, such as requesting that the system enter a suspend state. Other functions, such as screen blanking, continued to be handled solely by the firmware, and in all cases the firmware performed the actual hardware-specific mechanisms by which the system entered suspend.

APM’s advantages to the OS implementer mirrored its disadvantages. Although it was simple to support APM, the interface was not very flexible. (For example, APM’s battery status interface aggregates battery information, possibly from multiple batteries, into a single “minutes remaining” value.) APM placed few requirements upon the OS, but it performed operations such as suspend in an opaque manner, with no OS oversight. Finally, the APM interface required real-mode BIOS calls. As OSes were quickly moving to run exclusively in 32-bit protected mode, this was a significant problem.

ACPI 1.0 was developed in 1996 by Intel, Microsoft, and Toshiba. It was designed to address APM’s shortcomings, but it didn’t stop there. It also subsumed the functionality of a number of other system configuration interfaces. Before ACPI, the system would report memory to the OS via a call to the firmware, enumerate CPU and interrupt information via a table the firmware placed in memory (the multiprocessing specification table), and enumerate other devices and buses via the plug-and-play BIOS. ACPI replaced all of these. It also hides platform power-management details from the OS, as we’ll see.

ACPI’s most ambitious change, however, was in response to the problems inherent in APM’s reliance on opaque calls into the firmware. These were seen as harmful because they meant the OS was no longer in full control of the system: a call to the firmware could take a long time to return, do something unexpected, or never return at all. Firmware has always been considered suspect, both because it varies from system to system and because it is written by the system manufacturer rather than the OS vendor. By calling into it, the OS made its stability and reliability dependent on the quality of the firmware.

ACPI didn’t make it possible to eliminate the OS’s reliance on the firmware, but it did make it possible to rely on it to a much lesser degree. The key was the introduction of ACPI source language (ASL) and ACPI machine language (AML). ASL and AML allow the firmware to communicate to the OS the steps necessary to perform actions on its platform, while leaving the OS responsible for actually executing them.

This may seem vaguely reminiscent of what Java does, and it should. It is possible to draw many analogies between that other, more famous, bytecode-interpreted language and AML. A program written in Java is compiled into machine instructions for the Java Virtual Machine (JVM). When run, these op codes are interpreted and executed in an environment (the Java runtime environment) that abstracts the hardware from the program and prevents it from harming the rest of the system. The environment provides interfaces for the Java program to use system resources safely.

ASL is the human-readable source code, like Java source code. AML is the compiled version of ASL, just as Java source is compiled into bytecode. And like Java bytecode, AML is interpreted in a sandbox. The two technologies differ greatly in their purpose, however. The Java environment’s goal is to hide OS- and hardware-specific details from the interpreted bytecode. AML serves the exact opposite function: it is completely platform-specific, and its goal is to hide hardware-specific details from the OS. In fact, AML was designed specifically to describe hardware and the steps to access it.

Both Java and AML abstract something, but they abstract different things (see figure 1): Java hides OS interfaces from applications behind a standard interface; AML hides platform hardware details from the OS behind standard control method names.

For example, let’s say a given system’s battery status was obtainable by reading from an I/O port located at X. A different system might have this port located at a different address or might have an altogether different method to obtain battery status. Under APM, the OS would call the APM battery status function, and the firmware would know to read port X and then return the value read, giving control back to the OS. Using ACPI, the firmware would describe the steps to get the battery status in a “control method” with a defined name, _BST. The OS would execute the _BST control method in its AML interpreter. In this case, the AML would describe a read of port X, and the interpreter would do so. Finally, the interpreter would return the battery status. On a different system, the _BST control method could very well end up doing something completely different to get the same information.
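To make this concrete, here is a minimal sketch in C of what executing _BST might look like from the OS side, written against the public API of ACPICA, the open source reference AML interpreter. This is only an illustration, not any particular OS’s actual code: error handling is pared down, and the battery device handle is assumed to have already been found during namespace enumeration.

    #include "acpi.h"   /* ACPICA's public header */

    /* Evaluate a battery device's _BST control method and print the
     * remaining capacity. The OS's AML interpreter, not the firmware,
     * executes the method's opcodes. */
    ACPI_STATUS print_battery_status(ACPI_HANDLE battery)
    {
        ACPI_BUFFER result = { ACPI_ALLOCATE_BUFFER, NULL };
        ACPI_OBJECT *pkg;
        ACPI_STATUS status;

        /* Run _BST as defined under this device in the ACPI namespace. */
        status = AcpiEvaluateObject(battery, "_BST", NULL, &result);
        if (ACPI_FAILURE(status))
            return status;

        /* _BST returns a package of four integers; the third element
         * is the battery's remaining capacity. */
        pkg = result.Pointer;
        if (pkg->Type == ACPI_TYPE_PACKAGE && pkg->Package.Count == 4)
            AcpiOsPrintf("remaining capacity: %u\n",
                         (UINT32)pkg->Package.Elements[2].Integer.Value);

        ACPI_FREE(result.Pointer);
        return AE_OK;
    }

The point to notice is that the OS calls into its own interpreter; the firmware merely supplied the AML that tells the interpreter which ports or registers to touch.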

THE ADVANTAGES OF INTERPRETED AML

This may seem like a trivial shifting of responsibility, since both APM and ACPI rely on the firmware, but interpreting op codes described by the firmware, rather than calling into the firmware, has several important advantages: the OS stays in control of the processor, can bound how long a method is allowed to run, and can constrain what the interpreted code is permitted to touch.

ACPI AND OSPM

All this is a very clever mechanism for abstracting the OS from the system’s specific implementation details, but it’s still just an interface—it doesn’t do anything by itself. The OS agent that actually controls the system’s power policy is referred to as the operating system’s power management (OSPM), in whatever form it takes. For example, it would be OSPM’s job to call the _BST control method that returns the battery status, and actually do something based upon that information. Many OSes may implement the battery-specific code in a battery driver, but it’s still conceptually part of the overall OS power policy, also known as OSPM.

APM started the trend of involving the OS in power-policy decisions; ACPI gives the OS all the responsibility. This makes a lot of sense, because the OS is generally in the best position to know the overall operating status of the system. If a device is idle and could potentially be turned off to save power, chances are it is the OS, or that device’s driver, that is in the best position to notice this and actually turn it off. The OS may choose to present a UI (user interface) so the user can specify a particular power preference, or it may not, but the OS has the final say.

One particular area of interest is suspending the system to memory or disk. ACPI provides a mechanism for putting the system to sleep, but unlike its predecessors, it relies on OSPM to actually deactivate all the devices on the system and save their state in preparation for sleep. This gives the OS a lot of flexibility in optimizing the sleep entry and exit process, but it is a capability that most OSes lack when they first implement ACPI support. The feature must therefore be added, which can be quite an undertaking.

IMPLEMENTING OSPM: MICROSOFT WINDOWS

Microsoft Windows 2000 was the first OS to support ACPI and OS-directed power management fully. Windows 2000, which was built upon the Windows NT 4 code base, significantly revamped the existing NT device driver programming interfaces. Its new driver API, called the Windows Driver Model (WDM), relieved driver writers of a number of requirements that were present under the old model, but added other requirements in their place, most notably support for power management and plug-and-play (PnP) operation. The additional code for these was generally manageable, but the change required driver writers to familiarize themselves with the new model and update all existing drivers to support the new interfaces.

Although WDM treats them as logically distinct, adding plug-and-play support to the driver model was a prerequisite for power management. The OS must have complete knowledge of device parent-child relationships so it can sequence device power transitions in the correct order when sleeping and waking. For example, a device on a PCI bus should be powered off before its parent bus when the system sleeps, and powered on only after the parent is active again when it wakes.

Windows NT’s driver model was not sufficient for this task. Although it did have a layered driver model, the layers were used for purposes other than modeling the system’s physical device relationships. Windows 2000 changed that. Instead of each driver looking for its own devices, WDM adds a special kind of driver called a bus driver, which is responsible for enumerating its child devices. For example, the PCI bus driver finds all PCI devices attached to a PCI bus and tells each function driver when an instance of its device has been found, rather than letting drivers probe for their own devices.

This results in the OS having a tree of all the devices on the system. The bus drivers form the inner nodes of the tree, and the leaf nodes are the functional drivers for the system’s devices. The OS now has enough information to properly sequence power events across the devices on the system.
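The ordering constraint itself is simple to express. The following sketch uses a hypothetical tree node rather than Windows’s actual driver structures: powering down visits children before their parent bus, and powering up does the reverse.

    #include <stdio.h>

    /* Hypothetical device-tree node; real WDM device stacks are more
     * involved, but the sequencing constraint is the same. */
    struct device {
        const char *name;
        struct device *child;    /* first child on this bus */
        struct device *sibling;  /* next device on the same bus */
    };

    static void power_down(struct device *dev)
    {
        for (struct device *c = dev->child; c; c = c->sibling)
            power_down(c);                    /* leaves first... */
        printf("power off: %s\n", dev->name); /* ...then the bus */
    }

    static void power_up(struct device *dev)
    {
        printf("power on:  %s\n", dev->name); /* bus first... */
        for (struct device *c = dev->child; c; c = c->sibling)
            power_up(c);                      /* ...then its children */
    }

    int main(void)
    {
        struct device nic  = { "ethernet", NULL,  NULL };
        struct device disk = { "disk",     NULL,  &nic };
        struct device pci  = { "PCI bus",  &disk, NULL };
        struct device root = { "root",     &pci,  NULL };

        power_down(&root); /* sleep: disk and NIC before the PCI bus */
        power_up(&root);   /* wake: PCI bus before disk and NIC */
        return 0;
    }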

Although the transition to WDM was not easy, it is a remarkable tribute to the original NT driver model’s design that more did not have to change to accommodate the dramatic changes in basic functionality that WDM involved.

IMPLEMENTING OSPM: LINUX

Linux has a very different design philosophy from Windows. Most notably, in contrast to Windows (whose driver interface was designed up front), the Linux driver model has developed through a long, evolutionary process. In addition, almost the entire driver source is released along with the core kernel routines, and source releases are made regularly.

This has had some positive results. First, there is no need to maintain legacy interfaces. If an interface needs to be changed, then both it and all the places where it is used can also be changed relatively easily. Second, changes are quickly tested on a variety of machines, and bugs are found and reported quickly. Thus, the next round of changes can incorporate feedback from those reports.

These advantages have proven vital, because initially Linux lacked a universal device driver interface. Individual subsystems, such as PCI and USB, had by necessity started to develop subsystem-specific PnP-like interfaces, but there was no systemwide way to see all the devices on the system and how they attached to each other.

The soon-to-be-released Linux kernel 2.6 will include the framework needed to tie all the system’s devices together in a unified device tree. This consists of entirely new interfaces, implemented almost from scratch, but the rapid development process and its freedom to abolish obsolete interfaces have resulted in much progress in a short amount of time. Although the work is not yet complete, the main remaining task is one Windows also had to endure: converting all the drivers to the new driver model.
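The following sketch illustrates the bus-driver idea common to both models, using hypothetical names and structures rather than the actual 2.6 interfaces (which were still settling at the time of this writing): the bus reports each child it finds to the core, and the core binds a matching function driver, instead of each driver probing the hardware for its own devices.

    #include <stdio.h>
    #include <string.h>

    struct device; /* forward declaration */

    struct driver {
        const char *match;                  /* device name it handles */
        void (*probe)(struct device *dev);  /* called on a match */
    };

    struct device {
        const char *name;
        struct device *parent;              /* the bus it sits on */
        const struct driver *drv;           /* bound function driver */
    };

    static void battery_probe(struct device *dev)
    {
        printf("bound driver to %s (on %s)\n",
               dev->name, dev->parent ? dev->parent->name : "root");
    }

    static const struct driver drivers[] = {
        { "acpi-battery", battery_probe },
    };

    /* The core, not the function driver, learns of new hardware: a
     * bus driver calls this for each child device it enumerates. */
    static void device_register(struct device *dev)
    {
        for (size_t i = 0; i < sizeof drivers / sizeof drivers[0]; i++)
            if (strcmp(dev->name, drivers[i].match) == 0) {
                dev->drv = &drivers[i];
                drivers[i].probe(dev);
            }
    }

    int main(void)
    {
        struct device root = { "acpi-root",    NULL,  NULL };
        struct device bat  = { "acpi-battery", &root, NULL };
        device_register(&bat);  /* the bus driver found a battery */
        return 0;
    }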

OSPM RESPONSIBILITIES

On a contemporary system, to be considered fully functional, OSPM must handle a variety of devices and general responsibilities, the most critical of which is the CPU itself.

CPU POWER MANAGEMENT: A CRITICAL JOB FOR OSPM

CPU power management deserves special attention because it is crucial to obtaining good battery life on a mobile system. The CPU draws significant power while executing and dissipates heat into its immediate environment. OSPM CPU power management must therefore not only keep CPU power as low as possible, but also consider the performance and thermal impacts of its policy on the system. OSPM can use a number of techniques to control CPU power and thus extend battery life.

The first is via processor power states, called C states. These can be entered when the system is idle, and they power down the CPU to varying degrees. Originally, three C states were defined, C1 through C3, with C0 defined as “running.” These offer progressively greater power savings, but their entry and exit times also increase correspondingly. Thus, using C states saves power while the CPU is idle, but it is important not to be too aggressive: entering too deep a C state and then being woken up immediately results in worse performance with no power savings. Typically, a policy of entering a shallow C state such as C1 when idle, then moving to progressively deeper states if the CPU remains uninterrupted, works well to maximize power savings without hurting performance. ACPI 2.0 gives CPU vendors more flexibility in defining the capabilities and number of C states their products support.
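A C-state policy boils down to a break-even calculation. The sketch below uses hypothetical latency numbers, not any real processor’s: a deeper state is chosen only when the expected idle period is long enough to pay back its entry and exit cost.

    #include <stdio.h>

    /* A minimal sketch of a C-state selection policy with made-up
     * numbers: deeper states save more power but cost more to enter
     * and leave, so they only pay off for long idle periods. */
    struct cstate {
        const char *name;
        unsigned exit_latency_us;     /* time to wake back up */
        unsigned target_residency_us; /* idle time needed to break even */
    };

    static const struct cstate cstates[] = {
        { "C1",   1,    2 },
        { "C2",  10,   50 },
        { "C3", 100, 1000 },
    };

    /* Pick the deepest state whose break-even residency the expected
     * idle period meets; being too aggressive costs both performance
     * and power. */
    static const struct cstate *choose_cstate(unsigned expected_idle_us)
    {
        const struct cstate *pick = &cstates[0];
        for (size_t i = 1; i < sizeof cstates / sizeof cstates[0]; i++)
            if (expected_idle_us >= cstates[i].target_residency_us)
                pick = &cstates[i];
        return pick;
    }

    int main(void)
    {
        printf("idle 5us -> %s\n", choose_cstate(5)->name);    /* C1 */
        printf("idle 2ms -> %s\n", choose_cstate(2000)->name); /* C3 */
        return 0;
    }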

The second method of controlling CPU power is throttling. Throttling asserts a special pin on the CPU, which causes it to execute at a lower effective frequency. Because power consumption scales linearly with frequency, reducing the frequency through throttling also reduces power.

Given the choice, it is usually better for OSPM to use C states rather than throttling to save power. While throttling reduces power, it diminishes CPU performance to the same degree: on a CPU throttled to 50 percent, a workload takes twice as long to complete. It is usually better to “race to halt”—try to complete the work as soon as possible and then enter a C state sooner. Nonetheless, throttling can be necessary to control an overheating CPU.
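A back-of-the-envelope calculation shows why. The numbers below are hypothetical, and the accounting adds an assumption beyond the text: the rest of the platform must also stay awake while work remains, so finishing early lets everything idle sooner.

    #include <stdio.h>

    /* Rough "race to halt" arithmetic with made-up numbers. Throttling
     * to 50 percent halves CPU power but doubles runtime, so the CPU
     * energy for the job is roughly a wash; the win from finishing
     * early is that the CPU and the rest of the platform can drop to
     * a low-power idle state sooner. */
    int main(void)
    {
        double cpu_w      = 20.0; /* assumed CPU power, full speed */
        double platform_w =  8.0; /* assumed rest-of-system power  */
        double idle_w     =  1.0; /* assumed deep-idle total power */
        double work_s     =  1.0; /* job length at full speed      */

        /* Throttled to 50%: the job takes 2 s; everything stays awake. */
        double e_throttle = (cpu_w / 2.0 + platform_w) * (2.0 * work_s);

        /* Race to halt: 1 s at full power, then 1 s in deep idle. */
        double e_race = (cpu_w + platform_w) * work_s + idle_w * work_s;

        printf("throttled:    %.0f J\n", e_throttle); /* 36 J */
        printf("race to halt: %.0f J\n", e_race);     /* 29 J */
        return 0;
    }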

The newest tool available for controlling CPU power is performance states. In the past couple of years, CPU vendors have begun offering mobile microprocessors that support multiple combinations of supply voltage and frequency. This voltage scaling is a very important feature, and the marketing for CPUs that implement a version of it is generally not shy about touting it.

This works because a higher supply voltage enables a chip to be clocked faster; correspondingly, if a lower speed is acceptable, the chip does not require as high an input voltage. Dropping both the speed and the voltage thus yields the power decrease from the frequency reduction (just as throttling would), plus the power decrease from the voltage drop. The latter is magnified because physics dictates that a chip’s power consumption is proportional to the square of its voltage. Therefore, dropping from 1.3V to 1.0V doesn’t net a 23 percent power savings; it nets roughly 40 percent.
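The arithmetic is easy to check. Dynamic CPU power scales roughly as voltage squared times frequency; the sketch below reproduces the text’s “roughly 40 percent” figure by considering the voltage term alone (a real performance state drops frequency too, compounding the savings).

    #include <stdio.h>

    /* Dynamic power scales roughly as P = C * V^2 * f. Holding
     * frequency constant, dropping the core voltage from 1.3 V to
     * 1.0 V alone cuts power by about 41 percent. */
    int main(void)
    {
        double v_high = 1.3, v_low = 1.0;
        double relative = (v_low * v_low) / (v_high * v_high);
        printf("relative power: %.2f (a %.0f%% savings)\n",
               relative, (1.0 - relative) * 100.0); /* 0.59, 41% */
        return 0;
    }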

Voltage scaling adds another wrinkle to balancing power and performance. Now that CPU power policy has a slew of performance states to choose from, it is not always easy to determine the optimal way to balance power and performance. Work continues on optimizing this area of power policy.

CHALLENGES FOR THE FUTURE

Computer hardware has not stood still since ACPI’s introduction, and innovation shows no sign of slowing down. This poses some questions for ACPI and the future of mobile PCs.

In the past five years, we have seen a tenfold increase in the raw frequency of the processor. If all else had remained constant, this would have meant at least a tenfold increase in power usage. Other system components also draw more power. Thankfully, Moore’s law has been a great help in mitigating this (smaller transistors use less power), as have mobile-specific chip efforts, but the power trend is up. The main worry for the future is leakage power. As opposed to the power used productively when transistors do work, leakage power is consumed simply because the circuit is receiving power. The smaller the transistor, the more it leaks. Leakage is growing to be a considerable percentage of total CPU power, and it will only get worse. (For more on this, see Caspar Boekhoudt’s “The Big Bang Theory of IDEs” on page 74 of this issue.)

But what about bigger, better batteries? New energy storage technologies are on the horizon, most notably fuel cells. These are viewed with considerable excitement, but obstacles to their adoption remain. First, while working demonstration fuel cells have been getting smaller and smaller, they are still not quite small enough to fit in a reasonably sized notebook. Second, current fuel cells are good at providing a long, steady power supply but cannot yet handle the large transient draws that a laptop typically demands of its battery. Hybrid battery/fuel-cell designs may work, but they may be even bulkier. Finally, there is the question of how people will feel about a nonelectric power source. Will they be willing to pay to “fill up” their laptop’s tank in exchange for more mobility?

In addition to these hardware issues, software—and ACPI specifically—will play a crucial part in enabling future systems to continue improving in feature set, performance, and battery life. ACPI has enabled much more aggressive power management of the processor and other components, but new ideas in OS power policy, as well as changes to ACPI itself, will be necessary for OSPM to manage system power resources in the most intelligent way possible.

Finally, the shift of responsibility for power policy from the firmware to the OS may not be the last one. On modern processors, CPU thermal conditions develop so quickly that software cannot respond to them in time. Hardware must cooperate with the OS to manage such conditions, a need that wasn’t envisioned when ACPI was first conceived. One challenge will be handling this new transition to shared responsibility smoothly.

Mobile computing has become an integral part of many people’s work and personal lives, and its importance continues to grow. Demands for more performance, better battery life, innovative form factors, and wireless connectivity are driving both hardware and software development at a very fast pace, and will continue to do so in the future.

ANDREW GROVER is a senior software engineer in Intel’s Mobile Products Group. He has worked on implementing many power management features for both Microsoft Windows and Linux, and he is the current maintainer of the Linux ACPI driver. He has a bachelor’s degree in computer science from Emory University.

 


Originally published in Queue vol. 1, no. 7