A Conversation with Dan Dobberpuhl:
The computer industry has always been about power.
The development of the microprocessors that power computers has been a relentless search for more power, higher speed, and better performance, usually in smaller and smaller packages. But when is enough enough?
Energy Management on Handheld Devices:
Whatever their origin, all handheld devices share the same Achilles heel: the battery.
Handheld devices are becoming ubiquitous, and as their capabilities increase, they are starting to displace laptop computers - much as laptop computers have displaced desktop computers in many roles. Handheld devices are evolving from today’s PDAs, organizers, cellular phones, and game machines into a variety of new forms. First, although partially offset by improvements in low-power electronics, this increased functionality carries a corresponding increase in energy consumption. Second, as a consequence of displacing other pieces of equipment, handheld devices are seeing more use between battery charges. Finally, battery technology is not improving at the same pace as the energy requirements of handheld electronics. Therefore, energy management, once merely a desirable feature, has become an important design requirement and one of the greatest challenges in portable computing, and it will remain so for a long time to come.
Getting Gigascale Chips:
Challenges and Opportunities in Continuing Moore’s Law
Processor performance has increased by five orders of magnitude in the last three decades, made possible by following Moore’s law - that is, continued technology scaling, improved transistor performance to increase frequency, additional integration capacity to realize complex architectures, and reduced energy consumed per logic operation to keep power dissipation within limits. Advances in software technology, such as rich multimedia applications and runtime systems, exploited this performance explosion, delivering to end users higher productivity, seamless Internet connectivity, and even multimedia and entertainment.
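As a rough sanity check on that figure (my own arithmetic, not the author's): a 10^5 improvement over 30 years implies performance doubling roughly every 1.8 years, consistent with the cadence usually associated with Moore’s law.

    import math

    # Five orders of magnitude over three decades: how often must
    # performance double to get there?
    years, growth = 30, 1e5
    print(round(years / math.log2(growth), 2))   # ~1.81 years per doubling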
Making a Case for Efficient Supercomputing:
It is time for the computing community to use alternative metrics for evaluating performance.
A supercomputer evokes images of “big iron” and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.
Modern System Power Management:
Increasing demands for more power and greater efficiency are pressuring software and hardware developers to ask questions and look for answers.
The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS architecture changes. The effort that ACPI’s definition and implementation have entailed is worth the trouble because of how much flexibility it gives to the OS (and ultimately the user) to control power management policy and implementation.
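To make the interface concrete, here is a minimal sketch (my own illustration, not from the article) of how ACPI-reported battery state surfaces to user code on Linux, where the kernel’s ACPI battery driver typically exposes it through sysfs. The device name BAT0 is an assumption; it varies by machine.

    from pathlib import Path

    BATTERY = Path("/sys/class/power_supply/BAT0")  # assumed device name

    def read_attr(name: str) -> str:
        """Read one sysfs attribute of the battery, stripped of whitespace."""
        return (BATTERY / name).read_text().strip()

    if BATTERY.exists():
        print("status:  ", read_attr("status"))         # e.g. "Discharging"
        print("capacity:", read_attr("capacity"), "%")  # percent of full charge
    else:
        print("no ACPI battery exposed at", BATTERY)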
Reading, Writing, and Code:
The key to writing readable code is developing good coding style.
Forty years ago, when computer programming was an individual experience, the need for easily readable code wasn’t on any priority list. Today, however, programming is usually a team-based activity, and writing code that others can easily decipher has become a necessity. Producing readable code is not as easy as it sounds.
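To make the point concrete, here is a contrived pair of functions (my own illustration, not from the article): both compute the same result, but only one can be read at a glance.

    # Opaque: correct, but the reader must decode it.
    def f(a, b):
        r = 0
        for i in range(len(a)):
            r += a[i] * b[i]
        return r

    # Readable: names, types, and a docstring carry the intent.
    def dot_product(xs: list[float], ys: list[float]) -> float:
        """Return the dot product of two equal-length vectors."""
        return sum(x * y for x, y in zip(xs, ys))

    assert f([1, 2], [3, 4]) == dot_product([1, 2], [3, 4]) == 11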
Reconfigurable Future:
The ability to produce cheaper, more compact chips is a double-edged sword.
Predicting the future is notoriously hard. Sometimes I feel that the only real guarantee is that the future will happen, and that someone will point out how it’s not like what was predicted. Nevertheless, we seem intent on trying to figure out what will happen, and worse yet, on recording these views so they can later be used against us. So here I go... Scaling has been driving the whole electronics industry, allowing it to produce chips with more transistors at a lower cost. But this trend is a double-edged sword: we not only need to design ever more complex devices, but we must also determine which complex devices lots of people want, since we have to sell many, many chips to amortize the significant design cost.
Stand and Deliver: Why I Hate Stand-Up Meetings:
Stand-up meetings are an important component of ‘whole team,’ one of the fundamental practices of extreme programming (XP).
According to the Extreme Programming Web site, the stand-up meeting is one part of the rules and practices of extreme programming: “Communication among the entire team is the purpose of the stand-up meeting. They should take place every morning in order to communicate problems, solutions, and promote team focus. The idea is that everyone stands up in a circle in order to avoid long discussions. It is more efficient to have one short meeting that everyone is required to attend than many meetings with a few developers each.”
The Big Bang Theory of IDEs:
Pondering the vastness of the ever-expanding universe of IDEs, you might wonder whether a usable IDE is too much to ask for.
Remember the halcyon days when development required only a text editor, a compiler, and some sort of debugger (in cases where the odd printf() or two alone didn’t serve)? During the early days of computing, these were independent tools used iteratively in development’s golden circle. Somewhere along the way we realized that a closer integration of these tools could expedite the development process. Thus was born the integrated development environment (IDE), a framework and user environment for software development that’s actually a toolkit of instruments essential to software creation. At first, IDEs simply connected the big three (editor, compiler, and debugger), but nowadays most go well beyond those minimum requirements. In fact, in recent years, we have witnessed an explosion in the constituent functionality of IDEs.
The Inevitability of Reconfigurable Systems:
The transition from instruction-based to reconfigurable circuits will not be easy. But has its time come?
The introduction of the microprocessor in 1971 marked the beginning of a 30-year stall in design methods for electronic systems. The industry is coming out of the stall by shifting from programmed to reconfigurable systems. In programmed systems, a linear sequence of configuration bits, organized into blocks called instructions, configures fixed hardware to mimic custom hardware. In reconfigurable systems, the physical connections among logic elements change with time to mimic custom hardware. The transition to reconfigurable systems will be wrenching, but it is inevitable as the design emphasis shifts from cost/performance to cost/performance per watt. Here’s the story.
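A loose software analogy of that distinction (my own sketch; the author is describing hardware): a programmed system pushes a stream of instruction bits through fixed interpreting machinery, while a reconfigured system wires the computation directly, with no per-operation fetch and decode.

    # Programmed: fixed machinery interprets a stream of instructions,
    # one configuration of the datapath per operation.
    def interpret(program, x):
        for op, operand in program:
            if op == "add":
                x += operand
            elif op == "mul":
                x *= operand
        return x

    # "Reconfigured": the connections themselves encode the computation;
    # there is no instruction stream to mimic custom hardware.
    def wired(x):
        return (x + 3) * 5

    program = [("add", 3), ("mul", 5)]
    assert interpret(program, 2) == wired(2) == 25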
Wireless Networking Considered Flaky:
You know what bugs me about wireless networking? Everyone thinks it’s so cool and never talks about the bad side of things.
Oh sure, I can get on the ’net from anywhere at Usenix or the IETF, but those are _hostile_ _nets_. Hell, all wireless nets are hostile. By their very nature, you don’t know who’s sharing the ether with you. But people go on doing their stuff, confident that they are OK because they’re behind the firewall.