
Digitally Assisted Analog Integrated Circuits

Closing the gap between analog and digital

In past decades, “Moore’s law”1 has governed the revolution in microelectronics. Through continuous advancements in device and fabrication technology, the industry has maintained exponential progress rates in transistor miniaturization and integration density. As a result, microchips have become cheaper, faster, more complex, and more power efficient.

We will show, however, that digital performance metrics have grown significantly faster than corresponding measures for analog circuits, especially ADCs (analog-to-digital converters). Since most DSP (digital signal processing) systems depend on A/D conversion at their interfaces, this growing performance disparity threatens to limit the rate of progress of DSP hardware.

Much of the performance gain in digital hardware can be traced to corresponding advances in integrated circuit technology. For example, over the past 30 years we have witnessed a steady decrease in transistor channel length by two orders of magnitude from 10µm in 1970 to less than 0.1µm today. While this reduction of feature size benefits analog and digital circuits alike, overall analog circuit performance is compromised by other trends such as reduced supply voltages.

Figure 1 shows the relative performance of microprocessors and analog-to-digital converters over the last 15 years. While both analog and digital performance have increased exponentially over time, microprocessor performance has improved more than a thousandfold over this period, compared with an increase of only about 10 times for ADCs. As the relative performance gap widens, applications such as digital audio, video, and RF (radio frequency) communication are increasingly limited not by the available digital processing power, but by their analog interfaces.

The digitally assisted analog circuits discussed in this article avoid such trade-offs by delegating analog precision requirements to a digital processor. The relaxed analog requirements translate into improved metrics such as reduced power dissipation or increased speed. Key to this approach is a statistical system identification technique that continuously monitors analog imperfections. Judicious selection of the algorithms employed in the digital processor translates into a negligible area and power penalty from the DSP that further benefits from the continued technology scaling.


Despite the success of digital computing, analog interfaces are still needed. In fact, as computing power increases, digital processors need more sophisticated analog circuits to interface with the real world. Figure 2 shows a variety of applications and their interface requirements. For example, digital audio demands analog interfaces with at least 16-bit resolution operating at sampling rates up to 100 kHz to convert signals from microphones and other sources to a digital representation. Likewise, DACs (digital-to-analog converters) are used to render the digital data stream to an amplifier and speaker for listening.

The benefits of digitally processing and storing audio signals are obvious but contingent on the availability of powerful ADC and DAC interfaces. Such converters are also used in hard-disk drives (HDDs), where sophisticated digital processing replaced simpler analog algorithms to increase the achievable density significantly. Again, although key signal-processing functions migrated from the analog to the digital domain, the challenge for the analog designer shifted to providing converters with sampling rates of hundreds of MHz.

With the increasing trend toward battery-powered devices, power dissipation is an important consideration when choosing an ADC. In most portable applications the power budget for an ADC is limited to a fraction of a watt. As shown in figure 2, this dictates a very strict upper limit on performance that depends only weakly on technology. Power dissipation is a showstopper for an increasing number of otherwise attractive applications, such as so-called “software radios.” These devices use very high-performance ADCs to directly convert the signal at the antenna of, say, a cellular phone. The demodulation is then performed by a digital processor, which can be reprogrammed easily when standards change. Unfortunately, a full-featured software radio calls for converters with 16-bit or better precision operating at speeds in excess of 1 GHz. Even if such a converter could somehow be built, its power dissipation of several hundred watts would render it useless in most applications.

In all these applications, analog circuits play a key role determining functionality and feasibility. Without innovations to reduce the large and growing performance gap between analog and digital circuits, analog circuit capabilities are threatening to become the bottleneck in overall system performance.


Reduced feature size combined with lower supply voltage are the key technology drivers resulting in reduced cost, higher speed, and lower power consumption of digital integrated circuits. For analog circuits, scaling is a mixed blessing. Unlike digital circuits, analog functions are constrained by electronic noise and accuracy requirements, factors that only conditionally benefit from technology scaling and can even deteriorate for very low supply voltages. The result is a complex trade-off, and the net benefit of technology scaling in analog circuits is usually a strong function of implementation and architecture.

Analog circuit design trades speed and precision requirements for power dissipation. While power tends to grow linearly with speed, its relation to precision constraints is far more complex. From a general perspective, precision can be subdivided into three main components. The first and most fundamental limit in accuracy is given by the thermal noise of circuit elements. For example, the available signal headroom and the so-called thermal or “kT/C noise”2 determine the dynamic range in an analog sampled data circuit.

In precision analog circuits, typically those with equivalent resolution of 12 bits or more, the noise-power trade-off is extremely steep: Reducing the standard deviation of the noise by a factor of two requires quadrupling the effective capacitance in the circuit. At constant speed, this necessitates a fourfold increase in transconductance, and hence a 4x increase in power dissipation. In other words, the addition of a single bit to a precision ADC can increase power dissipation up to fourfold. In comparison, the power dissipation of a digital circuit increases only slightly faster than linearly with word length. Power, supply voltage, and capacitor area constraints further limit the maximum achievable analog circuit resolution to 18 to 20 bits in current integrated circuit technologies. Additional supply voltage reductions may reduce this bound further.
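The steepness of this trade-off is easy to quantify. The following sketch computes the capacitance required to keep kT/C noise below an n-bit noise floor; the signal swing and the assumption that the entire noise budget goes to kT/C noise are illustrative simplifications, not design numbers from the text.

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # absolute temperature, K

def required_cap(n_bits, v_rms=0.354):
    """Capacitance at which kT/C noise still supports an n-bit SNR.

    v_rms is an assumed rms signal swing (0.354 V, roughly a 1-Vpp sine);
    for simplicity the entire noise budget is assigned to kT/C noise.
    """
    snr_db = 6.02 * n_bits + 1.76            # ideal n-bit SNR
    vn_rms = v_rms / 10 ** (snr_db / 20.0)   # tolerable rms noise voltage
    return k * T / vn_rms ** 2               # from vn^2 = kT/C

# Each added bit halves the tolerable noise voltage and therefore
# quadruples the capacitance -- and, at constant speed, the power.
for n in (12, 13, 14):
    print(f"{n} bits: C = {required_cap(n) * 1e12:.2f} pF")
```

Running the loop shows the capacitance (and hence power) growing by a factor of four per bit, which is the fourfold penalty described above.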

In circuits that are limited by component matching, increasing the precision also translates into a power penalty. To first order, matching accuracy is inversely proportional to component area.3 Therefore, additional precision requires larger components with larger capacitance and a resulting net increase in power dissipation. In contrast to thermal noise, however, matching errors are not fundamental in the sense that they can be addressed without necessarily increasing component size. In many situations, it is possible to overcome matching errors using some form of trimming or calibration. In state-of-the-art ADCs, digital correction and calibration techniques4, 5 are routinely used both to avoid a matching-induced power penalty and to improve accuracy beyond technology limits.
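The area-matching relationship cited above (reference 3) can be sketched numerically. The Pelgrom coefficient below is a hypothetical value chosen for illustration; real coefficients are process-dependent.

```python
import math

# Pelgrom-style mismatch model: sigma(dVt) = A_vt / sqrt(W * L).
# A_VT here is a hypothetical coefficient of 5 mV*um; actual values
# depend on the fabrication process.
A_VT = 5e-3 * 1e-6  # V*m

def vt_mismatch_sigma(w_um, l_um):
    """Standard deviation of threshold-voltage mismatch for a device
    of gate width w_um and length l_um (both in micrometers)."""
    return A_VT / math.sqrt((w_um * 1e-6) * (l_um * 1e-6))

# Halving the mismatch requires 4x the gate area (and its capacitance):
s_small = vt_mismatch_sigma(10, 1)
s_large = vt_mismatch_sigma(20, 2)
print(f"10x1 um: {s_small * 1e3:.2f} mV, 20x2 um: {s_large * 1e3:.2f} mV")
```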

A third and significant challenge in precise analog signal processing arises from the need for highly linear amplification. In most electronic circuits, precisely linear operation is achieved by using high-gain amplifiers in a negative feedback loop configuration. In some sense, the use of electronic feedback parallels the approach of increasing component size to minimize mismatch. Achieving sufficient gain usually necessitates the use of complex amplifiers that tend to be suboptimal in terms of speed and noise. Just as in the case of mismatch, however, distortion and gain inaccuracy limitations are not fundamental. Resulting errors can be compensated upstream or downstream, preferably through some digital compensation mechanism as well.

From this point of view, it is most interesting to investigate the potential advantage and power savings that are possible by lifting linearity requirements in analog amplifiers.


Analog circuits can take advantage of the high performance of scaled digital circuits by delegating critical design constraints to a digital processor. The potential advantage of such “digital assistance” has been recognized and documented in numerous publications on the subject.6, 7, 8, 9, 10, 11, 12, 13 For example, converters routinely use digital circuits for canceling offset voltages. An even more significant and previously unexploited advantage results from removing the linearity constraint from an analog circuit. For example, the mapping from voltage to digital codes in an ADC should be perfectly linear. As pointed out in the previous section, this constraint adds significant complexity and power dissipation to the analog design that could be saved in the digitally assisted solution.

Provided the converter has sufficient resolution, a digital processor could in principle map the nonlinearly distorted digital converter output to the correct codes using a simple remapping. In a modern process the size and power penalty from the added digital circuits is minimal compared with the power saved by the simpler analog circuits. An ADC serves as a vehicle to substantiate this claim.
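To make the remapping concrete, here is a toy sketch (not the authors' implementation) for an assumed cubic distortion: the digital postprocessor applies a first-order polynomial inverse to the raw codes. The distortion model and coefficient are illustrative.

```python
# Model an ADC transfer with a mild cubic distortion (assumed form):
#   y = x - a3 * x**3, for normalized input |x| <= 1.
a3 = 0.05

def distorted_adc(x):
    return x - a3 * x ** 3

def digital_correction(y, a3_hat):
    """First-order digital inverse of the cubic: x_hat = y + a3_hat * y**3."""
    return y + a3_hat * y ** 3

xs = [i / 500.0 - 1.0 for i in range(1001)]   # sweep the input from -1 to +1
raw_err = max(abs(distorted_adc(x) - x) for x in xs)
cor_err = max(abs(digital_correction(distorted_adc(x), a3) - x) for x in xs)

print(f"max error raw: {raw_err:.4f}, corrected: {cor_err:.4f}")
```

The first-order inverse leaves only a small higher-order residual, reducing the worst-case error by almost an order of magnitude in this toy model.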


Pipelined converters have become the predominant topology for ADC resolutions of 8 to 14 bits and conversion rates of 10 to 200 megasamples per second (MS/s). Applications include disk drives, digital cameras, wireless receivers, and base stations, just to name a few. Pipelined converters are available as stand-alone parts but are more often embedded in mixed analog-digital chips.

Figure 3 shows a conceptual block diagram of this converter topology. Several converter stages are cascaded and process the analog input sequentially, analogous to flip-flops propagating a bit stream in a digital shift register.

Each stage performs a sample and hold operation and a coarse A/D conversion. The local quantization result is converted back into analog form and used to compute the error in the coarse digital approximation D. The locally computed and amplified quantization error, usually called the residuum (Vres), propagates through subsequent stages, which resolve less significant digital information of the initial input sample. After the signal has passed through all stages, the sub-quantization results are combined to yield the final digital output word.

The main advantage of this architecture is that, because of stage pipelining, its throughput rate is set by the time needed to perform a single sub-A/D and D/A conversion. The fact that the signal must propagate through all stages before the final conversion result becomes available results only in conversion latency, which is tolerable in many signal-processing applications.
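The stage behavior described above can be modeled in a few lines. The sketch below is an idealized 1-bit-per-stage pipeline; real designs use redundant (e.g., 1.5-bit) stages with digital correction, so this is a simplification for illustration only.

```python
def pipeline_adc(x, n_stages=12):
    """Idealized 1-bit-per-stage pipeline model.

    Each stage makes a coarse decision, subtracts the corresponding DAC
    level, and amplifies the residue by 2 for the next stage.  Input x
    is normalized to the range [-1, 1).
    """
    bits = []
    v = x
    for _ in range(n_stages):
        d = 1 if v >= 0 else 0      # coarse sub-ADC decision
        v = 2 * v - (2 * d - 1)     # residue: gain of 2 minus the DAC level
        bits.append(d)
    # Combine the per-stage decisions into the final output word.
    return sum((2 * d - 1) * 2 ** -(i + 1) for i, d in enumerate(bits))

x = 0.3712
print(f"in = {x}, out = {pipeline_adc(x):.6f}")
```

With ideal stages the reconstruction error is bounded by the weight of the last stage, 2^-12 here.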

The architecture is inherently tolerant of errors in the sub-ADCs, provided that the DACs and amplifiers are accurate. Conventional implementations use electronic feedback to meet the amplifier precision requirements. As discussed previously, however, the cost of this desirable feature is an excessive voltage gain requirement. In the front end of high-resolution pipelines,14 complicated multistage amplifiers with open-loop gain of greater than 100,000 are often needed to meet the stringent accuracy requirements. This results in excessive power dissipation. As a result, residue amplifiers dominate the power dissipation of pipelined ADCs, with a typical contribution of 40 to 70 percent of the total ADC power.
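The gain figure quoted above follows from a first-order estimate: the relative gain error of a feedback amplifier is roughly 1/(A·β), and it must stay below half an LSB at the target resolution. The sketch below (the function name and the assumed closed-loop gain of 4 are illustrative) reproduces the order of magnitude.

```python
def required_open_loop_gain(n_bits, closed_loop_gain=4.0):
    """Open-loop gain A needed so the closed-loop gain error stays below
    half an LSB at n_bits resolution.

    First-order estimate: error ~ 1/(A * beta), with beta ~ 1/closed_loop_gain.
    The closed-loop gain of 4 is an assumed, typical front-end stage value.
    """
    max_rel_error = 0.5 / 2 ** n_bits        # half an LSB, relative
    return closed_loop_gain / max_rel_error  # A >= G / error

for n in (10, 12, 14):
    print(f"{n} bits: A > {required_open_loop_gain(n):,.0f}")
```

For a 14-bit front end this yields a required gain above 130,000, consistent with the "greater than 100,000" figure in the text.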


Replacing precision residue amplifiers with simple open-loop stages and correcting for the resulting errors digitally is an excellent opportunity to mitigate both of the above-mentioned issues. Figure 4 compares a typical high-gain two-stage amplifier with the proposed single-stage solution. The high-gain amplifier uses a combination of two cascaded gain stages and a helper amplifier to achieve the required voltage gain of more than 100,000. The proposed solution consists of a simple resistively loaded differential pair.

The advantage of the low-gain amplifier is not limited to the obviously reduced transistor count and design complexity. Simply counting the number of current-carrying branches suggests reduced power dissipation, as does the elimination of the helper amplifiers. Reduced noise and increased signal range at the output of the single-stage amplifier translate into a further significant reduction of power dissipation for a given operating speed.

When applied to the first stage of a 12-bit 75-MS/s pipelined ADC, the power dissipation of the open-loop amplifier was a third of that of the high-gain solution.


Open-loop amplification results in significantly reduced power dissipation or higher sampling speed but depends on a digital postprocessor to correct amplifier nonlinearity. INL (integral nonlinearity) measures the deviation of the ADC characteristic from a straight line normalized to converter resolution. Many applications demand linearity within one quantizer step or better. Figure 5 shows the measured nonlinearity for a converter with open-loop amplification.
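The INL metric itself is simple to compute from measured data. The sketch below uses an endpoint-fit line (one common convention; a least-squares fit is another) and hypothetical measurement values.

```python
def inl_lsb(codes):
    """Integral nonlinearity of an ADC transfer, in LSB, relative to a
    straight line through the first and last measured codes.

    `codes` holds the average output code at each of a uniformly spaced
    set of input voltages (e.g., from a slow input ramp).
    """
    n = len(codes)
    slope = (codes[-1] - codes[0]) / (n - 1)   # endpoint-fit line
    return [codes[i] - (codes[0] + slope * i) for i in range(n)]

# A toy, mildly distorted characteristic (hypothetical numbers):
measured = [0, 1.2, 2.3, 3.3, 4.2, 5.1, 6.05, 7.0]
print([round(e, 2) for e in inl_lsb(measured)])
```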

While resulting in significantly reduced power dissipation, open-loop amplification introduces an intolerable amount of nonlinearity. This error can be corrected digitally, but doing so requires accurate correction parameters. Unfortunately, the error signature and, hence, the correction parameters, are different for every part. Moreover, they depend on the temperature. It is therefore imperative that the correction parameters are continuously measured by the converter itself.

One solution is to disconnect the converter from the input signal periodically and instead convert a known calibration signal. The digital postprocessor then calculates the correction parameters from the deviations of the converted signal and the expected value. Though simple, this solution has several drawbacks. In many applications, the converter must be available continuously and periodic recalibration sequences are not acceptable. A second challenge is the need to generate an accurate calibration signal, thereby reintroducing the need for analog circuit accuracy, which this approach tries to avoid.

Instead, a statistical approach is used that employs the input signal of the converter itself to extract the calibration information without interrupting the operation of the converter. Figure 6 illustrates the concept: Nonlinearity in a converter can be detected easily by comparing the output change ΔDout produced by a fixed input voltage step ΔVin at different input voltages Va and Vb. Deviations between the two step heights indicate nonlinearity. A feedback loop updates the correction parameters to minimize the difference of the output step heights.

The actual implementation cannot rely on a known input step ΔVin. Instead, it switches randomly between Va and Vb and counts the number of occurrences in each case for which the converted output is less than a particular threshold. Since the random control signal and the unknown converter input are uncorrelated, the two counts will be equal if the converter is linear. Of course, keeping track of these counts with the digital postprocessor is straightforward.15
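The feedback idea can be illustrated with a heavily simplified numerical sketch. This is not the published algorithm: it models the analog error as a single cubic term, evaluates both step choices on each sample for clarity (the real scheme switches randomly and averages over many samples), and adapts one digital correction parameter so that the output step height becomes independent of signal level.

```python
import random

random.seed(0)

A3 = 0.03     # unknown cubic error of the open-loop amplifier (to be learned)
STEP = 0.05   # known input step, i.e., Vb - Va in the text

def analog_frontend(v):
    # Assumed distortion model: a mild cubic compression.
    return v - A3 * v ** 3

def corrected(y, p):
    # Digital post-correction with a single adapted parameter p.
    return y + p * y ** 3

p = 0.0              # correction parameter the feedback loop adapts
mu = 0.1             # loop gain of the parameter update
s_lo = s_hi = STEP   # latest step heights at small / large signal levels

for _ in range(20000):
    x = random.uniform(-0.9, 0.9)  # unknown, signal-dependent operating point
    # Output step produced by the known input step at this operating point.
    # (This toy evaluates both choices on the same sample; the real scheme
    # switches randomly between them and relies on statistical averaging.)
    step = (corrected(analog_frontend(x + STEP), p)
            - corrected(analog_frontend(x - STEP), p)) / 2
    if abs(x) < 0.3:
        s_lo = step
    elif abs(x) > 0.6:
        s_hi = step
    # Feedback: equal step heights everywhere imply a linear characteristic.
    p += mu * (s_lo - s_hi)

print(f"adapted correction p = {p:.4f} (analog cubic error was {A3})")
```

Starting from p = 0, the loop converges to a correction parameter close to the analog cubic error, without ever observing the input signal directly.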

Figure 7 shows the integral nonlinearity of the converter again with digital correction enabled. The INL drops to less than one converter step size, comparable to the performance achieved with the high-gain amplifier, albeit at lower power dissipation.


Technology scaling has opened a relative performance gap of more than two orders of magnitude between analog and digital circuits over the past two decades. Although analog circuits benefit from scaling, the advantage is partially offset by reduced power-supply voltages and intrinsic transistor voltage gain. Since overall system functionality depends on both analog and digital circuit performance, analog circuits increasingly determine overall capabilities.

Accuracy requirements are the most prominent and challenging difference between analog and digital circuits and a significant factor in the relatively lower performance of analog circuits. Digitally assisted analog circuits avoid the disadvantages of technology scaling by delegating accuracy requirements to a digital processor. The relaxed analog circuit constraints translate into higher overall system performance, such as reduced power dissipation or increased circuit speed.

Demonstrated here with a state-of-the-art ADC, the concept is applicable to a broad class of mixed analog-digital circuits. For example, digital processing can be used to linearize power amplifiers or entire communication channels. The approach exploits the vast processing power of modern digital devices, which implement at negligible cost functions that only recently were infeasible to realize. This suggests a paradigm shift from high-precision analog circuits to mixed analog and digital solutions capable of taking broader advantage of modern integrated circuit technology.


1. Intel. Moore’s Law, 2003.

2. Gray, P. R., et al. Analysis and Design of Analog Integrated Circuits, 4th edition. John Wiley & Sons, New York, NY, 2001.

3. Pelgrom, M. J. M., et al. Matching properties of MOS transistors. IEEE Journal of Solid-State Circuits 24, 5 (Oct. 1989), 1433–1439.

4. Lewis, S. H., et al. A 10-b 20-Msample/s analog-to-digital converter. IEEE Journal of Solid-State Circuits 27, 3 (Mar. 1992), 351–358.

5. Karanicolas, A. N., et al. A 15-b 1-Msample/s digitally self-calibrated pipeline ADC. IEEE Journal of Solid-State Circuits 28, 12 (Dec. 1993), 1207–1215.

6. Murmann, B., and Boser, B. E. A 12-bit 75-MS/s pipelined ADC using open-loop residue amplification. IEEE Journal of Solid-State Circuits 38, 12 (Dec. 2003), 2040–2050.

7. Jamal, S. M., et al. A 10-b 120-Msample/s time-interleaved analog-to-digital converter with digital background calibration. IEEE Journal of Solid-State Circuits 37, 12 (Dec. 2002), 1618–1627.

8. Yu, P. C., et al. A 14 b 40 MSample/s pipelined ADC with DFCA. ISSCC Digest of Technical Papers (Feb. 2001), 136–137.

9. Elbornsson, J. Blind estimation and error correction in a CMOS ADC. Proceedings of the ASIC/SOC Conference (Sept. 2000), 124–128.

10. Blecker, E. B., et al. Digital background calibration of an algorithmic analog-to-digital converter using a simplified queue. IEEE Journal of Solid-State Circuits 38, 6 (June 2003), 1059–1062.

11. Galton, I. Digital cancellation of D/A converter noise in pipelined A/D converters. IEEE Transactions on Circuits and Systems II 47, 3 (Mar. 2000), 185–196.

12. Ming, J., and Lewis, S. H. An 8-bit 80-Msample/s pipelined analog-to-digital converter with background calibration. IEEE Journal of Solid-State Circuits 36, 10 (Oct. 2001), 1489–1497.

13. Li, J., and Moon, U.-K. Background calibration techniques for multistage pipelined ADCs with digital redundancy. IEEE Transactions on Circuits and Systems II 50, 9 (Sept. 2003), 531–538.

14. Yang, W., et al. A 3-V 340-mW 14-b 75-Msample/s CMOS ADC with 85-dB SFDR at Nyquist input. IEEE Journal of Solid-State Circuits 36, 12 (Dec. 2001), 1931–1936.

15. See reference 6.

BORIS MURMANN is an assistant professor in the department of electrical engineering at Stanford University. His research is in the area of analog and mixed-signal circuits, with special emphasis on analog-digital interfaces and analog/digital co-design. Before entering graduate school at Santa Clara, he worked at Neutron Mikroelektronik GmbH, Hanau, Germany, where he was involved in the design of high-voltage, smart-power, and low-power ASICs in CMOS technology. He received his degree in communications engineering from FH Dieburg, Germany, an M.S. degree in electrical engineering from Santa Clara University, and a Ph.D. in electrical engineering from the University of California, Berkeley.

BERNHARD BOSER is a faculty member in the department of electrical engineering and computer sciences at the University of California, Berkeley, where he also serves as a director of the Berkeley Sensor and Actuator Center. His research is in the area of analog and mixed signal circuits, with special emphasis on analog-digital interface circuits and micromechanical sensors and actuators. He earned a degree in electrical engineering from the Swiss Federal Institute of Technology and his M.S. and Ph.D. from Stanford University. Before joining UC Berkeley, he was on the technical staff in the adaptive systems department at AT&T Bell Laboratories. He has served on the program committees of the International Solid-State Circuits Conference, the Transducers Conference, and the VLSI Symposium, and is currently the editor of the IEEE Journal of Solid-State Circuits.


Originally published in Queue vol. 2, no. 1


© ACM, Inc. All Rights Reserved.