Opinion


All-Optical Computing and All-Optical Networks are Dead

Anxiously awaiting the arrival of all-optical computing? Don't hold your breath.

Charles Beeler, El Dorado Ventures and Craig Partridge, BBN

We're a venture capitalist and a communications researcher, and we come bearing bad news: optical computers and all-optical networks aren't going to happen anytime soon. All those well-intentioned stories about computers operating at the speed of light, computers that would free us from Internet delays and from the tyranny of slow and hot electronic devices were, alas, overoptimistic. We won't be computing or routing at the speed of light anytime soon. (In truth, we probably should have told you this about two years ago, but we only recently met, compared notes, and realized our experiences were consistent.)

You see, building an optical computer or router entails a critical step called optical regeneration, which nobody knows how to do. After at least two decades of research and well over a billion dollars of venture-capital spending on promising potential breakthroughs, it is pretty clear that we've tried all the obvious ways, and a fair number of the nonobvious ways, to do optical regeneration. It appears that solving the problem is not a matter of top-class engineering; rather, it's beginning to look like Nobel-Prize-winning physics, akin to high-temperature superconductivity, where a flash of ingenuity opens up a completely new approach. Those kinds of results are rare and unpredictable, and thus all signs suggest optical computing is an innovation that will likely have to wait a generation or two or three to come to fruition.

Some Details

The past two decades have seen a profusion of optical devices that can serve as memories, comparators, and similar bits of logic that we would need to build an optical computer. Much like traditional silicon logic, this optical logic suffers from signal loss—that is, in the process of doing the operation or computation, some number of decibels is lost. In optics, the loss is substantial—an optical signal can traverse only a few circuits before it must be amplified.
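To see why "a few circuits" is the limit, a back-of-envelope loss budget helps. The numbers below (launch power, per-stage loss, usable power floor, and the function name itself) are illustrative assumptions for the sketch, not measured device figures:

```python
# Back-of-envelope loss budget: how many optical logic stages can a
# signal traverse before it must be amplified? All figures here are
# illustrative assumptions, not measured device parameters.

def stages_before_amplification(launch_dbm, loss_per_stage_db, floor_dbm):
    """Count logic stages until signal power falls below a usable floor."""
    power = launch_dbm
    stages = 0
    while power - loss_per_stage_db >= floor_dbm:
        power -= loss_per_stage_db  # each stage eats a few decibels
        stages += 1
    return stages

# Assumed: 0 dBm launch power, 3 dB lost per stage, -10 dBm floor.
print(stages_before_amplification(0.0, 3.0, -10.0))  # -> 3
```

With losses of a few decibels per stage, the budget is exhausted after a handful of circuits—hence the need for frequent amplification.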

We know how to optically amplify a signal. Indeed, optical amplification is one of the great innovations of the past 20 years and has tremendously increased the distances over which we can send an optical signal.

Unfortunately, amplifying a signal adds noise. After a few amplifications, we need to regenerate the signal: we need a device that receives a noisy signal and emits a crisp clean signal. Currently the only way to build regenerators is to build an OEO (optical-electronic-optical) device: the inbound signal is translated from the optical domain into a digitized sample; the electronic component removes the noise from the digitized sample and then uses the cleaned-up digitized sample to drive a laser that emits a clean signal in the optical domain. OEO regenerators work just fine, but they slow us down by forcing us to work at the speed of electronics.
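The same budgeting logic applies one level up: each amplifier restores power but degrades the signal-to-noise ratio, and once the SNR falls below what a receiver can decode, regeneration is unavoidable. The figures below (starting SNR, per-amplifier penalty, decoding threshold) are again illustrative assumptions:

```python
# Each optical amplifier boosts noise along with signal and adds noise
# of its own, so SNR degrades with every amplification. A regenerator
# is needed once SNR falls below the receiver's decoding threshold.
# All figures are illustrative assumptions.

def amplifications_before_regeneration(snr_db, penalty_per_amp_db, min_snr_db):
    """Count amplifier hops until SNR degrades below the decoding threshold."""
    hops = 0
    while snr_db - penalty_per_amp_db >= min_snr_db:
        snr_db -= penalty_per_amp_db  # each amplifier costs some SNR
        hops += 1
    return hops

# Assumed: 30 dB starting SNR, 1.5 dB penalty per amplifier,
# 20 dB minimum SNR at the receiver.
print(amplifications_before_regeneration(30.0, 1.5, 20.0))  # -> 6
```

After those few amplifier hops, today's only option is the OEO round trip described above—which is exactly where the speed of electronics reasserts itself.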

There has been no shortage of attempts to create all-optical regenerators. Many approaches have shown some promise in the laboratory, but ultimately, to date, all have failed the transition from promise to product. As we observed in the introduction, all the likely—and many unlikely—approaches have been tried.

So, if we are going to assemble the wonderful optical circuits into an optical computer, right now and for the foreseeable future, for every handful of circuits we will need a regenerator, and the only regenerators we have require slow electronics. Oops!

Compounding the frustration is that the photonic logic community has recently achieved breakthroughs in PICs (photonic integrated circuits). Until recently, each optical circuit was its own chip (much as we relied on individual transistors in electronic devices in the 1960s), but now we can lay out densely packed optical chips. We can envision replacing electronic chips with optical chips—but the optical chips won't run any faster because every few circuits, inside the chips, we'll have to do electrical regeneration.

Similar problems crop up in building all-optical networks. There have to be some optical switches in that network to direct the data. The optical logic in those optical switches has the same problems as optical computers: every few circuits you need OEO regeneration.

Many people had been anticipating a future of all-optical computers connected via all-optical networks, a nirvana of high performance combined with low error rates and lower power consumption and heat dissipation. We're sorry to be naysayers, but you can stop holding your breath waiting for this to happen.

Where Next?

All the same, one should not lose heart. There are plenty of opportunities to exploit the wonderful characteristics of optics in a hybrid electronic-optical world.

First, PICs have unleashed a tremendous surge in innovation. In 2005, the optical research community wrote a report for the National Science Foundation on research problems for the next five years and next 10 years. Three years later, some of those research problems are solved and in products! As a result, the amount of data we can push through an individual fiber is increasing sharply. We're also able to manage that capacity with increasing sophistication. These results are probably only the low-hanging fruit of what PICs have enabled, and we're likely to see more innovation in coming years. If your biggest concern is getting lots of bandwidth with low error rates, the future looks very good indeed.

Second, optical logic continues to develop new capabilities. We offer two examples to show the range of work. At Harvard a few years ago, researchers were able to slow and then stop (hold stationary) a pulse of light. The immediately visible opportunities are better optical memories and the ability to manage data rates inside a device. More opportunities will no doubt appear. A much more concrete effort is the DARPA-funded OAWG (Optical Arbitrary Waveform Generation) program. OAWG seeks to build radically improved optical transceivers, capable of producing optical pulses that are more coherent and have less noise. These transceivers would allow us to pack more optical channels into a fiber, because we would need smaller gaps between channel frequencies to protect ourselves from cross-channel noise.
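The payoff of tighter channel spacing is easy to quantify. As a rough sketch—assuming the conventional C-band spans about 4.4 THz, a figure used here only for illustration—halving the grid spacing doubles the channel count:

```python
# Rough sketch: how many channels fit in a fiber band at a given grid
# spacing. The 4.4 THz C-band width is an approximate, illustrative
# figure; cleaner transceivers permit tighter spacing.

def channels_in_band(band_ghz, spacing_ghz):
    """Number of channels that fit in a band at a fixed grid spacing."""
    return int(band_ghz // spacing_ghz)

print(channels_in_band(4400, 50))  # conventional 50 GHz grid -> 88
print(channels_in_band(4400, 25))  # tighter 25 GHz grid -> 176
```

This is why less-noisy transceivers translate directly into fiber capacity: the guard bands between channels, not the channels themselves, are what shrink.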

In summary, the future of optical technology is bright. It just isn't taking the path to the future that many of us imagined or hoped for.

LOVE IT, HATE IT? LET US KNOW
[email protected]

CHARLES BEELER is a venture-capital investor with El Dorado Ventures, where his primary focus is on companies with the potential to radically change the capabilities, cost/performance, and energy efficiency of data centers and enterprise computing environments. Beeler has also served as a partner at Piper Jaffray Ventures, helping to manage technology funds, and at Scripps Ventures. He received a bachelor's degree in economics from Colby College and an MBA from the University of Pennsylvania's Wharton School.

DR. CRAIG PARTRIDGE is a former chair of ACM SIGCOMM and, long ago, was editor-in-chief of ACM Computer Communication Review. An ACM Fellow, he is chief scientist for networking research at BBN Technologies.

© 2009 ACM 1542-7730/09/0200 $5.00


Originally published in Queue vol. 7, no. 3




