Networks

Vol. 7 No. 3 – April 2009

Articles

Fighting Physics: A Tough Battle

Thinking of doing IPC over the long haul? Think again. The laws of physics say you're hosed.

Jonathan M. Smith, University of Pennsylvania

Over the past several years, SaaS (software as a service) has become an attractive option for companies looking to save money and simplify their computing infrastructures. SaaS is an interesting group of techniques for moving computing from the desktop to the cloud; however, as it grows in popularity, engineers should be aware of some of the fundamental limitations they face when developing these kinds of distributed applications—in particular, the finite speed of light.

Consider a company that wants to build a distributed application that does IPC (interprocess communication) over the long haul. The obvious advice is "just say no"—don't do it. If you're going far outside your local networking environment, the physics of distance and the speed of light, combined with the delays that come from the Internet's routing infrastructure, tell us that it will be much too slow. These concepts are not generally understood, however, and even when they are, they're sometimes forgotten.
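
To put a rough number on "much too slow," here is a quick back-of-the-envelope sketch in Python (an illustration added here, not taken from Smith's article); the New York-to-London distance and the rule of thumb that light in fiber travels at roughly two-thirds of its vacuum speed are assumptions for the sake of the example:

SPEED_OF_LIGHT_KM_S = 299_792      # speed of light in vacuum, km per second
FIBER_SPEED_FACTOR = 2 / 3         # light in fiber covers roughly 2/3 of c
NY_TO_LONDON_KM = 5_600            # approximate great-circle distance (assumed)

one_way_seconds = NY_TO_LONDON_KM / (SPEED_OF_LIGHT_KM_S * FIBER_SPEED_FACTOR)
round_trip_ms = 2 * one_way_seconds * 1000

print(f"physics-only round trip: {round_trip_ms:.0f} ms")   # roughly 56 ms

That figure is a hard floor set by physics alone; real paths add routing and queuing delay on top of it, and a chatty IPC protocol typically needs many such round trips.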

High-Performance Networks

Network Front-end Processors, Yet Again

The history of NFE processors sheds light on the tradeoffs involved in designing network stack software.

Mike O'Dell, New Enterprise Associates

"This time for sure, Rocky!" – Bullwinkle J. Moose

The history of the NFE (network front-end) processor, currently best known as a TOE (TCP offload engine), extends all the way back to the Arpanet IMP (interface message processor) and possibly before. The notion is beguilingly simple: partition the work of executing communications protocols from the work of executing the "applications" that require the services of those protocols. That way, the applications and the network machinery can achieve maximum performance and efficiency, possibly taking advantage of special hardware performance assistance. While this looks utterly compelling on the whiteboard, architectural and implementation realities intrude, often with considerable force.

This article will not attempt to discern whether the NFE is a heavenly gift or a manifestation of evil incarnate. Rather, it will follow the evolution starting from a pure host-based implementation of a network stack and then moving the network stack farther from that initial position, observing the issues that arise. The goal is insight into the tradeoffs that influence the location choice for network stack software in a larger systems context. As such, it is an attempt to prevent old mistakes from being reinvented, while harvesting as much clean grain as possible.

Opinion

All-Optical Computing and All-Optical Networks are Dead

Anxiously awaiting the arrival of all-optical computing? Don't hold your breath.

Charles Beeler, El Dorado Ventures, and Craig Partridge, BBN

We're a venture capitalist and a communications researcher, and we come bearing bad news: optical computers and all-optical networks aren't going to happen anytime soon. All those well-intentioned stories about computers operating at the speed of light, computers that would free us from Internet delays and relieve us from the tyranny of slow and hot electronic devices were, alas, overoptimistic. We won't be computing or routing at the speed of light anytime soon. (In truth, we probably should have told you this about two years ago, but we only recently met, compared notes, and realized our experiences were consistent.)

You see, building an optical computer or router entails a critical step called optical regeneration, which nobody knows how to do. After at least two decades of research and well over a billion dollars of venture-capital spending on promising potential breakthroughs, it is pretty clear that we've tried all the obvious ways and a fair number of the nonobvious ways to do optical regeneration. It appears that solving the problem is not a matter of top-class engineering; rather, it's beginning to look like Nobel-Prize-winning physics, such as high-temperature superconductivity, where ingenuity leads to a completely new approach. Those kinds of results are rare and unpredictable, and thus all signs suggest optical computing is an innovation that will likely have to wait a generation or two or three to come to fruition.
