Networks

You Don't Know Jack about Bandwidth:
If you're an ISP and all your customers hate you, take heart. This is now a solvable problem.

Bandwidth probably isn't the problem when your employees or customers say they have terrible Internet performance. Once they have something in the range of 50 to 100 Mbps, the problem is latency: how long it takes for the ISP's routers to process their traffic. If you're an ISP and all your customers hate you, take heart. This is now a solvable problem, thanks to a dedicated band of individuals who hunted it down, killed it, and then proved out their solution in home routers.
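
To see why the cutoff lands around 50 to 100 Mbps, a little arithmetic helps. The sketch below uses assumed, illustrative numbers (a 100-KB object, a 50-ms round trip, two setup round trips) rather than anything measured in the article; past a modest bandwidth, the round trips dominate the fetch time.

    # Back-of-the-envelope only: why latency, not bandwidth, dominates small
    # transfers. Every number here is an illustrative assumption.
    OBJECT_KB = 100          # size of a typical small web object
    RTT_MS = 50              # assumed round-trip time to the server
    SETUP_ROUND_TRIPS = 2    # e.g., connection setup plus request/response

    def fetch_time_ms(bandwidth_mbps):
        transfer_ms = (OBJECT_KB * 8) / (bandwidth_mbps * 1000) * 1000
        return SETUP_ROUND_TRIPS * RTT_MS + transfer_ms

    for mbps in (10, 50, 100, 1000):
        print(f"{mbps:>5} Mbps -> {fetch_time_ms(mbps):6.1f} ms")
    # 10 Mbps ~180 ms, 100 Mbps ~108 ms, 1000 Mbps ~101 ms: the RTT is the floor.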

by David Collier-Brown | July 8, 2024

0 comments

Device Onboarding using FDO and the Untrusted Installer Model:
FDO's untrusted model is contrasted with Wi-Fi Easy Connect to illustrate the advantages of each mechanism.

Automatic onboarding of devices is an important technique to handle the increasing number of "edge" and IoT devices being installed. Onboarding of devices is different from most device-management functions because the device's trust transitions from the factory and supply chain to the target application. To speed the process with automatic onboarding, the trust relationship in the supply chain must be formalized in the device to allow the transition to be automated.

by Geoffrey H. Cooper | November 9, 2023

0 comments

Distributed Latency Profiling through Critical Path Tracing:
CPT can provide actionable and precise latency analysis.

Low latency is an important feature for many Google applications such as Search, and latency-analysis tools play a critical role in sustaining low latency at scale. For complex distributed systems that include services that constantly evolve in functionality and data, keeping overall latency to a minimum is a challenging task. In large, real-world distributed systems, existing tools such as RPC telemetry, CPU profiling, and distributed tracing are valuable to understand the subcomponents of the overall system, but are insufficient to perform end-to-end latency analyses in practice.
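
As a toy illustration of the general idea (not Google's CPT implementation), the critical path through a tree of RPC spans can be found by repeatedly descending into the child whose completion gates its parent; the span tree and timings below are invented.

    # Toy critical-path walk over an RPC span tree. Simplified sketch only.
    from dataclasses import dataclass, field

    @dataclass
    class Span:
        name: str
        start: float              # milliseconds
        end: float
        children: list = field(default_factory=list)

    def critical_path(span):
        """Chain of spans that determines this span's completion time."""
        if not span.children:
            return [span]
        gating = max(span.children, key=lambda c: c.end)   # last child to finish
        return [span] + critical_path(gating)

    root = Span("frontend", 0, 120, [
        Span("cache lookup", 5, 15),
        Span("backend query", 10, 110, [
            Span("index shard A", 12, 60),
            Span("index shard B", 12, 105),
        ]),
    ])
    print(" -> ".join(s.name for s in critical_path(root)))
    # frontend -> backend query -> index shard B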

by Brian Eaton, Jeff Stewart, Jon Tedesco, N. Cihan Tas | March 29, 2022

0 comments

Everything VPN is New Again:
The 24-year-old security model has found a second wind.

The VPN (virtual private network) is 24 years old. The concept was created for a radically different Internet from the one we know today. As the Internet grew and changed, so did VPN users and applications. The VPN had an awkward adolescence in the Internet of the 2000s, interacting poorly with other widely popular abstractions. In the past decade the Internet has changed again, and this new Internet offers new uses for VPNs. The development of a radically new protocol, WireGuard, provides a technology on which to build these new VPNs.

by David Crawshaw | November 23, 2020

0 comments

GAN Dissection and Datacenter RPCs:
Visualizing and understanding generative adversarial networks; datacenter RPCs can be general and fast.

Image generation using GANs (generative adversarial networks) has made astonishing progress over the past few years. While staring in wonder at some of the incredible images, it’s natural to ask how such feats are possible. "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks" gives us a look under the hood to see what kinds of things are being learned by GAN units, and how manipulating those units can affect the generated images. February saw the 16th edition of the USENIX Symposium on Networked Systems Design and Implementation. Kalia et al. blew me away with their work on fast RPCs (remote procedure calls) in the datacenter.

by Adrian Colyer | May 2, 2019

0 comments

Toward a Network of Connected Things:
A look into the future of IoT deployments and their usability

While the scale of data presents new avenues for improvement, the key challenges for the everyday adoption of IoT systems revolve around managing this data. First, we need to consider where the data is being processed and stored and what the privacy and systems implications of these policies are. Second, we need to develop systems that generate actionable insights from this diverse, hard-to-interpret data for non-tech users. Solving these challenges will allow IoT systems to deliver maximum value to end users.

by Deepak Vasisht | February 13, 2018

0 comments

Bitcoin’s Underlying Incentives:
The unseen economic forces that govern the Bitcoin protocol

Incentives are crucial for the Bitcoin protocol’s security and effectively drive its daily operation. Miners go to extreme lengths to maximize their revenue and often find creative ways to do so that are sometimes at odds with the protocol. Cryptocurrency protocols should be placed on stronger foundations of incentives. There are many areas left to improve, ranging from the very basics of mining rewards and how they interact with the consensus mechanism, through the rewards in mining pools, and all the way to the transaction fee market itself.

by Yonatan Sompolinsky, Aviv Zohar | November 28, 2017

0 comments

Private Online Communication; Highlights in Systems Verification:
The importance of private communication will continue to grow. We need techniques to build larger verified systems from verified components.

First, Albert Kwon provides an overview of recent systems for secure and private communication. Second, James Wilcox takes us on a tour of recent advances in verified systems design.

by Albert Kwon, James Wilcox | October 4, 2017

0 comments

Network Applications Are Interactive:
The network era requires new models, with interactions instead of algorithms.

The miniaturization of devices and their prolific interconnection over high-speed wireless networks are completely changing how commerce is conducted. These changes (collectively, "digital") will profoundly change how enterprises operate. Software is at the heart of this digital world, but the software toolsets and languages in use were conceived for the host-based era. The issues that already plague software practice, such as high defect rates, poor productivity, information vulnerability, and poor project success rates, will only become more pronounced with such an approach. It is time for software to be made simpler, more secure, and more reliable.

by Antony Alappatt | September 27, 2017

2 comments

Cache Me If You Can:
Building a decentralized web-delivery model

The world is more connected than it ever has been before, and with our pocket supercomputers and IoT (Internet of Things) future, the next generation of the web might just be delivered in a peer-to-peer model. It’s a giant problem space, but the necessary tools and technology are here today. We just need to define the problem a little better.

by Jacob Loveless | August 30, 2017

0 comments

Cold, Hard Cache:
On the implementation and maintenance of caches

Dear KV, Our latest project at work requires deploying a large number of slightly different software stacks within our cloud infrastructure. With modern hardware, I can test this deployment on a laptop. The problem I keep running up against is that our deployment system seems to secretly cache some of my files and settings and not clear them, even when I repeatedly issue the command to do so. I’ve resorted to repeatedly using the find command so that I can blow away the offending files. What I’ve found is that the system caches data in many places, so I’ve started a list.

by George Neville-Neil | August 22, 2017

1 comment

Time, but Faster:
A computing adventure about time through the looking glass

The first premise was summed up perfectly by the late Douglas Adams in The Hitchhiker’s Guide to the Galaxy: "Time is an illusion. Lunchtime doubly so." The concept of time, when colliding with decoupled networks of computers that run at billions of operations per second, is... well, the truth of the matter is that you simply never really know what time it is. That is why Leslie Lamport’s seminal paper on Lamport timestamps was so important to the industry, but this article is actually about wall-clock time, or a reasonably useful estimation of it.
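
For readers who have not seen them, Lamport's logical clocks capture only the ordering of events, not wall-clock time; a minimal sketch of the standard rules (increment on local events, take the maximum plus one on receipt) looks like this.

    # Minimal Lamport logical clock. Illustrative sketch of the standard rules;
    # it orders events but says nothing about wall-clock time.
    class LamportClock:
        def __init__(self):
            self.time = 0

        def tick(self):                  # local event or message send
            self.time += 1
            return self.time

        def receive(self, sender_time):  # message carries the sender's stamp
            self.time = max(self.time, sender_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t_send = a.tick()           # A stamps an outgoing message
    t_recv = b.receive(t_send)  # B's clock jumps past A's timestamp
    print(t_send, t_recv)       # 1 2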

by Theo Schlossnagle | January 4, 2017

0 comments

Are You Load Balancing Wrong?:
Anyone can use a load balancer. Using them properly is much more difficult.

A reader contacted me recently to ask if it is better to use a load balancer to add capacity or to make a service more resilient to failure. The answer is: both are appropriate uses of a load balancer. The problem, however, is that most people who use load balancers are doing it wrong.

by Thomas A. Limoncelli | December 20, 2016

0 comments

Research for Practice: Distributed Transactions and Networks as Physical Sensors:
Expert-curated Guides to the Best of CS Research

First, Irene Zhang delivers a whirlwind tour of recent developments in distributed concurrency control. If you thought distributed transactions were prohibitively expensive, Irene’s selections may prompt you to reconsider: the use of atomic clocks, clever replication protocols, and new means of commit ordering all improve performance at scale. Second, Fadel Adib provides a fascinating look at using computer networks as physical sensors. It turns out that the radio waves passing through our environment and bodies are subtly modulated as they do so.

by Irene Zhang, Fadel Adib | December 7, 2016

0 comments

BBR: Congestion-Based Congestion Control:
Measuring bottleneck bandwidth and round-trip propagation time

When bottleneck buffers are large, loss-based congestion control keeps them full, causing bufferbloat. When bottleneck buffers are small, loss-based congestion control misinterprets loss as a signal of congestion, leading to low throughput. Fixing these problems requires an alternative to loss-based congestion control. Finding this alternative requires an understanding of where and how network congestion originates.
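
A simplified sketch of the two quantities in the subtitle: the bottleneck bandwidth is estimated as a windowed maximum of recent delivery rates, the round-trip propagation time as a windowed minimum of recent RTTs, and their product (the bandwidth-delay product) is roughly the amount of data that keeps the pipe full without building a standing queue. The window size and sample values below are invented for illustration; this is not the BBR implementation.

    # Simplified sketch of bottleneck-bandwidth and RTprop estimation.
    from collections import deque

    class PathModel:
        def __init__(self, window=10):
            self.rates = deque(maxlen=window)   # delivery-rate samples, bytes/s
            self.rtts = deque(maxlen=window)    # RTT samples, seconds

        def on_ack(self, delivery_rate, rtt):
            self.rates.append(delivery_rate)
            self.rtts.append(rtt)

        def bdp(self):
            btl_bw = max(self.rates)    # bottleneck bandwidth estimate
            rt_prop = min(self.rtts)    # propagation-delay estimate
            return btl_bw * rt_prop     # bytes in flight to fill the pipe

    m = PathModel()
    m.on_ack(delivery_rate=12_500_000, rtt=0.040)   # ~100 Mbit/s, 40 ms
    m.on_ack(delivery_rate=11_000_000, rtt=0.055)
    print(f"BDP ~ {m.bdp() / 1e6:.2f} MB")          # ~0.50 MB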

by Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, Van Jacobson | December 1, 2016

0 comments

Faucet: Deploying SDN in the Enterprise:
Using OpenFlow and DevOps for rapid development

While SDN as a technology continues to evolve and become even more programmable, Faucet and OpenFlow 1.3 hardware together are sufficient to realize benefits today. This article describes specifically how to take advantage of DevOps practices to develop and deploy features rapidly. It also describes several practical deployment scenarios, including firewalling and network function virtualization.

by Josh Bailey, Stephen Stuart | November 7, 2016

0 comments

A Purpose-built Global Network: Google’s Move to SDN:
A discussion with Amin Vahdat, David Clark, and Jennifer Rexford

Everything about Google is at scale, of course -- a market cap of legendary proportions, an unrivaled talent pool, enough intellectual property to keep armies of attorneys in Guccis for life, and, oh yeah, a private WAN (wide area network) bigger than you can possibly imagine that also happens to be growing substantially faster than the Internet as a whole.

by Amin Vahdat, David Clark, Jennifer Rexford | December 11, 2015

0 comments

Securing the Network Time Protocol:
Crackers discover how to use NTP as a weapon for abuse.

In the late 1970s David L. Mills began working on the problem of synchronizing time on networked computers, and NTP (Network Time Protocol) version 1 made its debut in 1980. This was at a time when the net was a much friendlier place - the ARPANET days. NTP version 2 appeared approximately a year later, about the same time as CSNET (Computer Science Network). NSFNET (National Science Foundation Network) launched in 1986. NTP version 3 showed up in 1993.

by Harlan Stenn | January 8, 2015

0 comments

Port Squatting:
Don’t irk your local sysadmin.

Dear KV, A few years ago you upbraided some developers for not following the correct process when requesting a reserved network port from IETF (Internet Engineering Task Force). While I get that squatting on a used port is poor practice, I wonder if you, yourself, have ever tried to get IETF to allocate a port. We recently went through this with a new protocol on an open-source project, and it was a nontrivial and frustrating exercise.

by George Neville-Neil | September 28, 2014

0 comments

The Network is Reliable:
An informal survey of real-world communications failures

"The network is reliable" tops Peter Deutsch’s classic list, the "Eight Fallacies of Distributed Computing," "all [of which] prove to be false in the long run and all [of which] cause big trouble and painful learning experiences." Accounting for and understanding the implications of network behavior is key to designing robust distributed programs; in fact, six of Deutsch’s "fallacies" directly pertain to limitations on networked communications.

by Peter Bailis, Kyle Kingsbury | July 23, 2014

1 comment

Multipath TCP:
Decoupled from IP, TCP is at last able to support multihomed hosts.

The Internet relies heavily on two protocols. In the network layer, IP (Internet Protocol) provides an unreliable datagram service and ensures that any host can exchange packets with any other host. Since its creation in the 1970s, IP has seen the addition of several features, including multicast, IPsec (IP security), and QoS (quality of service). The latest revision, IPv6 (IP version 6), supports 16-byte addresses.

by Christoph Paasch, Olivier Bonaventure | March 4, 2014

0 comments

The Road to SDN:
An intellectual history of programmable networks

Designing and managing networks has become more innovative over the past few years with the aid of SDN (software-defined networking). This technology seems to have appeared suddenly, but it is actually part of a long history of trying to make computer networks more programmable.

by Nick Feamster, Jennifer Rexford, Ellen Zegura | December 30, 2013

5 comments

Passively Measuring TCP Round-trip Times:
A close look at RTT measurements with TCP

Measuring and monitoring network RTT (round-trip time) is important for multiple reasons: it allows network operators and end users to understand their network performance and help optimize their environment, and it helps businesses understand the responsiveness of their services to sections of their user base.
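
One common passive technique is to remember when each data segment leaves and take an RTT sample when the ACK that covers it returns. The sketch below is a toy version of that idea; a real tool must also cope with retransmissions, SACK, and TCP timestamp options.

    # Toy passive RTT estimator: pair data segments with the ACKs that cover them.
    class PassiveRtt:
        def __init__(self):
            self.sent = {}                       # seq_end -> send timestamp

        def on_data(self, seq_end, now):
            self.sent.setdefault(seq_end, now)   # ignore retransmissions

        def on_ack(self, ack, now):
            covered = [(s, t) for s, t in self.sent.items() if s <= ack]
            if not covered:
                return None
            # Use the most recently sent covered segment; earlier ones may
            # have been held back by delayed ACKs.
            seq_end, sent_at = max(covered)
            self.sent = {s: t for s, t in self.sent.items() if s > ack}
            return now - sent_at

    est = PassiveRtt()
    est.on_data(seq_end=1448, now=0.000)
    est.on_data(seq_end=2896, now=0.001)
    print(est.on_ack(ack=2896, now=0.042))       # ~0.041 s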

by Stephen D. Strowes | October 28, 2013

2 comments

Toward Higher Precision:
An introduction to PTP and its significance to NTP practitioners

It is difficult to overstate the importance of synchronized time to modern computer systems. Our lives today depend on the financial transactions, telecommunications, power generation and delivery, high-speed manufacturing, and discoveries in "big physics," among many other things, that are driven by fast, powerful computing devices coordinated in time with each other.

by Rick Ratzel, Rodney Greenstreet | August 27, 2012

1 comment

OpenFlow: A Radical New Idea in Networking:
An open standard that enables software-defined networking

Computer networks have historically evolved box by box, with individual network elements occupying specific ecological niches as routers, switches, load balancers, NATs (network address translators), or firewalls. Software-defined networking proposes to overturn that ecology, turning the network as a whole into a platform and the individual network elements into programmable entities. The apps running on the network platform can optimize traffic flows to take the shortest path, just as the current distributed protocols do, but they can also optimize the network to maximize link utilization, create different reachability domains for different users, or make device mobility seamless.

by Thomas A. Limoncelli | June 20, 2012

5 comments

Controlling Queue Delay:
A modern AQM is just one piece of the solution to bufferbloat.

Nearly three decades after it was first diagnosed, the "persistently full buffer problem," recently exposed as part of "bufferbloat," is still with us, made increasingly critical by two trends. First, cheap memory and a "more is better" mentality have led to the inflation and proliferation of buffers. Second, dynamically varying path characteristics are much more common today and are the norm at the consumer Internet edge. Reasonably sized buffers become extremely oversized when link rates and path delays fall below nominal values.
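
The AQM the authors propose, CoDel, watches how long packets sit in the queue rather than how full the queue is. What follows is a heavily simplified sketch of that control decision; real CoDel also paces successive drops by the square root of the drop count, which is omitted here.

    # Heavily simplified sketch of the CoDel decision: drop only when even the
    # minimum queueing delay has stayed above a small target for a full interval.
    TARGET = 0.005     # 5 ms of acceptable standing delay
    INTERVAL = 0.100   # 100 ms observation window

    class CoDelSketch:
        def __init__(self):
            self.first_above = None

        def should_drop(self, sojourn_time, now):
            if sojourn_time < TARGET:
                self.first_above = None            # queue is draining fine
                return False
            if self.first_above is None:
                self.first_above = now + INTERVAL  # start the clock
                return False
            return now >= self.first_above         # delay persisted too long

    q = CoDelSketch()
    print(q.should_drop(0.020, now=0.00))   # False: just crossed the target
    print(q.should_drop(0.025, now=0.12))   # True: still above 100 ms later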

by Kathleen Nichols, Van Jacobson | May 6, 2012

16 comments

A Guided Tour through Data-center Networking:
A good user experience depends on predictable performance within the data-center network.

The magic of the cloud is that it is always on and always available from anywhere. Users have come to expect that services are there when they need them. A data center (or warehouse-scale computer) is the nexus from which all the services flow. It is often housed in a nondescript warehouse-sized building bearing no indication of what lies inside. Amidst the whirring fans and refrigerator-sized computer racks is a tapestry of electrical cables and fiber optics weaving everything together -- the data-center network.

by Dennis Abts, Bob Felderman | May 3, 2012

0 comments

Home Bufferbloat Demonstration Videos:
Under common loads, your real Internet "speed" can easily drop by a factor of ten due to bufferbloat.

While bufferbloat is regularly present in computers and routers throughout the Internet, we frequently suffer its effects most directly at home--and it is at home where it can easily be investigated. The videos presented here demonstrate two instances of "typical" bufferbloat found in ordinary, modern broadband equipment and home routers. Under common loads, your real Internet "speed" can easily drop by a factor of ten due to bufferbloat.

by Jim Gettys | February 5, 2012

0 comments

The Network Protocol Battle:
A tale of hubris and zealotry

Dear KV, I’ve been working on a personal project that involves creating a new network protocol. Out of curiosity, I tried to find out what would be involved in getting an official protocol number assigned for my project and discovered that it could take a year and could mean a lot of back and forth with the powers that be at the IETF. I knew this wouldn’t be as simple as clicking something on a Web page, but a year seems excessive, and really it’s not a major part of the work, so it seems like this would mainly be a distraction.

by George V. Neville-Neil | January 5, 2012

24 comments

BufferBloat: What’s Wrong with the Internet?:
A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys

Internet delays are now as common as they are maddening. That means they end up affecting system engineers just like all the rest of us. And when system engineers get irritated, they often go looking for what’s at the root of the problem. Take Jim Gettys, for example. His slow home network had repeatedly proved to be the source of considerable frustration, so he set out to determine what was wrong, and he even coined a term for what he found: bufferbloat.

by Vint Cerf, Van Jacobson, Nick Weaver, Jim Gettys | December 7, 2011

16 comments

Bufferbloat: Dark Buffers in the Internet:
Networks without effective AQM may again be vulnerable to congestion collapse.

Today’s networks are suffering from unnecessary latency and poor system performance. The culprit is bufferbloat, the existence of excessively large and frequently full buffers inside the network. Large buffers have been inserted all over the Internet without sufficient thought or testing. They damage or defeat the fundamental congestion-avoidance algorithms of the Internet’s most common transport protocol. Long delays from bufferbloat are frequently attributed incorrectly to network congestion, and this misinterpretation of the problem leads to the wrong solutions being proposed.

by Jim Gettys, Kathleen Nichols | November 29, 2011

17 comments

Arrogance in Business Planning:
Technology business plans that assume no competition (ever)

In the Internet addressing and naming market there’s a lot of competition, margins are thin, and the premiums on good planning and good execution are nowhere higher. To survive, investors and entrepreneurs have to be bold. Some entrepreneurs, however, go beyond "bold" and enter the territory of "arrogant" by making the wild assumption that they will have no competitors if they create a new and profitable niche. So it is with those who would unilaterally supplant or redraw the existing Internet resource governance or allocation systems.

by Paul Vixie | July 20, 2011

7 comments

The Robustness Principle Reconsidered:
Seeking a middle ground

In 1981, Jon Postel formulated the Robustness Principle, also known as Postel’s Law, as a fundamental implementation guideline for the then-new TCP. The intent of the Robustness Principle was to maximize interoperability between network service implementations, particularly in the face of ambiguous or incomplete specifications. If every implementation of some service that generates some piece of protocol did so using the most conservative interpretation of the specification and every implementation that accepted that piece of protocol interpreted it using the most generous interpretation, then the chance that the two services would be able to talk with each other would be maximized.
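
In code, the principle usually shows up as a tolerant parser paired with a strict serializer. The header handling below is invented purely to illustrate that split.

    # Liberal in what you accept, conservative in what you send (illustration only).
    CANONICAL = {"content-length": "Content-Length", "host": "Host"}

    def parse_header(line):
        # Liberal: tolerate stray whitespace and arbitrary capitalization.
        name, _, value = line.partition(":")
        return name.strip().lower(), value.strip()

    def emit_header(name, value):
        # Conservative: always emit one canonical spelling and layout.
        return f"{CANONICAL.get(name, name.title())}: {value}"

    name, value = parse_header("  content-LENGTH :  42 ")
    print(emit_header(name, value))    # Content-Length: 42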

by Eric Allman | June 22, 2011

0 comments

Successful Strategies for IPv6 Rollouts. Really.:
Knowing where to begin is half the battle.

The design of TCP/IP began in 1973 when Robert Kahn and I started to explore the ramifications of interconnecting different kinds of packet-switched networks. We published a concept paper in May 1974, and a fairly complete specification for TCP was published in December 1974. By the end of 1975, several implementations had been completed and many problems were identified. Iteration began, and by 1977 it was concluded that TCP (by now called Transmission Control Protocol) should be split into two protocols: a simple Internet Protocol that carried datagrams end to end through packet networks interconnected through gateways; and a TCP that managed the flow and sequencing of packets exchanged between hosts on the contemplated Internet.

by Thomas A. Limoncelli, Vinton G. Cerf | March 10, 2011

5 comments

Bound by the Speed of Light:
There’s only so much you can do to optimize NFS over a WAN.

I’ve been asked to optimize our NFS (network file system) setup for a global network, but NFS doesn’t work the same over a long link as it does over a LAN. Management keeps yelling that we have a multigigabit link between our remote sites, but what our users experience when they try to access their files over the WAN link is truly frustrating. Is this just an impossible task?
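
The arithmetic behind the frustration is straightforward: a chatty protocol pays the wide-area round-trip time once per synchronous operation, so bandwidth barely matters. The numbers below are illustrative assumptions, not measurements.

    # Why a multigigabit link doesn't help a chatty protocol over the WAN.
    RTT_LAN_MS = 0.5
    RTT_WAN_MS = 80
    ROUND_TRIPS = 200    # assumed lookups, attribute checks, and reads for one task

    print(f"LAN: {ROUND_TRIPS * RTT_LAN_MS / 1000:.2f} s")   # 0.10 s
    print(f"WAN: {ROUND_TRIPS * RTT_WAN_MS / 1000:.2f} s")   # 16.00 s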

by George V. Neville-Neil | December 14, 2010

3 comments

Principles of Robust Timing over the Internet:
The key to synchronizing clocks over networks is taming delay variability.

Everyone, and most everything, needs a clock, and computers are no exception. Clocks tend to drift off if left to themselves, however, so it is necessary to bring them to heel periodically through synchronizing to some other reference clock of higher accuracy. An inexpensive and convenient way to do this is over a computer network.
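
The building block of any such synchronization is a timestamp exchange with a reference clock. With four timestamps (client send, server receive, server send, client receive), the classic NTP-style estimates below follow; they assume a symmetric path, and it is exactly the variability of that network delay that the article is about taming.

    # Classic NTP-style offset and delay estimates from one timestamp exchange.
    def offset_and_delay(t0, t1, t2, t3):
        delay = (t3 - t0) - (t2 - t1)           # round trip minus server hold time
        offset = ((t1 - t0) + (t2 - t3)) / 2    # assumes symmetric path delay
        return offset, delay

    # Client clock 10 ms behind, 30 ms one-way path delay, 1 ms server hold time:
    print(offset_and_delay(t0=100.000, t1=100.040, t2=100.041, t3=100.061))
    # offset ~0.010 s, delay ~0.060 s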

by Julien Ridoux, Darryl Veitch | April 21, 2010

4 comments

What DNS Is Not:
DNS is many things to many people - perhaps too many things to too many people.

DNS (Domain Name System) is a hierarchical, distributed, autonomous, reliable database. The first and only of its kind, it offers realtime performance levels to a global audience with global contributors. Every TCP/IP traffic flow, including every World Wide Web page view, begins with at least one DNS transaction. DNS is, in a word, glorious.

by Paul Vixie | November 5, 2009

42 comments

Whither Sockets?:
High bandwidth, low latency, and multihoming challenge the sockets API.

One of the most pervasive and longest-lasting interfaces in software is the sockets API. Developed by the Computer Systems Research Group at the University of California at Berkeley, the sockets API was first released as part of the 4.1c BSD operating system in 1982. While there are longer-lived APIs, it is quite impressive for an API to have remained in use and largely unchanged for 27 years. The only major update to the sockets API has been the extension of ancillary routines to accommodate the larger addresses used by IPv6.
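
For readers who have never touched it directly, the interface in question is the familiar socket/bind/listen/accept/recv/send sequence, shown here through Python's thin wrapper as a minimal echo server, only to make concrete what "the sockets API" refers to.

    # Minimal echo server using the classic sockets calls (illustration only).
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 5000))
        srv.listen(1)
        conn, addr = srv.accept()      # blocks until a client connects
        with conn:
            data = conn.recv(4096)
            conn.sendall(data)         # echo it back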

by George V. Neville-Neil | May 11, 2009

20 comments

All-Optical Computing and All-Optical Networks are Dead:
Anxiously awaiting the arrival of all-optical computing? Don’t hold your breath.

We’re a venture capitalist and a communications researcher, and we come bearing bad news: optical computers and all-optical networks aren’t going to happen anytime soon. All those well-intentioned stories about computers operating at the speed of light, computers that would free us from Internet delays and relieve us from the tyranny of slow and hot electronic devices were, alas, overoptimistic. We won’t be computing or routing at the speed of light anytime soon. (In truth, we probably should have told you this about two years ago, but we only recently met, compared notes, and realized our experiences were consistent.)

by Charles Beeler, Craig Partridge | April 17, 2009

3 comments

Network Front-end Processors, Yet Again:
The history of NFE processors sheds light on the tradeoffs involved in designing network stack software.

The history of the NFE (network front-end) processor, currently best known as a TOE (TCP offload engine), extends all the way back to the ARPANET IMP (interface message processor) and possibly before. The notion is beguilingly simple: partition the work of executing communications protocols from the work of executing the "applications" that require the services of those protocols. That way, the applications and the network machinery can achieve maximum performance and efficiency, possibly taking advantage of special hardware performance assistance. While this looks utterly compelling on the whiteboard, architectural and implementation realities intrude, often with considerable force.

by Mike O'Dell | April 17, 2009

4 comments

Fighting Physics: A Tough Battle:
Thinking of doing IPC over the long haul? Think again. The laws of physics say you’re hosed.

Over the past several years, SaaS (software as a service) has become an attractive option for companies looking to save money and simplify their computing infrastructures. SaaS is an interesting group of techniques for moving computing from the desktop to the cloud; however, as it grows in popularity, engineers should be aware of some of the fundamental limitations they face when developing these kinds of distributed applications - in particular, the finite speed of light.
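
The "finite speed of light" constraint is easy to quantify: even over a perfect path, a transatlantic round trip costs tens of milliseconds before any queuing or processing. The distance and fiber factor below are approximate.

    # Propagation-delay floor for a New York to London round trip.
    C_KM_PER_S = 299_792        # speed of light in vacuum
    FIBER_FACTOR = 0.67         # light travels at roughly two-thirds c in fiber
    NY_LONDON_KM = 5_570        # great-circle distance, roughly

    one_way = NY_LONDON_KM / (C_KM_PER_S * FIBER_FACTOR)
    print(f"one way ~{one_way * 1000:.0f} ms, round trip ~{2 * one_way * 1000:.0f} ms")
    # one way ~28 ms, round trip ~55 ms, before any queuing or processing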

by Jonathan M. Smith | April 15, 2009

1 comment

A Conversation with Van Jacobson:
The TCP/IP pioneer discusses the promise of content-centric networking with BBN chief scientist Craig Partridge.

To those with even a passing interest in the history of the Internet and TCP/IP networking, Van Jacobson will be a familiar name. During his 25 years at Lawrence Berkeley National Laboratory and subsequent leadership positions at Cisco Systems and Packet Design, Jacobson has helped invent and develop some of the key technologies on which the Internet is based.

by John Stanik | February 23, 2009

1 comment

Sizing Your System:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’m working on a network server that gets into the situation you called livelock in a previous response to a letter (Queue May/June 2008). Our problem is that our system has only a fixed amount of memory to receive network data, but the system is frequently overwhelmed and can’t make progress. When I ask our application engineers about how much data they expect, the only answer I get is "a lot," which isn’t much help. How can I figure out how to size our systems appropriately?
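
One way to turn "a lot" into a number is to size receive memory for the worst burst the system must absorb while the application is stalled. Every input below is an assumption the application engineers would have to supply; the sketch only shows the shape of the calculation.

    # Rough receive-buffer sizing under stated assumptions.
    LINK_RATE_BYTES_S = 1.25e9      # 10 Gbit/s of ingress
    WORST_STALL_S = 0.050           # longest pause while the app can't drain
    SAFETY_FACTOR = 2               # headroom for measurement error

    buffer_bytes = LINK_RATE_BYTES_S * WORST_STALL_S * SAFETY_FACTOR
    print(f"~{buffer_bytes / 1e6:.0f} MB of receive buffering")   # ~125 MB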

by George Neville-Neil | September 24, 2008

0 comments

Automatic for the People:
Transcript of interview with Rob Gingell, CTO of Cassatt

Probably the single biggest challenge with large-scale systems and networks is not building them but rather managing them on an ongoing basis. Fortunately, new classes of systems- and network-management tools are starting to appear that have the potential to save on labor costs by automating much of the management process.

July 14, 2008

0 comments

Embracing Wired Networks:
Even at home, hardwiring is the way to go.

Most people I know run wireless networks in their homes. Not me. I hardwired my home and leave the Wi-Fi turned off. My feeling is to do it once, do it right, and then forget about it. I want a low-cost network infrastructure with guaranteed availability, bandwidth, and security. If these attributes are important to you, Wi-Fi alone is probably not going to cut it. People see hardwiring as part of a home remodeling project and, consequently, a big headache. They want convenience. They purchase a wireless router, usually leave all the default settings in place, hook it up next to the DSL or cable modem, and off they go.

by Mache Creeger | June 7, 2007

0 comments

DNS Complexity:
Although it contains just a few simple rules, DNS has grown into an enormously complex system.

DNS is a distributed, coherent, reliable, autonomous, hierarchical database, the first and only one of its kind. Created in the 1980s when the Internet was still young but overrunning its original system for translating host names into IP addresses, DNS is one of the foundation technologies that made the worldwide Internet possible. Yet this did not all happen smoothly, and DNS technology has been periodically refreshed and refined. Though it’s still possible to describe DNS in simple terms, the underlying details are by now quite sublime.

by Paul Vixie | May 4, 2007

2 comments

Better, Faster, More Secure:
Who’s in charge of the Internet’s future?

Since I started a stint as chair of the IETF in March 2005, I have frequently been asked, “What’s coming next?” but I have usually declined to answer. Nobody is in charge of the Internet, which is a good thing, but it makes predictions difficult. The reason the lack of central control is a good thing is that it has allowed the Internet to be a laboratory for innovation throughout its life—and it’s a rare thing for a major operational system to serve as its own development lab.

by Brian Carpenter | December 28, 2006

0 comments

Peerless P2P:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’ve just started on a project working with P2P software, and I have a few questions. Now, I know what you’re thinking, and no, this isn’t some copyright-violating piece of kowboy kode. It’s a respectable corporate application for people to use to exchange data such as documents, presentations, and work-related information. My biggest issue with this project is security: for example, accidentally exposing our users’ data or leaving them open to viruses. There must be more things to worry about, but those are the top two. So, I want to ask, "What would KV do?"

by George Neville-Neil | December 28, 2006

0 comments

The Network’s New Role:
Application-oriented networks can help bridge the gap between enterprises.

Companies have always been challenged with integrating systems across organizational boundaries. With the advent of Internet-native systems, this integration has become essential for modern organizations, but it has also become more and more complex, especially as next-generation business systems depend on agile, flexible, interoperable, reliable, and secure cross-enterprise systems.

by Taf Anthias, Krishna Sankar | June 30, 2006

0 comments

You Don’t Know Jack about Network Performance:
Bandwidth is only part of the problem.

Why does an application that works just fine over a LAN come to a grinding halt across the wide-area network? You may have experienced this firsthand when trying to open a document from a remote file share or remotely logging in over a VPN to an application running in headquarters. Why is it that an application that works fine in your office can become virtually useless over the WAN? If you think it’s simply because there’s not enough bandwidth in the WAN, then you don’t know jack about network performance.

by Kevin Fall, Steve McCanne | June 7, 2005

0 comments

TCP Offload to the Rescue:
Getting a toehold on TCP offload engines—and why we need them

In recent years, TCP/IP offload engines, known as TOEs, have attracted a good deal of industry attention and a sizable share of venture capital dollars. A TOE is a specialized network device that implements a significant portion of the TCP/IP protocol in hardware, thereby offloading TCP/IP processing from software running on a general-purpose CPU. This article examines the reasons behind the interest in TOEs and looks at challenges involved in their implementation and deployment.

by Andy Currid | June 14, 2004

1 comment

A Conversation with Mario Mazzola:
To peek into the future of networking, you don’t need a crystal ball. You just need a bit of time with Mario Mazzola, chief development officer at Cisco.

Mazzola lives on the bleeding edge of networking technology, so his present is very likely to be our future. He agreed to sit down with Queue to share some of his visions of the future and the implications he anticipates for software developers working with such rapidly evolving technologies as wireless networking, network security, and network scalability.

July 30, 2003

0 comments

Self-Healing Networks:
Wireless networks that fix their own broken communication links may speed up their widespread acceptance.

The obvious advantage to wireless communication over wired is, as they say in the real estate business, location, location, location. Individuals and industries choose wireless because it allows flexibility of location--whether that means mobility, portability, or just ease of installation at a fixed point. The challenge of wireless communication is that, unlike the mostly error-free transmission environments provided by cables, the environment that wireless communications travel through is unpredictable. Environmental radio-frequency (RF) "noise" produced by powerful motors, other wireless devices, microwaves--and even the moisture content in the air--can make wireless communication unreliable.

by Robert Poor, Cliff Bowman, Charlotte Burgess Auburn | July 30, 2003

1 comment

The Future of WLAN:
Overcoming the Top Ten Challenges in wireless networking--will it allow wide-area mesh networks to become ubiquitous?

Since James Clerk Maxwell first mathematically described electromagnetic waves almost a century and a half ago, the world has seen steady progress toward using them in better and more varied ways. Voice has been the killer application for wireless for the past century. As performance in all areas of engineering has improved, wireless voice has migrated from a mass broadcast medium to a peer-to-peer medium. The ability to talk to anyone on the planet from anywhere on the planet has fundamentally altered the way society works and the speed with which it changes.

by Michael W. Ritter | July 9, 2003

1 comments