HTTP continues to evolve
HTTP (Hypertext Transfer Protocol) is one of the most widely used application protocols on the Internet. Since its publication, RFC 2616 (HTTP/1.1) has served as a foundation for the unprecedented growth of the Internet: billions of devices of all shapes and sizes, from desktop computers to the tiny Web devices in our pockets, speak HTTP every day to deliver news, video, and the millions of other Web applications we have all come to depend on in our everyday lives.
> Making the Web Faster with HTTP 2.0
Improving Performance on the Internet
High Performance Web Sites
How Fast is Your Web Site?
A close look at RTT measurements with TCP
STEPHEN D. STROWES, BOUNDARY INC.
Measuring and monitoring network RTT (round-trip time) is important for multiple reasons: it allows network operators and end users to understand their network performance and helps them optimize their environment, and it helps businesses understand the responsiveness of their services to different sections of their user base.
> Passively Measuring TCP Round-trip Times
You Don’t Know Jack about Network Performance
Bufferbloat: Dark Buffers in the Internet
TCP Offload to the Rescue
Cryptography as privacy works only if both ends work at it in good faith
The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, “More encryption is the solution.” This is not the case. In fact, more encryption will probably only make the privacy crisis worse than it already is.
> More Encryption Is Not the Solution
Join us in Lombard, IL, April 3-5, 2013, for NSDI ’13.
The 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI ’13) focuses on the design principles, implementation, and practical evaluation of large-scale networked and distributed systems. The technical sessions will focus on hot topics such as pervasive computing, network integrity, data centers, performance, big data, security, privacy, and many others. NSDI ’13 also includes a poster and demo session, where presenters can showcase early research and discuss it with fellow attendees.
Register by March 13 and save. Additional discounts are available!
Constraints in an environment empower the services.
PAT HELLAND, SALESFORCE.COM
Living in a condominium (commonly known as a condo) has its constraints and its services. By defining the lifestyle and limits on usage patterns, it is possible to pack many homes close together and to provide the residents with many conveniences. Condo living can offer a great value to those interested and willing to live within its constraints and enjoy the sharing of common services.
Similarly, in cloud computing, applications run on a shared infrastructure and can gain many benefits of flexibility and cost savings. To get the most out of this arrangement, a clear model is needed of the usage pattern and of the constraints to be imposed, one that enables sharing and concierge services. It is this clarity of the usage pattern that can empower new PaaS (Platform as a Service) offerings to support the application pattern and provide services, easing the development and operation of applications that comply with that pattern.
Just as there are many different ways of using buildings, there are many styles of application patterns. This article looks at a typical pattern of implementing a SaaS (Software as a Service) application and shows how, by constraining the application to this pattern, it is possible to provide many concierge services that ease the development of a cloud-based application.
Fighting Physics: A Tough Battle
Commentary: A Trip Without a Roadmap
CTO Roundtable: Cloud Computing
A proposal to improve the performance and availability of streaming video and other time-sensitive media
AIMAN ERBAD, QATAR UNIVERSITY; CHARLES “BUCK” KRASIC, GOOGLE
The Internet/Web architecture has developed to the point where it is common for the most popular sites to operate at a virtually unlimited scale, and many sites now cater to hundreds of millions of unique users. Performance and availability are generally essential to attract and sustain such user bases. As such, the network and server infrastructure plays a critical role in the fierce competition for users. Web pages should load in tens to a few hundred milliseconds at most. Similarly, sites strive to maintain multiple-nines availability targets—for example, a site should be available to users 99.999 percent of the time over a one-year period.
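To make the "multiple nines" targets concrete, a short sketch (the availability figures below are standard examples, not from the article) converts an availability percentage into the downtime budget it allows over one year:

```python
# Downtime budget implied by an availability target ("nines").
# Assumes a non-leap 365-day year for simplicity.

SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_budget_seconds(availability: float) -> float:
    """Seconds of allowed downtime per year at the given availability."""
    return SECONDS_PER_YEAR * (1.0 - availability)

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    minutes = downtime_budget_seconds(avail) / 60
    print(f"{label} ({avail:.5f}): {minutes:.1f} minutes of downtime per year")
```

At five nines, the entire yearly budget is roughly five minutes of downtime, which is why availability at that level constrains everything from deployment practices to network design.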
Related content on queue.acm.org
Four Billion Little Brothers?: Privacy, mobile phones, and ubiquitous data collection
- Katie Shilton
VoIP: What is it Good for?
- Sudhir R. Ahuja, Robert En
Data in Flight
- Julian Hyde
An introduction to PTP and its significance to NTP practitioners
RICK RATZEL AND RODNEY GREENSTREET, NATIONAL INSTRUMENTS
It is difficult to overstate the importance of synchronized time to modern computer systems. Our lives today depend on the financial transactions, telecommunications, power generation and delivery, high-speed manufacturing, and discoveries in “big physics,” among many other things, that are driven by fast, powerful computing devices coordinated in time with each other.
Principles of Robust Timing over the Internet
The One-second War (What Time Will You Die?)
Modern Performance Monitoring
An open standard that enables software-defined networking
THOMAS A. LIMONCELLI
Computer networks have historically evolved box by box, with individual network elements occupying specific ecological niches as routers, switches, load balancers, NATs (network address translators), or firewalls. Software-defined networking proposes to overturn that ecology, turning the network as a whole into a platform and the individual network elements into programmable entities. The apps running on the network platform can optimize traffic flows to take the shortest path, just as the current distributed protocols do, but they can also optimize the network to maximize link utilization, create different reachability domains for different users, or make device mobility seamless.
Related:
Beyond Beowulf Clusters
SoC: Software, Hardware, Nightmare, Bliss
TCP Offload to the Rescue
A modern AQM is just one piece of the solution to bufferbloat.
KATHLEEN NICHOLS, POLLERE INC.
VAN JACOBSON, PARC
Nearly three decades after it was first diagnosed, the “persistently full buffer problem,” recently exposed as part of bufferbloat [6,7], is still with us and made increasingly critical by two trends. First, cheap memory and a “more is better” mentality have led to the inflation and proliferation of buffers. Second, dynamically varying path characteristics are much more common today and are the norm at the consumer Internet edge. Reasonably sized buffers become extremely oversized when link rates and path delays fall below nominal values.
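The last point can be made concrete with a little arithmetic. In this sketch (the link rates and RTT are illustrative assumptions, not figures from the article), a buffer sized to the bandwidth-delay product at the nominal link rate induces a worst-case queueing delay that scales inversely with the actual rate:

```python
# A buffer sized "reasonably" for the nominal link rate becomes grossly
# oversized when the rate drops: the time to drain a full buffer, and
# hence the worst-case queueing delay, grows as the rate falls.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return rate_bps / 8 * rtt_s

def queue_delay_s(buffer_bytes: float, rate_bps: float) -> float:
    """Time to drain a full buffer at the current link rate."""
    return buffer_bytes / (rate_bps / 8)

nominal_rate = 20e6   # assumed 20-Mbps advertised downlink
rtt = 0.1             # assumed 100-ms nominal RTT
buf = bdp_bytes(nominal_rate, rtt)  # buffer sized for the nominal rate

print(f"buffer: {buf / 1e3:.0f} kB")
print(f"full-buffer delay at 20 Mbps: {queue_delay_s(buf, nominal_rate) * 1e3:.0f} ms")
print(f"full-buffer delay at 1 Mbps:  {queue_delay_s(buf, 1e6) * 1e3:.0f} ms")
```

With these numbers, a 250-kB buffer that adds at most 100 ms of delay at the nominal 20 Mbps adds two full seconds when the link degrades to 1 Mbps, which is exactly the pathology an AQM must manage.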
Bufferbloat: Dark Buffers in the Internet
Revisiting Network I/O APIs: The netmap Framework
The Robustness Principle Reconsidered
A good user experience depends on predictable performance within the data-center network.
DENNIS ABTS, BOB FELDERMAN, GOOGLE
The magic of the cloud is that it is always on and always available from anywhere. Users have come to expect that services are there when they need them. A data center (or warehouse-scale computer) is the nexus from which all the services flow. It is often housed in a nondescript warehouse-sized building bearing no indication of what lies inside. Amidst the whirring fans and refrigerator-sized computer racks is a tapestry of electrical cables and fiber optics weaving everything together—the data-center network. This article provides a “guided tour” through the principles and central ideas surrounding the network at the heart of a data center—the modern-day loom that weaves the digital fabric of the Internet.
Enterprise Grid Computing - Paul Strong - http://queue.acm.org/detail.cfm?id=1080877
Cooling the Data Center - Andy Woods - http://queue.acm.org/detail.cfm?id=1737963
Improving Performance on the Internet - Tom Leighton - http://queue.acm.org/detail.cfm?id=1466449