

The Bike Shed


HTTP/2.0 — The IETF is Phoning It In

Bad protocol, bad politics


Poul-Henning Kamp

A very long time ago, in 1989, Ronald Reagan was president, albeit only for the final 19½ days of his term. And before 1989 was over, Taylor Swift had been born, and Andrei Sakharov and Samuel Beckett had died.

In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the "World Wide Web." (One remarkable property of this name is that the abbreviation "WWW" has twice as many syllables and takes longer to pronounce.)

Tim's HTTP protocol ran on 10-Mbit/s Ethernet over coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop's CPU is a hundred times faster and has a thousand times as much RAM as Tim's machine had, but the HTTP protocol is still the same.

A few days ago the IESG, the Internet Engineering Steering Group, asked for "Last Call" comments on the new "HTTP/2.0" protocol (https://tools.ietf.org/id/draft-ietf-httpbis-http2) before blessing it as a "Proposed Standard."

Expectations Will Vary

Some will expect a major update to the world's most popular protocol to be a technical masterpiece and textbook example for future students of protocol design. Some will expect that a protocol designed during the Snowden revelations will improve their privacy. Others will more cynically suspect the opposite. There may be a general assumption of "faster." Many will probably also assume it is "greener." And some of us are jaded enough to see the "2.0" and mutter "Uh-oh, Second Systems Syndrome."

The cheat-sheet answers are: no, no, probably not, maybe, no, and yes.

If that sounds underwhelming, it's because it is.

HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, and bad compromises, and it misses a lot of ripe opportunities. I would flunk students in my (hypothetical) protocol design class if they submitted it.

HTTP/2.0 also does not improve your privacy. Wrapping HTTP/2.0 in SSL/TLS may or may not improve your privacy, just as wrapping HTTP/1.1 or any other protocol in SSL/TLS would. But HTTP/2.0 itself does nothing to improve your privacy. This is almost triply ironic, because the major drags on HTTP are the cookies, which are such a privacy problem that the EU has legislated a notice requirement for them. HTTP/2.0 could have done away with cookies, replacing them with a client-controlled session identifier. That would put users squarely in charge of when they want to be tracked and when they don't, a major improvement in privacy. It would also save bandwidth and packets. But the proposed protocol does not do this.
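To make the bandwidth argument concrete, here is a back-of-the-envelope sketch in Python. The byte counts and the Session-ID header name are invented for illustration, not measurements; the point is only that a small, fixed-size, client-chosen identifier costs far less than a typical cookie payload repeated on every request.

```python
import secrets

# Hypothetical sizes, for illustration only.
TYPICAL_COOKIE_BYTES = 800   # assumed: several tracking cookies per request
REQUESTS_PER_PAGE = 80       # assumed: objects fetched for one page view

# A client-controlled session identifier, as proposed above:
# the client mints it and decides when (or whether) to send it.
session_id = secrets.token_urlsafe(16)          # ~22 characters on the wire
sid_header = f"Session-ID: {session_id}\r\n"    # hypothetical header name

cookie_overhead = TYPICAL_COOKIE_BYTES * REQUESTS_PER_PAGE
sid_overhead = len(sid_header) * REQUESTS_PER_PAGE

print(f"cookies:    {cookie_overhead:6d} bytes per page view")
print(f"session id: {sid_overhead:6d} bytes per page view")
```

Under these assumptions the cookies cost roughly 64 KB per page view against about 3 KB for the session identifier, and unlike cookies, the identifier is sent only when the user chooses to be recognized.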

The good news is that HTTP/2.0 probably does not reduce your privacy either. It does add a number of "fingerprinting" opportunities for the server side, but there are already so many ways to fingerprint via cookies, JavaScript, Flash, etc. that it probably doesn't matter.
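To see why a few extra protocol attributes barely move the needle, consider how fingerprinting works in general: any set of client-observable attributes can be hashed into a stable identifier. A minimal Python sketch; the attribute names and values are invented for illustration.

```python
import hashlib

# Invented examples of attributes a server might observe per client:
# advertised protocol settings, header ordering, window sizes, etc.
observed = {
    "settings_order": "3,4,1,6",   # hypothetical SETTINGS ordering
    "initial_window": 65535,
    "header_table_size": 4096,
    "user_agent": "ExampleBrowser/1.0",
}

# A stable digest of the combination serves as the fingerprint;
# each additional attribute only adds a few bits to an already
# ample pool drawn from cookies, JavaScript, Flash, and friends.
canonical = repr(sorted(observed.items())).encode()
fingerprint = hashlib.sha256(canonical).hexdigest()[:16]
print("fingerprint:", fingerprint)
```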

You may perceive webpages as loading faster with HTTP/2.0, but probably only if the content provider has a global network of servers. The individual computers involved, including your own, will have to do more work, in particular for high-speed and large objects such as music, TV, and movies. Nobody has demonstrated an HTTP/2.0 implementation that approached contemporary wire speeds. Faster? Not really.
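Part of that extra work is HPACK, the stateful header compression that HTTP/2.0 mandates. A minimal sketch, assuming the third-party Python hpack package (pip install hpack): every request must now pass through a compression context on both ends, where HTTP/1.1 simply wrote the header bytes out.

```python
from hpack import Encoder, Decoder  # third-party: pip install hpack

encoder = Encoder()
decoder = Decoder()

headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    (":scheme", "https"),
    (":authority", "example.com"),
    ("user-agent", "ExampleBrowser/1.0"),
]

# Sender: run the headers through the stateful compression context.
wire = encoder.encode(headers)
print(len(wire), "bytes on the wire")

# Receiver: every frame must be decoded against a matching context,
# per-request CPU work that has no counterpart in HTTP/1.1.
print(decoder.decode(wire))
```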

That also answers the question about the environmental footprint: HTTP/2.0 will require a lot more computing power than HTTP/1.1 and will thus cause increased CO2 pollution, adding to climate change. You would think that a protocol intended for tens of millions of computers would be the subject of some green scrutiny, but surprisingly, at least to me, I have not been able to find any evidence that the IETF considers environmental impact at all, ever.

And yes, Second Systems Syndrome is strong.

Given this rather mediocre grade-sheet, you may be wondering why HTTP/2.0 is even being considered as a standard in the first place.

The Answer is Politics

Google came up with the SPDY protocol, and since they also have their own browser, they could experiment as they pleased, optimizing the protocol for their particular needs. SPDY was a very good prototype that showed clearly there was potential for improvement in a new version of the HTTP protocol. Kudos to Google for that. But SPDY also started to smell a lot like a "walled garden" to some people, and more importantly to other companies, and politics surfaced.

The IETF, obviously fearing irrelevance, hastily "discovered" that the HTTP/1.1 protocol needed an update, and tasked a working group with preparing it on an unrealistically short schedule. This ruled out any basis for the new HTTP/2.0 other than the SPDY protocol. With only the most hideous of SPDY's warts removed, and all other attempts at improvement rejected as "not in scope," "too late," or "no consensus," the IETF can now claim relevance and victory by conceding practically every principle ever held dear in return for the privilege of rubber-stamping Google's initiative.

But the politics does not stop there.

The reason HTTP/2.0 does not improve privacy is that the big corporate backers have built their business model on top of the lack of privacy. They are very upset about the NSA spying on just about everybody in the entire world, but they do not want to do anything that prevents them from doing the same thing. The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption.

Your country, state, or county emergency webpage? Local governments have no desire to spend resources negotiating SSL/TLS with every single smartphone in their area when things explode, rivers flood, or people are poisoned. Big news sites similarly prioritize being able to deliver news over being able to hide the fact that they are delivering news, particularly when something big happens. (Has everybody at the IETF forgotten CNN's exponential traffic graph from 14 years ago?)

The so-called "multimedia business," which amounts to about 30% of all traffic on the net, expresses no desire to be forced to spend resources on pointless encryption. There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on. Yet, despite this, HTTP/2.0 will be SSL/TLS only, in at least three out of four of the major browsers, in order to force a particular political agenda. The same browsers, ironically, treat self-signed certificates as if they were mortally dangerous, despite the fact that they offer secrecy at trivial cost. (Secrecy means that only you and the other party can decode what is being communicated. Privacy is secrecy with an identified or authenticated other party.)

History has shown overwhelmingly that if you want to change the world for the better, you should deliver good tools for making it better, not policies for making it better. I recommend that anybody with a voice in this matter turn their thumbs down on the HTTP/2.0 draft standard: It is not a good protocol and it is not even good politics.

LOVE IT, HATE IT? LET US KNOW

feedback@queue.acm.org

Poul-Henning Kamp (phk@FreeBSD.org) is one of the primary developers of the FreeBSD operating system, which he has worked on from the very beginning. He is widely unknown for his MD5-based password scrambler, which protects the passwords on Cisco routers, Juniper routers, and Linux and BSD systems. Some people have noticed that he wrote a memory allocator, a device file system, and a disk-encryption method that is actually usable. Kamp lives in Denmark with his wife, son, daughter, about a dozen FreeBSD computers, and one of the world's most precise NTP (Network Time Protocol) clocks. He makes a living as an independent contractor doing all sorts of stuff with computers and networks.

© 2015 ACM 1542-7730/15/0100 $10.00


Originally published in Queue vol. 13, no. 2




Comments


rotek | Wed, 07 Jan 2015 18:09:51 UTC

Furthermore, HTTP/2.0 does not address one of the primary problems with HTTP, head-of-line blocking, because it is still TCP-based.

HTTP/2.0 is even more vulnerable to it, because it transmits all the data on the same TCP connection. When even one packet gets lost, the whole communication with the server is blocked. In HTTP/1.1, by contrast, only one of (for example) six parallel TCP connections is blocked in such a case.


Robert Thille | Wed, 07 Jan 2015 20:43:22 UTC

Well, if everyone going to the NYT site used https, and almost all of them were reading news, and some of them were using a contact form to reach a reporter to reveal illegal spying by the NSA, it'd be a lot harder for the NSA to catch that than if only the contact form were over https. Whether that's worth the cost in CO2 pollution, I don't know...


Milton | Wed, 07 Jan 2015 23:15:10 UTC

www is nothing if not international, and in other languages it often has fewer syllables. But although I never spoke to a German about it, it seems absolutely hilarious in German: it is pronounced like the word "weh" (sounds like English "vay"), which means woe (or injury), invoking an almost biblical "woe woe woe" at the beginning of each URL, or, like the classic Max und Moritz stories describing fairly violent pranks carried out against members of the community by Max and Moritz: "wehe wehe wehe wenn ich auf das Ende sehe" ("woe, woe, woe, when I look upon the end").


duane | Thu, 08 Jan 2015 04:15:25 UTC

"dub dub dub" that wasn't so hard, was it

(apparently the comment filter wasn't happy with this being a one line comment, good going boys)


malthe | Thu, 08 Jan 2015 07:28:42 UTC

NYT is a large website and you can engage with it. Why would I not want encryption? Note that NYT can't see my other traffic, but those who tap into the network can.


Jack | Thu, 08 Jan 2015 12:37:07 UTC

"most memorable event of 1989" - WTF!? What about the Berlin wall coming down?


Rich | Thu, 08 Jan 2015 20:25:39 UTC

Authenticated communications aren't just for privacy. It doesn't matter if the resource I'm downloading is a news article, source code, an executable, or even cat pictures, I want to be certain that what I'm getting is actually what I'm expecting it to be, without any alterations having been made in-transit by third parties. The same is true for the information going in the opposite direction; what I transmit ought to arrive at its destination unaltered by any third parties.

Is SSL/TLS as currently implemented, with its reliance on trusting known-untrustworthy third parties, fundamentally broken? Absolutely. But even as broken as it currently is, it's still better than nothing.


Greg Wilkins | Fri, 09 Jan 2015 12:57:06 UTC

@Robert Thille, actually TLS does not hide well who is using the NYT contact form vs. who is reading the NYT front page. Given just the request/response sizes and their timing and sequence, the NSA can pretty much determine what pages you are reading on the NYT, whether you have navigated to the contacts page, and whether you are sending data.

TLS everywhere is over-promising. It might help privacy a little bit in a few circumstances, but not a lot in many more.


Adam Roach | Sun, 22 Feb 2015 21:06:09 UTC

"The same browsers, ironically, treat self-signed certificates as if they were mortally dangerous, despite the fact that they offer secrecy at trivial cost."

You can't evaluate this behavior in a vacuum, though. Using your definitions for privacy and secrecy: these kinds of decisions are coupled with very deliberate actions that provide actual privacy (which provides protection against active and passive attackers) for the same cost as one can achieve secrecy (which only protects against passive attacks). See https://letsencrypt.org/ for the alternative. All other things being equal, the tools "Let's Encrypt" will provide are even easier to use than installing a self-signed cert.

When faced with the opportunity to deploy good security for the same cost as bad, choosing the good security path seems like a far more rational choice.


Kristoffer Björk | Tue, 17 Mar 2015 09:05:39 UTC

There are also "middleboxes" that either inspects traffic or change it (ad and/or javascript injection etc). These do not support http2 yet, so without encrypting http2 you need to wait for all these boxes to update before you can test and/or deploy anything. This is why SCTP and other newer (better?) protocols arent used, a massive amount of broken devices that brakes when you do something else than really old protocols since noone bother to update the software they run.

