


Originally published in Queue vol. 9, no. 11
see this item in the ACM Digital Library




Related:

Yonatan Sompolinsky, Aviv Zohar - Bitcoin's Underlying Incentives
The unseen economic forces that govern the Bitcoin protocol


Antony Alappatt - Network Applications Are Interactive
The network era requires new models, with interactions instead of algorithms.


Jacob Loveless - Cache Me If You Can
Building a decentralized web-delivery model


Theo Schlossnagle - Time, but Faster
A computing adventure about time through the looking glass



Comments

(newest first)

Displaying 10 most recent comments. Read the full list here

Kent Peacock | Tue, 13 Mar 2012 05:22:08 UTC

I would suggest that FAST TCP (http://en.wikipedia.org/wiki/FAST_TCP) could be a viable approach to solving this control system problem. Unfortunately, I see that the developers of the solution have patented it. That's an almost guaranteed way to avoid wide deployment.


mike | Thu, 15 Dec 2011 22:15:33 UTC

Sorry, but complaining about the network conditions is a bit like cell phone network designers complaining about people moving too quickly, buildings going up, and trees growing. You can't shift the buildings and the trees. You need a protocol that can cope.

Cell networks constantly monitor and analyse huge data sets, and many transmission parameters are constantly adjusted accordingly. Cell phones must not only maintain real-time bandwidth and latency sufficient for voice telephony; they must also deal with constantly changing signal conditions and, more extreme still, maintain and hand off calls between base stations. And all this in a small battery-powered unit. Compared to this, TCP's task is trivial.

Endpoint TCPs are closed-loop feedback control systems. The control signal is the RTT, and the controlled parameter is the window size. Feedback always involves a time constant, or period; in this case, there is an inherent time constant associated with a TCP expanding and contracting its window.

TCP window control based on RTT is about 20 years behind the times, and even back then it was pretty crude. But it was good enough, because the behaviour of the network was simple: the RTT of any particular packet was a good predictor of all packets over a reasonable time period, and change tended not to be periodic. So a packets-in-flight control system using RTT as a control signal worked well.

Today's networks (for reasons described in the article) exhibit vastly more complex behaviour, with periodicity inherent in every step of the link. The RTT can therefore vary with multiple periodicities, and these can move in and out of phase with the periods of the endpoint TCPs. The feedback can thus shift from negative to positive and back, through all values in between; in other words, the control system can become completely unstable. The result is the massive and unpredictable variation in latency and bandwidth observed.
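The oscillation Mike describes can be seen in a toy simulation. This is illustrative only: the additive-increase/multiplicative-decrease rule, the RTT threshold, and all numbers are invented for the sketch, and real TCP congestion control is far more elaborate.

```python
def simulate_window(rtts, rtt_threshold=0.15):
    """Toy AIMD-style controller: the window grows by one segment per
    round trip and halves whenever the measured RTT exceeds a
    threshold (a crude sign of queue buildup along the path)."""
    window = 1.0
    history = []
    for rtt in rtts:
        if rtt > rtt_threshold:
            window = max(1.0, window / 2)   # multiplicative decrease
        else:
            window += 1.0                   # additive increase
        history.append(window)
    return history

# Steady 100 ms RTT: the window climbs monotonically.
steady = simulate_window([0.1] * 20)

# RTT with its own periodicity (e.g. a periodic cross-traffic burst
# every fifth round trip): the window saws up and down instead of
# settling, exactly the instability described above.
periodic = simulate_window([0.1 + 0.1 * (i % 5 == 0) for i in range(20)])
```

When the RTT's period interacts with the controller's own reaction time, the window never converges; with several superimposed periodicities the sawtooth becomes the erratic latency/bandwidth behaviour the comment describes.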

But surely the answer is not to try to control the network conditions: a chain is only as strong as its weakest link, and all that. Cell phone network designers knew they had a hard job to do, and they designed an appropriate protocol.

So surely we need a protocol that can cope. It must, just as a cellular network does, constantly monitor network conditions in detail and react accordingly. This may best be done in a firmware layer in the network card. Until that's done properly (and that means hard sums, not hacking), the problems will persist and almost certainly get worse. The internet needs to grow up a little.

Mike


Eugen Dedu | Wed, 14 Dec 2011 13:38:53 UTC

The advantage of my proposition is that there is no incentive for the sender to choose one or the other; it just depends on what the sender application prefers, i.e., the two queues are more or less equal in terms of service. Is this the case for ToS (at least as it was implemented) too?


Alex | Tue, 13 Dec 2011 19:18:25 UTC

Eugen Dedu> Flows with the bit set have low latencies but smaller throughput ....

These flags have been there from the very beginning: the ToS flags in the IP header. However, at some moment a Windows 9x release came with ToS set to interactive for all traffic and broke everything.


Eugen Dedu | Fri, 09 Dec 2011 11:45:18 UTC

What about the following method: divide the router buffer in two, 90% and 10%. Each flow has a bit in the IP header, set by the application. When the bit is 1, the packet is put in the 10% queue; otherwise, in the 90% queue. Flows with the bit set have low latency but smaller throughput, while the others have high latency but higher throughput.
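The two-queue idea above can be sketched as follows. The class name, capacities, and tail-drop policy are invented for illustration; only the 90/10 split and the header bit come from the comment.

```python
from collections import deque

class TwoQueueBuffer:
    """Sketch of the proposal: a router buffer split into a small
    low-latency queue and a large high-throughput queue, selected by a
    single bit that the sending application sets in the IP header."""

    def __init__(self, capacity=100, low_latency_share=0.10):
        self.small = deque()                      # latency-sensitive
        self.large = deque()                      # throughput-oriented
        self.small_cap = int(capacity * low_latency_share)
        self.large_cap = capacity - self.small_cap

    def enqueue(self, packet, low_latency_bit):
        q, cap = ((self.small, self.small_cap) if low_latency_bit
                  else (self.large, self.large_cap))
        if len(q) >= cap:
            return False                          # tail drop when full
        q.append(packet)
        return True

    def dequeue(self):
        # Serve the short queue first, so its queueing delay stays
        # bounded by its small capacity.
        if self.small:
            return self.small.popleft()
        if self.large:
            return self.large.popleft()
        return None
```

Strict priority, as sketched here, is only one scheduling choice; something like weighted round-robin between the two queues would be closer to the "more or less equal in terms of service" property claimed in the follow-up comment.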


Eugen Dedu | Fri, 09 Dec 2011 11:42:04 UTC

"... wherein routers along the path drop the path's bit rate into the initial packet if that rate is lower than what's there already. The idea is to tell both ends of the slowest link in the route."

Since links are shared, what a sender needs is not the link bandwidth but the available bandwidth, which depends on the flows traversing the link. As a simple example, for a 10 Mb/s link carrying 10 flows, the router needs to put 1 Mb/s in the packet, not 10 Mb/s.
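The arithmetic behind that example is just a fair-share division; the function name is invented, and a real router would have to cope with the flow count changing from moment to moment.

```python
def advertised_rate_mbps(link_capacity_mbps, active_flows):
    """Per-flow share a router might stamp into the packet, per the
    comment above: available bandwidth, not raw link bandwidth."""
    return link_capacity_mbps / max(1, active_flows)

# The example from the comment: a 10 Mb/s link with 10 flows.
share = advertised_rate_mbps(10, 10)   # 1.0 Mb/s per flow
```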


David | Thu, 08 Dec 2011 00:34:39 UTC

@ Neal Murphy

"As part of my efforts to modernize Smoothwall (search for 'phaeton/roadster'), I've been developing a traffic control app that uses Linux Traffic Control and HTB to smooth out the flow."

Can I get that app, or something similar, for installation onto an OpenWRT router?


Ayie Qeusyirie | Tue, 06 Dec 2011 19:23:51 UTC

Help


Lennie | Tue, 06 Dec 2011 00:14:42 UTC

@Neal Murphy "The idea is to tell both ends of the slowest link in the route." I think you should look up Explicit Congestion Notification


Martin Fick | Mon, 05 Dec 2011 23:52:59 UTC

This is surely a really dumb solution, but I wonder how latency would be affected if all buffers (no matter their size) were turned into LIFOs instead of FIFOs?

It seems moronic on the surface, but perhaps if you have to wait to transmit and there is more than one packet in the buffer, you might as well let the latest one through first. For latency, it might be similar to having a single-packet buffer, so latencies might stay low. But unlike simply dropping all the other packets, it might avoid drops in the TCP case, which might somehow still allow decent throughput.

Of course, you would get all sorts of packet order inversions and other weird stuff. Perhaps some of the packet order inversions would get reordered properly as they make their way through the system and it wouldn't be too bad.

Perhaps a simple variation on this theme would work better (depending on the medium): make it a LIFO of smaller FIFO buffers (say, 10 packets per FIFO)...
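That LIFO-of-FIFOs variation can be sketched as below. The class name and chunk mechanics are invented for illustration; only the idea (newest chunk drained first, FIFO order within a chunk) and the size of 10 come from the comment.

```python
from collections import deque

class LifoOfFifos:
    """Sketch of the variation: arriving packets fill small FIFO
    chunks, but the newest non-empty chunk is drained first, i.e.
    LIFO across chunks, FIFO within a chunk."""

    def __init__(self, chunk_size=10):
        self.chunk_size = chunk_size
        self.chunks = []                 # stack of FIFO deques

    def enqueue(self, packet):
        if not self.chunks or len(self.chunks[-1]) >= self.chunk_size:
            self.chunks.append(deque())  # start a fresh chunk
        self.chunks[-1].append(packet)

    def dequeue(self):
        while self.chunks:
            chunk = self.chunks[-1]      # newest chunk first
            if chunk:
                return chunk.popleft()   # FIFO within the chunk
            self.chunks.pop()            # discard drained chunks
        return None
```

With a 15-packet backlog and 10-packet chunks, packets 10-14 are served before packets 0-9: recent arrivals see low delay, while older packets wait, which is exactly the reordering trade-off the comment anticipates.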



© 2018 ACM, Inc. All Rights Reserved.