Comments

(newest first)

  • name | Wed, 19 Jun 2019 21:16:21 UTC

    I need help unlocking the SIM network on an Alcatel 5009A. Can you please help me out? I can read and write well, so keep it simple for me, please. Thank you very much for your time.
  • Rich Brown | Fri, 23 Sep 2016 13:34:06 UTC

    Here are two resources for current work on the bufferbloat front:
    
    - https://www.bufferbloat.net - the canonical site that collects all our work
    
    - https://lists.bufferbloat.net/listinfo - the Bloat and Codel mailing lists contain our most recent investigations
  • Kathleen Nichols | Fri, 20 Feb 2015 16:02:28 UTC

    As noted above, this article was written rather hastily. In http://pollere.net/Pdfdocs/draft-02.pdf we got a chance to provide more explanation of some important features.
  • Scot | Wed, 28 May 2014 08:57:01 UTC

    Figures 1 and 2 look a lot like tubes with packets moving through them. Is the internet made up of a series of these tubes?
    
  • santosh | Sat, 09 Feb 2013 14:04:22 UTC

    I want simulation code for the RED, FRED, WRED, and BLUE algorithms on ns2.
  • Jesper Louis Andersen | Tue, 25 Dec 2012 16:03:46 UTC

    @Andrey, 
    
    I am trying to see whether this actually works by building a CoDel front end for work queueing in Erlang. I have the code and am now measuring it to see what happens.
  • Saddy | Mon, 26 Nov 2012 21:00:45 UTC

    I don't know whether this is the solution to a big problem or not.
    But when I started reading this article I didn't know what CoDel was, and now I know everything.
    
    So this is a great article, thank you!
  • Andrey Polozov | Thu, 30 Aug 2012 04:21:34 UTC

    I'm not a scientist, so forgive my ignorance. I have a crazy idea about an application of this approach.
    I believe that in software development we also have an issue of overbuffering. In most remote-call implementations (e.g., web services) we tend to queue requests one way or another, but rarely reject new requests until the system is completely hosed. The timeout is enforced by abandoning a request once it has already taken longer than the timeout.
    We don't say to the client, "No, we can't even start processing your request because there is no way we can do it in the given time"; instead we hope that somehow we might be able to do it (which technically is true: to make a 100% reliable rejection decision we would have to look into the future).
    I tried using an exponential moving average of past request times (RTT, in network terms). It worked fairly well but has some downsides: it requires some knobs, sometimes misses the target, doesn't work well with a mix of quick and slow requests, etc.
    So I'm wondering: could CoDel be applied there (rough sketch below)?
    Maybe I'll be able to try it some day, but before that it would be great to hear how it looks from a scientific point of view...
    Thanks for the great article!
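    For concreteness, here is a rough sketch of what I mean, done CoDel-style on a server work queue: once the time a request has sat in the queue stays above a small target for a whole interval, start rejecting at dequeue time. All names and constants here are made up for illustration, and it omits the inverse-sqrt control law described in the article.

        /* Illustrative only: names and constants are invented, and the
         * control law from the article is left out. */
        #include <stdbool.h>
        #include <stdint.h>

        #define TARGET_MS   5      /* acceptable standing queue delay          */
        #define INTERVAL_MS 100    /* how long delay may stay above the target */

        typedef struct { uint64_t enqueue_ms; /* plus the request payload */ } req_t;

        static uint64_t first_above_ms;    /* when delay first went above target */

        /* Called when a worker dequeues the next request; returns true if the
         * request should be rejected (load shed) instead of processed. */
        bool shed_on_dequeue(const req_t *r, uint64_t now_ms)
        {
            uint64_t sojourn_ms = now_ms - r->enqueue_ms;

            if (sojourn_ms < TARGET_MS) {
                first_above_ms = 0;        /* delay recovered; reset the timer */
                return false;
            }
            if (first_above_ms == 0) {
                first_above_ms = now_ms;   /* start the grace interval */
                return false;
            }
            /* Delay has stayed above target for a full interval: shed load. */
            return now_ms - first_above_ms >= INTERVAL_MS;
        }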
  • Guy | Fri, 17 Aug 2012 16:14:18 UTC

    I tried to comment here but the comment was refused.
    
    I don't understand the ruleset being used for comment refusal, but I have real comments regarding this article. Contact me if you're interested.
    
    I've given up trying to enter my comment.
  • Kathleen Nichols | Tue, 10 Jul 2012 20:09:30 UTC

    No control law is "perfect" and we've looked at lots of variants. We are pleased with how this one works but welcome others' work on improvements.
    
    We have been looking at the "drop elderly packets" approach for a while. It turns out that it only seems to come into play where there are really large decreases in the output link rate. But it means making assumptions about what counts as "old," so we have not recommended it at this time; we only add something when we are sure it is useful, does no harm, and is worth any complexity it adds to the implementation.
    
    We were on a very tight deadline to get this article done and I'm sure the language could be improved. However, I was afraid Jim Gettys would show up at my door with a cattle prod if I didn't finish it when promised. I'm sorry the language offends.
  • testerer | Fri, 06 Jul 2012 03:28:56 UTC

    The sqrt law is logical for one TCP stream, or for multiple streams with packets uniformly distributed through the queue. If you have multiple independent bursty streams, the sqrt law has little reason to work and might cause behavior that will be hard to debug.
    
    Another way to guarantee that latency will not go too high (>100 ms, for example) for the packets already in the queue is to drop ones that are "very old" (>150 ms, for example) right away, and then have each incoming packet remove some "old" ones to bring the latency closer to the target.
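    For illustration, a rough sketch of that idea (all names and the 100/150 ms figures are made up, and only enqueue times are tracked here; a real queue would also carry the packets themselves):

        #include <stdint.h>

        #define QSIZE       1024
        #define TARGET_MS   100     /* latency we are aiming for            */
        #define HARD_CAP_MS 150     /* packets older than this get dropped  */

        typedef struct { uint64_t enq_ms[QSIZE]; int head, len; } fifo_t;

        static uint64_t oldest_age_ms(const fifo_t *q, uint64_t now_ms) {
            return now_ms - q->enq_ms[q->head];
        }

        static void drop_oldest(fifo_t *q) {
            q->head = (q->head + 1) % QSIZE;
            q->len--;
        }

        void on_enqueue(fifo_t *q, uint64_t now_ms)
        {
            /* Drop anything that has already waited past the hard cap. */
            while (q->len > 0 && oldest_age_ms(q, now_ms) > HARD_CAP_MS)
                drop_oldest(q);

            /* Each arrival also removes one "old" packet, nudging the
             * standing delay back toward the target. */
            if (q->len > 0 && oldest_age_ms(q, now_ms) > TARGET_MS)
                drop_oldest(q);

            if (q->len < QSIZE) {              /* record the arrival time */
                q->enq_ms[(q->head + q->len) % QSIZE] = now_ms;
                q->len++;
            }
        }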
    
    There might be other ways to do it, but the main point is that while CoDel is a fine solution, the whole field is not really rocket science, and it suffers BADLY from complex wording and a lack of metaphorical explanations. Just look at the article above, and the "appendix," which for some reason is not in the links at the end of the article. It took fewer words to explain the algorithm (not including the C code) than the "high-level explanation," which was for some reason labeled "ONE CODE MODULE, NO KNOBS, ANY LINK RATE" instead of "How it works."
    Does anyone in the field still speak plain English?
    And what's with the "r.p" and "sojourn_*" variable names in the code? Is this really necessary?
    Would "deque_result.pkt" and "time_in_queue" really be that much harder to write?
  • Dave Taht | Tue, 12 Jun 2012 07:04:43 UTC

    Sigh. This has been a difficult problem that thousands of theorists have struggled with for 30+ years, and both of the comments above reveal misunderstandings of what the problem was and what was solved.
    
    To take the last comment first:
    
    'Another method that should work, is setting up a timer for each packet and removing packets that are "too late".'
    
    This is sort of what CoDel does, except that packets tend to arrive or depart in bursts, so there is no clear definition of "too late." Instead of removing all packets that are "too late," CoDel uses a drop scheme based on an inverse square root spacing of drop times, which allows continuous end-to-end signaling to take place; it attempts to drop "enough" packets to signal the other side to slow down while still delivering data.
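    As a rough sketch of that spacing (the 100 ms interval and the names here are illustrative; the article's appendix has the real code): each successive drop is scheduled interval/sqrt(count) after the previous one, so drops get closer together the longer the delay stays above the target.

        #include <math.h>
        #include <stdint.h>

        #define INTERVAL_MS 100.0   /* on the order of a worst-case RTT */

        /* Time of the next drop, given the time of the last one and how many
         * drops have been made since the delay went above the target. */
        uint64_t next_drop_ms(uint64_t last_drop_ms, unsigned drop_count)
        {
            if (drop_count == 0)
                drop_count = 1;     /* count starts at 1 when dropping begins */
            return last_drop_ms + (uint64_t)(INTERVAL_MS / sqrt((double)drop_count));
        }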
    
    As for the previous comment: TCP/IP was designed to go as fast as the link allowed; there was never a human involved at all, and no signal, no wink, no sign of incomprehension passes, aside from ack packets. To try to extend this analogy and make it work, the human is being yelled at from across a distant canyon, and he says OK (an ack) every so often after hearing so many words, which also takes time to pass across the canyon. The conversation would be very, very slow if he had to say OK after every word.
    
    Bufferbloat is the fact that the size of that canyon (buffering) had arbitrarily grown to several lunar distances, and the OK's and the conversation were taking so long to transit that the speaker on the other side would ramp up well beyond any speed the receiver could understand.
    
    Plenty of other analogies exist, and common sense and the speed of "sound" are not very congruent until you try to optimize for shouting across the canyon. 
  • testerer | Mon, 04 Jun 2012 02:33:04 UTC

    Another method that should work is setting up a timer for each packet and removing packets that are "too late".
  • testerer | Mon, 04 Jun 2012 01:27:49 UTC

    Just to clarify: this method is nice and should work. The problem is that it took so long to get to it, and that it's considered some kind of novel solution when in fact it is common sense and logic with a bit of real-life observation. The definitions of this and other problems are so bloated that no normal, "non-theoretical" people ever have enough motivation to decipher the blabber and propose a solution.
  • testerer | Mon, 04 Jun 2012 01:13:09 UTC

    Wow. This is a great example of overcomplicating things through unnecessary work bloat.
    Both the problem and the solution are completely obvious, given day-to-day common sense.
    
    An example, perhaps a somewhat insensitive metaphor: you are trying to talk to a "slow" person.
    Whatever word you say, he will probably understand, but much more slowly than you speak. In some cases he might miss a word and ask you to repeat it.
    
    Now imagine you read him a whole page of a book. He will try to make sense of it until the end of your speech. So, in the best case, if he got everything correctly, you will need to wait a long time until he is done understanding what you said. In the worst case, if he misheard a word, he will ask you to repeat that part. But since he's so slow, the stream of your previous speech is still being heard by him, and he's trying to make sense of it. Even if you repeat the word now, it will only get to him after all the previous words have gone through.
    With actual people, the natural solution to this problem is that the slow dude doesn't store your whole speech somewhere for processing; he just remembers a few seconds' worth of words and ignores the rest. So he limits the maximum delay that the words spend between being heard and being understood.
    
    This is a natural and obvious solution, being intuitive and actually implemented in nature.
    
    Regarding the previously used size-based queue management: the reason this only came up now is that people don't actually understand what it is they are solving, and they come up with complicated new words instead of simplifying explanations.
  • David Collier-Brown | Wed, 09 May 2012 19:09:38 UTC

    A niggle: the first box plot might deserve a bit of additional description. The first few figures are wonderfully intuitive; the later ones made me reach for a scratch pad to work through them and their implications...
    