Comments

(newest first)

  • Henry Gonzales | Fri, 04 Aug 2017 07:02:32 UTC

    The current Internet architecture is built around layers of different functions, where the Network Layer provides a technology-independent abstraction on top of a large set of autonomous, heterogeneous networks. The Internet Protocol (IP) is one mechanism for achieving such an abstraction. By choosing a rudimentary "best-effort" service, the Internet has not been able to respond effectively to new requirements (security, manageability, wireless, mobility, etc.). Today, Internet Service Providers (ISPs) may be willing to provide better-than-best-effort service to their customers or their peers for a price, or to meet a Service Level Agreement (SLA). The lack of a structured view of how this could be accomplished, given the current IP model, has led to numerous ad hoc solutions that are either inefficient or incomplete.
    
    The current Internet architecture has its shortcomings: (1) it exposes addresses to applications; (2) it artificially isolates functions of the same scope, splitting transport and routing/relaying into two layers (Data Link and Physical over the same domain/link, and Transport and Network internet-wide); and (3) it artificially limits the number of layers (levels). RINA, on the other hand, leverages the inter-process communication (IPC) concept. In an operating system, allowing two processes to communicate requires certain IPC functions: locating processes, determining permission, passing information, scheduling, and managing memory. Similarly, two applications on different end hosts should communicate by utilizing the services of a distributed IPC facility (DIF). A DIF is an organizing structure, what we generally refer to as a layer. What functions constitute this layer, however, is fundamentally different. A DIF is a collection of IPC processes (nodes). Each IPC process executes routing, transport, and management functions. IPC processes communicate and share state information. How a DIF is managed, including addressing, is hidden from the applications.
    
    RINA is a clean-slate internet architecture built on a basic yet fresh premise: networking is not a layered set of different functions but rather a single layer of distributed Inter-Process Communication (IPC) that repeats over different scopes; i.e., the same functions and mechanisms, with policies tuned to operate over different ranges of the performance space (e.g., capacity, delay, loss). Specifically, a scope defines a Distributed IPC Facility (DIF) comprising the set of IPC processes, running on different machines, that collaboratively provide a set of well-defined flow services to upper application processes. Application (user) processes can themselves be IPC processes of an upper DIF that provides services over a wider scope.
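
    A rough illustration of this recursion in Python (all names here are invented for the example, not taken from any RINA specification or implementation):

        # Illustrative sketch: every DIF offers the same flow API and is built
        # on flows obtained from the DIF below it, so the "layer" repeats.
        class Flow:
            """A flow provided by some DIF; carries opaque SDUs."""
            def __init__(self, send, recv):
                self.send = send
                self.recv = recv

        def loopback_flow():
            """Lowest scope: a trivial in-memory stand-in for a physical link."""
            queue = []
            return Flow(send=queue.append, recv=lambda: queue.pop(0))

        class DIF:
            """A distributed IPC facility: same mechanisms at every level;
            only the policies (scope, QoS, addressing) would differ."""
            def __init__(self, name, lower_flow):
                self.name = name
                self.lower = lower_flow

            def allocate_flow(self, dest_app):
                # A real DIF would do enrollment, addressing, routing and
                # transport here; this sketch just frames SDUs with a name.
                def send(sdu):
                    self.lower.send((self.name, dest_app, sdu))
                def recv():
                    _dif, _app, sdu = self.lower.recv()
                    return sdu
                return Flow(send, recv)

        # The recursion: an upper DIF is itself an application of the lower one.
        link = loopback_flow()
        net_dif = DIF("net", link)                        # narrow scope
        app_dif = DIF("app", net_dif.allocate_flow("B"))  # wider scope, same API

        flow = app_dif.allocate_flow("chat-service")
        flow.send(b"hello")
        print(flow.recv())  # b"hello"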
  • kyle lahnakoski | Wed, 24 Jul 2013 01:52:37 UTC

    Could you go over why it is bad to have (almost) no buffer at all? What is wrong with high packet loss when there is not enough bandwidth? Is it just that TCP fails badly in those situations? Thanks!
    
  • bud | Mon, 10 Dec 2012 20:05:15 UTC

    The idea of having tools to actually control the Internet data stream is compelling. It would be sweet to have a user-friendly program that would not only measure Internet latencies but also recommend the best way to deal with these delays by monitoring all of the major components of the system (including CPU, GPU, memory, OS, and bandwidth). The program could then remotely (automatically) tweak the connection to give you the best throughput based on the data stream.
  • Howard Green | Sat, 10 Nov 2012 10:22:57 UTC

    Something that may be related is the need for some way for the user of streaming data (whether Netflix, YouTube, Skype, or Google Talk/Video) to express a bias or preference for EITHER bursty but "full frame" transmission OR continuous "pure" audio transmission with the video degraded in frames per second and/or image size. In other words, a way to THROTTLE what is being transmitted by "dumbing it down."
    
    This might allow the sender to alter the nature of the stream, matching the composition of the packets being transmitted to the characteristics of the network.
  • Bernd Paysan | Mon, 04 Jun 2012 12:50:25 UTC

    Hm, IPv6 is not being adopted because there was no real thought about how to adopt it (other than "it is magically deployed at once by everybody"). Why should I use IPv6? In my home network I could, but I have just 6 devices connected to each other (all of them IPv6 capable). Hey, for 6 devices, IPv4 is just fine. Hey, on my home network I don't even need IP; Ethernet is fine. So there's no point.
    
    On the larger net, I get no benefit from adding IPv6. I only get troubles. Tons of them. I need a SixXS tunnel, because 6to4 is broken in a NAT environment (the only environment where switching to IPv6 has a potential benefit). Using that, all I can do is reach the same sites I can reach with IPv4. Therefore, no early adopter likes to start with IPv6. When no early adopters start using it, there is no momentum for adoption, and phase 2 of a rollout never happens. Phase 2 is the most critical, as a significant portion of the population has switched over while another significant portion hasn't. The whole point of the Internet is to have people communicate with each other, and dual stack is no sane option: if I still absolutely need an IPv4 address, there's no point in getting an IPv6 address as well.
    
    And phase 3, in which the few remaining users who still haven't switched are still supported in their legacy environment (because maybe they can't switch?), is not considered either.
    
    My experimental protocol currently uses UDP over IPv4 and IPv6 to pass packets around (and as this is just a legacy layer, the internal API doesn't even expose which protocol is used, so switching to other transport layers is completely transparent). As it is still in development, it's not yet all that useful. As you can see with other UDP-based protocols, like the current BitTorrent protocol, you can replace TCP in some applications quicker than you can replace IPv4. It's a matter of "does it benefit the user?" If so, people will adopt it. IPv6 does not give any immediate benefit to the user, *and* it makes the transition incredibly hard. People would rather use solutions that are useful now (like NAT), even if they make things somewhat more complicated, to overcome the limitations of IPv4, instead of switching to IPv6, which assumes that everybody will cooperate. No, nobody cooperates. Wrong social assumptions are just as bad as wrong technical assumptions.
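
    To illustrate what I mean by "the internal API doesn't even expose which protocol is used", here is a sketch only (not my actual code; names are made up):

        # Sketch: the protocol logic only ever sees an opaque "path" object; the
        # fact that UDP over IPv4 or IPv6 carries the packets stays hidden.
        import socket

        class UDPPath:
            """One concrete carrier; another transport would be another class."""
            def __init__(self, host, port):
                family, _, _, _, addr = socket.getaddrinfo(
                    host, port, type=socket.SOCK_DGRAM)[0]  # v4 or v6, we don't care
                self.sock = socket.socket(family, socket.SOCK_DGRAM)
                self.addr = addr

            def send(self, packet: bytes):
                self.sock.sendto(packet, self.addr)

            def recv(self, bufsize=2048) -> bytes:
                data, _ = self.sock.recvfrom(bufsize)
                return data

        # Everything above this layer calls only path.send()/path.recv(), so
        # replacing the legacy IP layer means writing one new path class.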
  • Phil Koenig | Mon, 23 Jan 2012 13:32:12 UTC

    Re: replacing TCP... hehe. It's taken us this long just to start a minimal rollout of IPv6, and you want to replace TCP? See you in the year 3000.. ;-)
    
    What I think deserve more discussion are the OS platforms and all the 'black box' hardware devices that will remain in service for many years. Whatever is done has to be studiously backward compatible.
    
    It also doesn't help that some early mechanisms for congestion control, like ICMP 'source quench' messages, were deprecated because it was thought that paying attention to them opened you up to attack by malicious parties. One would think that with modern encryption/authentication technology we could finally actually _use_ such built-in protocol mechanisms instead of ignoring them out of fear of their misuse by rogue entities.
    
    Van's comment about IPv6 and spreading packet flows among a huge number of IPs is definitely food for thought there too.
    
  • Suresh Shelvapille | Mon, 19 Dec 2011 19:58:21 UTC

    RDMA over InfiniBand and Ethernet should help, at least for data centers, and some variant of the same should help the BitTorrent folks.
  • Bernd Paysan | Sat, 17 Dec 2011 12:24:39 UTC

    I'm a bit surprised that this talk pops up just now. I've known about the bufferbloat problem for years (though not under this name), and I consider TCP flow control fundamentally broken: it was designed under the assumption that there is essentially no buffering, which was correct at the time but no longer holds.
    
    I've been investigating better flow control, and first looked at LEDBAT, but LEDBAT is also broken. LEDBAT works if there is a single LEDBAT-controlled data stream through the bottleneck; it can't even manage two. Dan Bernstein's CurveCP is said to be better, but the algorithm lacks a detailed description (it is delay controlled, like LEDBAT, and it's a fairly short algorithm, but it is not well factored out, and the code is written in a typical Dan Bernstein "the code is obvious and needs no comments" fashion ;-).
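
    For readers who haven't seen it, the delay-based idea (as in RFC 6817) boils down to roughly the following; constants and names here are illustrative, not LEDBAT's exact specification:

        # Rough sketch of a LEDBAT-style delay-based controller (after the idea
        # in RFC 6817); constants and names are illustrative.
        TARGET = 0.100   # target queuing delay in seconds
        GAIN   = 1.0
        MSS    = 1448.0  # bytes

        class DelayController:
            def __init__(self):
                self.base_delay = float("inf")  # lowest one-way delay seen so far
                self.cwnd = 2 * MSS

            def on_ack(self, one_way_delay, bytes_acked):
                # Anything above the base delay is assumed to be queuing delay.
                self.base_delay = min(self.base_delay, one_way_delay)
                queuing_delay = one_way_delay - self.base_delay
                off_target = (TARGET - queuing_delay) / TARGET
                self.cwnd = max(MSS, self.cwnd + GAIN * off_target * bytes_acked * MSS / self.cwnd)
                return self.cwnd

    The problem with two flows is precisely that base_delay estimate: a flow that starts later measures a "base" delay that already includes the first flow's standing queue, so the two never settle on a fair share.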
    
    I'm now working on my own flow control, which takes these ideas into account. I'm using an experimental protocol where I can obtain and send around whatever diagnostic data I want, so I'm now packing a nanosecond-resolution timestamp into the acknowledgment data, indicating when the receiver actually received the packet. That way I can do a lot more experiments with the algorithm than if I modified TCP's flow control itself, which can't convey when the packet arrived (and there may be an arbitrary delay before the acknowledgment is sent, which is of no interest for the sender's flow control). This is all tricky, because there is also a receive buffer; i.e., the OS does not pass the first packet immediately to the listener, but queues it up, too.
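
    Concretely, the acknowledgment carries something like this (layout and field names just for illustration, not my actual wire format):

        # Illustrative ack layout: the receiver stamps the moment it actually got
        # the packet, so the sender's flow control can ignore ack-generation delay.
        import struct, time

        ACK_FORMAT = "!QQ"   # sequence number, receive time in nanoseconds

        def build_ack(seq: int) -> bytes:
            return struct.pack(ACK_FORMAT, seq, time.time_ns())

        def parse_ack(data: bytes):
            seq, rx_ns = struct.unpack(ACK_FORMAT, data)
            return seq, rx_ns

        # Sender side: with the send time recorded per sequence number, the
        # forward delay is rx_ns - send_ns; the clock offset between the two
        # machines cancels out if you only look at changes in that difference.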
    
    IMHO, TCP is broken in more ways than just its flow control. It tries to provide three things:
    
    * Flow/bandwidth control
    * In-order arrival of data
    * Reliability (i.e., retransmission of lost packets)
    
    Modern uses of the Internet often have a different set of requirements. Correct packet ordering is not needed at all for file transmission: see BitTorrent, where chunks of the file are transmitted in random order. Reliability, i.e., automatic retransmission, is not required either, because the client keeps track of which parts of the file are there and which aren't. It should be possible to switch these properties of a protocol on and off, depending on the requirements of the application. Many applications have real-time requirements; for those, flow control is mainly about picking the right resolution for video and the right compression rate for audio. Real time means the sender must fire and forget and, when the bandwidth is not sufficient, adapt by reducing the quality of the data, not by slowing down an HD video stream. This goes beyond the actual network layer: a mobile phone has the bandwidth to receive a full HD video, but it does not have the processing capability to play that stream.
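
    A sketch of what I mean by switching these properties on and off per flow (the names are invented for illustration):

        # Sketch: the application states which guarantees it actually needs,
        # instead of getting TCP's fixed bundle of all three.
        from dataclasses import dataclass

        @dataclass
        class FlowOptions:
            ordered: bool        # deliver in sequence, or hand over chunks as they arrive
            reliable: bool       # retransmit lost packets, or let the app track the gaps
            rate_adaptive: bool  # real time: adapt quality instead of slowing down

        BULK_FILE    = FlowOptions(ordered=False, reliable=False, rate_adaptive=False)  # BitTorrent-style
        REMOTE_SHELL = FlowOptions(ordered=True,  reliable=True,  rate_adaptive=False)  # TCP-like
        LIVE_VIDEO   = FlowOptions(ordered=False, reliable=False, rate_adaptive=True)   # fire and forget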
    
    And as TCP's flow control is broken under the conditions we have now, this needs fixing, too.  To be blunt: I think TCP must be replaced.
  • Dan Lynch | Wed, 14 Dec 2011 19:00:33 UTC

    This is VERY interesting. Thanks, guys, for diagnosing the current situation. Van knows that dynamism is the byword for how this has to be addressed. Does he have an algorithm, again, that can be incrementally added at the endpoints, as he intimates, to help those who want to help themselves? Selfishness is what promotes goodness when it comes to improving the commons, as we have all learned. Vint is asking vendors to do this in their own self-interest. But IS there a THIS?
    
    Dan
  • Mike Erran | Wed, 14 Dec 2011 10:14:35 UTC

    This bufferbloat problem is an indicator that one of the core algorithms supporting the Internet was designed ad hoc, without a solid theoretical foundation (of course, this is not easy work). We need sound theory to capture the essence of reality; otherwise we will keep facing such problems again and again as the environment changes. Unfortunately, most research in this area has so far failed to achieve this goal.
  • Rodger | Tue, 13 Dec 2011 16:33:10 UTC

    I was pretty young when I toured a switch room and saw a teletype punching paper tape that was falling on the floor, and another teletype tugging that paper tape out of the jumble and sending it along its next hop.
    
    I don't much mind waiting a few seconds while I'm Christmas shopping online. If I were running a research application, deep buffers would annoy me.
  • Roger Bohn | Tue, 13 Dec 2011 05:29:52 UTC

    Great discussion. I followed most of it, but I didn't understand why local (end-user equipment) solutions won't help. Sure, they won't be optimal, but if some vendor adds crude dynamic buffer sizing to home units, won't that be useful to both senders and receivers?
    
    Also, for home users, what's the right size for the down-buffer compared with the up-buffer? For example, should I always prefer smaller out-buffers, measured in MB, due to slower upload speeds and other factors? Again, this sounds easy for vendors to implement.
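
    For example, I imagine "crude dynamic buffer sizing" could be as simple as holding a target delay's worth of data at the measured link rate (numbers made up):

        # Made-up sketch of crude dynamic buffer sizing in a home gateway:
        # queue at most one target-delay's worth of data at the measured link
        # rate, instead of a fixed (and usually huge) number of packets.
        TARGET_DELAY_S = 0.05  # aim to queue no more than ~50 ms of data

        def buffer_limit_bytes(measured_link_rate_bps: float) -> int:
            return int(measured_link_rate_bps / 8 * TARGET_DELAY_S)

        print(buffer_limit_bytes(1_000_000))    # ~1 Mbit/s uplink    -> about 6 KB
        print(buffer_limit_bytes(20_000_000))   # ~20 Mbit/s downlink -> about 125 KB

    That would also seem to answer the up- versus down-buffer question: the slower direction simply gets the smaller buffer.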
  • Jim Gettys | Sat, 10 Dec 2011 02:23:30 UTC

    That's just the Queue folks asking for feedback....
  • Elvis Stansvik | Fri, 09 Dec 2011 21:38:00 UTC

    I read all of the previous "Bufferbloat: Dark Buffers in the Internet" article and found it very interesting.
    
    A bit funny how the end line of this one is "LOVE IT, HATE IT? LET US KNOW".
    
    Well, obviously I don't love the apparent state of affairs or the gloomy future, but I love how you bright minds seem to be taking a long, hard think about it! So, very interesting read this time around as well.
  • Jim Gettys | Fri, 09 Dec 2011 01:13:58 UTC

    You are correct; that was a misstatement. But the thrust is correct: Windows XP does not enable TCP window scaling by default; there is a registry key to turn it on. The point therefore stands: most XP systems won't have more than 64 KB in flight at once.
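
    If memory serves, the key in question is Tcp1323Opts under the TCP/IP parameters; a rough illustration of flipping it (1 enables window scaling only, 3 also enables RFC 1323 timestamps), with a reboot needed before it takes effect:

        # Illustrative only: enabling window scaling on XP via the registry.
        import winreg

        PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Tcp1323Opts", 0, winreg.REG_DWORD, 3)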
    
  • S M | Thu, 08 Dec 2011 23:33:53 UTC

    You say "That's actually a significant milestone because Windows XP didn't implement window scaling,"
    
    Wikipedia (http://en.wikipedia.org/wiki/TCP_window_scale_option#cite_note-3) says:
    "TCP Window Scaling is implemented in Windows since Windows 2000.[3][4] It is enabled by default in Windows Vista / Server 2008 and newer, but can be turned off manually if required.[5]"
    
    