Comments

(newest first)

  • Michael Hennebry | Wed, 18 Jan 2012 06:52:23 UTC

    Heikki | Sun, 08 Nov 2009 11:16:54 UTC:
    "So what is the alternative to the DNS hack in the case of CDN? 
    
    HTTP redirects (including the already mentioned Metalink/HTTP) simply do not work for everything because that is a per-request thing. Let's say you have 100 1kB images on your web page. Were you using HTTP redirects, you would have to make 100 of them, each already as big as the resource itself. Not very efficient, and it adds a lot of latency to page loading."
    It seems to me that one would just need to redirect to the correct front door.
    Once one is at the correct front door, relative URLs would all go to the same site.
  • | Fri, 16 Jul 2010 05:11:45 UTC

    The DNS is not a reverse Polish notation calculator.  Or is it?
    http://bert.secret-wg.com/Tools/index.html
    
    The DNS is whatever you want it to be, unless you're Paul Vixie. :)
    
    Please remember, as bobel said, the DNS (using catchy names) has always been *optional*.  (Internet) Numbers are the only thing that the network understands, and they are what we use when the use case is "mission-critical".
    
    Apache, the world's most popular web server, has an option to disallow queries that use numbers.  This is the ultimate "abuse" of the DNS: coercing people to use it.  
    
    Of course, it's easy to work around this silly option, using /etc/hosts.
    
    Internet Numbers are not any more difficult to remember than phone numbers. The more you use them the more familiar and meaningful they become.  In any event, hosts(5) provides for user-specified "unofficial names" and aliases.  The DNS is not the only way to assign names to numbers.  Hosts(5) can do it.
    
    How many DNS queries will someone make in their lifetime?  Imagine a number.  Now imagine a file, a hash table, b-tree or some other data structure with a small entry for each of those lookups.  How big would that file be?  I'm quite confident it would be small enough to fit on your mobile phone, most likely, and certainly would fit on your netbook, laptop, desktop, workstation, etc.  Maybe even entirely in RAM.
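    A quick back-of-envelope of that thought experiment, as a rough Python sketch (every number below is an assumption for illustration, not a measurement):

        # Lifetime lookup-cache arithmetic: how big would a file with
        # one small entry per lifetime DNS lookup be?
        lookups_per_day = 100          # assumed
        years = 80                     # assumed
        bytes_per_entry = 64           # name + address + padding, assumed

        entries = lookups_per_day * 365 * years
        size_mb = entries * bytes_per_entry / 1_000_000
        print(f"{entries:,} entries -> about {size_mb:.0f} MB")
        # ~2,920,000 entries -> about 187 MB: phone-sized, as claimed.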
  • DNS Fan | Fri, 26 Mar 2010 02:57:06 UTC

    There is an insightful and relevant blog entry (and clever graphic) that references this article at WhatTheHellSecurity.com, titled "Security and the Unforeseen Use Case".
  • Eric Lawrence | Tue, 15 Dec 2009 00:32:55 UTC

    Paul, if you have a network capture of the IE address bar behaving this way, I'd love to have a look (ericlaw AT microsoft). 
    
    Generally speaking, the IE address bar does not attempt to resolve against DNS as you type. The only exception is that, in IE8, after you've typed 4 characters, we'll attempt to resolve the hostnames for your top 4 previously navigated hostnames that match what you've typed thus far. We do not try to resolve hostnames you haven't visited, and we do not try to resolve partial hostnames based on what you've typed.
    
    -Eric Lawrence, Program Manager, Internet Explorer Networking
  • Withheld | Mon, 30 Nov 2009 21:27:39 UTC

    Why is the text gray?  And why are half the comments gray text on a gray background?  Why is the site designer trying to make the text hard to read?
  • Matt | Wed, 18 Nov 2009 17:14:52 UTC

    Great stuff.
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On Innovation:
    
    I don't discourage or disparage innovation. I've authored a half dozen DNS related RFCs to expand the protocol, and I've implemented all kinds of new features (most of which didn't require new RFCs), and I regularly bend my elbow at the bar with others who do likewise. Innovation is great, when it improves the capabilities of the whole system without destabilizing it or shifting costs onto others. It would be a mistake to call me a DNS luddite, and I urge you to consider the possibility that I am being quite specific about exactly which changes people are slipping into DNS that I don't like.
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On DNS name completion:
    Boris draws first blood -- Mozilla's products don't do name completion. I got Mozilla mixed up with Chrome, which does name completion, and I was sloppy. I hope my friends at Mozilla will accept my humble apologia. I do worry about 〈https://developer.mozilla.org/En/Controlling_DNS_prefetching〉, and about the information a spammer or other attacker can make me leak with 〈link rel="dns-prefetch" href="http://${UNIQUE_IDENT}.evil.local/"〉, and I don't know if I'll like living in a world where webmasters have to say 〈meta http-equiv="x-dns-prefetch-control" content="off"〉 in order to prevent it happening. Thank you, Boris, for correcting my error.
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On anycast DNS:
    
    In early discussions about DNS anycast, some confusingly incorrect terms were used. I was against DNS incoherency, and the example of incoherency I was complaining about happened to use DNS anycast, and sometimes I said anycast was bad when I really meant that incoherency was bad. Anycast DNS just means using a lot of identical servers all over the world having the same IP address and making it look like a single server that's amazingly well connected. I wish I'd thought of this myself, but it was David Conrad who proposed doing anycast DNS for root name servers, and it was Akira Kato who first implemented it in the "M" root name server. Anycast and incoherency are not bundled -- one can deploy either without the other, or deploy both. Do note that anycast often does a terrible job of traffic localization, due to the way that IP traffic flows independently from geography and due to ISP interconnection ("peering") being a business decision rather than a regulatory necessity or an engineering decision. Anybody using anycast has to have at least one "supernode" that can take quite a lot of global traffic.
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On Internet Archive:
    
    At ISC we love the Internet Archive, and we consider them fellow travelers. We'll give free transit to anyone who wants to give information away for the purpose of making the world a better place. See also kernel.org, FreeBSD, NetBSD, Mozilla, [your name here]. We could not justify our network expenses nor the donations of network services we receive if we didn't reach out to our fellow travelers and help them do what they do. If you know of Warez boxes inside Internet Archive or any other guest of ISC, please contact me. We intend to run a clean network, but sometimes problems do creep in. My contact information is at "whois -h whois.arin.net pv15-arin".
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On RBLs:
    
    I love RBLs. When Eric Ziegast invented the DNS RBL concept, it was the first wide scale use of DNS for something other than host names and host addresses. I'd like to see DNS carry many more kinds of information. But it would be a mistake to think that the policy data published in an RBL is the same kind of policy based response logic present in a CDN. Every DNS response we ever produced in the MAPS RBL was coherent -- if many people asked a question within a given TTL then they all got the same answer no matter where they were in the world or what we thought their connectivity was. I'm not against new kinds of data in DNS. I oppose incoherency, no matter how well engineered it may be.
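    For readers unfamiliar with the mechanics, here is a minimal sketch of a DNSBL (RBL) lookup, assuming the third-party dnspython package; the zone name is just one example of a public list. You reverse the IPv4 octets, append the list's zone, and a successful A lookup means the address is listed:

        import dns.resolver   # third-party: pip install dnspython

        def rbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
            # 192.0.2.99 -> 99.2.0.192.zen.spamhaus.org
            query = ".".join(reversed(ip.split("."))) + "." + zone
            try:
                dns.resolver.resolve(query, "A")
                return True      # any A answer means "listed"
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return False     # NXDOMAIN means "not listed"

        print(rbl_listed("127.0.0.2"))   # test address; listed by convention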
  • Paul Vixie (AUTHOR) | Tue, 17 Nov 2009 00:00:00 UTC

    On CDNs: 
    I don't hate CDNs. I just don't think they need to hack in at the DNS layer. HTTP offers various kinds of redirects. IBM's WebSphere CDN takes this approach and has been a technical success. Anycast TCP is also in use by a few CDNs and works at least as well as any DNS-layer solution. Anycast is stable for minutes or hours at a time, so it's rare for two TCP packets to the same destination to reach different anycast contributors. My gripe about DNS-layer CDNs is that they push a lot of the service burden onto the clients and ISPs, and don't actually work better than available alternatives. Do 'dig www.microsoft.com' and count the number of times the zone changes inside the answer section. Every such change requires that a recursive nameserver somewhere restart its iteration. I realize that most successful businesses have to find ways to offload their costs, but this is ridiculous. How many CDN buyers actually compare the cost and benefit of CDN as compared to a single well provisioned well connected web server?
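    The "count the zone changes" exercise can be automated; a rough sketch, assuming the third-party dnspython package and a crude two-label zone heuristic:

        import dns.resolver   # third-party: pip install dnspython
        import dns.rdatatype

        def zone_of(name):
            # Crude heuristic: treat the last two labels as "the zone".
            return ".".join(str(name).rstrip(".").split(".")[-2:])

        answer = dns.resolver.resolve("www.microsoft.com", "A")
        zones = []
        for rrset in answer.response.answer:
            if rrset.rdtype == dns.rdatatype.CNAME:
                print(f"{rrset.name} is an alias for {rrset[0].target}")
            z = zone_of(rrset.name)
            if not zones or zones[-1] != z:   # record each zone crossing
                zones.append(z)

        print(f"zones in the answer: {zones} -> {len(zones) - 1} change(s)")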
  • bobel | Mon, 16 Nov 2009 21:51:53 UTC

    I think DNS was a big mistake.  It's been responsible for all sorts of intellectual property and other problems that would have been avoided had DNS never happened.  
    
    What's the matter with using IP addresses? We have used numerical addresses for the phone system for years w/o any problems. Our phones all have the capability to build our own phone books with names meaningful to us. All browsers have bookmark capabilities. Most people access what they call the Internet via search engines anyway.
    
    DNS - just say no
    DNS - considered harmful
    
    
  • Andrea Costantino | Mon, 16 Nov 2009 18:36:45 UTC

    Policies are not bad in themselves; they can be smart.
    BIND itself has a concept of proximity and topology, so the basic idea of policies was already hard-coded into it.
    What sounds so bad are the CDN GSLBs used by most content distributors.
    Pinging from the DNS just to choose the A response for the main WWW site is quite a stupid thing. It's also prone to errors when done unintentionally or without enough knowledge (as with any other feature, of course).
    Basically, the best solution should be the simplest one:
    1) Anycast for the root DNS (already widely used).
    2) A CDN that needs balancing should redirect the A question, based on (for instance) customer region or ASN (most allocated networks can be simply mapped to Europe, the US, South America, etc.), to the right DNS server in the right country (or region, or whatever). This can be distributed among all regions.
    3) The local DNS receives the redirects and maps them to the local CDN distribution point (see the sketch after this comment).
    
    If a major fault cuts off the site or region, no answer is given, and the DNS client will send its request to another region and download the content from there.
    If the local DNS works but the CDN goes down, it is as simple as shutting down or reconfiguring that DNS too.
    If the CDN works but the DNS goes down (strange, given the robustness of the DNS structure itself), that region will simply not be used, even though it is available.
    If the CDN operator is wise enough to put two service points on different ISPs in an area and balance between them, the whole thing becomes stronger and stronger.
    
    This works perfectly even if:
    1) You don't use ping probes. You don't need to; Europe is Europe, and is probably better connected internally than to Asia, etc. A partial exception could be poorly connected sites (Africa), which may be better connected to the US than within their own area, but they are a tiny fraction compared to the other, developed areas.
    2) You don't have a service policy at all at the second level; it just answers with plain A, MX, or whatever records.
    3) You allow caching for a reasonable time (6-24h), given that you answer with at least two IPs (or MXs, or whatever) in different ISP service points.
    
    
    No need for GSLB anymore, then.
    
    My 2 cents.
    
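    A toy sketch of the region-mapped answering described in points 2 and 3 above; the prefixes and addresses are hypothetical, and a real deployment would use something like BIND views rather than hand-rolled code:

        import ipaddress

        # Hypothetical prefix-to-region map; real data would come from
        # the regional allocation registries.
        REGION_OF_PREFIX = {
            ipaddress.ip_network("192.0.2.0/24"): "eu",
            ipaddress.ip_network("198.51.100.0/24"): "us",
        }
        # Two A records per region, on different ISPs, so that long
        # caching (6-24h) stays safe if one service point dies.
        A_RECORDS = {
            "eu": ["203.0.113.10", "203.0.113.20"],
            "us": ["203.0.113.30", "203.0.113.40"],
        }

        def answer_for(client_ip):
            addr = ipaddress.ip_address(client_ip)
            for prefix, region in REGION_OF_PREFIX.items():
                if addr in prefix:
                    return A_RECORDS[region]
            return A_RECORDS["us"]   # default region

        print(answer_for("192.0.2.55"))   # -> the EU distribution points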
  • John Stracke | Fri, 13 Nov 2009 15:13:59 UTC

    "Small-sized Anycast HTTP requests with a redirect to a locally optimized content server for the object. Surely someone thought of it already." -- Yes, of course it's been thought of.  Only problem is, it doesn't work.  You can't do TCP to an anycast address, because you can't be sure that all your packets will go to the same host.
  • Josh Goodall | Wed, 11 Nov 2009 12:13:39 UTC

    DNS is critical, neutral infrastructure whose integrity needs constant advocacy from the likes of Vixie, until the day comes that a credible and equally neutral (or better) global-scale shared distributed mechanism for unique and repeatable identifier resolution can arise. And no, I don't mean Google, very funny. And all of that is solely because the single strategic flaw in the DNS architecture is that it has a root managed by a single entity, currently a country.
    
    CDNs should either do redirection at the HTTP layer, where such behaviour has been designed in (which also enables user agents to nominate their own preferences or other sophisticated policy behaviour), or use anycast, which is simplicity itself in observing that the network takes care of the shortest path. Or, heck, why not both? Small-sized Anycast HTTP requests with a redirect to a locally optimized content server for the object. Surely someone thought of it already.
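    A minimal sketch of that combination, using only the Python standard library: a tiny "front door" HTTP server, run identically at every anycast site, that answers each request with a short 302 redirect to that site's local (unicast) content server. The hostname is hypothetical.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Per-site value: each anycast site points at its own local
        # content server (hypothetical hostname).
        LOCAL_CONTENT = "http://content.eu-west.example.net"

        class FrontDoor(BaseHTTPRequestHandler):
            def do_GET(self):
                # The 302 exchange is small enough that anycast-routed
                # TCP is very unlikely to break mid-flight; the bulk
                # transfer then goes to an ordinary unicast address.
                self.send_response(302)
                self.send_header("Location", LOCAL_CONTENT + self.path)
                self.end_headers()

        HTTPServer(("", 8080), FrontDoor).serve_forever()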
    
  • Nosce Te Ipsum | Wed, 11 Nov 2009 04:41:09 UTC

    Would this rant be authored by the very same individual who claimed that anycast DNS requests were a veritable abomination? [I'm paraphrasing, as I wasn't privy to the actual conversation.] As I recall, the only root server (at the time) that DID support anycast was in fact the only root server still operational during the DDoS that occurred, thereby demonstrating why anycast was in fact a good idea. I would submit that in the pantheon of galactically stupid presumptions or statements, that would be right up there with IBM's projection of the market for personal computers.
    
    With regard to validly signing geographically varied DNS responses, and other so-called lies: this in particular requires massively scalable signature-generation capability in DNSSEC to make it viable for zones with highly variant responses. In that context, would not such a mechanism ultimately be a positive for the adoption of DNSSEC as a whole? Or is profiting from a differentiation in service capabilities somehow a bastardizing of the Internet and DNS in general?
  • Dave Piscitello | Tue, 10 Nov 2009 18:22:33 UTC

    Compliments to Mr. Vixie for putting a broad set of DNS uses into context and exposing the misuses with clarity and conviction.
    
    Additional reading on issues and concerns related to misuses of DNS can be found in an ICANN SSAC report, SAC 032: Preliminary Report on DNS Response Modification (20 June 2008), found at http://www.icann.org/en/committees/security/sac032.pdf
  • Terri | Tue, 10 Nov 2009 13:47:46 UTC

    If we don't push things to their limits, we will never evolve. Closing one's mind to innovation is an invitation to extinction.
  • Neal Murphy | Tue, 10 Nov 2009 07:31:58 UTC

    Perhaps one solution is to require DNS providers to include in the response whether the returned IP address actually matches the domain name sent, or whether it is a guess because the domain name sent does not exist. Then let the end user decide whether he wants to allow such misinformation, convenient though it may be, to get through.
    
    Perhaps another solution is to create one's own root server and maintain a 'private' network that never references the 'official' Internet DNS services. DNS on the world-wide Internet can only be like a carnie barker doing his best to get you to visit *his* booth or attraction, despite everyone's best efforts to the contrary. So set up your own DNS DB with your own TLDs, and include from the 'standard' DNS DB only those domains you want to access. Be on the Internet, but don't be part of the 'InterNet'.
  • Boris Zbarsky | Mon, 09 Nov 2009 14:59:09 UTC

    Paul, I really wish you'd done your fact-checking before accusing people of doing stupid things.  Mozilla (whether Firefox or Seamonkey) does not now do and has never done any DNS queries based on the string in the URL bar until the user hits enter.  At that point, of course, the user-typed string is treated as a URL and DNS resolution does have to be performed to fetch the data.  I'd be very interested in finding out what gave you the idea that it performs such DNS queries.
  • Stuart Gathman | Mon, 09 Nov 2009 03:36:55 UTC

    I've found that ISP resolvers are so full of cruft that they are useless.   They ignore TTLs, so that users don't see timely updates to their own company DNS.  Users are unable to access essential business sites because the DNS redirection is screwed up at their ISP.  Sometimes NXDOMAIN is broken (lied about and redirected) - screwing up checks like SPF.  Our ISPs were very slow patching the cache poisoning vulnerability.  As a result, I always install a caching name server as part of every site.  It wastes DNS traffic that might have been cached on the ISP server - but it works.  I dread the day when ISPs start altering DNS packets not directed at their servers.
  • APK | Sun, 08 Nov 2009 23:18:57 UTC

    Mr. V:
    
    Your thoughts on this are appreciated (since it is somewhat related): 
    
    Question, on the use of "0" as a domain-name/host-name blocking "IP address":
    
    Using "0" (zero) as a blocking IP address in a HOSTS file, versus the larger and slower "0.0.0.0", or worst of all, the standard loopback-adapter address "127.0.0.1", produces smaller HOSTS files on disk and faster reads.
    
    Yes or no will do.
    
    Thanks.
    
    APK
    
    P.S.=> Microsoft first introduced the smaller and faster "0" blocking address for HOSTS files in a service pack for Windows 2000 (the original build could, at best, only do as well as "0.0.0.0" vs. "127.0.0.1").
    
    MS left "0" working as a legitimate IP address for blocking known bad servers or sites (pinging a site blocked with a "0" prefix returns 0.0.0.0 on Windows 2000/XP/Server 2003, and did on Vista for a long while) until the Microsoft "Patch Tuesday" of 12/09/2008, which removed it...
    
    Personally, I don't get it: why make a good thing not as good, in other words?
    
    Hey, if there is a GOOD SOLID LOGICAL TECHNICAL REASON, then I'd like to know it. Or even "here are the tradeoffs we made and why", or "we will issue a patch", etc., instead of the evasions that are all I've seen, directly from MS themselves, on their blogs and forums and elsewhere online.
    
    So, that said, I'd like your thoughts on it all, as it is name-resolution related (many folks like myself use HOSTS files for both added security AND FAR MORE SPEED online; I need only point to mvps.org, SpyBot "Search & Destroy" users, and the many other such HOSTS files listed at Wikipedia -> http://en.wikipedia.org/wiki/Hosts_file ). apk
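    The size part of the claim is easy to check with a back-of-envelope sketch (pure Python; the hostname is hypothetical):

        host = "ads.tracker.example.com"
        for addr in ("0", "0.0.0.0", "127.0.0.1"):
            line = f"{addr} {host}\n"
            print(f"{addr!r:12} -> {len(line)} bytes per entry")
        # "0" saves 8 bytes per line over "127.0.0.1" -- roughly 8 MB
        # across a million-entry HOSTS file.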
  • Myatu | Sun, 08 Nov 2009 13:31:44 UTC

    Apart from the CDN paragraph, I agree with what's being said, especially when it comes to NXDOMAIN mappings. However, there's no way to enforce it in its current form, as the DNS system is open (RFC or not).
  • Daniel Torrey | Sun, 08 Nov 2009 11:53:03 UTC

    "They solve a real problem - requirement to push data as fast as possible from a source as close to the consumer as feasible."
    
    Wrong.  There may be a requirement to push the data as fast as possible, but source proximity to the consumer isn't a requirement at all, unless you believe that the internet is a bunch of pneumatic tubes.  
    
    A fast server on a fat pipe in Japan is much more likely to meet the "fast as possible" requirement than a slow server down the block from the consumer.
  • Heikki | Sun, 08 Nov 2009 11:16:54 UTC

    So what is the alternative to the DNS hack in the case of CDN? 
    
    HTTP redirects (including the already mentioned Metalink/HTTP) simply do not work for everything because that is a per-request thing. Let's say you have 100 1kB images on your web page. Were you using HTTP redirects, you would have to make 100 of them, each already as big as the resource itself. Not very efficient, and it adds a lot of latency to page loading.
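    As a rough sketch of that overhead (pure Python; the byte counts and round-trip time are assumed round figures):

        images = 100
        body = 1000        # bytes per image, assumed
        redirect = 500     # bytes per redirect response (headers), assumed
        rtt = 0.050        # seconds per round trip, assumed

        extra_bytes = images * redirect
        extra_time = images * rtt      # one extra round trip per image
        print(f"redirect overhead: {extra_bytes // 1000} kB "
              f"(~{100 * redirect // body}% of the payload), "
              f"plus ~{extra_time:.1f}s of serialized latency")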
    
    The only good solution is to get different clients to make their requests directly to different sites. The only ways to do this are that a) they are given a different URL in the first place to load the content from, or b) the DNS gives a different server IP address to each client, or c) the IP address given by the DNS routes to a different server for each client.
    
    Solution 'a' is what we had a few years back (www1.myservice.com, www2.myservice.com, etc.). The reason we don't want it is that it breaks URLs (the URL starts to contain load-balancing state). Alternative 'c' would be cool, but AFAIK very hard to implement, since it requires control over the way ISPs route. That leaves us with what's currently being done, alternative 'b', the DNS hack.
    
    So Paul, before saying a working solution is a solution from hell, please give your 5c on how you would do it.
    
    Disclaimer: I have no affiliation with any CDN.
    
    Heikki
  • Paul Wall | Sun, 08 Nov 2009 08:17:21 UTC

    DNS is a tool, not your personal protocol. It has grown to encompass tasks that it was not originally designed for, and this is a good thing, not a misuse.
  • Director of Front | Sun, 08 Nov 2009 04:58:54 UTC

    Also, how come ISC secretly purports to be "infrastructure" for the Internet, yet gives free transit to archive.org? And archive.org, besides its obvious web site, seems to have a ton of warez boxes sitting behind it. So peering with ISC seems to serve some sort of agenda.
  • Director of Stunt | Sun, 08 Nov 2009 04:56:47 UTC

    "Cisco Distributed Director" ? That was end of life'd around 2002. Vixie, get with the program.
    
    Also, I believe Keyboard Cat was summoned at your NANOG keynote (good job with the 6 point font on a giant screen). You and Randy should move on. The "you kids get off my lawn" thing is getting a bit old.
    
  • Matt Taggart | Sun, 08 Nov 2009 01:51:08 UTC

    I wonder what Vixie thinks of using DNS infrastructure for RBLs. Is this something "DNS is not"?
  • Jeff | Sun, 08 Nov 2009 00:13:12 UTC

    Interesting how every single post here shows that the technical discussion above has escaped its author, and that all the commenter sees is an attack on their own revenue model or activity. Some quite obviously ignore or intentionally misrepresent the arguments presented... lying indeed!
  • Anthony Bryan | Sat, 07 Nov 2009 22:12:31 UTC

    We're working on alternatives to CDN providers' Stupid DNS Tricks, using the Link header, in
    http://tools.ietf.org/html/draft-bryan-metalinkhttp
    
   Link: <http://www2.example.com/file>; rel="duplicate"; pri=1; geo="gb"
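    A minimal sketch of a client consuming such headers, using only the Python standard library; the URL is hypothetical, and a real Metalink/HTTP client would follow the draft's full semantics rather than this string matching:

        import urllib.request

        # Hypothetical URL; a server implementing the draft would send
        # one Link header per known mirror.
        req = urllib.request.Request("http://example.com/file.iso",
                                     method="HEAD")
        with urllib.request.urlopen(req) as resp:
            for value in resp.headers.get_all("Link") or []:
                if 'rel="duplicate"' in value:
                    # The mirror URI sits between the angle brackets.
                    print("mirror:", value.split(";")[0].strip(" <>"))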
    
  • Gaige B Paulsen | Sat, 07 Nov 2009 22:06:11 UTC

    Re: CDNs. I think that what Paul is getting at here is not a dislike of CDNs, but the inappropriateness of using DNS tricks to try to determine where data should be coming from.
    
    Although it's true that DNS has been used (misused?) to do some pretty amazing things, it has also been used as a crutch for shady operators to do things like watch people's traffic, steal information, redirect users away from legitimate sites, and generally make determining where packets are going more difficult and less reproducible than would otherwise be the case.
    
    I'm a big fan of the free market, but a free market works best when costs are associated with the revenue they generate. In the case of DNS, the costs are disproportionately borne by organizations other than those who are generating revenue. Let's take CDNs, for example:
    
    If you are a CDN and you couldn't try and switch around DNS, how could you achieve the same basic effect?   You could have a large system of front ends somewhere that would accept incoming HTTP connections and use 301 redirects to point the user's browser at a 'more appropriate' local server.   This way, your determination would be based on the actual origin of the traffic and DNS results could be relatively static for your content.  Might not be your favorite solution, since it would cost you money and you'd have to figure out how to make sure those servers don't get inundated, but it certainly would work, and arguably be more effective.   And, guess what, when your traffic goes up, you need to provision more servers, but the DNS information remains cached, so the intermediate DNS resolvers don't have to do any more work.
    
    For those of you who missed the point and thought that Paul's complaint here was with using CDNs, please go back and re-read. The complaint is with using DNS to determine where the CDNs should be sending their traffic.
    
  • mr | Sat, 07 Nov 2009 21:42:43 UTC

    You have an interesting view point.  I guess your point is that DNS should only be used for what it was originally intended for in a strict way.
    
    I believe that innovating and figuring out other uses for a wide variety of things makes sense not only for DNS, but for tons of other items.  Can you imagine where the world would be if inventions were only used in the limited way that the inventor intended?  Ouch!
  • Elin F. | Sat, 07 Nov 2009 21:10:39 UTC

    Another problem with using DNS this way is that it also affects split-tunnel VPNs, which will first check your ISP's DNS before checking your VPN's DNS to resolve an internal address. Very frustrating to deal with, as the way to opt out differs by ISP.
  • Blake | Sat, 07 Nov 2009 21:07:36 UTC

    "Stupid BGP Routing Tricks" ?
    
    Anycast routing has been in use for a very long time and has proven to be a sound way of handling client traffic with the closest servers.
    
    Notably by the root-zone DNS servers...
  • cs | Sat, 07 Nov 2009 21:05:39 UTC

    As far as policy goes, wouldn't views in BIND be considered policy?
  • Bill Bogstad | Sat, 07 Nov 2009 20:59:25 UTC

    As a general rule, your suggestion that CDNs' unusual usage of DNS increases transaction time MIGHT be true for the DNS portion of the conversation. (Some CDNs regionalize DNS by using sub-domains, which is why I say MIGHT.) However, I think you are paying way too much attention to DNS versus all the associated network traffic. You yourself start this article by pointing out that almost every DNS transaction results in another TCP transaction. That TCP transaction typically takes many orders of magnitude more time and bandwidth to complete than even an uncached CDN-based DNS transaction. Regionalizing that TCP transaction saves far more time/bandwidth-miles than the more expensive DNS transaction costs. At least, that's what figures out of Internet performance companies like Keynote Systems seem to indicate.
    As for using pseudo-random methods (DNS round robin?), my (now dated) experience is that it doesn't come close to working as well as active load management.
  • Alex Besogonov | Sat, 07 Nov 2009 20:00:26 UTC

    I'm curious, why do you hate CDNs so much? 
    
    They solve a real problem - the requirement to push data as fast as possible from a source as close to the consumer as feasible. Like not downloading a file from a server in Japan to a node in Germany.
    
    It's not as if there are many alternatives to region-based DNS. You can also do Stupid BGP Routing Tricks for load-balancing, but is that much better? It's also possible to do load balancing at upper levels (like using HTTP redirects to a regional server), but that has its own drawbacks.
  • John Stracke | Fri, 06 Nov 2009 14:27:34 UTC

    "a higher DNS request rate (perhaps leading to higher revenue for CDNs that charge by the transaction)" -- I used to work on Akamai's DNS server, and I can assure you we did not consider DNS requests to be a revenue stream.  In fact, if anybody had suggested trying to increase DNS traffic, we would've laughed at them, because we were already working hard to keep up with the traffic we had.
    
    (I can agree with you on the rest of it, though.  I never did get comfortable with the Akamai attitude that DNS was just another request/response protocol, which we could manipulate to get the client to do what we wanted.)