Factsheet: DNS attack

by Kieren McCarthy on March 8, 2007

Today ICANN posted the first [pdf] in what we hope will be a series of factsheets that will help explain various elements of ICANN’s mission as well as wider, technical aspects of the Internet.

The aim is very clear: many of the issues that affect the Internet are quite technical and as a result are not well understood. Since the Internet is of such importance, and since ICANN believes that the best decisions over the Net’s future path will derive from wide and open discussion among all interested parties, the hope is that a series of factsheets written in plain English will improve that discussion and encourage involvement.

This factsheet hopes to serve several different ends: provide some timely information on the 6 February 2007 attack on the root server system; correct some misunderstandings about the root servers; act as an information resource for future referral; explain how the Internet is protected and by whom; outline what the attack was and how and why it happened; and lastly, look forward to what can be done to help tackle such attacks in future.

The factsheet has been produced by ICANN, and has ICANN’s masthead on it, but the information has been compiled and written with the wider Internet community in mind and as such we are releasing it under a Creative Commons licence. This means people are free to use it, copy it, add to it, do whatever they want with it, so long as a credit is given to ICANN, so long as people don’t use the material to make money, and so long as whatever changes are made by others are also released under a Creative Commons licence.

Creative Commons License

In other words: spread it as far and as wide as you like. If people want to make different-language versions of the factsheet, we would be delighted to receive copies.

We have also compiled a tentative list of other topics we hope to cover in future, but if you feel particularly strongly that an area in which ICANN can claim a legitimate interest needs to be covered, please do add it in a comment below.

[Download] Factsheet: DNS attack


George Kirikos 03.08.07 at 8:33 am

The timing of the report is perfect, as I just posted:


Every time I see these reports of “attacks”, my wallet starts to tingle, as the scaremongering seems to always result in later demands for “more money”.

I’ll take issue with 1 specific example of disinformation. On page 2, it says “In theory, if even one of the 13 root servers is up and running, then the Internet will continue to run unhindered as the directory will still be visible to the network.”

This is very misleading. Indeed, due to caching, the internet can function with only minor hiccups if ZERO root servers are up and running. The root zone file is very tiny. You can see a copy of it at:


How long did that file take to load? Not long, since it is only 68 KBytes in size! And, if you ignore all the minor banana republic countries and TLDs, there really is much less “important” information in that 68 KByte file (i.e. due to Zipf’s law, see:


i.e. for most people, .com, .net, .org, .gov, and a few major ccTLDs matter most).

What’s really important is what happens when the “cache” is stale (i.e. the time-to-live (TTL) of the data has expired). Using a telephone book analogy, the “TTL” determines how often you should check whether a phone number has changed. DNS itself can be thought of as a hierarchical directory of phonebooks, i.e. the root is the directory of addresses of where to find the white pages for each country (or city), all the way down to the local city phonebook, which is typically published once per year.

Of course, with DNS, the “TTL” is typically a lot less than the 1 year of physical phonebooks. However, this notion that the internet “breaks” if zero root servers are available is like saying that the telephone system will break if you don’t get a copy of this year’s phonebook.
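To make the TTL mechanics concrete, here is a toy sketch in Python (this is an illustration of the idea, not BIND’s actual implementation, and the record value is just an example):

```python
import time

# A minimal TTL cache: each answer is stored with an expiry time and is
# only reused while it is still fresh. Once the TTL expires the entry is
# treated as stale and a fresh lookup would be required.
class TTLCache:
    def __init__(self):
        self._store = {}  # name -> (value, expires_at)

    def put(self, name, value, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._store[name] = (value, now + ttl_seconds)

    def get(self, name, now=None):
        """Return the cached value, or None if absent or expired (stale)."""
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        return value if now < expires_at else None

cache = TTLCache()
# Illustrative record: a delegation cached with a one-day TTL at time 0.
cache.put("org.", "NS a0.org.afilias-nst.info.", ttl_seconds=86400, now=0)
print(cache.get("org.", now=3600))    # fresh: still within the one-day TTL
print(cache.get("org.", now=90000))   # stale: TTL expired, must re-query
```

The phonebook analogy maps directly: `ttl_seconds` is how long you trust last year’s listing before checking again.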

An expired cache is similar to using the 2006 phonebook instead of the current 2007 one. If you look up my phone number in 2006’s phonebook, or even 2005’s white pages, you’ll be fine, as the number is the same for me in 2007. For a few people, though, the number will be incorrect. Thus, in a DNS context, having expired cache data need not be very costly. For example, the IP address for ICANN’s website has been the same for the past 2 years:


IP History: 1 change. Using 1 unique IP address in 2 years.

I suspect you’ll find ICANN’s website at the same IP address tomorrow, and the day after that too… these things don’t change very often.

For its nameservers: NS History: 6 changes. Using 3 unique name servers in 6 years.

Our pals at VeriSign:


IP History: 1 change. Using 1 unique IP address in 2 years
NS History: 2 changes. Using 2 unique name servers in 5 years.

So, what *really* matters is how often the data in the root zone file changes. That will determine how much damage occurs if a stale cache is used (i.e. like the damage that would occur if you used 2006’s phonebook instead of 2007’s). I suspect most TLD operators are not constantly renumbering their networks, so the root zone file should be changing very slowly over time; ICANN should provide data to prove otherwise. Indeed, if the root zone was static and unchanging, we’d have no need for root servers at all. Since memory and hard disks are cheap these days, caching is *very* cheap (68 KB is trivial); indeed, one can have a basically infinite cache (or multi-gigabytes at the very least).
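If someone did want to compile those change statistics, the mechanics are simple: take two dated snapshots of the published root zone and diff their record sets. A rough sketch (the records and snapshots below are inlined and purely illustrative):

```python
# Measure root-zone churn between two snapshots by comparing record sets.
# In practice the two strings would be read from dated copies of the
# published root zone; these tiny inlined zones are hypothetical examples.
def zone_records(text):
    """Parse 'name ttl class type rdata' lines into a set of record tuples."""
    records = set()
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # drop comments and blank lines
        if line:
            records.add(tuple(line.split()))
    return records

old = zone_records("""
com.  172800  IN  NS  a.gtld-servers.net.
org.  172800  IN  NS  tld1.ultradns.net.
""")
new = zone_records("""
com.  172800  IN  NS  a.gtld-servers.net.
org.  172800  IN  NS  a0.org.afilias-nst.info.
""")

added, removed = new - old, old - new
print(len(added), "added;", len(removed), "removed")
```

Run over a month of snapshots, this would answer exactly the question posed above: how fast does the root actually change?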

The 2nd prong is distribution of the root zone file. Back in the early days of the internet, there was no BitTorrent. There was no RSS. There is no reason that the 68 KB file at the heart of the internet could not be distributed to the biggest ISPs using alternative measures. E.g. do you really think that AOL couldn’t get a copy of the 68 KB root zone file (to serve its 20 million users) through some “push” mechanism like RSS or even email, or “pull” methods like FTP or BitTorrent? Heck, you could even have a dialup modem distribute the 68 KB file to AOL, just like the Fidonet BBS days of the 1980s. The same goes for other big ISPs.

The reliability of torrent networks in serving up movies and music shows that they’re highly scalable and resilient to attacks (if they were easily attacked, I assume the MPAA and RIAA would have taken them down by now). How difficult would it be to serve up 68 KB files (signed appropriately, to ensure authenticity) to thousands, if not millions, of users? Too trivial to ponder, if there’s a will to do so. What percentage of the world’s internet users would be represented by the top 1000 ISPs? I suspect more than half, and if not, it wouldn’t be hard to scale this to the top 10,000 or 100,000. How many millions of people receive multi-megabyte Windows or Mac operating system security updates daily, without incident?
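The “signed appropriately” step is the crux of any such scheme: however the file arrives, you verify it before installing it. A real deployment would use a proper signature (PGP, or DNSSEC once deployed); as a minimal stand-in, here is a sketch that compares a SHA-256 digest obtained over a trusted channel:

```python
import hashlib

# "Verify before install": reject any fetched copy whose digest does not
# match one obtained over a trusted channel. A bare hash comparison is a
# simplified stand-in for a real signature check (e.g. PGP or DNSSEC);
# the zone content here is a hypothetical one-record example.
def verify_and_install(blob: bytes, trusted_digest: str) -> bool:
    if hashlib.sha256(blob).hexdigest() != trusted_digest:
        return False  # reject: tampered or corrupted copy
    # ...here one would atomically write blob to the local root-zone file...
    return True

zone = b"com. 172800 IN NS a.gtld-servers.net.\n"
good = hashlib.sha256(zone).hexdigest()
print(verify_and_install(zone, good))             # accepted: digest matches
print(verify_and_install(zone + b"evil.", good))  # rejected: digest differs
```

With verification in place, the transport (FTP, BitTorrent, email, even dialup) no longer needs to be trusted, only available.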

Instead of fear-mongering and trying to justify its exploding $30+ million annual budget:


with pretty graphs, ICANN should talk about real solutions. Real solutions don’t put caviar on the table for lazy bureaucrats, but they definitely benefit the public through lower costs and greater reliability.


George Kirikos

Stéphane Bortzmeyer 03.08.07 at 8:58 am

George Kirikos, amid disparaging comments about “minor banana republic countries”, suggested that every ISP keep a copy of the root zone file, to be able to sustain a long failure of the root name servers. (Or even to be able to work without root name servers.)

The issue with this scheme is not the size of the root zone file (which is indeed quite small because of ICANN’s very restrictive policies). The issue is ensuring it is up to date. I just witnessed a name server which was using this method and still serving the root zone file of 2004! “.eu” or “.mobi” were not in it. The cron job which refreshed it simply stopped working and nobody noticed.

Having zillions of stale copies of the root zone file spread in many places is certainly not going to help when debugging DNS problems.

If we examine the situation with bogon lists or BGP filters, we can worry a lot: stale data used for many years without a way to change it is an actual plague in many organizations.

George Kirikos 03.08.07 at 9:59 am

Of course fresh data is preferred to stale data; that’s obvious. Presenting it as though those are the only two choices is misleading, though. They are not the only choices.

If ALL the root zone servers went down, and one had the choice of saying “the internet is down for today, stop using it” or had the alternative of “let’s use the zone file from 30 minutes ago or 2 hours ago or yesterday, or even 2 days ago, that we had cached”, most folks would go with the latter. Furthermore, if everyone knew that the root servers were down, most TLD operators would be smart enough to decide that this was not a great day to renumber their networks! :) And if folks knew that the zone file used by the root servers only changed once in the past month, yet one’s local copy of it was last cached 7 hours earlier, one knows that it doesn’t matter if the root servers are down, because even if they were up, their query results would be 100% identical.

If ICANN wants to publish how often and by how much the root zone file changes, and which TLDs are changing the most, to counter my argument, they should go ahead. Those are real world stats that would be very educational.

Indeed, if part of ICANN’s mission is to promote security and stability, this would be consistent with having a relatively small zone file that is not dynamically changing every 10 seconds at the whim of the operator of .greed or .anothertldwedontneed or .myvanitytld. This means FEWER TLDs, not more. If there were thousands of TLDs, we’d have a large root zone that is harder to mirror/distribute and that could get stale very quickly.

George Kirikos 03.08.07 at 10:15 am

By the way, the fact that “nobody” noticed that the zone file hadn’t been updated since 2004 strengthens my argument. If one didn’t add the new TLDs (like .mobi or .eu), probably one wouldn’t have noticed (i.e. had no major problems) even longer. A 2004 telephone book is almost as good as a 2007 one.

Kieren McCarthy 03.08.07 at 10:22 am

You have to be kidding me, George.

How on earth do you connect a factsheet about the 6 Feb DNS attack to some ill-defined conspiracy about control of the root zone?

This is about giving people a clear, concise explanation of how these foundational systems work, and you’re running around with an aluminium hat on, rambling incoherently about people inducing fear while munching on caviar.

Why couldn’t you just have said: “I think the next factsheet you do should be on IANA”?


George Kirikos 03.08.07 at 10:50 am

What are you talking about, Kieren? I talked about coming price increases. “Ill defined conspiracy about control of the root zone” is indeed VERY ill-defined — it is UNDEFINED, sheesh. I’ve never disputed ICANN’s control of the root zone.

As to scaremongering to justify price increases or other things people want, if you’ve not seen it before, you’ve not been watching closely enough. I gave the example of WLS earlier. Let’s see, you actually wrote an article about this in The Register in 2002:


where in an aside you wrote “And, that VeriSign is still under investigation for using underhand scare tactics to force people to renew domains with itself over competitors.” That’s your example of VeriSign’s use of scare tactics (my emphasis was added). Scare tactics are common in this industry, from suggesting people need “Domain Privacy”, to the games played by those higher in the food chain.

Don’t be so naive as to think that these “attacks” aren’t going to be used at some later date (sooner, rather than later) to attempt to justify more money, ultimately from consumers, while those employing the scare tactics laugh all the way to the bank. Scare tactics were used by registries to justify “presumptive renewal”, i.e. essentially permanent ownership of their TLDs. How many billions of dollars did consumers lose due to that scaremongering, that somehow the world would explode if we didn’t have competitive tendering of the gTLD operations?

When the 7% .com and 10% .net/info/biz/org price increases come along, “higher registry costs due to DDOS attacks” will most certainly be their main argument.

If you want to write another factsheet, why don’t you focus on providing some data as to the frequency and nature of root zone file changes, as mentioned in other comments? Or, heaven forbid, try to get some dollar figures to see how much DDoS attacks are costing. Webhosting companies get DDoS attacks every day, yet I see the price of webhosting FALLING, not rising, unlike domain names. I’m sure GoDaddy or other registrars who offer webhosting can educate you. Indeed, many webhosts offer DDoS protection at very low cost, if not as a free add-on, these days, due to the economies of scale and rapidly falling technology costs they’ve seen for anti-DDoS solutions.

John Crain 03.08.07 at 11:54 am

Somebody noticed,

What percentage of out-of-date data is fine with you, George? Is it OK if the TLD under which your website, e-mail, etc. operates no longer resolves for portions of the Internet?

Yes, I agree cache is an important factor here. What the factsheet alludes to is that, as long as one of the root servers is answering effectively, people can update that information when their cached data expires.

You claim that the root-zone changes once a month?

The root-zone is publicly available so my suggestion to you would be to go do some research on that before making such statements.

The zone is published twice daily. Changes to nameserver records and related glue records are a regular thing.

George Kirikos 03.08.07 at 12:44 pm

[One of my past comments (replying to Kieren) was censored (labelled as “spam”) and still hasn’t appeared yet, so who knows if this one will appear either….]

John: I didn’t claim that the root zone changes once a month; I was presenting a hypothetical: if the root zone had last changed X days ago, but one’s cache was updated Y days ago, and Y was less than X, then the results from using the cached copy would be 100% identical to the “live” copy, even if we weren’t within the actual TTL specified by the zone file. Sorry if I confused anyone by using made-up numbers instead of “X” and “Y”.
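The X/Y argument can be stated as a one-line predicate (a sketch of the logic, not a protocol):

```python
# If the root zone last changed X days ago and our cached copy was fetched
# Y days ago, then whenever Y < X the copy was taken AFTER the last change,
# so it is byte-identical to what a live root query would return, even if
# the record TTLs have since expired.
def cache_matches_live(days_since_last_change: float, days_since_fetch: float) -> bool:
    return days_since_fetch < days_since_last_change

# Zone last changed 30 days ago; cache refreshed 7 hours (~0.3 days) ago:
print(cache_matches_live(30, 7 / 24))  # cached answers identical to live ones
# Cache predates the last change, so stale data is possible:
print(cache_matches_live(0.5, 2))
```

The TTL is a conservative bound; this predicate is the reason an expired cache can still be perfectly correct.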

I don’t get paid to compile zone file diffs (if some researcher or staffer has the time, be my guest), but it should be fairly evident that most major TLD operators are not changing their nameservers every 12 hours… and if they are, they shouldn’t be running that TLD. One can use domaintools.com to see how often nameserver changes are made by corporate websites (I already gave examples for icann.org and verisign.com; as a third, http://whois.domaintools.com/godaddy.com reveals that GoDaddy had 2 unique nameservers in the past 3 years), i.e. at the 2nd level, below the TLDs, and I’m confident the 1st level (i.e. the TLDs themselves, which appear in the root zone file) changes even less. It would be an odd network indeed if things changed more frequently as we moved UP the hierarchical DNS tree; that’s the opposite of stability. The most frequent changes will be at the bottom levels, not the top.

I’m glad we agree on caching. Of course, if one root server is operating, the cached data can be updated. But suppose they’re all down? Is it the end of the world? Probably not: if the stale cache were used (i.e. like using a 2004 phonebook instead of a 2007 phonebook), the odds are pretty good that you’ll still reach the person at the published number.

If BitTorrent, RSS, FTP, or other technologies are used, one could make the system even more resilient. E.g. one can imagine a version of BIND or similar software, caching the root, with pseudocode like:

“If all root servers are down, try to get a fresher copy via FTP; if FTP fails, try HTTP; if HTTP fails, try BitTorrent; if BitTorrent fails (then the internet is probably really messed up!), try dialing the secret phone number to a hypothetical dialup system, given only to big ISPs; if the dialup fails, keep trying and notify the administrator.” Indeed, if the diffs were small enough, one could even publish them in newspapers (like the WSJ).
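That pseudocode translates into a few lines of Python. Every transport here is a hypothetical stand-in (nothing like this is specified for root-zone distribution); the point is only the ordered-fallback structure:

```python
# Ordered-fallback fetch: try each transport in turn, alert the admin and
# keep serving the cached zone if every one of them fails.
def fetch_root_zone(fetchers, notify_admin):
    """Try each (name, callable) transport in order; return the first copy."""
    for name, fetch in fetchers:
        try:
            return fetch()
        except Exception:
            continue  # this transport failed; fall through to the next one
    notify_admin("all transports failed; keep retrying, serve cached zone")
    return None

def failing_ftp():
    raise IOError("ftp unreachable")  # simulated outage

def working_http():
    return b"com. 172800 IN NS a.gtld-servers.net.\n"  # illustrative record

alerts = []
zone = fetch_root_zone([("ftp", failing_ftp), ("http", working_http)],
                       alerts.append)
print(zone is not None, len(alerts))  # fetched via HTTP, no alert raised
```

Each fetched copy would of course still be signature-checked before use, whichever transport delivered it.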

Kieren McCarthy 03.08.07 at 1:09 pm

Hey George,

My point was that you took a straight piece about the DNS attack and the root server systems to somehow launch into a wild and barely connected future conspiracy — extrapolation on acid.

I should apologise here: you fell foul of our spam removal software by writing too many comments in too short a period of time. This is classic spamming behaviour, and the software, once it recognises an offender, works retrospectively and removes other comments from that IP address.

So you triggered something and the software killed your comments but I’ve been through it manually and they should all now be back up.


John Crain 03.08.07 at 1:16 pm

Whether it is the end of the world once all root servers are down may depend on your viewpoint. I’m sure that as operator of one of those servers my world will be pretty miserable.

The reason we have multiple servers and the anycast deployments is exactly to prevent this scenario.

There is a theory that a signed zone (DNSSEC) may make local copies much more practical, although the issue of ensuring that up-to-date data is used is still critical.

If there is a well-publicised alternative mechanism, it will be just as subject to attack as the servers themselves, and likely less easy to harden.

The good news from the recent attack is that even though gigabytes of extra requests were being sent to the servers, the anycast solutions put in place by many of the operators were effective.

If you believe there is a better protocol or method for improving resiliency, then I would suggest taking the time to write it up in a document and publishing it through the RFC process.

Simon Waters 03.09.07 at 5:44 am

The suggestion to move all servers to Anycast is not a logical conclusion to draw from the report.

Against this kind of attack Anycast is clearly very useful.

Had the attackers attacked, say, the Anycast routing protocol in some way rather than the root DNS servers, the result may have been quite different, and I’d now be arguing we should keep the non-Anycast servers because they are useful against such attacks.

It is like concluding that because aircraft won the Battle of Britain, Britain no longer needs a Navy. Not having Anycast for some root servers is a diversity issue. This attack didn’t suggest that diversity is bad. That will come when someone compromises one of the root servers ;)

I still believe that the root-server model is basically flawed. The root zone is a 20KB file compressed. If every caching name server were to grab a whole compressed copy of this file from the root servers every 2 days, and if there were 4 billion recursive name servers, then the total traffic would be similar in magnitude to this attack (4Gbps). Of course we have protocols that allow us to send only what has changed (ixfr, or rsync, spring to mind), when it has changed. With that, and realistic figures for the number of recursive servers, one could retire most of the current root servers.

With a digitally signed root zone file, we don’t even care how it gets to the recursive servers. ICANN could get away with a copy of GNUPG, a selection of dial-up accounts (in case one is down or attacked), and a peer-to-peer file transfer system; that would be sufficient to fulfil the technical requirement of distributing the root zone file.
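The 4Gbps figure checks out as a back-of-the-envelope calculation (taking the deliberately extreme assumption of 4 billion recursive servers at face value):

```python
# Simon's worst-case bandwidth estimate: every recursive server fetches a
# whole 20 KB compressed root zone every 2 days. The 4-billion-server
# figure is his hypothetical upper bound, not a real count.
servers = 4_000_000_000
zone_bytes = 20 * 1024
period_seconds = 2 * 24 * 3600

gbps = servers * zone_bytes * 8 / period_seconds / 1e9
print(round(gbps, 1))  # roughly 3.8 Gbps, the same magnitude as the attack
```

With incremental transfer (ixfr/rsync) and a realistic server count, the number drops by several orders of magnitude, which is the point of the argument.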

Slaving the root zone on recursive name servers is a performance win, and could be a security win if ICANN put a digitally signed copy of the zone somewhere accessible, since then people wouldn’t have to rely on the integrity of the many root server operators, their servers, staff, etc. Just the integrity of ICANN, their zone file, their ability to keep the private keys private, and strong cryptography. Much of which we have to rely on anyway for them to correctly update the existing root zone.

George Kirikos 03.09.07 at 6:58 am

Well said, Simon. I was thinking about this further last night, and assuming we keep the number of new TLDs reasonably small, so that the zone file doesn’t explode to several megabytes in size (sorry, new TLD advocates, we don’t need 5000 new TLDs), conceivably root zone updates could be encoded in brief satellite bursts, using frequencies like those of the global positioning system (GPS).

Hackers likely have little chance of taking down the global telephone system (although as it moves to IP over time, you never know). But global satellites sending out signed 20 KB bursts every few hours are probably orders of magnitude harder to take down (unless the Chinese use their latest gizmos to knock down all the satellites, but if they did we’d have bigger things to worry about, like global thermonuclear war). Shortwave radio would be another alternative (whatever’s most cost-effective).

By the way, was it wise for ICANN to explain in so much detail (i.e. the large packet sizes that were dropped) how the attack was blocked? Why not just publish a hacker’s guide, “Yes, guys, please use random smaller packet sizes next time, to make filtering less trivial.” Security by obscurity isn’t great either, but perhaps be a little less giving until all the anycast systems are rolled out.

Search Engines W 03.10.07 at 1:24 am

Could you consider a future blog post crediting the people who were responsible for implementing these safeguards, and describing how the safeguards came to be discussed and adopted?

This is of historical importance and should be documented


John Crain 03.10.07 at 3:59 am

My suggestion to those who believe that they have solutions to improve the way the protocols work is to follow the practice that turns any idea into reality (or not):

Write them down in a well thought through document that stands up to review.

The place for development of Internet standards of this type is the Internet Engineering Task Force (http://www.ietf.org).

A blog page is definitely not the place for documenting such things.

There are two relevant working groups at the IETF:

In the “Internet” area there is DNS Extensions:


In the “Operations and Management” area there is DNS operations:


I look forward to reading the drafts when they are written.

I for one also look forward to seeing more people contributing. Remember that well thought out and documented solutions tend to get the most traction.

Kieren McCarthy 03.10.07 at 4:45 am

There is traditional sensitivity about providing any information that might be used in a future attack, but yes I agree that this information is interesting and important.

I think a big chunk of it is that when you are doing a job you don’t think to write down what you actually do in that job. Just getting on with it is enough.

I will ask about, see if the root server operators are willing to compile this – even if the information can’t be released until a future date.


Dr Eberhard W Lisse 03.12.07 at 5:43 am


and you run which TLD?

