"I designed the DNS so that the namespace could be anything you wanted it to be"
The domain name system (DNS), often referred to as the phonebook of the Internet, is a fundamental part of the Internet’s infrastructure. Created by Dr. Paul Mockapetris in 1983 while he was working at the University of Southern California’s Information Sciences Institute, it is, of course, that distributed directory whose primary role is to convert human-readable host names, such as www.welcometothejungle.com, into machine-readable IP addresses, like 188.8.131.52. It was adopted by the Internet Engineering Task Force (IETF) as one of the original Internet standards in 1986.
Talking to Behind the Code, Mockapetris considers the current state of his innovation and its future in a world where countries are seeking to build their own intranets. Here, he shares his thoughts on the system’s security vulnerabilities, how to deal with malware, what he hopes a DNS 2.0 will look like, and how blockchain technology can be used with the system.
Please could you give us a brief history of the DNS and how it came about?
In the early days of the Internet, the host and IP address information was stored in a text file [HOSTS.TXT] that was maintained by the Stanford Research Institute [SRI]. When a new computer joined the network or an old computer modified its details, people had to contact the SRI and have this file manually updated. Anyone who needed the IP address of a computer also had to contact the SRI and get the updated file. This presented a major problem, as it would close at five o’clock on weekdays and wasn’t open on holidays. The DNS came about as a way to solve this problem with a distributed database.
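The pre-DNS model described above can be sketched in a few lines: a single flat file mapping names to addresses, maintained centrally and refreshed by hand. This is a minimal illustration only; the format is simplified from the real RFC 952 HOSTS.TXT, and the entries are invented.

```python
def parse_hosts(text):
    """Parse 'address name [aliases...]' lines into a name -> address map."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line:
            continue
        address, *names = line.split()
        for name in names:
            table[name.upper()] = address      # host names were case-insensitive
    return table

# A HOSTS.TXT-style table, centrally maintained (addresses are made up)
hosts = parse_hosts("""
10.0.0.51  SRI-NIC
10.1.0.32  USC-ISI ISIA
""")

print(hosts["ISIA"])   # -> 10.1.0.32
```

Every lookup on every host depended on having a current copy of this one file, which is exactly the bottleneck the distributed DNS removed.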
A lot of the design, I think, came out of my background. I did my PhD at UC Irvine with Dave Farber and worked for the distributed computing project. I had previously worked at IBM on virtual machine technology and cluster computing, so I had a lot of exposure to ideas about how to organize multiple-machine systems where the administration was distributed.
But distributed computing wasn’t an entirely new concept. There was already an idea about distributing information across different computers—for example, the people at Xerox had their proprietary system called XNS [Xerox Network Systems]. The thing about the DNS was that it was big enough to be more than any one manufacturer and it could take on certain jobs, such as being able to route email between different kinds of computing systems.
Many people think that the sole objective of the DNS was to go from names to addresses, but it was designed to be much more general-purpose than that. There are about 60 or 70 different uses that people have come up with. The main idea was to distribute authority so that you could get your domain and manage it without having to go back to some central authority whenever you wanted to change it. Furthermore, you could create subdomains under that. So, universities could get their domain name and then create separate subdomains for different departments, and companies could do it for different products. However you wanted to use your names, you could.
We now have more than 1,500 top-level domains. When you invented the DNS, did you think we would have so many?
Although I invented it and designed it so that it could be flexible, the ability to add new top-level domains [TLDs] has always been a game of political football. In the early years of ICANN [the Internet Corporation for Assigned Names and Numbers], they thought it would be dangerous to add a whole lot of new TLDs. I didn’t really see why, but I agreed that it might make sense to go slowly and be careful.
What eventually happened was that the original policy, which I helped craft, said that every country could have its own TLD. So, after a while, we had about 200 country codes. It became pretty clear that adding new TLDs isn’t as harmful as some people had feared.
There are still lots of issues about who owns a particular TLD. For example, should Amazon get its own TLD while the countries in South America through which the Amazon River runs claim ownership? So there’s a bunch of politics there.
I designed the DNS so that the namespace could be anything you wanted it to be, and the choice was open to the people who were going to provision it.
Tim Berners-Lee [inventor of the World Wide Web] and many others have called the design of the URL clumsy because we have host.domain—welcometothejungle.com—where it’s hierarchically going from lower to higher. And then we have the path component—/en/collections/behindthecode—which goes from higher to lower. We also use a mix of periods and slashes. So Berners-Lee proposed that domain/host/path—that is, com/welcometothejungle/en/collections/behindthecode—would be a better design. What are your feelings about that?
I designed it this way for autocomplete. My vision at the time was that people would want to do autocomplete or they might want to just type part of the domain name and have it completed in a search list of the local environment. So it would not make sense to have the country code first. It had to be the least-significant part if you’re going to have any kind of reasonable search list or autocomplete. Imagine if we were to type “com.” and then wait for autocomplete. Out of 160 million choices, there would be a pretty long pull-down menu. So in the absence of any agreement or, in my view, cogent arguments about why the other way around made more sense, I did it that way.
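The autocomplete argument can be made concrete with a toy prefix-completion function. The names below are illustrative only, but they show why typing the distinctive host label first narrows the candidates immediately, while a domain-first ordering leaves every .com name sharing the same prefix.

```python
def complete(prefix, known_names):
    """Return the known names starting with what has been typed so far."""
    prefix = prefix.lower()
    return sorted(n for n in known_names if n.lower().startswith(prefix))

names = ["welcometothejungle.com", "wikipedia.org", "example.com"]

# Least-significant-first (the DNS order): one keystroke run narrows fast
print(complete("welcome", names))      # -> ['welcometothejungle.com']

# Domain-first ordering ("com.host"): the prefix is shared by every .com
# name, so the pull-down menu stays enormous
reversed_names = [".".join(reversed(n.split("."))) for n in names]
print(complete("com", reversed_names)) # -> ['com.example', 'com.welcometothejungle']
```

With 160 million .com names, the second list would be a very long menu indeed.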
The DNS is not perfect. It suffers from many security vulnerabilities, such as spoofing, flooding, and DDoS attacks. You yourself have said that more time is spent on protecting against threats than doing what it was meant to do. Which of these vulnerabilities concerns you most?
The fact that the DNS doesn’t have much security was pretty much intentional at the start, because the problem was to get people to accept the whole idea of a distributed system, which at the time was quite controversial. People would say, “If I can’t access the network, I can’t get work done.” These things have changed as our view of computing has changed, and I think that all of those issues you mention have been addressed over time. But the most frustrating thing to me is that people haven’t figured out a good solution for the DDoS problem yet.
The DNSSEC extension actually helped with a lot of the security problems plaguing the DNS, and it seems promising. How much of the Internet is protected by DNSSEC today?
That depends on whether you count the number of names that are signed up or the number of people using it. I think it would be somewhere less than half, no matter how you do it.
But suppose that DNSSEC does give you security—I’m not sure whether sending all of my DNS traffic to Google or Cloudflare DNS servers [which perform DNSSEC by default] is what I really want if I’m interested in preserving my privacy.
Another big evolution that’s happening today is with the DNS over HTTPS, or DoH. This creates an interesting phenomenon because browsers are bypassing the DNS in the user’s computer, firewall, and ISP [Internet Service Provider]. I think moving the DNS traffic into the web protocols could have more of an organizational impact.
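It is worth noting that DoH does not change the DNS message itself: RFC 8484 carries the standard DNS wire format over HTTPS. As a sketch, here is a minimal query message built by hand with only the standard library; a DoH client would POST these bytes with Content-Type application/dns-message (or GET them base64url-encoded).

```python
import struct

def build_query(name, qtype=1, qid=0x1234):
    """Build a DNS wire-format query (qtype 1 = A record, class IN)."""
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

msg = build_query("welcometothejungle.com")
print(msg.hex())
```

The same bytes work over classic UDP port 53, over DoT, or inside a DoH request, which is the organizational point: the transport moves into the web stack, but the protocol underneath is unchanged.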
Around last December, Russia announced that it is building a sovereign intranet. One of the key components that it needs to develop to do this is a DNS infrastructure of its own. How feasible is this?
This has been a phenomenon we’ve seen many times. When I do DNS consulting for large companies, I usually tell them the first step is probably to figure out how their company’s operations will continue to work if the external DNS is not available or is compromised in some way, because a modern network needs DNS to work. So if you want to be in control of your own fate, you have to make sure that you figure this out. I think there are lots of large enterprises who run their own internal DNS service and don’t want to be reliant on anything on the outside. There are also those countries that may not choose to trust ICANN and want to make sure that their own internal systems will continue to function.
The technology is out there, but there is a fair amount of administration and you have to deploy servers, and you have to test them, and so forth and so on. We’ve seen prototypes of this done already. The open-source DNS that’s out there can be used to do what I think the Russians want to do. You can set up your own roots or you can accept ICANN’s root data and do a new set of digital signatures or a new distribution mechanism. So it’s just about getting something in place. It’s not a small effort, but it’s well within the capabilities of the Russians to do it.
But then you’re also going to have to figure out whether that means people in Russia won’t be able to send email outside Russia. You probably want to have some connections continue with the outside world, so how do you want to support that? I would think that, from a business point of view, having connectivity is important. Even the great firewall of China doesn’t cut off connectivity, it seeks to control it.
A lot of people don’t realize that clicking on random URLs in their emails is unsafe. What can we do to solve this problem so that they don’t have to worry about what they’re clicking on?
Here’s my favorite example. Once upon a time, if you clicked on the website link for The New York Times, you were exposed to malware. And the reason is that if you ask for the front page of The New York Times, there are ads inserted, and they have been inserted by ad brokers who take them from whoever wants to pay for them. And in some cases, it’s people who want to spread malware. The little ad links are one of the famous ways to spread malware, and it’s not something I’ve ever clicked on, but it’s included. So I have a certain amount of hygiene that I follow about what I click on, but in reality, I really rely upon additional filtering behind the scenes to keep me out of harm’s way. So I have blacklists.
I’m currently a director of a company called ThreatSTOP, which sells filtering services to people. We basically have a DNS server that you can load onto your laptop and then you can pick the filtering you want. You know, the digital watch of today has more computing power than the root servers back in 1983 did, so there’s no reason why you can’t run your own DNS server. The only real issue is that you would like to have an easy-to-use administrative interface.
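The core of such filtering is simple to sketch: before resolving a name, check it, and each of its parent domains, against a local blocklist. This is an illustrative sketch only, with made-up names, not ThreatSTOP’s implementation.

```python
def is_blocked(name, blocklist):
    """True if the name or any parent domain is on the blocklist."""
    labels = name.lower().rstrip(".").split(".")
    # Check cdn.malware-ads.example, then malware-ads.example, then example
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

blocklist = {"malware-ads.example"}          # hypothetical threat-feed entry

print(is_blocked("cdn.malware-ads.example", blocklist))  # -> True
print(is_blocked("welcometothejungle.com", blocklist))   # -> False
```

A resolver applying this check simply refuses to answer for blocked names, which stops the malware link before any connection is made.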
The other thing is that Cloudflare hosts content for far-right US organizations, militant groups, torrent sites, and sites that spread malware. No matter what your political philosophy is, Cloudflare is hosting somebody that you find really repugnant. So using them to filter your DNS doesn’t seem to me to be a good idea, because they’re not going to filter out content that they’re serving themselves.
Why isn’t this DNS filtering done by default by the ISP?
A lot of people view this as censorship. Once upon a time, people thought that spam filtering was dangerous because censoring email was evil, and so forth. Today, I don’t think there’s anybody who uses email that doesn’t use such filtering mechanisms. Likewise, I don’t think anybody should be using DNS without having filtering mechanisms.
The key issue is whether or not you can control what filtering gets done. ThreatSTOP collects hundreds of threat feeds, including proprietary feeds such as Cloudflare, Spamhaus and Farsight Security, but lets you choose which get deployed, and lets you add your own rules. It could be that you just take the filtering list from Spamhaus and put in whatever exceptions you want.
One filtering list that’s kind of interesting is newly observed domains or newly observed hosts. That’s just a list of all the domains that have been created only in the previous half-hour, day, or five days. If you say, well, I won’t talk with a domain that’s less than five days old, that cuts off all the people who obtained domains for hacking by fraudulently using a credit card and have had their domains taken down because the credit-card charge bounced. You can actually improve your safety just by not talking to those. Whatever filtering it is, I think the end user has to be in control or have the ability to control it.
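The newly-observed-domains policy amounts to one comparison per lookup: refuse any name first seen less than N days ago. The sketch below uses invented first-seen timestamps purely for illustration.

```python
from datetime import datetime, timedelta

def allowed(domain, first_seen, now, min_age_days=5):
    """Allow only domains first observed at least min_age_days ago."""
    seen = first_seen.get(domain)
    if seen is None:
        return False                 # never observed -> treat as brand new
    return now - seen >= timedelta(days=min_age_days)

now = datetime(2020, 3, 1)
first_seen = {
    "old-reliable.example": datetime(2015, 6, 1),
    "fresh-fraud.example": datetime(2020, 2, 28),
}

print(allowed("old-reliable.example", first_seen, now))  # -> True
print(allowed("fresh-fraud.example", first_seen, now))   # -> False
```

The trade-off is a small number of false positives on legitimate new sites, in exchange for cutting off the short-lived domains that fraud campaigns depend on.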
Previously, you’ve used the term DNS 2.0 to describe the future of DNS. What would it ideally look like?
I think DNS 2.0 would have to offer some new capability that we don’t see today, and that could be things such as names that don’t rely on a central root and authority such as ICANN. The challenge is to combine that capability with the mnemonic capability, names that people can understand. So I think DNS 2.0 would solve the problem of letting you create your own names for peer-to-peer use, or whatever, while constructing a directory of mnemonic names like DNS 1.0 has.
But that’s my conjecture. There are lots of people who have said that the next generation should be based on blockchain, and there have been a lot of ideas. But I don’t think anybody has really cracked that code yet—not me, for example.
Do you think we could use blockchain with DNS?
I think we could use distributed ledger technologies for the operation of registries and registrars. There wouldn’t be any problem with that kind of stuff. Replacing the basic query mechanism with a blockchain-oriented one never seems to me to get you the right kind of performance.
I once proposed to the ICANN people that they could just have the root managed by a voting system of the existing TLDs and you could have them vote on whether to admit new TLDs. You could think about having algorithms that are implemented using ledger technology to automate some of the bureaucratic functions here. Needless to say, none of the people with their own TLDs liked this idea, but I still think that it would make a lot of sense.
This interview has been edited for space and clarity.
This article is part of Behind the Code, the media for developers, by developers. Discover more articles and videos by visiting Behind the Code!
Illustrations by Paul Mockapetris/WTTJ