Well-publicized attacks against Domain Name System (DNS) root servers and top-level domains highlight the vulnerability of the DNS infrastructure. Many CIOs are looking for ways to ensure secure, reliable network services.
DNS is the protocol and global network of servers that translate host names into Internet Protocol addresses. We've identified design principles and best practices for resilient, reliable Dynamic Host Configuration Protocol (DHCP) and DNS services. Before taking action, prioritize the risks to your network and identify the potential threats you face.
What are the risks?
Some risks are obvious. If your intranet loses DNS service, you can't view new Web pages. A DHCP failure means your laptop won't connect when you arrive at the office.
DNS and DHCP failures can also produce intermittent or delayed effects. Applications can run for hours or days, only to fail when an IP lease or a DNS address "time to live" expires, resulting in transitory, nonrepeatable failures.
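The delayed-failure pattern is easy to illustrate with a minimal sketch (the class and names here are hypothetical, not any particular resolver's API): an application caches a resolved address and keeps using it until the TTL expires; only then must it re-resolve, so a DNS outage surfaces hours after it actually began.

```python
import time

class TtlCache:
    """Minimal DNS-style cache: an entry is valid only until its TTL expires."""
    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (address, now + ttl_seconds)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires = entry
        if now >= expires:           # TTL expired: the app must re-resolve,
            del self._entries[name]  # and fails only now if DNS is down
            return None
        return address

cache = TtlCache()
cache.put("app.example.com", "192.0.2.10", ttl_seconds=3600, now=0)
print(cache.get("app.example.com", now=1800))  # within TTL: cached answer
print(cache.get("app.example.com", now=7200))  # after TTL: None, must re-resolve
```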
In defining risk, make a priority list of the services most critical to your daily business operations:
- External DNS publishing: What does it cost if outsiders can't use your sites?
- External DNS resolving: What does it cost if internal users can't use the public Internet?
- Internal DNS and DHCP: What does downtime cost within my intranet?
- Extranet DNS: Are there extranet partners, suppliers, customers or others that justify special consideration?
Consider the potential threats
Threats to DNS and DHCP services include unintentional and malicious activities.
Unintentional threats: Human error causes many DNS errors.
- Surveys have found that 68% of Fortune 500 companies use incorrect DNS configurations.
- Nominum's automated discovery service finds unauthorized network elements, such as wireless access points, in every large network it surveys.
Many organizations manage IP infrastructure using Excel spreadsheets, Perl scripts and disconnected processes. As networks expand, coordination and change management become problematic. Corporate mergers often combine two barely manageable systems into one that's totally unmanageable.
Paul V. Mockapetris is inventor of the domain name system and chief scientist and chairman of Nominum Inc., a provider of IP address infrastructure software for enterprises in Redwood City, Calif.
A more organized, centralized approach to managing IP assets (the logical network) is essential to reduce the dangers of human error for large and complex networks.
Malicious threats: Malicious threats to DNS services can be categorized by intent.
- Random vandalism: Attacks against your DNS servers or the servers you depend on, exploiting known problems (such as buffer overflows) or combining methods in blended attacks.
- Political or hacktivist attacks: Attacks aimed at specific organizations, such as recent attacks against Pakistani and al-Jazeera sites.
- Infowarfare: This includes attacks supported by national means.
- Identity theft: Identity-spoofing attacks for financial gain, such as redirecting users to cloned financial Web sites to gain access to user accounts.
Designing robust and reliable DNS and DHCP
Nominum employs the following design principles in building resilient DNS and DHCP services:
- Bulletproof each individual component.
- Make success work in parallel and failure work in series.
Start by running the best available DNS and DHCP software applications. For open-source, consult the Internet Software Consortium (ISC), which has the latest release information for Berkeley Internet Name Domain (BIND) and ISC DHCP, as well as the CERT Coordination Center Vulnerability Notes Database, which lists known vulnerabilities. BIND, which is distributed free by the ISC, is software run by companies and Internet service providers to translate text-based Internet addresses into numbered IP addresses.
Recommendations:
- Version 9.2 or later of BIND is the best open-source DNS choice. If you must stay on Version 8, anything other than the most recently patched release is an unacceptable risk.
- ISC DHCP is adequate unless you need complex fail-over configurations, fast restart or integration into back-end database, provisioning or management systems.
- Commercial DNS and DHCP servers offer support and performance with architectures incorporating security and management.
The second rule dictates redundancy in DNS and DHCP services instead of "hardening" single servers. A server with 10% downtime is available 90% of the time, which is unacceptable in practice. But with five of these servers, you have five 9s availability of at least one server operating.
Redundancy improves reliability only if the servers don't share reliability issues. If five servers are powered from the same plug strip or connected to the same Ethernet switch, the aggregate reliability is lower, as the 10% failure rate is correlated to a common cause.
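The arithmetic behind the parallel-success rule is worth making explicit. Assuming failures really are independent (the point of the correlation caveat above), the probability that all N servers are down at once is the product of the individual failure rates. A short sketch:

```python
def aggregate_availability(per_server_availability: float, n_servers: int) -> float:
    """Probability that at least one of n independent servers is up.

    Assumes failures are uncorrelated; a shared power strip or Ethernet
    switch breaks this assumption and caps availability at that of the
    shared component.
    """
    per_server_failure = 1.0 - per_server_availability
    return 1.0 - per_server_failure ** n_servers

# One server with 10% downtime is up 90% of the time...
print(round(aggregate_availability(0.90, 1), 5))  # 0.9
# ...but five independent such servers give "five 9s" of at least one up.
print(round(aggregate_availability(0.90, 5), 5))  # 0.99999
```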
Use the following best practices to eliminate single points of failure:
- Eliminate common physical and network dependencies.
- Use name servers in separate locations, connected to the Internet via separate routers and leased lines. Ideally, they won't even be in the same network or autonomous system.
- Multiple Internet service providers often share last-mile connections, so separate providers alone may not remove the common dependency. Add redundancy to external DNS publishing by partnering with another organization or purchasing an external DNS service.
- Multiple DHCP servers with abundant private IP addresses can allocate from separate address ranges in the same subnet. This achieves redundancy at the cost of address space usage.
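With abundant private address space, the split-scope approach above is simple to express: carve one subnet into non-overlapping pools and give each DHCP server its own. A sketch using Python's standard ipaddress module (the subnet and split point are illustrative):

```python
import ipaddress

subnet = ipaddress.ip_network("10.1.0.0/24")
hosts = list(subnet.hosts())      # usable addresses: 10.1.0.1 .. 10.1.0.254
midpoint = len(hosts) // 2

# Each DHCP server leases only from its own half of the subnet, so both
# can run without coordinating -- at the cost of address-space efficiency.
server_a_pool = hosts[:midpoint]
server_b_pool = hosts[midpoint:]

assert not set(server_a_pool) & set(server_b_pool)  # pools never overlap
print(server_a_pool[0], server_a_pool[-1])  # 10.1.0.1 10.1.0.127
print(server_b_pool[0], server_b_pool[-1])  # 10.1.0.128 10.1.0.254
```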
The Internet Engineering Task Force (IETF) is standardizing DHCP fail-over so that pairs of DHCP servers can coordinate allocations out of shared address ranges, which is essential for reliable allocation of scarce public addresses. The open-source version is still in progress; buyers should select commercial servers implementing the full fail-over model.
Eliminate common software dependencies
Run servers on different operating systems, using different DNS server implementations. Many organizations combine BIND and commercial DNS software.
Separate system components by function
Host DNS and DHCP on dedicated machines with other ports disabled. Firewalls and routers can isolate the servers from nonrelevant traffic and possible exploits.
If you have multiple versions of the DNS namespace (such as an internal and external "view" of DNS data), use separate servers for internal and external DNS data. If total separation is too expensive, use servers that differentiate between internal and external requests.
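The internal/external "view" distinction ultimately comes down to classifying the source address of each request. A minimal sketch of that decision using Python's standard ipaddress module (the ranges and view names are hypothetical):

```python
import ipaddress

# RFC 1918 private ranges stand in for "internal" here; a real deployment
# would list its own networks.
INTERNAL_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def select_view(client_ip: str) -> str:
    """Return which DNS view a query from client_ip should see."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in INTERNAL_NETS):
        return "internal"   # full internal namespace
    return "external"       # only the published, public names

print(select_view("10.4.2.9"))      # internal
print(select_view("198.51.100.7"))  # external
```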
Also, use separate machines for serving an organization's DNS data (authoritative servers) and fetching data from outside sources (caching or recursive servers).
Using dynamic DNS, DHCP servers update DNS servers. For security, these functions are sometimes located on the same machine. All update or control transmissions should be validated using transaction signatures or carried in virtual private network tunnels (with either Secure Shell or the IPsec protocol). Unfortunately, the IETF and Microsoft Corp. have created separate de jure and de facto standards for signatures; few commercial products support both.
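Transaction signatures are, at bottom, keyed hashes over the update message shared between the DHCP and DNS servers. The sketch below shows the idea with Python's standard hmac module; it is a conceptual illustration, not the TSIG wire format, and the key and message are made up.

```python
import hmac
import hashlib

# Shared secret distributed out of band to both the DHCP and DNS servers.
SHARED_KEY = b"example-shared-secret"

def sign(update_message: bytes) -> bytes:
    """Attach a keyed hash so the DNS server can verify the update's origin."""
    return hmac.new(SHARED_KEY, update_message, hashlib.sha256).digest()

def verify(update_message: bytes, signature: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(update_message), signature)

msg = b"add host42.example.com A 10.1.0.42"
sig = sign(msg)
print(verify(msg, sig))                          # True: untampered update
print(verify(b"add evil.example.com A ...", sig))  # False: message was altered
```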
The emergence of a practical DNSsec infrastructure will prevent some attacks in the future, but the best defense is a robust and well-managed IP infrastructure.
References
"Securing an Internet Name Server," CERT Coordination Center, Carnegie Mellon University, Pittsburgh