How can you lower the risk of a successful attack on your Apache Web server? This excerpt from Maximum Apache Security (Sams Publishing), a hacker's guide to protecting your Apache Web server, outlines steps you can take to do just that.
How Security Disasters Develop
The scenarios you'll face are the following:
Intruders gaining simple access
Denial of service
Defacement or total system seizure
Let's run through the factors that invite these situations.
Intruders Gaining Simple Access
Simple unauthorized access can happen in several ways:
Insiders who once had authorized access (former employees or developers, for example) return to haunt you.
Your users make bad password choices on other networks that fall to hackers. This leads to cross-network unauthorized access.
Your underlying operating system has holes, and diligent hackers exploit them to gain limited access.
The tools you use in conjunction with Apache are flawed.
Research studies show that some 70% of serious intrusions come from insiders. I encounter such cases all the time:
In January 2002, a prominent online porn provider contacted me. A former developer had defected to another firm and taken the porn provider's client list with him. He also took username/password databases and was using these, through anonymous remailers, to solicit the provider's clients. Adding insult to injury, he also broke into my client's servers.
In 2001, I audited a system that offered bullion-backed credit/debit cards. Developers who had since quit had left behind backdoors into secure remote-access administrative sections built with PHP and protected by SSL client certificates.
In 2000, a defense contractor contacted me. Its skunk works division used a centralized password server that housed 4,500 username/password pairs. Of the users those pairs belonged to, more than 800 were no longer with the firm, and of those, 42 were still using network resources without authorization. And these folks build nuclear weapon components.
To guard against these situations, when you terminate a user, remove the account. Also, preserve all files and directories associated with that user on backup media. (You may later need these for evidence.) And, you'd benefit by installing monitoring tools that record user activities.
Furthermore, in enterprise environments, try to isolate development boxes from production boxes. That is, have your developers do their work on test bed systems that mirror your production system's setup. That way, developers never actually have access to your enterprise system. A simple code audit prior to moving their work over to the enterprise box can then determine whether malicious code exists therein.
Users and System Security
As a rule, you shouldn't let many people access your Web host from the inside. For example, Web servers aren't boxes that you'd normally put shell or Windows user accounts on. Rather, you should restrict these machines to Web services alone. That's a given.
However, you'll still have portions of your Web site that only authorized remote users can access, such as areas that house premium Web services for paying customers. This always entails passwords, and you can use various approaches for this, including simple, native Apache password controls, or database-based password access.
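A sketch of the first approach, native Apache password control, might look like this (the directory path, password-file path, and realm name are illustrative, not defaults; the password file is created separately with the htpasswd utility):

```apache
# httpd.conf fragment protecting a premium area with Basic authentication.
# All paths below are illustrative.
<Directory "/usr/local/apache/htdocs/premium">
    AuthType Basic
    AuthName "Premium Services"
    # Created beforehand with: htpasswd -c /usr/local/apache/passwd/premium-users alice
    AuthUserFile /usr/local/apache/passwd/premium-users
    Require valid-user
</Directory>
```

Keep in mind that Basic authentication sends credentials essentially in the clear, so pair it with SSL for anything sensitive.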
These approaches are fine, but harbor the same inherent weakness: If users create their own passwords, those passwords will invariably be weak. So in the end, it doesn't matter what controls you institute.
Encryption is vital, and there's no debating that, but even "strong" encryption fails when users make poor password choices, and they will. Users are lazy and forgetful. To save time and simplify their lives, most users create passwords from the following values:
Their birth date
Their Social Security number
Their children's names
Names of their favorite performing artists
Words that appear in a dictionary
Numeric sequences (like 90125)
Words spelled backwards
These are terrible choices, and most cracking tools can crack such passwords in seconds. In fact, good passwords are difficult to derive, even when you know encryption well, for several reasons.
First, even your local electronics retail store sells computers with staggering processor power. Such machines perform many millions of instructions per second, providing attackers with the juice to try thousands of character combinations per second.
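A back-of-the-envelope calculation shows why that raw speed matters (a sketch only; the guesses-per-second rate is an illustrative assumption, not a benchmark of any particular tool):

```python
# Worst-case time to exhaust a password keyspace at a given guess rate.
# The rate used below (one million guesses/second) is an assumed figure.
def seconds_to_exhaust(alphabet_size: int, length: int, guesses_per_sec: float) -> float:
    """Time to try every password of the given length, in seconds."""
    return alphabet_size ** length / guesses_per_sec

# An 8-character all-lowercase password versus one drawn from the
# full set of 94 printable ASCII characters (excluding space).
days = seconds_to_exhaust(26, 8, 1e6) / 86_400
years = seconds_to_exhaust(94, 8, 1e6) / 86_400 / 365

print(f"lowercase only: {days:.1f} days")
print(f"full printable: {years:.0f} years")
```

The gap between the two figures is the whole argument for password complexity rules: length and alphabet size multiply the attacker's work exponentially.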
Furthermore, modern dictionary attack tools are advanced. Some, for example, employ rules to produce complex character combinations and case variations that distort passwords well beyond the limits of the average user's imagination. Thus, even when users get creative with their passwords, cracking tools often prevail.
Worse still, cross-network password attacks and compromises are common. Suppose that your users have Hotmail or AOL accounts (or any account that provides them with mail, chat, or other services elsewhere). Ninety percent of users aren't savvy enough to use different passwords for different accounts. Thus, their Hotmail account has the same username/password pair as their AOL account.
These conditions invite cross-network password compromise. Suppose that crackers expose several thousand Hotmail passwords—this has happened before. Suppose further that within that lot, twenty such victims also have accounts on your system. Suddenly, attackers have twenty valid username/password pairs from your system.
This won't get them far, but it will get them inside your premium service area, which probably deploys JSP, ASP, PHP, Perl, Python, ActiveX, or other technologies that interact with your database. Attackers can then study that technology and try attacks that they couldn't otherwise try if they had access only to the home page. Over time, if there's a weakness, they'll find it.
To ward off such situations as best you can, implement the following controls whenever possible:
Set passwords to expire every 60 days, with a 5-day warning and a 1-week lockout, if your operating system supports it.
Install proactive password checking, enforcing the maximum rules (using at least a 100,000-term dictionary).
Periodically check user passwords against the largest wordlist you can find. You can automate this procedure using Perl on Windows, Unix, and Mac OS X.
Watch security lists for new password exploits.
Force users to create a new and unique password for each host they have access to. Take logs from your proactive password checker that contain passwords users previously tried and append these to proactive password checking wordlists on other hosts. This way, users' bad password choices follow them across the network.
Provide your users with basic education in password security. Even a simple Web page explaining what makes a weak password is good. Users will read this material if you offer it.
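The wordlist checks above can be sketched in Python (a minimal illustration; the function name and the tiny inline wordlist are invented stand-ins for a real checker backed by a 100,000-term dictionary and your system's hashed password database):

```python
# Sketch of a proactive password check against a wordlist.
# A real checker compares hashes from the password database; here we
# test candidate passwords directly for illustration.
def is_weak(password: str, wordlist: set[str]) -> bool:
    """Flag passwords matching a wordlist entry, its reverse, or a
    simple trailing-digit variation -- the patterns dictionary attack
    tools try first."""
    candidate = password.lower().rstrip("0123456789")
    return (candidate in wordlist
            or candidate[::-1] in wordlist
            or password.lower() in wordlist)

# Tiny stand-in for a large dictionary.
words = {"secret", "dragon", "password"}

print(is_weak("Secret99", words))   # matches "secret" after stripping digits
print(is_weak("terces", words))     # "secret" spelled backwards
print(is_weak("Xq7#pL2v", words))   # no dictionary match
```

A production checker would also test case variations and common letter-to-digit substitutions, since attack tools apply exactly those rules.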
Denial of Service
A denial-of-service (DoS) attack is any action (initiated by a human or otherwise) that incapacitates your host's hardware, software, or both, rendering your system unreachable and therefore denying service to legitimate (or even illegitimate) users.
In a DoS attack, the attacker's aim is straightforward: to knock your host(s) off the Net. Except when security teams test consenting hosts, DoS attacks are always malicious and unlawful.
Denial of service is a persistent problem for two reasons. First, DoS attacks are quick, easy, and generate an immediate, noticeable result. Hence, they're popular among budding crackers, or kids with extra time on their hands. As a Web administrator, you should expect frequent DoS attacks; they're undoubtedly the most common type.
But there's an even more important reason why DoS attacks remain troublesome. Many such attacks exploit errors or inconsistencies in vendor TCP/IP implementations. Such errors persist until vendors correct them, and in the interim, affected hosts remain vulnerable.
An example is the historical Teardrop attack. This attack involved sending malformed UDP [user datagram protocol] packets to Windows target hosts. Targets would examine the malformed packet headers, choke on them, and generate a fatal exception. When Teardrop emerged, Microsoft quickly re-examined its TCP/IP stack, generated a fix, and posted updates.
However, things aren't always that easy, even when you have your operating system's source code, as Linux users do. As new DoS attacks arise, you may find yourself taking varied actions depending on the situation (such as patching software, reconfiguring hardware, or filtering offending ports).
Finally, DoS attacks are especially irritating because they can crop up in any service on your system. In a moment, we'll examine a DoS attack that Apache sustained in 2001. However, even though Apache has a good record in this area (not many DoS vulnerabilities), that's no cause to rejoice. Your operating system may harbor weaknesses, too, as can many of its services. So, even when you have a bug-free Apache distribution, this doesn't offer any guarantee that you'll escape DoS attacks.
An Apache-Based Denial-of-Service Example
A serious Apache vulnerability surfaced on April 12, 2001, when Luigi Auriemma discovered (and William A. Rowe, Jr. confirmed) that attackers could send a custom URL via a Web browser and thereby hang Apache, or run the target's processor to 100% utilization.
Attackers could perform this DoS attack in one of three ways:
Issue a GET request consisting of 8,184 / characters
Issue a HEAD request consisting of 8,182 A characters
Issue a request with an Accept header consisting of 8,182 / characters
As Auriemma explained, in both Windows 98 and Windows 2000, if an attacker sent two or more such strings from different connections, the targets would crash (and all connections would thereafter fall idle).
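Written out, the first variant is nothing more than an ordinary request line with a grotesquely long URI (a sketch that only builds the string; the helper name is invented, and nothing here is sent over the network):

```python
# Build (but do not send) the oversized request from the first variant
# above: a GET whose URI is 8,184 '/' characters.
def long_uri_request(char: str, count: int, method: str = "GET") -> bytes:
    """Return a raw HTTP/1.0 request with an oversized URI."""
    return f"{method} {char * count} HTTP/1.0\r\n\r\n".encode("ascii")

request = long_uri_request("/", 8_184)
print(len(request))  # method, space, 8,184-character URI, version, CRLFs
```

The point is how trivially such input is produced: no special tooling, just a long string in a request line, which is why unpatched servers made such easy targets.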
The problem affected all Apache versions earlier than version 1.3.20 on the following platforms:
Microsoft Win32
Microsoft Windows NT
Microsoft Windows 2000
OS/2
As reported by the Apache team:
In the case of an extremely long URI, a deeply embedded parser properly discarded the request, returning the NULL pointer, and the next higher-level parser was not prepared for that contingency. Note further that accessing the NULL pointer created an exception caught by the OS, causing the apache process to be immediately terminated. While this exposes a denial-of-service attack, it does not pose an opportunity for any server exploits or data vulnerability.
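The failure mode the team describes, a low-level parser returning NULL and a caller that never checks for it, can be mimicked in miniature (Python stands in for Apache's C here; the function names and the length limit are invented for illustration):

```python
# Miniature analogue of the bug: a low-level parser returns None for an
# over-long URI, and the caller assumes it always gets a valid result.
MAX_URI = 8_190  # illustrative limit, not Apache's actual constant

def parse_uri(raw: str):
    """Discard over-long URIs by returning None (the 'NULL pointer')."""
    if len(raw) > MAX_URI:
        return None
    return raw.split("?")[0]

def handle_request_unsafe(raw: str) -> str:
    # Bug: no None check. An over-long URI raises an exception here,
    # the analogue of dereferencing NULL and killing the process.
    return parse_uri(raw).lower()

def handle_request_fixed(raw: str) -> str:
    parsed = parse_uri(raw)
    if parsed is None:
        return "414 Request-URI Too Long"
    return parsed.lower()

print(handle_request_fixed("/" * 9_000))  # rejected gracefully
```

The fix, as in Apache 1.3.20, is simply for the higher-level caller to check for the sentinel value and reject the request rather than crash.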
Apache patched this problem in version 1.3.20. However, as I related earlier, Apache isn't your only concern. You must be ever diligent to monitor security advisory lists for your operating system and any applications or modules that run on your Web host.
Defacement or Total System Seizure
Your security should never lapse so far that attackers could deface your site or seize control of your Web hosts. Yet, this happens at least 50 times a day, all over the world. I could enumerate a dozen reasons why, but they all trace back to two root problems: the failure to adequately plan initial Web host configuration, and the failure to keep systems patched and up-to-date.
First, securing your Web host really begins even before installation, when you make your first crucial decision: the decision of what type of host you're building. The most common types are as follows:
Intranet Web hosts—Hosts without Internet connectivity, typically connected to a Local Area Network
Private or extranet Web hosts—Hosts that have Internet connectivity but provide services only to a limited clientele
Public or sacrificial Web hosts—Garden-variety Web hosts that users known and unknown can access publicly, 24 hours a day, on the Internet
Each type demands a different approach. On intranets, you may provide network services that you'd never allow on a public Web server (and these would pose infinitely less risk). Pages that interface with ActiveX are good examples. Default Linux or Windows/IIS installations include many services that your Web host can do without, including the following:
File Transfer Protocol
finger
Network File System
R services
You must decide which services to provide by weighing their utility, their benefits, and the risks they pose.
File Transfer Protocol
File Transfer Protocol (FTP) is the standard method of transferring files from one system to another. In intranet and private Web hosts, you may well decide to provide FTP services as a convenient means of file distribution and acceptance. Or, you might provide FTP to offer users an alternate avenue through which to retrieve information that is otherwise available via HTTP.
For public Web servers, though, you should pass on public FTP. If your organization needs to provide public FTP services, consider dedicating a box specifically to this purpose. This is especially true if your developers have onsite access to the system. Consider using Secure Shell instead; many SSH client packages include an easy-to-use graphical file manager that supports host-to-host transfers via SCP.
finger
fingerd (the finger server) reports personal information on specified users, including their username, real name, shell, directory, and office telephone number (if available). This is primarily an issue for Unix-based servers.