Twitter made the news twice in the last week for security-related events. The first time was for its proactive measure in suspending the accounts of users whose machines were apparently infected with the Koobface worm. The second time was for the leak of critical information. Both cases provide valuable lessons for organizations and individuals.
(Just how that critical information was leaked by a hacker is an interesting story in itself, which you can read here.)
So, what are the lessons behind the attack and the Koobface account suspensions? I see 11 of them:
1. Don't be afraid to suspend accounts that present a risk to you and your users. It was great that Twitter suspended those user accounts that seemed to be infected. While it is possible that some uninfected accounts were erroneously suspended, the suspensions are the sort of proactive step necessary to protect other users from being infected. And users whose machines really were infected should be grateful to Twitter for saving their reputations. Their friends certainly wouldn't have appreciated getting a virus-laden tweet.
The suspensions also served as an alert for users who didn't know that they were infected, since worms like Koobface can go undetected.
Finally, the suspensions helped Twitter keep its operational costs down, since its systems could have been burdened with more and more infected tweets. As the virus spread, the number of illegitimate messages sent would have spiraled.
2. Doing one thing right doesn't make you good at security, and it doesn't even mean you understand it. While I do believe that Twitter's actions to stop Koobface were wise, the reality is that the hacking incident, and more specifically the reaction to it, demonstrates that Twitter executives don't understand the fundamental nature of security.
Specifically, Biz Stone, Twitter's co-founder, stated that the hack wasn't a result of the insecurity of Web apps, but that it "speaks to the importance of following good personal security guidelines such as choosing strong passwords." That is a clueless statement, since the Twitter case involved the reuse of passwords and not necessarily "bad" passwords.
In fact, the hack demonstrates many vulnerabilities of Web apps involving authentication, accessibility and more (all discussed below). It's true that Google Apps itself was not hacked, but Google's password reset function was successfully compromised, and other vulnerabilities facilitated the compromise of information.
Stone does not understand that the goal of security is not to protect software, but to protect the data that the software accesses. Even if a weak password had been involved, a password for a generic Internet e-mail account should not provide access to critical organizational files that are stored on file servers.
3. Single sign-on should be limited. One of the major shortcomings that I see with this case is that Google Apps provides single sign-on. Once you have access to one function, you have access to all of them. And once one is compromised, all are compromised.
4. Sensitive information must be stored internally. Data that is stored externally is out of your control. That's the bottom line that has to be balanced against the money that can be saved using something like Google Apps. Thanks to this leak, we now know that Twitter has plans to become a multibillion-dollar business. It should have thought about whether keeping that sort of information from the world was worth spending more than the $50 per user annual fee that Google Apps charges.
Data that's stored internally can be compromised too, but access is much more limited. To compromise a Google Apps account, a hacker only needs access to the Internet. To compromise an internal account, he would at least need to compromise a VPN connection or a firewall.
And while an organization might have its own security weaknesses, at least they are its own weaknesses. Twitter will never become a multibillion-dollar company if it doesn't invest in developing its own secure environment. If a company can't trust its own people to secure its own data, how can you trust it to secure your data?
5. Access control must be implemented. There are two issues here. First, an executive assistant (the person whose hacked Gmail account led to the leak) likely did not need access to all of that data. Second, what level of granularity of access control does Google Apps provide? I asked Google whether groups can be assigned different levels of access, and about related capabilities such as changing file permissions, but got no response.
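To make the point concrete, here is a minimal sketch, in Python, of the kind of granular, least-privilege access control I have in mind. The groups, users and documents are invented for illustration; this is not Google Apps' actual permission model, which is exactly the information I could not get.

    # Minimal sketch of group-based, least-privilege access control.
    # The group names, users and documents are hypothetical examples,
    # not anything drawn from Twitter's or Google Apps' real configuration.

    ROLE_PERMISSIONS = {
        "executives":      {"financial-projections", "board-minutes"},
        "exec-assistants": {"travel-schedules", "meeting-requests"},
        "engineering":     {"design-docs"},
    }

    USER_ROLES = {
        "alice@example.com": {"executives"},
        "bob@example.com":   {"exec-assistants"},
    }

    def can_access(user: str, document: str) -> bool:
        """Allow access only if one of the user's roles explicitly grants it."""
        roles = USER_ROLES.get(user, set())
        return any(document in ROLE_PERMISSIONS.get(role, set()) for role in roles)

    # An assistant's compromised account does not expose executive-only files.
    assert can_access("alice@example.com", "board-minutes")
    assert not can_access("bob@example.com", "financial-projections")

That last pair of checks is the whole lesson: when the assistant's account is compromised, the damage is limited to what the assistant actually needed.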
6. Web-based password reset schemes are not appropriate for a corporate environment. Any Web-based application that is configured for millions of accounts is going to need an online password recovery and reset process. That typically means those infamous secret questions, which are not very secret and which proved to be weak safeguards when Sarah Palin's Yahoo Mail account was hacked. Google Apps gets points here, since its FAQ suggests that an organization can name an administrator who can manually change passwords. Twitter gets no points, though, because that's not how it implemented Google Apps.
You need a person in the loop. Some people might note that this approach is also not foolproof, since help desks can be social-engineered to change passwords. But that's a high-risk tactic that's a lot less likely to succeed. And even in a large organization, personal relationships matter. When a CEO or his assistant wants to change a password, chances are the administrator will be very familiar with the person requesting the change.
Those password reset and recovery schemes are perfectly acceptable for free e-mail services. You get what you pay for, but you expect better from paid services.
7. Implement misuse and abuse detection. When you use Google Apps or similar services, you have no ability to add in security tools that are designed to protect very valuable data. For example, with Google Apps you can't limit access geographically. Twitter's troubles might have been avoided if there had been a policy that the account in question could be accessed only from the San Francisco area.
When you have no ability to implement additional controls, you have no control.
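Here is a rough sketch of the kind of network-based restriction I'm describing, using Python's standard ipaddress module. The address ranges and the account are placeholders; a real deployment would use the organization's actual office and VPN ranges, or a geolocation lookup, and would wire the check into the authentication path.

    # Minimal sketch: reject log-ins that don't originate from approved
    # corporate networks. The ranges below are documentation/example
    # prefixes, not real Twitter or Google addresses.

    import ipaddress

    ALLOWED_NETWORKS = [
        ipaddress.ip_network("203.0.113.0/24"),   # hypothetical office egress range
        ipaddress.ip_network("198.51.100.0/24"),  # hypothetical corporate VPN pool
    ]

    def login_allowed(source_ip: str) -> bool:
        """Permit a log-in only when it comes from an approved network."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    if not login_allowed("192.0.2.44"):
        # In a real system, this would block the attempt and alert security staff.
        print("Log-in rejected: source address outside approved networks")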
8. Security must be proactive. A corporate Google Apps account can display the date and time of the last log-in. That is a reactive measure at best, and it can come too late. What's more, few people pay any attention to that data when it is presented to them.
Proactive measures would start with putting data inside a perimeter. Disallowing remote access that doesn't go through a VPN connection is the next step. Third, you need to proactively look for intrusions, abuse and misuse. Google provides very detailed forensics data, but that does nothing to prevent compromises in the first place.
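Proactively looking at logs doesn't have to be elaborate. The sketch below flags off-hours log-ins and repeated password resets; the log format and entries are invented, since every audit system exports something different, but the point is that someone (or something) should be reviewing this data before a compromise, not after.

    # Minimal sketch of proactive log review: flag log-ins outside business
    # hours and repeated password resets. The log format and entries are
    # hypothetical sample data.

    from collections import Counter
    from datetime import datetime

    events = [
        # (timestamp, account, action)
        ("2009-07-14T03:12:00", "assistant@example.com", "login"),
        ("2009-07-14T03:15:00", "assistant@example.com", "password_reset"),
        ("2009-07-14T03:16:00", "assistant@example.com", "password_reset"),
        ("2009-07-14T10:05:00", "ceo@example.com", "login"),
    ]

    resets = Counter()
    for timestamp, account, action in events:
        hour = datetime.fromisoformat(timestamp).hour
        if action == "login" and not (8 <= hour < 19):
            print(f"ALERT: off-hours log-in for {account} at {timestamp}")
        if action == "password_reset":
            resets[account] += 1

    for account, count in resets.items():
        if count >= 2:
            print(f"ALERT: {account} had {count} password resets in the review window")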
9. You must control your own forensics data. Critically, it is unclear just what control a Google Apps customer has over its own forensics data. Google says it will respond to court orders, but it doesn't say that you can look at your forensic or log data whenever you want. That distinction matters.

I once investigated a case in which a third party had logged into a person's Yahoo account, and the victim wanted to look at the Yahoo log-in and access logs for that account. Yahoo took the position that we were not asking for data belonging to the owner of the account, but for data of the person who logged in, so it wouldn't give us the data without a court order. It took a call to a friend who worked on Yahoo's legal staff to get around that objection. Even so, Yahoo's security staff was so twisted that during a teleconference with the victim and me, they would not answer my questions directly, even though they had confirmed my identity and that of the victim, and the victim had completely approved my involvement. They made the victim repeat my questions, word for word, before they would answer them. I thought I was being punk'd, but sadly, that is really how Yahoo security behaved.
Now, I'm going to assume that Google provided Twitter with a lot of data without going through the court order process, given the publicity this case got. It would be bad PR for Google to make such a high-profile investigation difficult. Is your company as visible as Twitter? Would you get such cooperation if someone accessed your accounts, but didn't publicize it to the world? Remember, while Google clearly states that you own your data, you do not own the log, forensics and related data associated with your data.
10. Social networking can cripple an organization. I needn't add to what has already been written about how readily people give away information on the Internet and over the telephone that can contribute to a compromise. But you can use this case to remind your employees about the problem one more time. Herbert "Hugh" Thompson wrote an outstanding article for Scientific American describing how little things add up.
11. If an idiot can do this, what will a savvy criminal be capable of? Yes, I do think the Twitter hacker is an idiot, despite his creativity and success. I think that because he was willing to commit a very visible felony to satisfy his own ego. He claims that he intended to make a point that the Internet is insecure. That isn't exactly an earth-shattering revelation. That's why I say it was for his own ego; he is trying to rationalize a felony. If there were money involved in some way, I could better understand it. And that is the key point. There are many people out there with financial motivation.
The attack was somewhat creative, but I have seen far more creative and effective attacks. More importantly, financially motivated criminals will get in and out without being discovered. Without the security measures in place that I have described, the attacks will go undiscovered and will be much more damaging than embarrassing.
Ira Winkler is president of Internet Security Advisors Group and author of the book Spies Among Us. He can be contacted through his Web site, www.irawinkler.com.