Phony Facebook application security tests? Say it ain't so, Zuckerberg

Call it "Zuckerberg's Law" -- the mathematical postulation describing the inverse relationship between the size and wealth of a social network and the wisdom of those who own and operate it. This isn't Beckstrom's Law, mind you, which postulated that there's an optimal size for things like social networks, beyond which more members decrease -- rather than increase -- the value of the network for everyone else.

No, Zuckerberg's Law is bigger and more subtle than that. It says that the larger and richer your social network becomes, the smaller and more penurious are your ideas for improving it; the more varied your options become for extracting a dollar from your sea of happy users, the narrower and more ham-fisted your methods for doing so.

Also see: 10 Security Reasons to Quit Facebook (And One Reason to Stay On)

How else can we explain this week's report from the Federal Trade Commission (FTC) disclosing that, for close to a year, Facebook operated a for-profit application security testing service that was little more than a sham: taking money from hopeful application developers with false promises to vet their creations for security holes. Instead, the FTC concluded, the company banked the money and put a "Facebook Verified App" logo next to the application, without bothering to do any additional auditing of the submitted application. The program, the FTC said, was "false and misleading" -- a hollow show that, all the same, netted Facebook between $50,000 and $95,000 for "verifying" 254 applications between May and December 2009.

Mind you, at the time the Facebook Verified App program was bilking developers with empty promises of security audits, the then-privately-held company had revenues of around $777 million. In other words, the Verified Apps scam was chump change, revenue-wise: about one-hundredth of a percent of Facebook's overall revenue. It was small, especially compared to the money Facebook was making selling information on its hundreds of millions of users to advertisers and application developers.

So why even bother with a bogus application security program? That's a good question, and one that is likely to remain unanswered. Facebook declined to comment on the FTC's announcement this week beyond a one-line statement saying it was pleased that the FTC had approved the settlement, which was first announced in November 2011. As The New York Times reported, FTC rules also permit Facebook to agree to a settlement, submit to a 20-year consent order and biennial audits, and amend its woeful security practices without actually admitting that it did anything wrong.

Also see: 4 tips for Facebook from security and privacy experts

What gives? I asked Chris Wysopal, the CTO and co-founder of application security testing firm Veracode, for his opinion. Chris is a recognized expert on application security whose company makes money by doing what Facebook led developers to believe it was doing -- only Veracode actually audits the applications its customers submit. He said that the lure of steady revenue from application testing probably attracted Facebook at first.

"Everyone likes check marks," Wysopal said. "VeriSign invented the whole check mark thing, and soon you saw "Secured by VeriSign" on every website. That started a trend where people realized that users feel more comfortable doing things online if they feel like something extra's been done."

But real application security testing is difficult to do, Wysopal said. And, while much of it can be automated, meaningful audits usually require some human intervention to make sure that the automated tests worked as designed. "It's not something that's free," he said.

That may be the quandary that Facebook found itself in: wanting to offer premium security audits to its developers, but realizing that the true cost of doing so was well above the $175-$375 it was charging developers. Of course, the company's strategy for resolving that dilemma -- jettisoning the testing altogether -- was fraudulent and a breach of trust, but it probably made sense at the time. After all, both the platform vendor and the application developer are united in their desire to have the testing process go as quickly and smoothly as possible. So who's going to complain?

The problem of sketchy security verification programs likely extends far beyond Facebook, he said.

"As a security guy, my first reaction to most of them is 'That's garbage. There's nothing behind that.'" In the case of the "Facebook Verified App" program, of course, he would have been entirely correct.

Wysopal said that testing programs that aren't transparent -- spelling out exactly what kinds of tests are performed and what constitutes a "passing" grade -- can quickly devolve into empty exercises in which operators collect fees in exchange for the "check mark" without bothering to do the hard work of actually testing the security of the applications (or Web sites) themselves. The "certification" becomes more akin to the stamp that your favorite nightclub places on your hand at the door, "certifying" that you paid them the money to get into the club, but not saying much about your qualities as a person.

Legitimate application testing and certification programs have some form of documentation of their process and examples of their work. In contrast, programs that focus on outcomes ("We'll certify your app is secure!") rather than process should raise a red flag, Wysopal said.

And that's a problem for platform providers like Facebook, Apple and Google, which see large application stores as a key component of the overall value of their networks. Those organizations have been knocking over barriers to entry for application developers for years. But, on the question of the security of all those applications, the companies have offered more assurances than specifics.

"Zuckerberg's Law" may be immutable, I fear, especially given the billions of dollars in "targeted advertising" that are on the table for social networking and Web-based service providers with the right mix of features and data. That means that, in the end, it may fall to the press to expose fraud where it exists and federal regulators and privacy watchdogs to protect the interests of consumers and aspiring application developers alike.

Read more about social networking security in CSOonline's Social Networking Security section.

This story, "Phony Facebook application security tests? Say it ain't so, Zuckerberg" was originally published by CSO .
