Time to end the FBI/CSI study?

A serious topic deserves a serious survey, says Ira Winkler

The information security industry rarely goes more than a couple of weeks without the release of another survey, most of which exist for marketing purposes rather than to report major discoveries. Though venerable, the annual CSI/FBI Computer Crime & Security Survey is no exception -- and some of the claims it makes would, or should, stop a reasonable security pro in his tracks.

The survey is run by the San Francisco-based Computer Security Institute, which was founded in 1974. The survey began in the mid-1990s. In its early days, CSI got the FBI's Computer Intrusion Squad to co-sponsor its survey, providing a certain name cachet to a study by an organization with which few people were otherwise familiar.

While CSI offers useful training courses, education programs and major conferences, the organization feels compelled to keep conducting and releasing results from this poorly executed study. That's unfortunate, because a number of problems with the survey methodology compromise the credibility of an otherwise good organization.

The primary weakness of the CSI study is sample control -- that is, its sources aren't sound. The initial respondent pool is drawn from two sources: CSI membership rolls and the roster of paying attendees at its conferences and training events. CSI claims that it surveys 5,000 people, but that's simply how many surveys it sends out. One year, I personally received six of those 5,000 surveys, which are sent via both e-mail and snail mail; I can only imagine how many copies went to the people who actually should be receiving the survey. I'm not one of those people, yet I still get one or more copies every year. I could easily have made up data and returned the survey to skew the results -- several times over.

Then you've got the response rate to that mailing of 5,000 people. By my calculations, the survey's response rate has historically hovered around 10%, except for one year. (This year's survey garnered 616 responses, or 12.32% of those solicited.) While this doesn't necessarily mean the study is faulty per se, it does mean that its results carry an extremely high margin of error.

If you're hazy on margin-of-error calculations, Wikipedia (of all places) has a reasonably clear description of how the margin of error is calculated and why some pollsters aren't comfortable with how the statistic is thrown around these days. Political polls, for example, often report a margin of error of +/-3%. That means that if a survey gives a candidate 54% of the vote, the actual share could be as low as 51% or as high as 57%.

Margin of error is, of course, a function of the size of the sample group. With a sample as small as the CSI study's, compared with the population of computer security professionals the survey claims to represent, the margin of error is all but impossible to pin down -- and that's not even counting folks who don't belong in the survey group to begin with, such as me.
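
For what it's worth, here is a minimal sketch of the textbook margin-of-error calculation, assuming the one thing the CSI survey can't offer: a simple random sample of the population it claims to describe. On that assumption the formula yields a best-case floor, not a real error bar; the two sample sizes below are the roughly 1,000 respondents of a typical national political poll and the 616 responses CSI collected this year.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion, valid only for
    a simple random sample of size n (p=0.5 maximizes the margin)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national political poll:
print(f"n=1,000: +/-{margin_of_error(1000):.1%}")  # roughly the familiar +/-3%

# The CSI survey's 616 responses, *if* they were a random sample:
print(f"n=616:   +/-{margin_of_error(616):.1%}")
```

None of the formula's assumptions hold for the CSI pool, so whatever the real uncertainty is, it's larger than these figures -- and, more to the point, unquantifiable.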

Worse still, respondents within that group of 5,000 are self-selecting and can choose which survey questions they wish to answer. If those respondents were representative of all companies, then the margin-of-error question would be less pertinent. However, when you have a response rate of around one in every 10 people queried, you have to ask yourself why some people responded and others didn't. Are those who responded proud of how well their organizations did, while the others are embarrassed? Were certain respondents using the survey to vent some steam about something happening in their organizations?

Only the first CSI report flagged potential problems with sample size. That report then went on to say that at least the problem provided some discussion points! Subsequent reports don't bother to mention the potential margin-of-error problem at all, simply presenting response numbers and letting readers make their own calculations.

Anyone who acts on such data without taking those limitations into account is making a bad business decision. Granted, caution with statistics is just good business policy, but a truly scientific survey would at least flag troubling data-sample issues and discuss the weaknesses and limitations they impart to the study's results.

Once you dig into the report, the sampling problem pays off in "comedy gold." For instance, this year's edition states that the average company loses $167,713 to computer crimes -- a claim so far from reality that I don't know why any chief information security officer would bother to read further.

CSI's study goes on to say that computer security losses have been declining over the past four years. All you'd have to do is read any current newspaper, and you'd know how absurd that statement is. I don't know anyone involved in a corporate security program who could take that claim seriously.

For fun, let's apply some sound risk management theory to that "average corporate loss" of $167,713, a number derived, according to the survey, from a total of $52,494,290 in losses estimated by 313 respondents. Note once again that respondents could skip questions they didn't wish to answer. Interestingly, just about half of this year's 616 respondents skipped this one.

Now, consider that many large companies have incident-response teams made up of a dozen or more people. A department of that size, properly staffed and reasonably compensated, would require a budget of almost $2 million per year in salaries alone. Do you believe that companies would spend that much money to investigate $167,713 in losses? Would yours? And would companies such as General Electric, Citigroup, Washington Mutual or General Motors have sustained only $167,713 in losses from a high-profile, wide-ranging breach?
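
A back-of-the-envelope sketch makes both halves of that comparison concrete. The first calculation simply reproduces the survey's own average from its own totals; the second prices a dozen-person incident-response team at an assumed fully loaded cost of $160,000 per head, a figure I'm supplying for illustration rather than taking from the survey.

```python
# Reproduce the survey's "average corporate loss" from its reported totals.
total_reported_losses = 52_494_290      # dollars, reported by 313 respondents
respondents_answering = 313
average_loss = total_reported_losses / respondents_answering
print(f"Average loss per responding company: ${average_loss:,.0f}")

# Price a modest incident-response team against that figure.
team_size = 12                          # "a dozen or more people"
cost_per_person = 160_000               # assumed salary plus benefits (illustrative)
team_payroll = team_size * cost_per_person
print(f"Annual payroll for a 12-person team: ${team_payroll:,.0f}")
print(f"That payroll is {team_payroll / average_loss:.0f} times the survey's average loss")
```

Companies don't fund response teams at more than ten times the losses those teams exist to contain, which is exactly the point of the exercise.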

Even more comical, the CSI study claims a 68% decline in average organizational losses to computer crime over the past two years -- a jaw-dropper by any measure. Again, the study presents that average without any margin of error, which in this case is actually well more than 68%. (For instance, 630 people answered the loss-estimate question on the 2005 survey, versus 313 this year -- a shift that by itself would skew any year-over-year comparison.)
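
To see how easily a 68% "decline" can fall out of nothing but sampling noise, consider the small simulation below. The loss distribution is entirely hypothetical -- a heavy-tailed lognormal, chosen only because a handful of large breaches typically dominate loss totals -- and the two sample sizes are the survey's own: 630 then, 313 now.

```python
import numpy as np

rng = np.random.default_rng(42)

# A purely hypothetical population of per-company annual losses:
# heavy-tailed, so a few very large breaches dominate any total.
population = rng.lognormal(mean=10, sigma=2.5, size=100_000)

# Draw many pairs of "survey years" from the SAME unchanging population,
# using the survey's respondent counts, and record the apparent change
# in the average loss from one year to the next.
changes = []
for _ in range(1_000):
    avg_then = rng.choice(population, size=630).mean()   # 630 answered in 2005
    avg_now = rng.choice(population, size=313).mean()    # 313 answered this year
    changes.append((avg_now - avg_then) / avg_then)

low, high = np.percentile(changes, [5, 95])
print(f"90% of simulated 'trends' fall between {low:+.0%} and {high:+.0%}")
```

On numbers this skewed, the simulated average lurches from one draw to the next even though nothing in the underlying population has changed; the exact output depends entirely on the made-up distribution, which is precisely the problem with reading a trend into the survey's averages.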

In the course of consulting, I might normally give people advice on how to apply a study or what messages to take away from it, but my best advice in this case is reserved for CSI. At this point, the organization is only hurting its reputation with the survey as conducted. CSI needs to respect its audience and give us a report that incorporates proper survey techniques and accurately describes the limits of its polling. The survey's results are so far from what the average member/subscriber/reader experiences on a regular basis, or reads in every other source of information, that it's time to call the study quits and preserve CSI's otherwise good reputation.
