
Time to end the FBI/CSI study?

A serious topic deserves a serious survey, says Ira Winkler

By Ira Winkler
September 26, 2006 12:00 PM ET

Computerworld - The information security industry doesn't go more than a couple of weeks between the releases of surveys, most of which exist for marketing purposes rather than as reportage of major discoveries. Though venerable, the annual CSI/FBI Computer Crime & Security Survey is no exception -- and some of the claims it makes would, or should, stop a reasonable security pro in his tracks.

The survey is run by the San Francisco-based Computer Security Institute, which was founded in 1974. The survey began in the mid-1990s. In its early days, CSI got the FBI's Computer Intrusion Squad to co-sponsor its survey, providing a certain name cachet to a study by an organization with which few people were otherwise familiar.

While CSI offers useful training courses, education programs and major conferences, the organization feels compelled to keep conducting and releasing results from this poorly executed study. That's unfortunate, because a number of problems with the survey methodology compromise the credibility of an otherwise good organization.

The primary weakness of the CSI study is sample control -- that is, its sources aren't sound. The initial respondent pool is drawn from two sources: CSI membership rolls and the roster of paying attendees at its conferences and training events. CSI claims that it surveys 5,000 people, but that's simply how many surveys it sends out. One year, I personally received six of those 5,000 surveys, which are sent via both e-mail and snail mail; I can only imagine how many copies other people received -- the people who should be receiving the survey, that is. I'm not one of those people, yet I still get one or more copies every year. I could easily have made up data and returned the surveys to skew the results -- several times over.

Then there's the response rate to that mailing of 5,000 surveys. By my calculations, the response rate has historically hovered around 10%, with one exception. (This year's survey garnered 616 responses, or 12.32% of those solicited.) While that doesn't make the study faulty per se, it does mean the results carry an extremely high margin of error.

If you're hazy on margin-of-error calculations, Wikipedia (of all places) has a reasonably clear description of how margin of error is calculated and why some pollsters aren't comfortable with how the statistic is thrown around these days. Political polls, for example, often report a margin of error of +/-3%. That means that if a candidate has 54% of the vote according to the poll, the actual percentage could be as low as 51% or as high as 57%.
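For readers who want to see the arithmetic, here is a minimal sketch in Python (mine, not from the article or the survey) that turns a reported figure and its margin of error into the range described above; the margin_of_error helper shows the textbook formula for a proportion from a simple random sample, the same calculation the Wikipedia entry walks through.

import math

def margin_of_error(sample_size, share=0.5, z=1.96):
    # Textbook 95% margin of error for a proportion from a simple random sample.
    return z * math.sqrt(share * (1 - share) / sample_size)

# The article's example: a candidate at 54% in a poll with a +/-3% margin of error.
reported_share = 0.54
moe = 0.03
low, high = reported_share - moe, reported_share + moe
print(f"Plausible range: {low:.0%} to {high:.0%}")   # prints "Plausible range: 51% to 57%"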


