Security extortion? When legit disclosure morphs into a shakedown

How a company handles people telling it about its own security holes says an awful lot about how that company views itself.


The essence of security is not trusting that people will do the right thing. Firewalls, deadbolts and armed guards exist to slow down or stop bad guys, not to encourage good acts from good guys. With that in mind, let's look at how companies today handle security holes and ask ourselves if this, alone, isn't proof that humans are crazy.

Companies beg people who find security holes — whether they are cyberthieves, security researchers, journalists or rank-and-file end users — to report them to the company itself. Looked at from a moral, altruistic perspective, this makes sense, assuming the intent is to plug the security hole rather than to encourage the population to exploit it.

But that is a flawed assumption, and it's flawed for members of every group mentioned. Of course thieves aren't interested in boosting security. Security researchers are interested, but they have businesses to run; their job is to make things safer for their clients. Journalists' job is to inform their readers. (Some of us certainly can hold off publishing — for a very brief period — to give the company a chance to make repairs, but if no meaningful information comes back, our first obligation — to our readers — kicks back in.) And end users, who are often consumers, generally want the holes fixed so that they are personally better protected.

The problem is that most companies treat a hole that hasn't been widely publicized as something with no great time pressure. Unless the report comes accompanied by "You have a week to fix this before we go public," the sad truth is that action rarely happens.

Here's where things get dicey. What if the "or else" is not "we'll go public," but "give us money"? Security consultants are used to getting paid for their services, but they need to negotiate terms before any work is performed. Performing work and then approaching a client to demand payment is bad business practice. And then the "or else we'll go public" is no longer a community service. It's now extortion.

Consider what just happened to Groupon. According to this piece, a security firm reported a security hole to Groupon, expecting payment. To be fair, Groupon all but asked for it. It encourages people to report certain kinds of security holes in exchange for an unspecified "bounty."

The researcher reported the hole to Groupon in the manner described, but Groupon reportedly objected because the researcher had briefly published it. If the point of Groupon's bounty program is to learn of security holes, why should it matter that the researcher also told others? (Nitpick: Because Groupon did not specify an amount, it could have simply given the researcher one dollar and been done with it.)

Most security researchers and journalists are quite willing to delay publication — so as to not endanger the company unnecessarily — as long as they are convinced the company is aggressively and rapidly trying to create a fix. It's when the companies plant themselves in the cone of silence that researchers and journalists conclude they are being ignored and opt to publish.

I truly wish this weren't the case. I wish that major vendors and retailers would want to fix security holes to make operations more secure and not merely as a way to minimize how embarrassing a story might make them look. Unfortunately, that's not how the world we live in operates. If I reached out to a company and said, "Hey. We found this huge security hole on your site and in your mobile app. Please fix it or else we'll do absolutely nothing about it," I would not expect a lot of IT resources to be applied to the hole.

Some security researchers view their security findings as potential revenue. At one level, that is perfectly valid. That researcher is demonstrating to companies their level of technical acumen and creativity in finding holes that have eluded others. That's an ideal sales pitch.

But subtlety and nuance — traits not often found among security professionals, given the nature of security — get little attention, and the pitches quickly become borderline illegal. At one extreme, we have "Look at all of these holes I found in your system. If you're interested in retaining my services to find other holes in some of your other products, here's my contact information." At the other, we have "Look at all of these holes I found in your system. Give me $XXXX or I'll tell the world about them. You have 24 hours to decide."

That second approach is frighteningly similar to pitches I have seen security investigators (and cyberthieves) send to vendors and retailers. It's what I would expect from cyberthieves, but security researchers should know better. It's quintessential blackmail: someone has uncovered embarrassing information about you and will keep quiet about it if you pay him.

That said, here's a general warning. A year ago, several of my Computerworld columns focused on what we found when we asked security researchers to perform penetration tests of quite a few well-known mobile apps. Most fared rather poorly. The most striking takeaway was how few companies performed their own pen tests to find holes before others discovered them. A year later, all indications are that the number of companies performing those tests remains similarly tiny.

That warning? I'm doing another round of pen tests right now. If you haven't run your own pen tests, it's probably a really good time to start. If you don't have the time, fear not. We may end up doing it for you.

This article is published as part of the IDG Contributor Network.
