Opinion: To sue is human, to err denied

Security analyst Pete Lindstrom explains why software liability won't work

All of you who support software liability, do me this favor -- the next time you see a software developer, just walk up to him and slap him in the face! All this latent hostility is making you come up with really, really bad ideas.

You see, liability proponents often hide their discussions of liability behind "evil" big companies such as Microsoft, but the fact remains that they're targeting each and every software developer out there, because every developer codes vulnerabilities into their programs.

(By the way, I was just kidding about the whole slap in the face thing.)

Every once in a while, the idea of software liability arises in the trade press. Most recently, it was reported that Howard Schmidt suggested liability for individual developers. (This report was later amended to replace "liability" with "accountability.") And Bruce Schneier continues his crusade to make everyone liable for everything with a recent attack on software vendors in his Wired column. This idea is bad in so many ways; let me count them.

  1. It's unenforceable. Many liability proponents are the same people who publicly ridicule just about every other regulation on the books because there is no way to police it, and now they think they have a miracle cure for software development: Just tell everyone they have to develop more secure software.
  2. It will destroy innovation. Software liability is a huge win for the biggest software companies out there, because they'll be the only ones that can afford the risk and the insurance. Nobody will take chances on new applications or novel features with lawsuits lurking around every corner. At best, innovation slows to a crawl.
  3. It will destroy open-source. Imagine volunteering time for an open-source project and being sued because the code you wrote at 4 a.m. is no good. Schneier suggests a "Good Samaritan" rule for open-source developers because, presumably, they mean well. I don't understand why ego motive makes bad code allowable and profit motive doesn't. Regardless, attempting to discern anybody's motives amounts to a violation of privacy.
  4. It will create an Xbox Internet. Do you know how many applications and systems have "hooks" in them? In the new era of secure software for everyone (sung to the tune of the national anthem of the Soviet Union), nobody's software will integrate with anybody else's -- which, of course, leads to closed "black-box" (or "green-box") appliances so those large software companies can manage their risk. But hey, if that's what you strive for...
  5. It will double prices. Does anyone out there besides the Warez pirates think that software already costs too much? I know I do. And I don't want to pay twice as much to support the insurance industry and the legal profession, which will get rich while everyone waits for their own chance to sue, blind to their part in the problem as they eye the payday.
  6. It will force lock-in. Nobody in their right mind would create open file formats or plug-ins or any other interoperable functions. It wouldn't make economic sense to lose control over some set of inputs or program operations.
  7. And, finally -- it won't work. Plenty of software vulnerabilities today aren't even coding errors; they are features that expose systems in ways we actually want them to (which is why no large company can actually switch from Internet Explorer to Mozilla's Firefox).

Contrary to a strange but popular belief, software manufacturing has nothing to do with making a car (surprise!), so the two liability models can't be compared. The biggest difference is that people don't purposely abuse cars looking for defects the way they do with software. For software to be compromised, a third-party attacker is necessary, and he is aided and abetted by bug finders.

Nobody purposely creates bad software; developers are simply giving us exactly what we wanted. Of course, it may not have been what we needed. Here are the only two steps necessary for us to have a safer Internet without liability muddying things up:

  1. Make it a crime for individuals other than vendors and their contractors to identify and/or disclose new vulnerabilities in software. It's already a crime to actually exploit holes, but this "aiding and abetting" by the good guys has got to stop. Enforcing criminal penalties will significantly reduce the number of "suspects" in any case. In addition, it would allow enterprises to focus on areas like social engineering, insider threats and configuration weaknesses that are common in organizations and account for much higher losses.

    Vulnerability discovery and disclosure is a time-consuming, expensive facade that does nothing to eliminate "black hat" threats (unless you think next month there won't be any vulnerabilities disclosed). Security professionals should be focusing all their efforts on these areas rather than on the "comfort food" distractions of the techno-elites. How can they do this? That brings us to No. 2.
  2. Require software companies to provide a "software safety data sheet" that includes detailed information about how their software operates, the same points that host intrusion-prevention systems (HIPS) currently discover in the aftermarket. Look at "policies" being defined by HIPS from Cisco or Sana or Symantec or McAfee. Look at systrace information. Predefine known good processes and their components exhaustively (which only software vendors can do), and the bad can be blocked, as sketched below.
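
    To make that concrete, here is a minimal sketch in Python of how a vendor-published whitelist of known-good behavior could drive a block-by-default check, in the spirit of a systrace or HIPS policy. Everything in it is a labeled assumption: the "safety data sheet" schema, the Operation and SafetyDataSheet names, and the sample entries are illustrative inventions, not any vendor's real format.

        # Hypothetical "software safety data sheet": a vendor-published
        # whitelist of everything a program is supposed to do.
        # Anything not declared in advance is blocked by default.
        from dataclasses import dataclass

        @dataclass(frozen=True)   # frozen -> hashable, so it can live in a set
        class Operation:
            kind: str             # e.g. "fs-read", "fs-write", "net-connect"
            target: str           # file path or host:port

        @dataclass
        class SafetyDataSheet:
            program: str
            allowed: frozenset    # exhaustive set of known-good Operations

            def permits(self, op: Operation) -> bool:
                return op in self.allowed

        # The vendor predeclares known-good behavior exhaustively...
        sheet = SafetyDataSheet(
            program="/usr/bin/example-editor",
            allowed=frozenset({
                Operation("fs-read", "/usr/share/example-editor/config"),
                Operation("fs-write", "/home/user/documents"),
                Operation("net-connect", "updates.example.com:443"),
            }),
        )

        # ...and the enforcement point (the HIPS role) blocks everything else.
        for observed in (Operation("fs-read", "/usr/share/example-editor/config"),
                         Operation("net-connect", "evil.example.net:6667")):
            verdict = "permit" if sheet.permits(observed) else "block"
            print(f"{sheet.program}: {observed.kind} {observed.target} -> {verdict}")

    Real HIPS and systrace policies match patterns rather than exact strings, but the default-deny principle is the same: only predeclared behavior runs.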

Software liability may sound like a reasonable way to reduce frustration and force secure software. In reality, there is nobody out there intentionally creating defects; humans are simply imperfect, and this is a reflection of that. Can they get better? Sure. Can they become perfect? Not likely.

Pete Lindstrom is research director at analyst firm Spire Security LLC in Malvern, Pa. He can be reached at petelind@spiresecurity.com.
