Measuring security

By Eddie Schwartz, netForensics Inc.
July 15, 2004 12:00 PM ET

Computerworld - Almost a year ago, I wrote a commentary in Computerworld asserting that true information security return on investment (ROI) can't be proved and that quantitative risk-analysis models yield inaccurate or incomplete data.

Although I still believe that information security ROI is an elusive metric, further research has convinced me that CIOs can effectively measure the actual performance of information security investments and the linkages among performance, cost and opportunity.


Here are some thoughts on ways CIOs can use technology investments to gather and analyze performance data on information security technologies and controls, measure deviations from an accepted risk and cost baseline, and ensure that future investments provide measurable and quantifiable benefit to the enterprise.


Robert S. Kaplan and David P. Norton, in their article "Putting the Balanced Scorecard to Work," made a simple yet poignant statement: "What you measure is what you get." Information security organizations traditionally haven't been held to the same performance metrics as other areas of IT. For example, a network administrator is paid to provide service levels related to uptime, latency reduction and cost per gigabyte. A help desk manager is rated based on the number of "first-time-final" calls and factors such as queue hold time. What are we measuring for information security?


Security problems impose a high level of distraction on most areas of IT. According to an Intel Corp. white paper, the company applied more than 2.4 million software patches last year. Bug fixes are released on average every 5.5 days, and the window for reacting to vulnerabilities keeps getting shorter. According to the "Symantec Internet Security Threat Report" (Volume IV), 39% of vulnerabilities are exploited within six months of discovery, and 64% within 12 months.


Yet, according to Gartner Inc., computer networks without a comprehensive vulnerability management program are 5% to 7% patched. The bottom line here is cost. Given these statistics, if organizations can establish a risk baseline and find ways to monitor and manage that baseline to acceptable levels of deviation, there is opportunity for real cost savings.


Ways to measure security performance: Risk, time and cost


To build a meaningful performance management framework for information security, we must start with variables we can measure: threat level, vulnerability level, asset valuation, problem-resolution time and cost in terms of losses or savings.
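As a rough sketch of how these variables might be captured per monitored asset, the record below is illustrative only; the field names, value scales and units are assumptions made for this example, not part of any published framework.

```python
# Illustrative sketch only: a record of the measurable variables named above.
# Field names, scales and units are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class SecurityMeasurement:
    asset_id: str
    threat_level: float         # observed threat activity against the asset, 0.0-1.0
    vulnerability_level: float  # severity/exposure of open vulnerabilities, 0.0-1.0
    asset_value: float          # business value of the asset, in dollars
    resolution_hours: float     # time taken to resolve the underlying problem
    cost: float                 # realized loss (or savings, if negative), in dollars
```

Tracking records like this over time is what makes it possible to compare each reporting period against the accepted baseline described next.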



For a system that is fully deployed and in a production/operational state, assume that we have built it as securely as possible, accepted the residual risks we couldn't resolve because of complexity or cost, and therefore start with a risk baseline of zero on Day 1. As time passes, events will cause deviations from that baseline: a new vulnerability will be discovered, and new patches and attacks (threats) will follow; there may be unauthorized configuration changes or internal attempts to compromise the confidentiality or integrity of the system. Taken together, these factors represent deviations from the risk baseline and can be quantified using the following equation:
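As an illustrative stand-in for that equation, here is a minimal executable sketch; the multiplicative form, the inputs and the example numbers are assumptions for illustration, not the author's actual formula.

```python
# Hypothetical deviation-from-baseline score, illustrative only.
# Each unresolved issue contributes in proportion to how likely it is to be
# exploited (threat x vulnerability) and how much is at stake (asset value).

def deviation_score(open_issues, baseline=0.0):
    """open_issues: iterable of (threat_level, vulnerability_level, asset_value)
    tuples, one per unresolved event (new vulnerability, unauthorized change,
    attempted attack, ...). baseline is the accepted residual risk on Day 1."""
    score = baseline
    for threat, vuln, asset_value in open_issues:
        score += threat * vuln * asset_value
    return score

# Example: two open issues some weeks after deployment.
issues = [
    (0.6, 0.8, 50_000.0),  # actively exploited flaw on a high-value server
    (0.2, 0.4, 10_000.0),  # unauthorized configuration change on a minor host
]
print(deviation_score(issues))  # 24800.0 -> deviation from the Day-1 baseline of 0
```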



