Vendor disk failure rates: Myth or metric?
Disk problems contribute to 20% to 55% of storage subsystem failures
Computerworld - The statistics of mean time between failures (MTBF) and average failure rate (AFR) have gotten lots of attention lately in the storage world, especially with the release of three much-discussed studies devoted to the topic in the last year. And for good reason: Vendor-stated MTBFs have risen into the 1 million-to-1.5 million-hour range, equaling roughly 114 to 171 years, a lifespan that no one is seeing in the real world.
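The arithmetic behind those headline lifespans is simple division; a minimal sketch (assuming a 8,760-hour year, i.e. 24 × 365):

```python
# Convert a vendor-quoted MTBF in hours to an implied lifespan in years.
HOURS_PER_YEAR = 24 * 365  # 8,760

def mtbf_years(mtbf_hours: float) -> float:
    """Years of continuous operation implied by an MTBF figure."""
    return mtbf_hours / HOURS_PER_YEAR

print(round(mtbf_years(1_000_000)))  # -> 114
print(round(mtbf_years(1_500_000)))  # -> 171
```

That is the sense in which a 1 million-hour MTBF "equals" more than a century: it is a statistical rate for a population, not a promise about any one drive, as the studies below make clear.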
Three studies over the past year on MTBF include the following:
- Google Inc.'s "Failure Trends in a Large Disk Drive Population"
- Carnegie Mellon University's "Disk Failures in the Real World"
- University of Illinois' "Are Disks the Dominant Contributor for Storage Failures?"
Indeed, "How do these numbers help a person who wants to evaluate drives?" says Steve Smith, a former EMC Corp. employee and an independent management consultant in Bellevue, Wash. "I don't think they can."
Even storage system maker NetApp Inc. acknowledges in a response to an open letter on the StorageMojo blog that failure rates are several times higher than reported. "Most experienced storage array customers have learned to equate the accuracy of quoted drive-failure specs to the miles-per-gallon estimates reported by car manufacturers," the company says. "It's a classic case of 'Your mileage may vary' -- and often will -- if you deploy these disks in anything but the mildest of evaluation/demo lab environments."
Study results

The upshot of the recent studies can be summarized this way: Users and vendors live in very different worlds when it comes to disk reliability and failure rates.
Consider that MTBF is a figure that's reached through stress-testing and statistical extrapolation, says Robin Harris, who writes the StorageMojo blog. "When the vendor specs a 300,000-hour MTBF -- which is common for consumer-level SATA drives -- they're saying that for a large population of drives, half will fail in the first 300,000 hours of operation," he says on his blog. "MTBF, therefore, says nothing about how long any particular drive will last." In other words, MTBF does a very poor job of communicating what the actual failure profile looks like, he says.
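One way to see why the spec misleads is to convert MTBF into an annualized failure rate, which is what operators actually observe. The sketch below assumes the constant-hazard (exponential) failure model under which MTBF figures are typically derived; real drives wear out, so real-world rates run higher:

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8,760

def implied_afr(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF, assuming a
    constant (exponential) failure rate -- the model behind the spec."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A 300,000-hour consumer SATA MTBF implies an AFR of about 2.9%...
print(f"{implied_afr(300_000):.1%}")
# ...while a 1,000,000-hour MTBF implies an AFR under 1%.
print(f"{implied_afr(1_000_000):.1%}")
```

Those sub-1% implied rates for enterprise-class drives are exactly what the field studies contradict: as NetApp's response above concedes, observed failure rates run several times higher than the quoted specs suggest.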
It's like providing the average woman's height in the U.S. but without showing the numbers used to derive that average, Smith says. "MTBF became the standard because it was perceived as a simpler answer to the question of reliability than showing the data of how they arrived at it," Smith says. "It's an honest-to-God simplification."