Vendor disk failure rates: Myth or metric?
Disk problems contribute to 20% to 55% of storage subsystem failures
Computerworld - The statistics of mean time between failures (MTBF) and average failure rate (AFR) have gotten lots of attention lately in the storage world, especially with the release of three much-discussed studies devoted to the topic in the last year. And for good reason: Vendor-stated MTBFs have risen into the 1 million-to-1.5 million-hour range, equaling 114 to 170 years, a lifespan that no one is seeing in the real world.
The three studies are:
- Google Inc.'s "Failure Trends in a Large Disk Drive Population"
- Carnegie Mellon University's "Disk Failures in the Real World"
- University of Illinois' "Are Disks the Dominant Contributor for Storage Failures?"
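The "114 to 170 years" figure follows from simple arithmetic: dividing the hours in a year by the vendor MTBF gives the annualized failure rate (AFR) the spec implies, under the standard assumption of a constant failure rate. A minimal sketch (the function name and sample MTBF values are illustrative):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def implied_afr_percent(mtbf_hours: float) -> float:
    """Annualized failure rate (%) implied by a vendor MTBF,
    assuming a constant failure rate over the drive's service life."""
    return 100 * HOURS_PER_YEAR / mtbf_hours

for mtbf in (300_000, 1_000_000, 1_500_000):
    print(f"MTBF {mtbf:>9,} h  ->  implied AFR {implied_afr_percent(mtbf):.2f}%")
```

A 1 million-hour MTBF thus implies an AFR under 1%, several times lower than the rates the studies observed in the field.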
Indeed, "how do these numbers help a person who wants to evaluate drives?" says Steve Smith, a former EMC Corp. employee and an independent management consultant in Bellevue, Wash. "I don't think they can."
Even storage system maker NetApp Inc. acknowledges in a response to an open letter on the StorageMojo blog that failure rates are several times higher than reported. "Most experienced storage array customers have learned to equate the accuracy of quoted drive-failure specs to the miles-per-gallon estimates reported by car manufacturers," the company says. "It's a classic case of 'Your mileage may vary' -- and often will -- if you deploy these disks in anything but the mildest of evaluation/demo lab environments."
Study results
The upshot of the recent studies can be summarized this way: Users and vendors live in very different worlds when it comes to disk reliability and failure rates.
Consider that MTBF is a figure reached through stress-testing and statistical extrapolation, says Robin Harris, author of the StorageMojo blog. "When the vendor specs a 300,000-hour MTBF -- which is common for consumer-level SATA drives -- they're saying that for a large population of drives, half will fail in the first 300,000 hours of operation," he says on his blog. "MTBF, therefore, says nothing about how long any particular drive will last." In other words, MTBF does a very poor job of communicating what the actual failure profile looks like, he says.
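Harris's point that MTBF says nothing about any particular drive can be illustrated with the textbook constant-failure-rate (exponential) model that MTBF specs implicitly assume; real drives do not follow it, which is part of the problem. The parameter values below are illustrative:

```python
import math

MTBF_HOURS = 300_000   # the consumer SATA spec Harris cites
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def fraction_failed(years: float, mtbf_hours: float = MTBF_HOURS) -> float:
    """Cumulative fraction of a drive population expected to fail
    within `years`, under an exponential lifetime model."""
    return 1 - math.exp(-years * HOURS_PER_YEAR / mtbf_hours)

for years in (1, 3, 5):
    print(f"after {years} year(s): {100 * fraction_failed(years):.1f}% failed")
```

Under this model a 300,000-hour MTBF predicts only about 3% of drives failing in the first year, yet it reveals nothing about whether failures cluster early (infant mortality) or late (wear-out), the very shape the studies found vendors' specs to miss.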
It's like providing the average woman's height in the U.S. but without showing the numbers used to derive that average, Smith says. "MTBF became the standard because it was perceived as a simpler answer to the question of reliability than showing the data of how they arrived at it," Smith says. "It's an honest-to-God simplification."