Computerworld - It's a cruel world out there in the data center. Nothing lasts forever, especially not mechanical devices with fast-moving parts, such as disk drives and printers. It would be very useful if we could predict when something might break or, at the very least, determine which of two similar products would be less likely to break in a given period. The answers are MTBF, short for mean time between failures, and the closely related MTTF, short for mean time to failure. Both are measures of reliability, defined statistically as the number of hours a component, assembly or system will operate before it fails.
MTBF sounds simple: the total time measured divided by the total number of failures observed. For example, let's wring out a new generation of 2.5-in. SCSI enterprise hard drives. We run 15,400 initial units for 1,000 hours each (thus our tests take a little less than six weeks), and we find 11 failures. The MTBF is (15,400 x 1,000) hours/11, or 1.4 million hours. (This is not a hypothetical MTBF; it represents current drive technology in 2005.)
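The drive-test arithmetic above can be sketched in a few lines of Python (the figures are the illustrative values from the example, not real test data):

```python
# MTBF = total operating hours observed / total failures observed
units = 15_400      # drives under test
hours_each = 1_000  # hours each unit ran (a little under six weeks)
failures = 11       # failures observed during the test

total_device_hours = units * hours_each  # 15,400,000 device-hours
mtbf = total_device_hours / failures     # 1,400,000 hours
print(f"MTBF: {mtbf:,.0f} hours")        # prints "MTBF: 1,400,000 hours"
```

Note that the rating comes from many units run briefly, not one unit run for decades, which is why the next paragraph's caveat matters.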
What does this calculation really mean? An MTBF of 1.4 million hours, determined in six weeks of testing, certainly doesn't say we can expect an individual drive to operate for 159 years before failing. MTBF is a statistical measure, and as such, it can't predict anything for a single unit. We can use that MTBF rating more accurately, however, to calculate that if we have 1,000 such drives operating continuously in a data center, we can expect one to fail every 58 days or so, for a total of perhaps 19 failures in three years.
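The fleet-level reasoning above works out as follows, again as a quick sketch using the article's numbers:

```python
# With N identical drives running continuously, the fleet sees one
# failure roughly every MTBF / N hours of wall-clock time.
mtbf_hours = 1_400_000
fleet_size = 1_000

hours_per_fleet_failure = mtbf_hours / fleet_size  # 1,400 hours
days_per_fleet_failure = hours_per_fleet_failure / 24  # about 58 days

three_years_hours = 3 * 365 * 24  # 26,280 hours
expected_failures = three_years_hours / hours_per_fleet_failure  # about 19
print(f"One failure every {days_per_fleet_failure:.0f} days; "
      f"about {expected_failures:.0f} failures in three years")
```

This assumes a constant failure rate (the flat middle of the "bathtub curve"), which is the standard simplification behind MTBF ratings.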
The MTBF figure for a product can be derived from laboratory testing, actual field failure data or prediction models such as MIL-HDBK-217 (the Military Handbook for Reliability Prediction of Electronic Equipment, published by the U.S. Department of Defense).
MIL-HDBK-217 contains failure-rate models for various parts used in electronic systems, such as integrated circuits, transistors, diodes, resistors, capacitors, relays, switches and connectors. These failure-rate models are based on a large amount of field data that was analyzed and simplified by the Reliability Analysis Center and Rome Laboratory at Griffiss Air Force Base in Rome, N.Y. (Instructions for downloading MIL-HDBK-217 are at www.t-cubed.com/faq_217.htm.)
Kay is a Computerworld contributing writer in Worcester, Mass. You can contact him at firstname.lastname@example.org.