Will we ever use flash drives for enterprise backup?

We are living in a time of technological ferment, if you will, and there are multiple IT approaches in the works, all with the mission of dealing with the ‘data deluge.’ Obviously, as data grows, there’s more to back up. And the various strategies for dealing with it include deduplication, block-level backups, snapshots for recovery and so forth. But there’s one I haven’t heard discussed too much: what about using flash drives for backup?

My money says that a few years from now, flash arrays will be commonly used backup targets.  But, that’s just me. Whatever path storage technology takes, the one thing that seems certain is that the data deluge won’t stop, and as the song says, “If it keeps on raining, the levee’s going to break.” Are we getting near a breaking point? And is flash the solution?

Obviously, I’m not talking about backing up your home photos onto a flash thumb drive; I’m talking about enterprise-class backup. Could you really back up a 50 TB or 500 TB data environment onto an enterprise flash array?

The first objection is cost.  Flash drives cost more, though some startups are claiming they can now sell large flash systems for about what standard high-performance disk would cost. But you don’t usually back up to high-performance disk, and the cost gap between flash drives and high-capacity SATA drives of the kind used for backup will likely remain for a while.

For the sake of argument, then, let’s say cost is no object. Would backing up to flash be a good idea?

Backup is all about writing data to a target, and while flash systems are faster at reading data than writing it, they are still plenty fast at writing it. So a flash array would make a terrific backup target in that sense, but would you be able to feed the beast? Backup starts with reading the data from the source, and if your primary storage isn’t also flash based you wouldn’t be able to send data nearly fast enough to match the ingest rate of the flash array. At least not on a one-to-one basis. But, as an aggregate target, it might work really well. Assuming you had the network bandwidth, you could run far more simultaneous backups to a flash array than a disk-based array, which would shrink your overall backup time.
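To make the aggregate-target idea concrete, here’s a rough back-of-envelope sketch in Python. The stream rate and stream counts are purely illustrative assumptions, not measurements of any real array:

```python
# Back-of-envelope estimate: how parallel backup streams shrink the
# backup window. All numbers here are illustrative assumptions.

def backup_window_hours(total_tb, stream_mbps, streams):
    """Hours to back up total_tb when `streams` sources each feed the
    target at stream_mbps (MB/s), assuming the target and the network
    can absorb the full aggregate rate."""
    total_mb = total_tb * 1024 * 1024          # TB -> MB
    aggregate_mbps = stream_mbps * streams     # combined ingest rate
    return total_mb / aggregate_mbps / 3600    # seconds -> hours

# A 50 TB environment, each disk-based source streaming ~200 MB/s:
print(backup_window_hours(50, 200, 8))    # 8 streams: ~9.1 hours
print(backup_window_hours(50, 200, 32))   # 32 streams: ~2.3 hours
```

The point isn’t the exact figures; it’s that a target fast enough to absorb many simultaneous streams turns per-source read speed from a bottleneck into a parallelism problem.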

So far, so good. But flash has another challenge: limited write endurance. You may not realize it, but the consumer flash in a device like your iPod is only good for about 3,000 program/erase cycles per cell. Flash wears out as you erase and rewrite it. And backup is all about writing data over and over again.

Computerworld recently published an excellent survey article by Robert L. Scheier on the storage landscape, New storage technologies to deal with the data deluge. Some of the technologies discussed make you go hmmmm, such as disk drives filled with helium, because it creates less resistance than air. Who knew that could make a difference? Others start to sound like science fiction, such as hard drives created by “self-assembling molecules and nano-imprinting.” Self-assembling? That’s getting a little too close to grey goo nightmare scenarios of the bots taking over the world.

Scheier’s article begins with a telling quotation from Douglas Soltesz, vice president and CIO at Budd Van Lines, who says: "If you gave me an infinite amount of storage, I could fill it.” He then notes that if you doubled his capacity, his users would just expect to store twice as much!

Enterprise-class flash, according to Scheier, is currently good for about 30,000 program/erase cycles. But is that enough to use it as a backup target? How much system life would you get out of that?
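As a rough sanity check, here’s a simple endurance estimate. The array size, daily write volume, and the assumption of perfect wear leveling are all illustrative; real-world write amplification and uneven wear would shave a lot off these numbers:

```python
# Naive lifetime estimate for a flash array used as a backup target.
# Assumes perfect wear leveling spreads writes evenly across all cells,
# so treat the results as an optimistic upper bound.

def years_of_life(array_tb, writes_per_day_tb, pe_cycles):
    """Years until each cell has, on average, been rewritten
    pe_cycles times."""
    full_overwrites_per_day = writes_per_day_tb / array_tb
    days = pe_cycles / full_overwrites_per_day
    return days / 365

# Hypothetical 100 TB array absorbing a full 50 TB backup every day:
print(years_of_life(100, 50, 30_000))   # enterprise flash, ~30k cycles
print(years_of_life(100, 50, 3_000))    # consumer-grade, ~3k cycles
```

Under these (generous) assumptions even 30,000 cycles lasts far longer than any array’s service life; the real question is how badly write amplification and hot spots eat into that headroom in practice.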

There are design strategies for minimizing the problem, such as wear leveling and techniques that reduce write amplification, along with other tricks that start to make my head spin as I read about them (and unfortunately, my head is not encased in friction-free helium!). But the bottom line is that current flash technology might not be well suited for use as a backup target because of its limited write endurance.

But fear not! Help is on the way.  Scheier learns from an IBM researcher that newer kinds of flash using phase-change memory technology will not only be faster but will handle “at least 10 million read-write cycles.”  From 30,000 to 10 million? That’s a leap. 

With new technologies in the works, higher capacities and ever-decreasing price points, the way I see it, a few years from now it will be common to use flash arrays as backup targets, and probably a few years after that we’ll all wonder how we ever used anything else.

The data does keep on raining down, but technology keeps making the levee higher and stronger.
