In the last decade we have seen some amazing innovations in data storage. Disk-based capacity has grown from the hard drives of the ’80s, which could barely store a few megabytes, to today’s laptop drives packing gigabytes and now terabytes into a 2.5-inch form factor. You can now hold trillions of bytes of data in one hand on a single hard drive.
But data growth is not slowing down. The Internet and the Web have spawned exponential growth in stored information, on a scale no one could have imagined only a few years ago. YouTube, Facebook, Instagram, Twitter, LinkedIn, and the zillions of websites on the Internet are having a dramatic effect on data growth and the technology we need to store it all. Anyone on the planet with a smartphone can now create and share information. Businesses are seeing data storage requirements double every couple of years, with no end in sight to this rate of change. Data management has become a cottage industry, as consultants work with companies to implement data deduplication, storage tiering, thin provisioning, just-in-time storage provisioning, and other forms of data lifecycle management.
Massive data growth is pretty good news if you are in the data storage business, but it is a costly part of doing business for most organizations. That’s why those in the know are looking for new ways to manage data so they can get out in front of the deluge. Innovation is the American way, and as such, the United States will once again be a driving force in the creation of the technologies that will help the world adapt to the new normal of massive, pervasive data.
What piqued my interest in this subject today was a recent article I read in the Wall Street Journal by Robert Lee Hotz, “Future of Data: Encoded in DNA,” which cited a new report in the journal Science. It seems a team at Harvard, led by geneticist Dr. George Church, was able to encode an entire book on genomic engineering, bit by bit, into chemically synthesized DNA. Yup, DNA. Who would have ever thought that DNA would be the next big thing in data archives? The article states that the researchers believe it would take only 1.5 milligrams of DNA (the weight of a small mosquito) to store one petabyte of data.
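The encoding idea itself is surprisingly simple. As described in the Science paper, the team wrote one bit per DNA base: a 0 becomes an A or a C, a 1 becomes a G or a T. Here is a minimal Python sketch of that bit-to-base mapping; the alternation rule and function names are my own illustration, not the team’s actual pipeline, which also breaks the data into short, addressed fragments for synthesis and sequencing.

```python
# A minimal sketch of the one-bit-per-base encoding described in the
# Science paper: a 0 bit becomes an A or a C, a 1 bit becomes a G or a T.
# Which of the two candidate bases to use is a free choice; here we
# alternate by position so the same base never appears twice in a row
# (real synthesis pipelines impose stricter rules than this).

ZERO_BASES = "AC"  # either base can encode a 0 bit
ONE_BASES = "GT"   # either base can encode a 1 bit

def bits_to_dna(bits: str) -> str:
    """Encode a bit string such as '0100' as a DNA base sequence."""
    out = []
    for i, bit in enumerate(bits):
        pool = ZERO_BASES if bit == "0" else ONE_BASES
        out.append(pool[i % 2])  # alternate within the candidate pair
    return "".join(out)

def dna_to_bits(seq: str) -> str:
    """Decode a base sequence back to bits: A/C -> 0, G/T -> 1."""
    return "".join("0" if base in ZERO_BASES else "1" for base in seq)

if __name__ == "__main__":
    bits = "".join(f"{byte:08b}" for byte in b"Hi")
    strand = bits_to_dna(bits)
    print(bits)    # 0100100001101001
    print(strand)  # ATACGCACATGCGCAT
    assert dna_to_bits(strand) == bits
```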
The implications of this news are astounding. If this innovative way of storing data becomes commercially viable, we could fit the entire Library of Congress in a test tube. All the data housed on the Internet could be stored in a small closet. The term big data would take on new meaning, as applications are developed to search and mine all the information on Earth in a single location. It may take years to develop, but I am amazed at how someone thinking outside the box always finds a way to solve extremely hard problems.
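You can sanity-check those claims using nothing but the article’s own figure of one petabyte per 1.5 milligrams. A quick back-of-the-envelope scaling (the weight comparisons are my own, purely illustrative):

```python
# Back-of-the-envelope scaling using only the article's figure:
# 1 petabyte of data per 1.5 milligrams of DNA.

MG_PER_PETABYTE = 1.5  # the article's claimed density

def mg_needed(petabytes: float) -> float:
    """Milligrams of DNA required to store the given number of petabytes."""
    return petabytes * MG_PER_PETABYTE

print(mg_needed(1))          # 1.5 mg        -- one petabyte, a mosquito's weight
print(mg_needed(1_000))      # 1,500 mg      -- an exabyte in about 1.5 grams
print(mg_needed(1_000_000))  # 1,500,000 mg  -- a zettabyte in about 1.5 kilograms
```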
The most astounding aspect of this, though, is the ability of DNA molecules to withstand time. Once the data is encoded into DNA, the molecules can last for millions of years. Forget worrying that the magnetic material in your tape media may deteriorate, or that your tape library and drives will become dated. This technology calls to mind that cool science fiction movie by Steven Spielberg, “A.I. Artificial Intelligence,” in which a future race finds a humanoid robot still intact after eons have passed, and they are able to bring him back to life and ask questions. The future is HERE.