The storage landscape has shifted continually over the last five years, and the next year should prove no different. With the transformation of proprietary storage systems leading the way, technology advancements that lower latency and increase SSD capacity will only accelerate this shift.
From SMBs to enterprise IT data centers, the vast array of available storage options creates opportunities for growth and success over the next year; the challenge lies in finding the ideal approach for your business and use case. It is paramount today that business and technology leaders not only be able to store massive amounts of data securely, but also be able to make sound business decisions based on that data.
Keeping this in mind, here are six storage technology trends to keep an eye on in 2016:
1. Unstructured data sets and workloads will continue to grow in tandem with new data generated by IoT. Almost every endpoint in the world is now collecting or measuring data: everything from your smart TV and tablet to RFID devices is metering and tracking consumer behavior. All of this monitoring and "machine" data will continue to grow and expand. On top of that, these data streams require endpoints where they can be stored and accessed, and those endpoints, various configurations of servers and storage, only add to the growth of data sets through the backups, monitoring data and security logs they create. Add all this together and the world of unstructured data has only one direction to go: upward.
2. A movement away from storage administration and toward data management. The focus shifts from how and where to store data toward how to analyze and manage that data and its metadata.
3. Investigation of moving storage controllers and services into containers, embedding controller microcode in packaging frameworks such as Docker. In the past, controller microcode and its associated file systems lived on dedicated physical hardware, but with the evolution of containerization that code set could be "packaged" much as applications are. It could then move freely around, just like an application, rather than being bound to a particular set of hardware. When you think about it, all of the critical functions controllers are responsible for (role-based access control, data tiering, a GUI for central administration, and even replication) could be handled by multiple containers.
4. Continued adoption and maturity of object storage, especially in enterprise and HPC environments.
5. Advancements in hardware technology such as NVMe and flash. The resulting lower latency and tighter integration will reduce the complexity of storage performance management. This transformation, particularly in storage arrays, will remove the I/O contention of HDD-based arrays and the limitations of spinning disks.
6. Backup and archiving will continue to prove a challenge as primary data sources keep growing. Simplifying backup workflows will be important not only for data recovery but also for compliance.
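The container idea in trend 3 can be made concrete. The Compose file below is purely an illustrative sketch (the service and image names are hypothetical, not real products), showing how a controller's critical functions could each run as an independent container rather than living on one physical appliance:

```yaml
# Hypothetical sketch only: image names are illustrative, not real products.
# Each storage-controller function from trend 3 runs as its own container,
# so the "controller" can be scheduled anywhere, like any other application.
version: "3"
services:
  rbac:              # role-based access control
    image: example/storage-rbac:1.0
  tiering:           # data tiering engine
    image: example/storage-tiering:1.0
  admin-gui:         # GUI for central administration
    image: example/storage-admin:1.0
    ports:
      - "8443:8443"
  replication:       # replication service
    image: example/storage-replication:1.0
```

Because each function is its own container, any one of them could be restarted, scaled or moved to different hardware independently, which is exactly the freedom the trend describes.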
I think the most important of the six trends above is data management. Data scientists play a crucial, central role today in ensuring that an organization knows how to properly ingest, process and analyze data. Hiring data scientists who are empowered to work across business units, setting strategies and goals for what information is distilled from the raw data, should be a top priority for businesses of all sizes in 2016. Influential data specialists are emerging at a growing rate, and hiring one of them can have a dramatic effect on any organization.
In the past, developers and application owners went to the central IT organization to answer the company's fundamental questions -- where do I place my data and how do I manage it? The IT group then engaged the storage team, typically made up of highly specialized storage administrators, and those admins decided where to place the data based on workload requirements (IOPS, throughput, response times and/or availability).
But today, those fundamental questions of the past and the requirements they produced are no longer of supreme importance. Instead, today's requirements should be based on data type and the corporate policies that govern that data. Managing data is far more complicated than simply choosing which silo or platform will meet performance requirements. Should that trend come to fruition, it will change every industry for the better in measurable ways.
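The shift from workload-driven to policy-driven placement can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the data classes, policies and tier names are all hypothetical.

```python
# Toy sketch of the old vs. new placement models described above.
# All data types, policies and tier names here are hypothetical.

# Old model: the storage admin picks a platform purely from workload
# requirements such as IOPS and response time.
def place_by_workload(iops_needed: int, latency_ms: float) -> str:
    if iops_needed > 50_000 or latency_ms < 1.0:
        return "all-flash-array"
    return "hdd-array"

# Newer model: placement follows the data type and the corporate policy
# that governs it (encryption, retention, target tier).
POLICIES = {
    "customer-pii":  {"encrypt": True,  "retention_years": 7, "tier": "secure-object-store"},
    "sensor-stream": {"encrypt": False, "retention_years": 1, "tier": "object-store"},
    "backup":        {"encrypt": True,  "retention_years": 3, "tier": "archive"},
}

def place_by_policy(data_type: str) -> str:
    return POLICIES[data_type]["tier"]

print(place_by_workload(80_000, 0.5))   # all-flash-array
print(place_by_policy("customer-pii"))  # secure-object-store
```

The point of the contrast: the first function only asks "how fast?", while the second asks "what kind of data is this, and what rules govern it?" -- which is the question the author argues should now come first.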
This article is published as part of the IDG Contributor Network.