The inaugural IDC Government Insights report, released earlier this year, paints a revealing picture of how the U.S. Federal Government is spending and planning to spend information technology (IT) dollars on cloud solutions.
The report projects that federal cloud services spending will reach $1.7 billion in FY2014. Boosted in part by U.S. Federal CIO Cloud Council efforts over the past few years to usher agencies away from standalone computing to the Cloud, we are starting to see tangible evidence not only of federal agency cloud spending, but also of results.
A key report finding is that while overall cloud spending will accelerate, federal agencies will, in the near term, continue to leverage different cloud types (private, hybrid, public) based on their specific agency needs and concerns. The leading category of government cloud service is private, but public and hybrid clouds continue to gain traction. As a result, many federal agencies will be employing a multi-cloud architecture.
Introduction of multi-cloud architectures challenges data governance
Hybrid clouds provide an attractive balance between the flexibility of innovative public cloud technologies and the stringent security requirements for agency data and systems that may require on-premise private clouds. The need to retain ownership of sensitive data, along with requirements for security and responsiveness, will lead many agencies to continue to focus on private clouds, which explains why the IDC Government Insights report names private as the leading category of government cloud services.
However, agency CIOs are increasingly being mandated to move their clouds beyond internal data centers – at least for some applications – to gain the significant scalability and pay-as-you-go computing resources associated with a public cloud. Agencies are looking at public clouds from government service providers, or even at hyperscaler cloud solutions such as Amazon Web Services. This blend of private and public clouds creates a hybrid cloud environment that can optimize costs and seize opportunities while still mitigating risks. In this reality, data must travel seamlessly across multiple clouds – from private to public – while giving IT the necessary control to centrally manage, govern, and transport data across discrete cloud resources.
Need for seamless data management and portability across all clouds
Providing agencies with flexibility in which cloud path they choose – and how fast they advance down that path – requires arming agencies with the ability to manage data across all of their public, private and hybrid cloud environments.
Agencies must be able to maintain data governance no matter which cloud model they implement. It is often quite easy to get data into the cloud, but not nearly as straightforward to get that same data back out. From that perspective, agencies are now looking at methodologies to establish a universal data platform that can provide data portability across public and private clouds.
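As a rough illustration of what such a universal data platform implies – a sketch of the idea, not of any particular product – the following Python snippet abstracts private and public cloud stores behind one interface so that an object can leave a cloud as easily as it entered. All class and function names here are hypothetical:

```python
from abc import ABC, abstractmethod

class CloudStore(ABC):
    """One interface over any cloud backend, private or public."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def delete(self, key: str) -> None: ...

class InMemoryStore(CloudStore):
    """Stand-in for a real private- or public-cloud backend."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]
    def delete(self, key):
        del self._objects[key]

def migrate(key: str, source: CloudStore, dest: CloudStore) -> None:
    """Move one object between clouds; deleting from the source
    makes the exit path as explicit as the entry path."""
    dest.put(key, source.get(key))
    source.delete(key)

private, public = InMemoryStore(), InMemoryStore()
private.put("dataset-001", b"agency records")
migrate("dataset-001", private, public)   # out of the private cloud
migrate("dataset-001", public, private)   # ...and back again
```

The point of the abstraction is that the `migrate` function never needs to know which cloud model sits behind either store – exactly the governance property agencies are seeking.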
There are three general methods of addressing data portability – via standards-based interoperability, via open source infrastructure that is resistant to lock-in, or via vendor solutions that allow a storage operating system to operate across multiple clouds with data portability between on-premise and off-premise facilities.
Standards-based interoperability has largely focused on newer methods of data access beyond the usual SAN and NAS. One key example is the Cloud Data Management Interface (CDMI), a standard developed by the multi-vendor Storage Networking Industry Association (SNIA) for object-based storage management. Clouds that support CDMI can exchange data seamlessly and share common capabilities for metadata support and management interfaces.
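As a rough sketch of how a CDMI client interacts with a cloud, the snippet below assembles the URL path, HTTP headers, and JSON body for creating a CDMI data object. The container and object names are hypothetical placeholders, and nothing is actually sent over the network:

```python
import json

# CDMI is a RESTful HTTP interface; requests identify themselves with
# a specification-version header and CDMI-specific content types.
CDMI_VERSION = "1.0.2"

def build_cdmi_put(container, name, value, metadata=None):
    """Build the path, headers, and JSON body for creating a CDMI
    data object; returns them without performing any I/O."""
    headers = {
        "X-CDMI-Specification-Version": CDMI_VERSION,
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }
    body = {"value": value, "metadata": metadata or {}}
    path = f"/{container}/{name}"
    return path, headers, json.dumps(body)

# Hypothetical container and object names for illustration.
path, headers, body = build_cdmi_put(
    "reports", "fy2014.txt", "cloud spend summary",
    metadata={"classification": "public"})
```

Because the metadata travels inside the same standardized JSON envelope as the data, any CDMI-conformant cloud can interpret it – which is precisely what makes the standard useful for portability.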
Open source infrastructure stacks are another option being explored by agencies, as open source software is generally more resistant to data lock-in and can frequently support multiple hardware and cloud vendors. OpenStack is a key example of this type of solution.
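As an illustration of how this plays out in practice, OpenStack client tooling such as the openstacksdk typically describes each cloud endpoint in a clouds.yaml configuration file, so the same tools and scripts can target an agency-run private cloud or an external provider by switching a cloud name. Every name, URL, and region below is a hypothetical placeholder:

```yaml
# Hypothetical clouds.yaml fragment for openstacksdk clients.
clouds:
  agency-private:
    auth:
      auth_url: https://keystone.agency.example.gov:5000/v3
      project_name: records-archive
      username: svc-storage
      password: "<from-vault>"
      user_domain_name: Default
      project_domain_name: Default
    region_name: us-east
```

Adding a second entry for a provider-hosted OpenStack cloud would let the same automation address both environments, which is the lock-in resistance the open source approach promises.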
The final method for ensuring data portability is implementing a storage operating system that can run on hardware from multiple vendors and that can be virtualized and abstracted to run on private and public storage clouds. In some cases, solutions have been developed that allow agencies to run traditional storage arrays on-premise while deploying off-premise arrays that use public cloud compute facilities against replicated data sets – creating a “best of both worlds” hybrid cloud model. By using a traditional storage operating system, agencies preserve proven storage efficiency, availability, and scalability, while minimizing risk by standardizing on a known, consistent storage operating system.
Data portability across all clouds will support extensive customer choice in applications, technologies, and cloud partners. As agencies address these data challenges, they can turn their attention from choosing a single cloud approach to a multi-cloud perspective focused on effectively managing the interaction between public, private, and hybrid cloud models.