What the CIA Private Cloud Really Says About Amazon Web Services
CIO - Back in March, leaked news that the CIA was about to award a $600 million private cloud contract to Amazon Web Services kicked off a series of events and gossip worthy of a soap opera.
Much of the early discussion focused on the fact that Amazon was going to turn its back on its avowed public-cloud-is-the-only-true-cloud stance, swallow its pride and implement a single-client cloud environment. In many of the discussions (though not especially the one linked to above) there was a bit of a gleeful smirk about Amazon's about-face.
While this change of policy is interesting, it undoubtedly reflects two things. First, the contract is for a lot of money, so it's attractive from a commercial point of view. Second, and more important, the implicit endorsement of the CIA (the CIA!) choosing AWS provides Amazon a trump card in all discussions about security, trustworthiness and so on.
When a prospect raises the issue of AWS security, the sales rep is going to narrow his or her eyes, lean forward and, in a lowered voice, say, "Did I mention that the CIA trusts our cloud?" That endorsement is well worth the headache of running an environment dedicated to a single tenant.
IBM Bid for CIA's Private Cloud Was Lower, But Amazon's Was Better
Of late, much discussion has moved to IBM's protest of the award of the project to Amazon, specifically the fact that the CIA planned to award the project to AWS despite Amazon's bid being more than 50 percent higher than IBM's.
Forrester's James Staten provides good analysis of the protest, noting that IBM complained about how the RFP was scored on two items: one relating to how costs for a MapReduce service were calculated, the other to how much responsibility the cloud service provider (CSP) would take on for removing viruses from provided software. Both complaints were sustained. IBM also complained that the RFP scoring didn't take into account AWS service outages, which the CIA rejected as irrelevant.
Now, I don't profess expertise in the ins and outs of federal government procurement, but my read of the Government Accountability Office decision showed two things: IBM was grasping at straws by raising such minor issues as the basis of an award protest, and these issues are unlikely to change the final outcome of this award. AWS will emerge victorious.
However, to my mind, all this analysis misses the real import of the CIA's choice of AWS for its cloud environment. The implications of the decision illustrate what will drive cloud user deployment decisions in the future and what the future makeup of the cloud provider marketplace will look like.
These are the three things to note about the CIA decision.
1. In the Cloud, Easy-to-Deploy Applications Rule
It's no secret that AWS has grown fat on developers stampeding to its service, enticed by its ease of use and the rapid availability of resources. Meanwhile, most of Amazon's competitors provide a gussied-up hosting service with a smidgen of self-service. More critically, most of those competitors continue to sell to their established buyers: IT operations. The motivations, judgment criteria, and agility expectations of the two groups are completely different.
With its choice, the CIA came down, decisively, on the side of applications, so much so that it was willing to pay a 50 percent premium to buy the offering that best enables applications.
This decision should put a shiver up the spine of every cloud provider in the country. It's a clear message that application owners are driving deployment decisions, and the criteria that applications groups judge cloud computing by will be the important ones going forward.
2. For AWS, Smart Software Trumps Enterprise Gear
Just as Amazon targets a different user base with its offering, it pursues a different path in how it designs and operates its cloud environment. Most cloud providers tout the quality of the kit used to build their cloud: name-brand servers, routers, storage arrays and so on. Amazon, on the other hand, is notoriously frugal and refuses to pay premium prices for its gear. More critically, it uses very different design assumptions about what it takes to deliver a cloud computing environment.
Amazon assumes that it will be operating its offering at vast scale and can't afford to use designs that can't grow to support that assumption. As an example of how this plays out, unlike most cloud providers, Amazon uses Layer 3 networking rather than Layer 2, because the latter ends up tied to VLAN topologies that don't scale. James Hamilton, an AWS data center architect (or perhaps I should say the AWS data center architect), uses a series of interesting presentations to discuss high-scale infrastructure requirements and approaches.
The design approach goes beyond just using inexpensive kit to save money. It's driven by Amazon's recognition that, at large scale, hardware fails constantly, no matter how cheap or expensive. If you're going to run a robust, highly available environment, then you can't depend on the underlying hardware.
The obvious alternative is to use redundancy to avoid hardware-caused service outage. That, of course, requires more sophisticated coordination to ensure there are sufficient redundant resources available, that data is replicated to those resources, that CSP-provided services are operated on redundant devices to avoid service outages, and so on. Consequently, Amazon operates its inexpensive hardware with a layer of extremely smart software that coordinates the environment. Think of it as Amazon's Cloud Operating System.
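The coordination idea can be reduced to a toy sketch. This is not Amazon's actual software, just a minimal illustration of the pattern: a coordinating layer replicates writes across cheap, failure-prone nodes and routes reads around dead hardware, so callers never see a single-node outage.

```python
class Replica:
    """A storage node that can fail at any moment (hypothetical example)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.data = {}

class Coordinator:
    """Toy 'cloud operating system': replicates writes and routes reads
    around failed hardware so the service as a whole stays available."""
    def __init__(self, replicas):
        self.replicas = replicas

    def put(self, key, value):
        # Write to every healthy replica so the data survives a failure.
        for r in self.replicas:
            if r.healthy:
                r.data[key] = value

    def get(self, key):
        # Serve the read from the first healthy replica holding the key.
        for r in self.replicas:
            if r.healthy and key in r.data:
                return r.data[key]
        raise KeyError(key)

replicas = [Replica("a"), Replica("b"), Replica("c")]
coord = Coordinator(replicas)
coord.put("mission", "classified")
replicas[0].healthy = False      # one node dies; the service keeps working
print(coord.get("mission"))      # -> classified
```

The point of the sketch is that availability lives in the coordinating software, not in any individual box, which is exactly why the individual boxes can be cheap.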
In the figure below, the magic happens in the dark blue boxes, which is where the Cloud Operating System resides. In addition to all the software that coordinates AWS itself, this is where AWS services such as Elastic Compute Cloud reside.
Amazon Web Services' orchestration and services software (seen in the dark blue boxes) adds tremendous value.
Part of the way Amazon continues its astonishing pace of innovation is that it creates new services by combining existing services with new software overlays. For example, its DynamoDB service layers a redundant key-value storage capability on top of the existing EC2 instance service, enabling the storage service to leverage EC2's computing capability.
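The layering pattern looks roughly like this (an illustrative sketch, not AWS's code): a "new" service is pure software that provisions capacity from an existing low-level service and adds behavior on top.

```python
class InstanceService:
    """Stands in for an existing low-level service: 'give me a VM with a disk.'"""
    def provision(self):
        return {"disk": {}}          # a fresh instance's empty local storage

class KeyValueOverlay:
    """A new service built entirely as software layered over the old one,
    the way DynamoDB reuses EC2 capacity (hypothetical names throughout)."""
    def __init__(self, instance_service, n=3):
        # Reuse the existing service for capacity; add new behavior on top.
        self.nodes = [instance_service.provision() for _ in range(n)]

    def put(self, key, value):
        for node in self.nodes:      # redundant writes across instances
            node["disk"][key] = value

    def get(self, key):
        return self.nodes[0]["disk"][key]

kv = KeyValueOverlay(InstanceService())
kv.put("k", "v")
print(kv.get("k"))                   # -> v
```

Because the overlay writes no new infrastructure, only new software, each new service ships at software speed rather than hardware speed.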
The use of smart software to run a cloud environment clearly offers advantages in terms of scalability. It also makes it easier to create new services and applications. It can't have escaped the CIA's notice that the explosion of big data and next-generation applications is far better served by a smart, adaptable, agile infrastructure environment. In the clash of cloud design philosophies, the CIA clearly voted for the cheap but clever AWS approach.
3. AWS Ecosystem of Rich Services Attracts Developers
One of the main reasons developers embrace AWS is because of the richness of its services. This includes services that AWS itself provides, as well as a very large number provided by third parties. Developers can stitch applications together by combining these services with their own business logic.
The alternative for users with most other cloud providers is to implement those services on their own, in one of two ways: open source packages, which have the virtue of being easily downloaded, or commercial software offerings, which require a contractual arrangement prior to use. In either case, the burden of getting the required capability up and running falls to the developer. This significantly increases the effort of delivering and operating an application.
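To make the contrast concrete, here is a toy sketch (hypothetical stand-in services, not real AWS APIs) of the AWS-style model: the developer writes only the business logic and stitches it to provider-managed building blocks, rather than installing and operating a queue and an object store themselves.

```python
class QueueService:
    """Stands in for a provider-managed message queue (no servers to run)."""
    def __init__(self):
        self._items = []
    def send(self, msg):
        self._items.append(msg)
    def receive(self):
        return self._items.pop(0) if self._items else None

class StorageService:
    """Stands in for a provider-managed object store."""
    def __init__(self):
        self._objects = {}
    def put(self, key, body):
        self._objects[key] = body
    def get(self, key):
        return self._objects[key]

def process_orders(queue, storage):
    """The only code the developer actually writes: pure business logic."""
    while (order := queue.receive()) is not None:
        storage.put(f"order/{order['id']}", {**order, "status": "processed"})

queue, storage = QueueService(), StorageService()
queue.send({"id": 1, "item": "satellite time"})
process_orders(queue, storage)
print(storage.get("order/1")["status"])   # -> processed
```

Everything except `process_orders` is rented from the provider; in the self-hosted model, all of it would be the developer's to deploy, patch and keep running.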
The AWS ecosystem provides an enormous advantage for users, enabling them to deploy applications quickly. Staten notes that the only extended service discussed in the RFP is a MapReduce analytics capability; he goes on to say that even if other services aren't available in the CIA's private environment, it would be easy for the agency to incorporate public AWS services, given that it would already have AWS interfaces and tooling in place to work with the internal cloud.
It may be, however, that other AWS services (if not third-party ones) could be made available on the private cloud. If the clever software that makes up the AWS infrastructure management capability is in place, it seems that it would be possible to, say, make DynamoDB available as well.
The power of Amazon's ecosystem is, surprisingly, not widely discussed as one of the enormous advantages it has in the CSP battle. Just as Microsoft leveraged its developer network to dominance in the 1990s, so, too, does AWS leverage its ecosystem as a weapon against its competitors.
From the perspective of a user, the richer the ecosystem, the better. A rich ecosystem provides time-to-market advantages, greater flexibility in terms of suppliers and application architecture choices, and lower costs through supplier competition. One suspects that, even if additional services were not called out in the RFP, the CIA recognizes that a richer environment provides additional benefits and may have factored this into its decision-making process.
CIA-AWS Partnership: Cloud's 'Judgment of Paris' Moment
The RFP outcome reminds me of the so-called Judgment of Paris in 1976, when American and French wines were compared. To the surprise and horror of the French wine industry, which took it as given that its wines were far superior to those of the U.S., American wines came out on top. Despite repeated protests and retests (truly reminiscent, eh?), the results confirmed the initial judgment. The perception of the quality of U.S. wines forever changed.
There were knock-on effects as well. Fine European restaurants began to carry American wines, while U.S. wine connoisseurs added California wines to their cellars. One could argue that the Judgment of Paris played a role in the evolution of fine dining and food quality that one can see expressed today in "artisanal" foodstuffs, pop-up restaurants, food trucks, and on and on.
The Judgment of Paris represented a watershed event that forced an entire industry to re-evaluate its assumptions and behaviors. It had long-lasting, far-reaching effects. It's likely that the CIA private cloud RFP will come to be seen in that same light.
Bernard Golden is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. He is senior director of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. Follow Bernard Golden on Twitter @bernardgolden.