Manage Cloud Computing With Policies, Not Permissions
CIO - In my presentation on hybrid cloud computing at Interop New York, I began (as I often do) with a review of the NIST definition of the five characteristics of cloud computing.
I think the National Institute of Standards and Technology has done a great service in codifying its definition, and I rely on it to communicate the key characteristics of cloud computing - and, more importantly, to draw the distinctions between cloud computing and the traditional IT approach to infrastructure management.
The first characteristic in NIST's definition relates to self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
In the session, I described this as being analogous to how one orders a book from Amazon: Fill out a Web page with necessary information and click a button to order. Minutes later, a link to the ordered computing resource is available to the user, who can then begin using it. Key to this is automation. The cloud orchestration software handles the request for the computing resource; there's no need for human support or intervention.
A session attendee immediately shot up his hand and said, "Well, somebody has to review requests, because developers will just request resources and use up capacity." I responded that the orchestration software should have a policy governing resource provisioning to ensure the request is appropriate, budgeted and within the scope of the requestor's job duties.
I got the feeling, however, that my response didn't satisfy him, that he couldn't really accept an environment that didn't have someone vetting resource requests. I think this suspicion of automated resource provisioning is widespread - and deeply rooted within IT operations organizations.
If IT Has Plentiful Resources, Why Scrutinize Who Uses Them?
This exchange crystallizes a critical aspect of cloud computing and why the topic seems so emotionally charged. For years, central IT has been responsible for rationing scarce resources, apportioning them among users and inevitably frustrating many. Application groups, confronted by the very real likelihood that needed resources won't be available, have every reason to request more than they need and hoard any they receive.
To ensure appropriate allocation, IT sets up checkpoints where individuals review and evaluate every resource request, applying judgment to determine which requests pass muster and are rewarded with access to computing resources. Those requests are passed on to operations, whose personnel perform the manual operations necessary to install and configure resources. Those whose requests fail this assessment either lick their wounds or devise a stratagem to bypass the gatekeeper.
This state of affairs has existed for so long that many IT operations personnel have come to assume it represents the natural order of things, with an ongoing and inevitable charter to evaluate and judge user requests for resources.
However, the rise of cloud computing has shattered the basis of that assumption. First, cloud providers have automated the provisioning process so that no manual intervention is required to actually install and configure resources. No physical access or work is required to obtain computing capability. Second, cloud providers offer what the Berkeley RAD Lab's Report on Cloud Computing calls "the illusion of infinite capacity" - the notion that computing capacity isn't a scarce resource that requires rationing but, rather, something available in whatever amounts a user may desire.
Consequently, much of the traditional rationale for reviewing resource requests has fallen by the wayside. Nevertheless, old habits die hard. Despite the fact that resources are no longer scarce, many IT organizations still insist that someone has to evaluate every request.
Use Rules Engine to Decide Who Gets Cloud Resources
This won't go on much longer. More to the point, this can't go on much longer.
First, the question is no longer, "Can this request be fulfilled?" The question is, "Should this request be fulfilled?" In other words, it's not an issue of rationing scarce resources. It's an issue of whether someone's desire for resources is appropriate.
It's silly to put a human in the middle of that evaluation. The organization should have a set of rules about who's able to request resources, and those rules should be captured in a policy engine that can apply them automatically. After all, that's what the human is doing - applying a set of organizational rules. Why not define the rules and apply them as part of the provisioning process?
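To make this concrete, here is a minimal sketch of what such a policy engine might look like. Everything in it - the `ProvisionRequest` fields, the project roster, the three example rules and their thresholds - is an illustrative assumption, not any specific product's API; a real engine would pull rules and rosters from organizational systems of record.

```python
# Sketch of automated policy checks in a provisioning flow.
# All names and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    requestor: str
    project: str
    vcpus: int
    monthly_budget_remaining: float
    estimated_monthly_cost: float

# Assumed project roster lookup; in practice this would query HR or
# project-management systems.
ROSTER = {"alice": {"web-store"}, "bob": {"analytics"}}

def is_assigned(req):
    """Rule: the requestor must be assigned to the project."""
    return req.project in ROSTER.get(req.requestor, set())

def within_budget(req):
    """Rule: the estimated cost must fit the remaining budget."""
    return req.estimated_monthly_cost <= req.monthly_budget_remaining

def within_size_limit(req):
    """Rule: routine requests stay under an assumed size cap."""
    return req.vcpus <= 16

POLICIES = [is_assigned, within_budget, within_size_limit]

def evaluate(req):
    """Return (approved, names_of_failed_rules)."""
    failed = [p.__name__ for p in POLICIES if not p(req)]
    return (not failed, failed)

req = ProvisionRequest("alice", "web-store", 4, 500.0, 120.0)
print(evaluate(req))  # (True, [])
```

The point of the sketch is that each "judgment" a human reviewer applies turns out to be an explicit, testable rule - exactly the kind of thing software evaluates faster and more consistently than a gatekeeper can.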
As I said, the question now is, "Should this request be fulfilled?" Once the policy is applied - a rule that says a developer can request resources for a project he or she is assigned to, for example - the issue becomes the "should." That's a decision best made by the resource user, and made in the context of, "Does this resource use support the business and its objectives?"
Price is perhaps the most efficient mechanism ever devised for making resource allocation decisions, and it lets users reach an effective judgment quickly. Providing transparent resource costs allows user organizations to make up their own minds about whether a given use of resources is justified.
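Transparent pricing can be as simple as computing and surfacing an estimated cost at request time, so the requesting team - not a central reviewer - decides whether the spend is worth it. The price table and instance sizes below are made-up assumptions purely for illustration.

```python
# Sketch: surface an estimated monthly cost at request time so the
# requesting team can judge whether the spend is justified.
# Prices and sizes are assumed, not any provider's actual rates.
HOURLY_PRICE = {"small": 0.05, "medium": 0.10, "large": 0.40}  # USD/hour

def estimated_monthly_cost(size, count, hours_per_month=730):
    """Estimate cost for `count` instances of `size` running all month."""
    return HOURLY_PRICE[size] * count * hours_per_month

# Shown to the requestor before they confirm the request.
print(f"Estimated cost: ${estimated_monthly_cost('medium', 3):.2f}/month")
```

Shown a number like this up front, a project team can weigh the cost against the business value of what it's building - which is the "should" decision the article argues belongs with the user.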
The issue facing many IT groups is that current processes reflect historical circumstances that no longer apply, which renders those processes inappropriate and obsolete. Cloud providers, meanwhile, offer powerful evidence that there is another, more convenient way to operate. The challenge for IT groups is to move quickly to update their processes to reflect what is possible today rather than continue operating with yesterday's obsolete methods.
If You Block Users' Access to Resources, They'll Find Them Another Way
Organizations struggling to make this transition should consider the following:
The world has changed. It's now obvious that human approval is no longer necessary for routine requests. More to the point, it's no longer tenable. Businesses are now creating and running applications that are directly tied to financial interactions with customers and that experience erratic workloads. A process that imposes lengthy delays in resource availability because of manual steps is unacceptable in a world where digital engagement with customers is the norm. Existing processes need to be updated to operate effectively in this new world.
Codify your policy and capture it in a rules engine. I constantly hear about the need for "someone to review requests." When I ask why, it turns out a fairly straightforward set of heuristics is being applied. If a credit application can be assessed automatically, surely a request for a virtual machine isn't too complex to evaluate against a set of organizational requirements.
Automate the process and reduce exceptions to the minimum. A process that requires requesting resources from someone creates a power dynamic and poses a real threat of organizational tension. After all, how would you feel if you had to contact your bank and ask permission every time you wanted to buy something with your credit card? It's vital that everyday resource requests be automated. Make it clear that only unusual requests require face-to-face discussions.
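The "reduce exceptions to the minimum" advice can be sketched as a simple routing step: requests under assumed routine thresholds are approved automatically, and only outliers land in a human review queue. The thresholds, request IDs, and queue here are hypothetical.

```python
# Sketch: auto-approve routine requests; queue only exceptions for
# human review. Thresholds are assumed values for illustration.
ROUTINE_VCPU_LIMIT = 16
ROUTINE_COST_LIMIT = 1000.0  # USD/month

manual_review_queue = []  # stands in for a real ticketing system

def route(request_id, vcpus, est_monthly_cost):
    """Auto-approve routine requests; escalate everything else."""
    if vcpus <= ROUTINE_VCPU_LIMIT and est_monthly_cost <= ROUTINE_COST_LIMIT:
        return "auto-approved"
    manual_review_queue.append(request_id)
    return "queued for human review"

print(route("r-001", 4, 120.0))    # auto-approved
print(route("r-002", 64, 5000.0))  # queued for human review
```

The design point is that humans still see requests - but only the unusual ones, so review effort goes where judgment actually adds value.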
Recognize the imperative automation imposes. If you fail to meet the benchmark external providers offer, users will abandon your offering. That presents a real risk of stranded investment, which can make for unpalatable economics. It's a new world, and you have to be prepared to support it. Falling back on shibboleths and anecdotes about irresponsible users is a dangerous game. Instead of substituting your judgment for theirs, enable them to make their own decisions with supporting facts and economic information.
Put another way: Let users make their own mistakes rather than shielding them from the consequences of their own decisions. If they're wrong, they'll learn. If they're right, your intervention wasn't necessary.
Bernard Golden is senior director of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden.