Is virtualization just too complicated? Consider: In a recent poll of IT professionals at big companies, 37% said virtualization made their IT environments less complex. The rest -- almost two-thirds -- either said that virtualization made things more complex (27%), that it made no difference (13%) or that they just didn't know (23%).
We need to drive out that complexity, and fast -- but slowly.
Yes, fast. And yes, slowly.
Understand, this poll was of 286 senior IT people in the Fortune 1,000. The usual caveats about surveys apply: The sample was small and may not have been random. The margin of error is at least 6%. And this question wasn't even the main point of the survey, which was done by mValent, a vendor that sells tools for managing changes to applications and middleware.
But even taking all that into account, this data point is still a warning flag. Complexity translates into cost. Some of that cost eats into the ROI of a virtualization project from the start. But some is more insidious: Complexity makes a data center ever harder to manage -- and ever more fragile. That cost doesn't translate into dollars until things collapse.
This isn't the first warning flag we've seen, either. Last year, CA sponsored a survey of 800 IT organizations and found that 44% of those that had deployed server virtualization were "unable to say whether or not the deployment has been successful."
They literally didn't know how virtualization was working out. Why? Complexity.
We're good at managing real servers. We've got that nailed. But virtual servers can multiply fast. Very quickly, we can find that we're not sure how many virtual servers we have. We don't know how long it will take to back them up, to adjust software configurations and to track performance. We don't know which tools and techniques still work, and which don't.
But that's merely complexity inside the data center. Want real misery? Just let those problems leak out, in the form of applications that don't work or that run slowly for users. Suddenly, virtualization isn't about reducing energy costs or recapturing server-room floor space; it's about users who can't do their jobs, and managers who do not want their departments to be subjected to any more virtual anything, ever.
And an already complex technology initiative turns into a morass of business politics.
How can we avoid that nightmare? We can drive out complexity, but it will take time. That's where "fast, but slowly" comes in.
Look, we all want virtualization to work. Our server rooms are all too full, too hot, too expensive, too much of a mess. Trouble is, we don't have the experience with virtualization that we need. No one does. We can't buy it, we can't hire it, and there's only one real way to develop it: by starting small with pilot projects, then building them up slowly to figure out how this stuff really works.
Sure, we can train and plan -- and we should. But there are too many unknowns to train and plan for everything. A slow ramp-up lets us discover and kill problems as we go, reducing complexity at every step. Going slowly means fewer changes at once, fewer nasty surprises, fewer problems leaking out of the data center. It also means direct, desperately needed experience.
But to go slowly, we have to move fast. We can't wait for an ROI analysis or a line item in the budget to start getting that experience. We can start right now, today, with a tiny pilot that gets us moving.
See? Fast, but slowly. That's the way to beat virtualization complexity -- and get real results.
Frank Hayes is Computerworld's senior news columnist. Contact him at frank_hayes@computerworld.com.
This version of the story originally appeared in Computerworld's print edition.