Opinion by Steve Duplessie

IT and the Ford Pinto: two-of-a-kind lemons

You can put lipstick on an IT pig, but under the hood, it's still a Pinto


Q: Is it ever going to be realistic to think we will have infrastructure within IT that lets the "IT as a service" mantra be real? It is tiring to hear vendors who are selling the same old junk now claim they have the answers to these problems. As far as I can tell, running a big box of blade servers doesn't do a single thing differently for me except that it has better packaging and takes up less room. Am I missing something?  -- B.E., Vancouver, B.C.

A: Yes, it will be realistic. Will we be alive when that happens? I don't know. I understand the frustration level, so let me try to boil down the cause and the road to a solution.

IT is being outsourced by businesses at an alarming rate. That happens because businesses don't like variables they can't control, which is logical. If the business can shove unknown variables onto someone else, it will. IT is a cost center -- and worse, a variable cost center. IT normally can't tell the business exactly what can and will be delivered at exactly what cost. IT can't easily adapt to large changes in business conditions. IT often can't even identify and fix the problems plaguing business users. It is not IT's fault (or at least most of it isn't); all of these issues stem from the fact that IT has to play the cards it was dealt.

No one would ever design an IT operation to look like most look today. With a blank slate and a blank check, most of the technological advancements we'd want would be possible. The issue is that we must make decisions today based on tomorrow's semiblind requirements but at the same time incorporate everything we did yesterday.

For example, if it were 1971 and the Ford Pinto had just rolled off the assembly line, and you could modify the Pinto any way you wanted with all the technological advancements that we have today, you'd still have the problem of owning a Pinto. You're in the same boat as an IT manager.

You would find out that the Pinto is not a good car and that things break. You would wait on the manufacturer to create the fix and implement it. Then, two years later, you would see you are burning more oil than gas, and, as such, you would need to find a Pinto specialist who can tweak the engine to get that Pinto to be a fine-tuned Luvmobile. Pretty soon that Pinto mechanic would have guaranteed himself a job fixing problems, and that would become a huge dependency problem for you. Then, the next year, you would be watching the news on TV and see Pintos blowing up when a stiff breeze hits their rear bumper. You would put all your efforts into solving that important tactical problem. You would then end up hiring more specialists and spending more money to keep that baby going. Meanwhile, all along there are constant advancements in automotive technologies that you hear about but that don't work on the Pinto.

The bottom line is that you put lipstick on the pig, but under the hood, it's still a Pinto.

IT is a Pinto. It has the best tires money can buy, the most advanced ceramic brake technology ever invented and a team of uberspecialists baby-sitting it that would make NASCAR jealous. But, no matter what you do to that Pinto, as long as you have to live with it, you will always be stuck with the limitations of that decision to buy a brand-new lemon.

The things that kill us in IT center on being forced to work within the confines of legacy infrastructure decisions and processes and, more importantly, the fact that any change that occurs (for good or bad reasons) will at best fix one tactical issue and will more than likely cause other problems. We hope the new problem will be better than the old or at least will become someone else's issue.

What we need is not hard to understand. How to get there isn't as easy.

The flow of IT begins with the business, which interfaces to IT/infrastructure via its application interfaces. That's where we come in. What needs to happen is that we remove the dependencies of the business, application and user on the IT infrastructure. If we can do that, then any change made beneath that line of demarcation happens without the business, application or user ever knowing -- and that is the real goal we should aspire to.

In IT management, we want to be able to rip, replace, upgrade, add, fail, reconfigure, provision, expand and contract anything in the infrastructure (a.k.a. change) and have those changes occur without anyone above that line knowing they are occurring. We want all the pieces below that line to adapt to the change dynamically, in real time. Now, stay with me.

The top box is THE BUSINESS, which connects to the middle box, which is IT/INFRASTRUCTURE, which connects to INFORMATION. That's it. Everything we ever talk about fits into one or more of those buckets.

Consider IT infrastructure in three subcategories. There is a server/processor layer, a connectivity layer and a data layer. Above this, the business just wants to run an application. The business doesn't care how that happens -- so stop telling it; that doesn't make things better. The business side only wants to know that it will happen (each time, every time) in a time frame that it desires. Below the black hole of doom (a.k.a. IT) sits INFORMATION. That's what that business wants. That's it. Everything else in the middle is just noise to the business. That's us, noise.

To become more "invisible" and less noisy, we need to be able to adapt to unforeseen (and often unreasonable or illogical) demands from above. To do that, we need some basic things to happen.

1. The data layer -- data management stuff like databases and file systems, storage devices, etc. -- is where most of the "dynamism" breaks down. And the data storage industry has more knob-twiddling specialists than anywhere else. We must deal with legacy data management issues and arcane process assumptions layered on top of legacy infrastructure decisions with arcane capabilities. The mantra here has been "Fix every problem by purchasing more hardware." Not fast enough? Buy more. Not big enough? Buy more. As capital costs decline, buying more becomes addictive and easy. Sort of like crack. Now that crack is cheap, it creates the next issue, which is operational. Managing a thousand storage boxes ain't easy. Managing the interdependencies of a thousand storage boxes connected to a hundred switches connected to a million virtual servers is impossible.

Storage boxes can't be autonomous. We need ONE box. That box needs to grow in any dimension, as we need it, and manage itself along the way. It needs to scale infinitely -- but never become more than one thing. It needs to heal itself, tune itself, replicate itself and recover itself. It needs to eliminate any of us from having even known about it. Let somebody upstream click the three buttons to instantly provision what they need. Let the application do it. Make it optimize itself for efficiency by not storing the same exact object more than X times locally and remotely. Make it be able to deal with component failures without slowing anything down and reconfigure itself every time something dies or something else is added. Do all those things and send me a status report every now and then.
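That "never store the same exact object more than X times" idea is essentially content addressing: identify each object by a hash of its contents, and a duplicate write becomes a no-op. Here is a toy Python sketch of the concept (the class name and in-memory layout are illustrative inventions, not any real array's design):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: the same object is physically kept once,
    no matter how many times it is written. Illustrative only."""

    def __init__(self):
        self._blocks = {}  # sha256 hex digest -> bytes
        self._refs = {}    # sha256 hex digest -> logical reference count

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blocks:
            self._blocks[key] = data              # first copy: store it
        self._refs[key] = self._refs.get(key, 0) + 1
        return key                                # handle the caller keeps

    def get(self, key: str) -> bytes:
        return self._blocks[key]

    def unique_bytes(self) -> int:
        return sum(len(b) for b in self._blocks.values())

store = DedupStore()
payload = b"quarterly-report.pdf contents"
k1 = store.put(payload)
k2 = store.put(payload)        # duplicate write: no new physical copy
assert k1 == k2
assert store.unique_bytes() == len(payload)
```

A real system would also chunk large objects, spill to disk and handle hash collisions, but the efficiency win comes from exactly this lookup-before-store step.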

There really has been only one type of storage array available in IT: a monolithic implementation that has a beginning and an end. It might be expandable by bolting on another giant frame (classic monolithic) or by bolting on a small rack of disks (modular monolithic), but in either case you will come to the end sooner or later. For an IT infrastructure to deliver on-demand, predictable services in any sort of reasonable, real-time way, the physical storage limitations of the "box" will have to cease to exist. Only when knob-twiddling specialists are no longer required, and scale in any dimension happens dynamically and autonomously, can we even start to talk about IT as capable of delivering real business services.
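One technique that lets capacity grow past the "end of the box" without re-placing everything is consistent hashing: objects and nodes are mapped onto the same hash ring, so adding a node relocates only the slice of objects it takes over. A minimal sketch, assuming nothing about any vendor's implementation (node names and vnode count are made up):

```python
import bisect
import hashlib

def _point(s: str) -> int:
    # Hash a string onto the ring (md5 for even spread, not security).
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: adding a node moves only a fraction of keys."""

    VNODES = 64  # virtual points per node, for smoother balance

    def __init__(self, nodes):
        self._ring = sorted(
            (_point(f"{n}#{v}"), n) for n in nodes for v in range(self.VNODES)
        )

    def node_for(self, key: str) -> str:
        points = [p for p, _ in self._ring]
        i = bisect.bisect(points, _point(key)) % len(self._ring)
        return self._ring[i][1]

    def add(self, node: str) -> None:
        extra = [(_point(f"{node}#{v}"), node) for v in range(self.VNODES)]
        self._ring = sorted(self._ring + extra)

keys = [f"object-{i}" for i in range(1000)]
ring = Ring(["array-1", "array-2", "array-3", "array-4"])
before = {k: ring.node_for(k) for k in keys}
ring.add("array-5")                       # capacity grows in place
moved = sum(before[k] != ring.node_for(k) for k in keys)
assert moved < 500                        # most objects never move
```

Growing from four nodes to five relocates roughly a fifth of the objects; a classic monolithic frame, by contrast, forces a migration event at every capacity boundary.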

2. The connectivity layer is pretty self-sufficient already. If the data layer is all set, then we just need to make sure the plumbing connecting that layer to the server/processor layer is dynamic as well -- and those technologies have existed for a while now.

3. The server/processor layer will be a bunch of CPU/memory subsystems that some uber-OS will carve up and gang together to satisfy the one-time demands of whatever application is connected to it. Once the task is performed, the assets are automatically liquefied and go back into their pools, awaiting their next calling.
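That carve-up-then-liquefy lifecycle is what a resource pool with scoped allocation gives you. A toy Python sketch (the class and numbers are illustrative, not any real scheduler's API):

```python
from contextlib import contextmanager

class ComputePool:
    """Toy CPU pool: carve resources out for one task, then 'liquefy' them
    back into the pool when the task completes. Illustrative only."""

    def __init__(self, cpus: int):
        self.free = cpus

    @contextmanager
    def allocate(self, n: int):
        if n > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= n           # carve the slice out for this task
        try:
            yield n              # the application runs with n CPUs ganged
        finally:
            self.free += n       # assets flow back, awaiting the next calling

pool = ComputePool(16)
with pool.allocate(4):
    assert pool.free == 12       # four CPUs are dedicated to the task
assert pool.free == 16           # released automatically when it finishes
```

The point of the context manager is that release is not a separate administrative step: the assets return to the pool even if the task fails, which is the behavior an autonomous uber-OS would need.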

So it seems storage's inability to disaggregate itself (control/access elements vs. capacity elements) is the most important tactical issue to solve in order to enable grander overall strategic IT capabilities. Some folks have taken nice steps in that direction, but no one is talking about completely eliminating all the issues that surface when change is mandated. Grid computing is a cool concept, but without a fully autonomous grid-storage infrastructure, it will never have wide-reaching ramifications.

What was the question again?

Send me your questions -- about anything, really -- to sinceuasked@computerworld.com.

Steve Duplessie founded Enterprise Strategy Group Inc. in 1999 and has become one of the most recognized voices in the IT world. He is a regularly featured speaker at shows such as Storage Networking World, where he takes on what's good, bad -- and more importantly -- what's next. For more of Steve's insights, read his blogs.

Copyright © 2007 IDG Communications, Inc.
