Sometimes I find myself thinking about work in the strangest of places: I was watching the dragster championships at Pomona recently and I was struck by an odd comparison with one of the challenges of IT.
Many years ago, when I first took an interest in dragsters, they were comparatively simple machines - really just a large engine strapped to a simple frame with huge tires. What I saw this time was a massive evolution in what it takes to do drag racing competitively. Engines and transmissions are rebuilt between each run, and the collection and management of data has become an essential requirement, whether for setting up the engine or for building a picture of the traction available along the track by gathering information from the team's early runners and feeding it through to the racers yet to run. To succeed at this level, every system needs to work perfectly. It also needs to interact perfectly with all the other systems. Minute deviations cost hundredths of a second and make the difference between winning and elimination.
The comparison that became apparent to me was that, while the early uses of computing in the enterprise were pretty simple, we have consistently increased complexity with every technology we have adopted. We have now reached the point where desktop systems have become extremely complex, even before we consider the infrastructure that ultimately feeds into them. Every new technology we adopt brings with it an additional set of prerequisites and dependencies that interact with those we already have. Consequently, we have to handcraft systems to make them work with all the other systems in the organization. Then, of course, when any part of the system changes or another technology is added, all bets are off as to whether things will continue to work. Apart from fragility and high ongoing costs, we are also building systems that trap us into keeping obsolete technologies - something that has dogged IT for decades.
On the positive side, we are on the verge of two very significant, but related, technology shifts: desktop virtualization and cloud. Both represent such a fundamental change in the way that applications are delivered to users that they require a complete re-think of client computing and how it is managed. As part of that re-evaluation, we need to find ways that the new management techniques virtualization and cloud provide can help advance our legacy systems. In this way we can bring our older systems forward and ultimately make their replacement easier. We can achieve this by consolidating duplicate infrastructure systems or by moving to higher degrees of standardization. Either approach allows us to eliminate some of the special cases that add disproportionate complexity.
Highly fragile, highly interdependent systems can work for a drag racing team, where cars are only expected to work for four seconds (and sometimes do not last that long). Continuing their use in IT, where we need systems to be both reliable and long lasting, makes no sense. As we re-architect for desktop virtualization and cloud, we have the opportunity to design out some of the complexity that has built up over decades. By doing so, we not only get the benefits of newer technologies, we reduce the drag from legacy infrastructure.
Martin Ingram is an independent commentator on the virtualization space.