Hyper-convergence or just hype?

It sure is hard to keep up in this fast-changing technology world. Hyper-convergence is the latest buzzword in data center circles. The third platform, the Internet of Things, object-based storage, erasure-coded disk, the software-defined data center – all of these trends are ushering in a new age in IT, and organizations everywhere are scrambling to keep up, hoping to run IT more efficiently than their competition. But which are the right technologies to adopt? Are these genuine trends that will help and stick around, or just buzzwords and marketing spin from technology vendors?

Most advancements in technology move the bar forward, improving the cost and speed of adoption and deployment. When computers first came about, for example, they were necessarily massive, since their circuitry was built on vacuum tubes. The invention of the silicon transistor changed all that, enabling computers to become much faster, smaller, and cheaper to build. The beauty of the trend toward faster and smaller is that it also makes products cheaper to manufacture, so as the technology improves, it gets less expensive to buy. Name one other thing in this world that gets cheaper to buy as it gets better over time!

Historically, technology implementation trends come in waves, oscillating between distributed and converged models. A typical technology adoption curve is cyclic in nature.

[Figure: the cyclic wave of IT technology adoption]

If you have been around the IT industry a while, you know what I mean. Take, for instance, the current trends of hyper-convergence and the cloud. The ultimate in hyper-convergence is the mainframe, and the ultimate in distributed computing is the cloud. Which is the best way forward? Will the cloud become the new standard in IT deployment, or will the trend toward hyper-convergence win out as folks realize they want more control of their systems and data? To judge, you need to understand the history of IT.

First came the mainframe, where compute power was centralized and end users accessed it over the network with cheap, dumb “green screen” terminals. Many businesses used time-sharing to gain access to a mainframe and run their applications in logical partitions, or within a virtual time slice allocated to their process. (Sounds a bit like the cloud, does it not?) Mainframes were very expensive, so they were available only to the select few organizations with the deep pockets required to buy them.

Then came the minicomputer, pioneered by Digital Equipment Corp., which put a small data center within reach of almost any business. Next came the PC boom, which moved computing horsepower closer to end users, and distributed computing was born. Microsoft and Novell drove the adoption of small server-based computing and clustering, which enabled Compaq and Dell to gain market share and helped make the minicomputer irrelevant. In those days, storage was purchased as part of the system, but folks found that this stranded data inside individual servers, so the storage area network (SAN) was born.

Desktop PCs were soon everywhere, networks had to keep up with all the traffic between PC users and servers, and the storage network was separate from the IP network, so multiple teams were required to manage everything. All the data on both the PCs and the servers needed to be backed up every night, and the multitude of distributed components, systems, and storage became a nightmare to manage. That’s when “utility computing” came into being, and terms like SaaS (software as a service) and storage as a service were coined. Companies moved from a purchase, build, and manage model to a lease-and-run model. All this happened right around the time the Internet took off and the dot-com era was born.

This was all before virtualization came into being, so businesses found that they needed servers and storage from particular vendors in order to connect to the compute utility. Network and hardware costs kept rising until everything finally came crashing down with a loud thud when the dot-com bubble burst.

Everyone went back to the buy-and-build model, spending billions to develop and streamline large, powerful, consolidated data centers. Then virtualization technology came along and changed everything. Virtualization enabled data center managers to commoditize and consolidate their systems and storage again, which made moving data and applications around easier, and the cloud was born. The virtualized cloud brought the service model back into vogue, so companies once again began outsourcing their IT. Now the cloud vendors are trying to reduce costs by consolidating even further, which brings us back to hyper-convergence.

The next logical step in this direction is back to the 1970s and the mainframe. Do you see how the industry oscillates between distributed and centralized (or network-based) computing? It’s like the heartbeat of a living creature. Throughout this entire period, the Internet, supercharged by the invention of HTTP and the World Wide Web, evolved to the point where it has almost become the brain of humanity. It has become humankind’s neural network, one that will eventually contain everyone’s thoughts and memories in the shape of tweets, videos, and pictures (with the added benefit of being searchable!).

Advancements in technology are changing the world. Look how fast we went from the PC to the laptop to the tablet and now the smartphone. Technology is the only area I know of where things get cheaper as they get smaller and more powerful. Processing power has moved from the mainframe to the edge, into small, powerful devices, yet the real power seems to be moving once again toward a hyper-converged, centralized compute model behind the network, where data is turned into information and stored for everyone to see.

I am not sure which mode of deployment will win out, or even whether the cycle of converged and distributed will continue. We may end up with a hybrid: hyper-converged data centers as the power behind a highly distributed, cloud-based network. Only time will tell.

