SimpliVity attacks the ‘unbearable complexity’ of IT

CEO Doron Kempel claims rivals can’t touch the company’s ‘hyperconvergence 3.0’ model and promises big savings even over Amazon Web Services.

Hyperconvergence is a relatively new buzzword, but Westborough, Mass.-based SimpliVity is already boasting of creating version 3.0 of this emerging IT model. In this installment of the IDG CEO Interview Series, SimpliVity CEO Doron Kempel talked with IDG US Media Chief Content Officer John Gallant about how SimpliVity’s OmniStack outperforms competitors like Nutanix, and claimed that customers deploying workloads on SimpliVity can save 22% to nearly 50% compared to running them on Amazon Web Services. Kempel also talked about SimpliVity’s partnerships with Cisco, VMware and Lenovo, and explored why it took nearly four years to bring the company’s vision of hyperconvergence to reality.

Why was SimpliVity formed? What was the mission?

If you look at IT infrastructure over the last 25 years since the mainframe, we’ve seen what you could refer to as the Big Bang. A lot of technologies emerged and created this very long laundry list of products that you, as an IT professional, need to buy from many vendors, then train many people on, and then manage from many panes of glass, so to speak. Some count up to 10 or 15 products that you need to buy. This has become too complicated.

In 2009, when SimpliVity was founded, the simple, or humble, mission was to simplify all of that - basically displace all these point solutions you buy from different vendors with one homogeneous software entity that runs on commodity x86 resources, the same ones that Amazon and Google run on. The goal was to deliver the best of both worlds. On one hand, the economics and simplicity of the public cloud.

We offer you IT infrastructure that is as simple to manage and as inexpensive as the public cloud. However - and this is the second world - we will also offer you enterprise Tier 1 capabilities. Sounds impossible, but after 43 months - which is twice as long as your normal IT infrastructure company’s journey from inception to version 1.0 - we launched our product in April 2013. I can tell you about how that progressed, but let me make sure I address your first question: what is the mission, what is the problem? The problem is complexity - the unbearable complexity - and the mission is to simplify IT, offering the best of both worlds.

I want to make sure that people understand the different parts of the technology that you roll out. Explain what the OmniStack Data Virtualization platform provides.

In a simple way, you can think about it as a data services platform. It offers the functionality of, and thus also the replacement for, the following products: one is the storage switch; two is the primary storage array. I could give examples of the vendors that offer these products, but I think your readers will understand. Then you sometimes have acceleration storage devices such as SSDs and flash arrays. Then you have myriad data protection products - sometimes software, sometimes appliances.

For example, you could have a backup application, replication software and DR [disaster recovery] software - that’s one category of products. Then you have another category of products that you can think of as data efficiency devices. Interestingly, since data volumes have been out of control, mainly since the mid-’90s, today there are deduplication/compression devices for different phases of the data lifecycle. There are devices that dedupe and/or compress primary storage. There are devices that dedupe or compress backups.

For example Diligent and Data Domain focused on that. Then there are devices that focus on the WAN. Riverbed would be a great example. Then there are devices that focus on transit of data to the public cloud.

For instance, Microsoft acquired StorSimple, and that is their gateway to the cloud. Then there are ones that focus on archives. There is a bevy of products, and each of them condenses the data at a specific phase of its lifecycle. We have software that offers all these data services and runs them inside a standard server. Basically, all the storage services are assimilated into one software entity, which dramatically reduces cost because all of those functions now run on commodity Intel-based x86 servers, and you don’t need to overprovision each function separately because they all run virtualized together.

What is the OmniStack Accelerator Card?

At the heart of complexity is a data problem. In other words, data is out of control. There is too much data to write, to read, to move, to protect, etc. That has given rise to all the appliances I just described. What we decided to do is condense - basically dedupe, compress and optimize - all the data at inception, in the millisecond in which the application writes it for the first time, and keep it in that condensed form throughout its lifecycle: primary, backup, over the WAN, archive and cloud, globally.

The trick was: how do you do that without compromising - introducing latency, reducing performance or [adding] unnecessary complexity? To do all of that, we created the OmniStack Accelerator Card. Basically, it’s a standard PCIe card on top of which we have a large, very economical processor whose mission in life is to condense the data - applying deduplication, compression and optimization, the data efficiency techniques.

We do that in real time at near wire-line speed, and - very importantly in the world of hyperconvergence - without taking away too many of your Intel processors. In theory, everything we do on the card you could do on Intel processors. In fact, we do that ourselves, but if you do it on Intel processors, you take those processors away from the business application. You basically confiscate them in order to run these high-end data services. On the card we compress all the data, but we also have a technology that allows us to reduce latency - without getting into too many details - by processing the data very quickly and returning it to the application. The card is an accelerant. Think about it like a turbo engine in a very fast car: it gives you more speed without taking away from the business application.
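The general idea of deduplicating and compressing data as it is written - rather than in a later pass - can be sketched in a few lines. This is a minimal, hypothetical illustration using fixed-size blocks, SHA-256 fingerprints and zlib compression; it is not SimpliVity’s actual implementation, which runs on dedicated hardware, and the class and block size here are invented for the example.

```python
import hashlib
import zlib

class InlineDedupStore:
    """Toy sketch of inline dedupe + compression at write time.

    Each fixed-size block is fingerprinted as the application writes it;
    only blocks with a previously unseen fingerprint are stored, and
    those are stored compressed.
    """

    BLOCK_SIZE = 4096  # assumed block size for this illustration

    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed block bytes
        self.logical = 0   # bytes the application wrote
        self.physical = 0  # bytes actually stored

    def write(self, data: bytes):
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            self.logical += len(block)
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:            # dedupe: store each unique block once
                compressed = zlib.compress(block)
                self.blocks[fp] = compressed
                self.physical += len(compressed)

store = InlineDedupStore()
store.write(b"A" * 8192)   # two identical blocks -> only one is stored
store.write(b"A" * 4096)   # a duplicate block -> nothing new is stored
```

After these writes the store holds a single compressed block while the application logically wrote 12 KB - the same "condense at inception, keep condensed for life" effect described above, minus the hardware offload that keeps the work off the host CPUs.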
