CloudVelox eases migration of core business apps to the cloud

CEO Raj Dhingra says the platform automates cloud migration, eases test/dev and disaster recovery, and helps optimize cost.


It’s easy to get all “cloud first” when you’re talking about new, greenfield applications. But how do you get the core business applications running in your data center – so-called brownfield apps – easily and efficiently migrated to the cloud? That’s the problem startup CloudVelox set out to solve, with the larger mission of helping CIOs build “boundaryless” hybrid data centers. IDG Chief Content Officer John Gallant spoke with CloudVelox CEO Raj Dhingra about how the company has automated the migration of complex, traditional applications to Amazon Web Services (and Microsoft Azure in the near future). Dhingra explained how companies are using CloudVelox’s One Hybrid Cloud platform to not only migrate apps, but to build cloud-based disaster recovery capabilities and simplify a variety of test/dev chores.

You have a long and very successful background in the tech industry. Talk about this opportunity and what CloudVelox set out to do.

The company was founded in late 2010, beginning of 2011, at a time when public cloud was gaining interest among enterprise customers. There were quite a few companies trying to help enterprises and developers build new applications for the cloud; vendors called them greenfield applications. We all heard companies that were innovators and early adopters talk about cloud first, mobile first, building customer-facing applications that could take advantage of the cloud.

What nobody seemed to have focused on was existing data center applications. Let's call them brownfield applications. How can I take advantage of the cloud for the brownfield applications I run in my data center? There are anywhere from 15 million to 80 million VMs running in data centers around the world. What about them? CloudVelox focused on brownfield applications: how enterprises can migrate and run those applications in the cloud, and how that can be automated.

The secret to cloud computing is automation. You request a service, maybe it’s infrastructure-as-a-service, then you are able to quickly spin it up and pay as you go. In a similar way, how could you automate taking an application that’s running in your data center and then run that in the cloud without having to do a lot of manual script-oriented effort? That was the vision.


Let's go solve this problem. It's a tough problem to solve. There are many things that need to occur for this automation to be useful and valuable, to allow enterprise CIOs to think about a boundaryless data center. What that means is: how can I manage my virtual data center as if it's one data center, whether I own the facility, run some workloads in a hosting facility, or run an application on Amazon Web Services? Think about all of that as a seamless set of resources and applications and, more importantly, be able to move a workload from any location or data center to another.


When we talk to analysts or customers, there’s the sense that people don’t move those brownfield applications either because they’re worried about the security of the data or because they don’t see a huge cost advantage in making that change. Are you saying that it’s really just too difficult and you’ve overcome that obstacle?

In the past, many of the concerns have been around security or perhaps performance. We've seen a progression over the last few years where many companies of different sizes started to operate in a hybrid IT model. I might take some of my existing workloads and refactor them, modernize them so they can take advantage of native cloud services. But it is not that straightforward to take a brownfield application and make it run in the cloud.

There are a variety of issues that come up. Traditional tools have relied on manual processes: taking a VM and doing an image conversion so it can run in, say, AWS or Azure. There's a lot of configuration required, because the way the application ran in your data center required a certain type of server, so much memory, a certain type of storage infrastructure. You may have set up your network in a certain way: how your subnets work, how your IP addresses are used, physical IP addresses. Maybe you set up security groups, locked down some ports and opened up others. If you're going to re-host that in the cloud, all of that needs to be replicated, keeping in mind the matching services in the cloud. Your data center may be running VMware, but AWS is not. How is this virtualized instance in EC2 going to use the right kind of storage, which is EBS on AWS? How will my network design map onto a virtual private cloud on AWS? How do the security characteristics in my data center match the security groups in the public cloud?
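To make the security-group mapping concrete, here is a minimal sketch of the kind of translation Dhingra describes: converting on-premises firewall rules into the `IpPermissions` structure that AWS security groups use (the shape accepted by boto3's `authorize_security_group_ingress`). The rule fields and function name are illustrative assumptions, not CloudVelox's actual data model.

```python
# Hypothetical mapping of simple on-prem firewall rules to AWS
# security-group rule dicts. Security groups are allow-only, so
# deny rules are skipped here; in practice they would need network
# ACLs or a redesign.

def to_ip_permissions(datacenter_rules):
    """Translate on-prem allow rules into AWS IpPermissions dicts."""
    permissions = []
    for rule in datacenter_rules:
        if rule["action"] != "allow":
            continue  # deny rules cannot be expressed in a security group
        permissions.append({
            "IpProtocol": rule["protocol"],            # e.g. "tcp"
            "FromPort": rule["port"],
            "ToPort": rule["port"],
            "IpRanges": [{"CidrIp": rule["source_cidr"]}],
        })
    return permissions

rules = [
    {"action": "allow", "protocol": "tcp", "port": 443, "source_cidr": "0.0.0.0/0"},
    {"action": "deny",  "protocol": "tcp", "port": 23,  "source_cidr": "0.0.0.0/0"},
]
perms = to_ip_permissions(rules)
# perms could then be passed to boto3, e.g.:
#   ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=perms)
```

Even this toy version shows why manual replication is error prone: every rule, port, and CIDR block has to be re-expressed in the cloud provider's own constructs.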

If you do this manually, it's overwhelming, time consuming and error prone, and many times people find it doesn't work. The key was to address these barriers by automating away the complexity of learning what I have in my data center and the complexity of learning what's in the cloud. The cloud is a very fast-moving set of services, and it varies by provider. Do I have people trained to do this mapping and recomposing with matching services? We had to take a holistic approach: understanding the infrastructure, understanding the application, looking at the data, the database and all the apps that compose a particular workload.

What applications is your system appropriate for and what are some applications that it’s not so appropriate for?

Before I answer that, I'm going to step back and make a point. What I've seen is that there is a great need for education. People are trying to learn about the cloud and what works and what doesn't. One of the things we have focused on is providing good content around how to do these things. What's important and what's not? Your question maps exactly to a 10-installment blog series we started a few months ago. We've written about why it makes sense to go to the cloud and the three paths you can take to get there: re-host, re-platform, re-factor.

The third blog post was: How do you select good candidates for the cloud? And the latest, which has just been published, is: What are bad candidates for cloud migration? For example, if you look at what might be good candidates for the cloud, what criteria do you use? First and foremost: is the application's operating system environment supported in the cloud, at least for re-hosting purposes? If your application runs on Windows or Linux, it's going to be a good candidate. If it runs on a proprietary OS, it will need some modernization or re-platforming. For example, if it runs on Solaris or IBM AIX, it needs some work before it will run in the cloud.

The second criterion is whether your application runs on proprietary, custom hardware that is not available in the cloud. If it runs on some ASIC-based appliance that uses proprietary silicon, that hardware environment won't be available in the cloud either; you need to virtualize the application before you can move it there. Third, does your application have any dependency on another application or service running in the data center? Maybe it uses AD [Active Directory]. You can move it to the cloud, but then you need to set up a VPN or some sort of network connection.

AWS offers what's called Direct Connect, where you set up a high-speed link between your data center and AWS so the cloud becomes an extension of your data center. Are you concerned about data security and data sovereignty? If you are a global company, you'll need to keep some of your data in the right location. These are some of the factors that shape which applications are right for migration. Typically, Windows-based apps, Linux-based apps, collaboration apps, ERP applications, Oracle, SAP, SQL: we've seen customers take many of these applications and re-host them in the cloud.
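The criteria above can be distilled into a simple pre-migration checklist. This is a hypothetical sketch based on the factors Dhingra lists; the field names and function are illustrative assumptions, not CloudVelox's product logic.

```python
# Hypothetical re-hosting checklist distilled from the interview's criteria:
# supported OS, no proprietary hardware, data-center dependencies, and
# data sovereignty. An empty result suggests a straightforward candidate.

SUPPORTED_OS = {"windows", "linux"}  # Solaris, AIX, etc. need re-platforming

def rehost_blockers(app):
    """Return issues to resolve before re-hosting; empty means a good candidate."""
    blockers = []
    if app["os"].lower() not in SUPPORTED_OS:
        blockers.append("OS not supported in cloud; re-platform or modernize first")
    if app.get("custom_hardware"):
        blockers.append("runs on proprietary hardware (e.g. ASIC appliance); virtualize first")
    if app.get("datacenter_dependencies"):
        blockers.append("needs VPN or Direct Connect for: "
                        + ", ".join(app["datacenter_dependencies"]))
    if app.get("data_residency_region"):
        blockers.append("data sovereignty: keep data in " + app["data_residency_region"])
    return blockers

erp = {"os": "Linux", "datacenter_dependencies": ["Active Directory"]}
issues = rehost_blockers(erp)
```

For the sample ERP app, the only flag raised is its Active Directory dependency, which matches the interview's point that such apps can move but need a network link back to the data center.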

+ ALSO ON NETWORK WORLD: Tech Q&As: The IDG Enterprise Interview Series +

What clouds do you work with?

Currently, we have been helping customers migrate or protect their data in Amazon Web Services, and we plan to deliver Azure support. The source could be your data center running VMware, Hyper-V, Xen or KVM. For the destination, we've commercialized Amazon Web Services. The second most popular cloud we are hearing about in 2016, which was not as much the case in 2015 for brownfield applications, is Azure. We are commercializing support for it by the beginning of 2017.

The third cloud, when we talk to enterprises, seems to vary. For some customers it might be Google, maybe OpenStack, maybe an IBM software cloud. We haven't found a third cloud to be very popular. A very large percentage of enterprises today operate in one cloud, maybe two. I have spoken to one customer that's actually running in four different clouds, but that's the anomaly today.

It would seem that one of the strong use cases of this is creating a test and dev environment for a particular application in the cloud so if you want to make changes to it you’re doing it there instead of on the live application. Are people doing that?
