NetApp's new CEO talks about hybrid cloud, customer challenges


The second area I want to talk about from a technology perspective is hybrid cloud. For your rival EMC, it's clear that helping enterprises build and manage hybrid cloud is a major focus going forward. What's your strategy there?

The first thing we're doing is focusing on the part of the hybrid cloud problem space where we are expert. Many of the discussions around hybrid cloud so far have been about applications, services and virtual machine portability. Frankly, that's a small part of the problem. The heart of the hybrid cloud environment is how you manage your data.

First, data is the only thing that remains after all of the transient processing and application logic has run. Let me give you a simple example. If you decide to go from one public cloud email provider, like Hotmail or Outlook, to another, like Google Mail, what is the only thing you really care about moving? Your data: your contacts, your photos, your letters, things like that. So that's the first thing we focus on: data, which is the thing of lasting value in the cloud.

The second is that to build a hybrid cloud architecture, you need to be able to share data across the different cloud environments. That lets you build integrated business processes and optimize use cases such as putting development and test on a public cloud while keeping production and disaster protection within your own data centers -- building complex environments like that.

We're focused on the data management problem, and we are the industry leader at solving it. The second thing is that we are partnering for the other elements of the stack, because we think the operative word in hybrid cloud is hybrid. Let me tell you what that means. We don't think a hybrid cloud has a single virtualization provider. The majority of customers combine one virtualization technology on premises with a different one in a public cloud.

Our approach is to enable heterogeneity while managing data consistently. This gives our customers a choice of cloud providers and a choice of technology stack, with consistent management of data, so they can accelerate innovation, leverage their existing data governance and security processes, and maximize efficiency.

You mentioned this earlier when you spoke about FlexPod, but can you share your views on the hyper-converged infrastructure market? How big do you think that opportunity is, and what are you doing in that market?

Our perspective is that hyper-convergence addresses a customer problem in terms of rapid provisioning and ownership by the virtualization team of a particular piece of infrastructure. What that means, for example, is for a certain class of workloads, the virtualization administrator can simply spin up a set of virtual machines without needing to coordinate with computing, networking and storage teams -- and that adds value. Whether it's a radically better architecture remains to be seen, especially based on feedback we have from customers.

What we're doing is enabling customers to solve those problems using technology from us and from partners like Cisco and VMware. Let me give you a couple of examples. Within the FlexPod family of products we offer a range of configurations, all the way from FlexPod Datacenter, built for large consolidations of multiple applications within a data center, down to FlexPod Express, which is built for branch and remote offices. You will see exciting announcements from us very shortly around how we can give customers seamless provisioning of those environments, much like hyper-converged solutions, but deploy a custom architecture all the way from the data center to the remote office.

Similarly, we have more than 50,000 VMware customers who use NetApp storage as the default platform for storage and data management in virtualization environments powered by VMware. Together with VMware, we have developed a solution that combines NetApp storage with VMware's EVO:RAIL software stack. This gives people the same advantages NetApp provides for VMware environments in the data center, in a hyper-converged form factor with rapid provisioning and the ability to seamlessly move data from the data center to the branch and vice versa using the common SnapMirror and replication tools. We're monitoring the landscape, but we're more focused on the problem hyper-convergence is trying to solve than on the technology pivot of hyper-convergence itself.
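(The data-center-to-branch data movement described above is a SnapMirror replication relationship. As a hedged illustration only, the Java sketch below creates such a relationship through ONTAP's REST API; that API belongs to later ONTAP releases than the products discussed in this interview, and the cluster address, credentials and volume paths are placeholders, not anything from the article.)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class SnapMirrorSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder cluster address, credentials and volume paths.
            // POST /api/snapmirror/relationships exists in modern ONTAP
            // (9.6 and later), which postdates this interview; it stands in
            // here for the kind of mirror relationship described above.
            String body = "{\"source\": {\"path\": \"dc_svm:app_vol\"},"
                        + " \"destination\": {\"path\": \"branch_svm:app_vol_dr\"}}";
            String auth = Base64.getEncoder()
                    .encodeToString("admin:placeholder".getBytes());

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://cluster.example.com/api/snapmirror/relationships"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // A 202 Accepted response means ONTAP queued the job that creates
            // (and can then initialize) the mirror relationship.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }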

One other area I want to look at is scale-up storage for a world of unstructured big data and analytics, particularly in the cloud. What are your solutions for that?

With regard to the unstructured data in the cloud, we have several different ways that we approach that problem. For example, customers need to combine transaction processing environments with analytics environments. Historically, you would have segregated analytics from transaction processing systems, and what people like Oracle have tried to do is converge those under a common umbrella.

What we see in many customers is three different types of analytics environments. There is the in-memory database, for example, which is the fastest, highest-performance transactional system. There are traditional databases, whether they are clustered or non-clustered, and then there are emerging analytic environments like Hadoop.

What we allow customers to do is use a common platform for storage and data management across all of these environments. Using clustered Data ONTAP and NFS, you can deploy all-flash configurations to support in-memory database environments, you can deploy hybrid-flash or disk-based systems for your traditional databases, and you can use NFS technology that we call the NFS Connector for Hadoop to radically simplify your Hadoop environment. You do not have to copy thousands and thousands of terabytes of data between your real-time transactional systems and your analytic systems.
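(To make the NFS Connector idea concrete: Hadoop resolves a URI scheme to a pluggable FileSystem implementation through its fs.<scheme>.impl property, which is how a connector can let analytics jobs read data in place instead of copying it into HDFS. In the hedged Java sketch below, the connector class name, server address and paths are illustrative placeholders, not the connector's documented names.)

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NfsConnectorSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hadoop maps the "nfs://" scheme to a FileSystem class via
            // fs.<scheme>.impl; this class name is a placeholder, not the
            // NFS Connector's documented implementation class.
            conf.set("fs.nfs.impl", "com.example.hadoop.NfsFileSystem");

            // Analytics jobs list and read the same NFS-hosted volumes the
            // transactional systems write, so nothing is copied into HDFS.
            FileSystem fs = FileSystem.get(URI.create("nfs://10.0.0.10/"), conf);
            for (FileStatus file : fs.listStatus(new Path("/trades/latest"))) {
                System.out.println(file.getPath() + " " + file.getLen());
            }
        }
    }

Because the FileSystem abstraction hides where the bytes live, the compute tier can scale independently of the data tier, which is the separation described next.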

Because you use NFS, you can also separate the compute tier from the data tier, something traditional Hadoop architectures don't accomplish today. Using a technology like NetApp Private Storage, you can run the big data analytics on a public cloud but keep your data segregated.

I'll give you a simple example. We are partners and technology suppliers to a very large European financial services firm, one of the largest securities exchanges in all of Europe, and they use our technology in a FlexPod configuration for their daily trading transactions on behalf of their member banks. They run up to four billion transactions per day on the combination of NetApp storage and Cisco's unified computing platforms. At the end of the day, they also run a big data analytics environment that combines a large compute grid with NetApp storage, and they use that analytics capability to offer value-added services to their member banks.

For example: What were the results of the day's trades? Which trades were the most beneficial? Which were the worst? That compute grid was essentially underutilized for all but four to five hours a day. What we've done is mirror their on-premises transactional systems to a secure colocation environment with NetApp storage, keeping the data consistent between the two, and connect that colocation environment to a public cloud. This has allowed them to retire their secondary data center and take advantage of what the public cloud is truly capable of: spinning up cores where you need them, when you need them, for a period of time.
