Q&A: Wells Fargo technology chief talks about keeping costs flat

Virtualizing everything is Wells Fargo's 'preferred' approach

Scott Dillon, head of technology infrastructure services at Wells Fargo & Co., spoke with Computerworld about how his IT team is keeping costs flat while improving performance at the Fortune 50 company.

Cornerstones of the bank's approach are technologies such as storage and server virtualization, as well as management strategies such as extending the life of data center equipment and standardizing server deployment. Dillon also discussed the company's plan for integrating Wachovia Corp., the $12.7 billion purchase completed in December. The following are excerpts from that interview.

How are you dealing with integrating Wachovia's data center technology and architecture? We take the approach of stabilize and standardize. Our fundamental commitment is to ensure great availability and customer experience. We're still early in our integration process and probably are not going to talk in detail about the merger integration. But, at the end of the day, it's not a decision about virtualization or migration to Vendor A or B. It starts with the customer experience and the availability we want to deliver as well as the data protection we have to put in place. Those will be our foundational elements, and then we'll drive vendor strategies around homogeneity and what technology we should deploy.

What's the best advice you can offer to other CTOs and CIOs looking to cut costs? Think about your customer experience. You have to know what you're trying to do. You have to drive costs down and drive compute power up. Don't get focused on a particular technology. They come and go. Where you get into trouble as a CIO is when you bet the farm on a particular technology or a bunch of PowerPoint slides, and then you're left later on trying to figure out where the TCO and ROI are. At the end of the day, you have to be driving up your availability, which means you have to be thoughtful about your technology migrations and customer experience.

If you alienate the customer in the process of driving IT costs down, it will only yield more costs later and attrition on the business front. Keep a close connection between the business and the technology. Be thoughtful about making sure you have an extensible infrastructure. And, as you realize where things are going as it pertains to virtualization, take the first critical steps, which are to get it up, get it going and do it in a safe environment. But do it first in a lower-tiered availability application. Get your internal capabilities built, and then extend those quickly once it proves out. Don't dawdle.

How many virtual machines do you have? We're well over 1,400 VMs, and we're growing very rapidly. Our preferred deployment methodology centers on virtualization, but that doesn't mean it's all we do. You have to consider what an application does and where it needs to run. There are cases where customization makes sense or dedicated deployments make sense. But our preferred approach is to virtualize almost anything we're putting on the floor.
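To illustrate the "virtualize by default" posture Dillon describes, here is a minimal sketch of such a placement policy in Python. The tier numbers and exception criteria are illustrative assumptions, not Wells Fargo's actual rules.

```python
# Hypothetical "virtualize by default" placement policy in the spirit of
# Dillon's comments. Tier numbers and exception criteria are illustrative
# assumptions, not Wells Fargo's actual rules.

def choose_deployment(app):
    """Pick a deployment target for an application profile (a dict)."""
    # Exceptions first: some workloads justify dedicated or customized builds.
    if app.get("needs_dedicated_hardware"):
        return "dedicated"
    # Build virtualization skills on lower-availability tiers first.
    if app.get("availability_tier", 3) >= 3:
        return "virtualized (lower-tier, deploy first)"
    # Default: virtualize almost anything going onto the floor.
    return "virtualized"

print(choose_deployment({"availability_tier": 4}))            # lower-tier first
print(choose_deployment({"needs_dedicated_hardware": True}))  # dedicated
print(choose_deployment({"availability_tier": 1}))            # virtualized
```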

How big is your storage-area network? It's doubling with the inclusion of Wachovia. We're into the high five-plus petabytes of storage. Then as you define [network-attached storage], you end up with multiples upon multiples beyond that, and then you have virtual tape, which is something we committed to, and there's even more of that.

What do you consider the most promising technology for saving money and creating efficiency in the data center? The convergence of connectivity in the data center. The coming together of our [Ethernet] network and SAN is a big deal to us, as well as being more effective at reclaiming space in the data center. So we have a program around service-life extension that puts all those layers in play. Virtualization is about increasing your density and compute power per square foot. We're very focused on technologies that allow us to increase our compute power and do it in a more efficient manner. If we can do that around all of our data centers, then we're able to extend the life of those data centers.

If you think of a data center as $250 million-plus in expense [per year] ... and you can delay [those expenses] by 12 or 15 months, the benefit to the organization is staggering. And if you can also do that in a thoughtful way by leveraging technologies to make it better from an availability perspective, where you have more redundancy and resiliency inherent because of virtualization, or you have a device that allows you to migrate data dynamically, then you're both extending the life of the data center and driving to efficiency. And at the same time, you're enhancing your effectiveness.
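To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. The $250 million annual expense and the 12-to-15-month delay come from the interview; the 8% cost of capital is an assumption for illustration.

```python
# Back-of-the-envelope present value of deferring data center spend,
# using the figures from the interview. The 8% discount rate is assumed.

annual_expense = 250_000_000   # $250M+ per year, per Dillon
cost_of_capital = 0.08         # assumed discount rate

for months in (12, 15):
    years = months / 12
    # Present-value saving from pushing the outlay out by `months`.
    saving = annual_expense * (1 - 1 / (1 + cost_of_capital) ** years)
    print(f"Deferring {months} months is worth roughly ${saving / 1e6:.0f}M")
```

Even before counting the availability and resiliency gains Dillon mentions, a one-year deferral on that scale is worth tens of millions of dollars.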

Are you considering using Fibre Channel over Ethernet (FCoE)? Yes.

Have you deployed it yet? Yes. It is in, but I don't know where we are right now with it. We're positioning our infrastructure to be adaptive [for when] we look at commitments to new vendors. That said, FCoE is definitely going to be relevant and important ... but I'm not sure we're ready to share our plans on that right now.

Are you deploying solid-state disk as a Tier 0 over Fibre Channel disk in your SAN? It's a very interesting technology. The power benefits it could yield are really intriguing, but I'd say we're not wholesale switching anything tomorrow. So it's on the radar. We'll continue to evaluate it, and when we have the right vendor relationship, then we'll look at putting it in. From our perspective, there are still some questions around solid-state disk. Our general philosophy is never to be first. At the scale we're at, we have to be thoughtful about anything we put into the environment, both from an availability and a resiliency perspective. So I still want to see how SSD is going to play out.

Wells Fargo is a diverse organization with thousands of bank branches and retail brokerages. You're getting even more diverse with the Wachovia acquisition. How are you dealing with that kind of technology heterogeneity? When we started a couple of years ago, we built our thinking around the idea that the entire data center is underpinned by supply-and-demand management. That's a different way of thinking for IT. Typically, IT shops run a good supply shop. It's like a warehouse group that does a good job of optimizing all the boxes. As more stuff comes in, you try to optimize it.

The other thing we're doing a lot of now is standardization of the way we deploy servers. So they're deployed in a much quicker way. We also incent our businesses to choose standardization over customization.

Do you have an example of your supply-and-demand approach to IT management? Tiered storage is a great example of what comes out of a supply-and-demand management approach. We can give you Tier 1 storage or Tier 5 storage for your business. One's higher cost, and one's lower cost. If I can work with you on your business characteristics and what you need, then you're going to help us drive our costs down. We've seen tremendous benefit on that front. Then we've put a lot of work into getting our cost per gigabyte [of storage capacity] down and focusing on those types of metrics. As [we] focused on a supply-and-demand strategy ... we created a three-pronged approach of stabilize, standardize and optimize.
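A rough sketch shows how tiering drives down a blended cost per gigabyte. The per-tier prices and allocation mix below are invented for illustration; they are not Wells Fargo's figures.

```python
# Illustrative blended cost-per-gigabyte calculation for tiered storage.
# Tier prices and allocation mixes are invented numbers, not Wells Fargo's.

tiers = {"tier1": 5.00, "tier3": 1.50, "tier5": 0.40}  # $/GB, assumed

def blended_cost(mix):
    """mix maps tier name -> fraction of capacity; fractions sum to 1."""
    return sum(tiers[t] * frac for t, frac in mix.items())

all_tier1  = {"tier1": 1.0}
tiered_mix = {"tier1": 0.2, "tier3": 0.5, "tier5": 0.3}

print(f"All Tier 1: ${blended_cost(all_tier1):.2f}/GB")
print(f"Tiered mix: ${blended_cost(tiered_mix):.2f}/GB")
# Matching workloads to tiers cuts the blended rate from $5.00 to $1.87/GB.
```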

So you also have to be mindful that not everything will be in a stage where it's ready to be optimized. We think many IT professionals make that mistake -- getting into an optimization phase of a technology when, in fact, they should be stabilizing and standardizing it first in order to position it. In the case of storage, or even virtualization, we take the approach that we should walk before we run. Let's not put the production environment at risk. Let's get to a common platform and put common technologies in place.

How do you integrate a heterogeneous storage environment? You can put things like USPV [Hitachi Data System's Universal Storage Platform V] devices in front of your storage that will allow you to virtualize and provision it on the fly, as well as allow you to have multivendor technology behind it.

I can have a Hitachi device or maybe an old IBM Shark [storage array] sitting back there, yet I can dynamically get to each and every one of them. That allows us to stabilize and standardize our environment, as well as get better tools in place. Then you can get into optimization -- you know, how we work with business partners to migrate things from high-cost storage to low-cost storage. We do that with everything, not just storage.
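Conceptually, a front-end virtualizer like the USPV keeps the host's view of a logical volume stable while the backing array changes underneath. The toy model below sketches that idea; the class and method names are invented for illustration, not any vendor's API.

```python
# Toy model of a storage virtualization layer of the kind Dillon describes:
# hosts see stable logical volumes while the backing array (Hitachi, an old
# IBM Shark, etc.) can change underneath. All names here are invented.

class VirtualizationLayer:
    def __init__(self):
        self._map = {}  # logical volume -> physical array

    def provision(self, volume, array):
        self._map[volume] = array

    def migrate(self, volume, new_array):
        # Data moves between heterogeneous arrays; the host's view of
        # `volume` never changes, so there is no application outage.
        old = self._map[volume]
        self._map[volume] = new_array
        print(f"{volume}: migrated {old} -> {new_array}")

    def resolve(self, volume):
        return self._map[volume]

san = VirtualizationLayer()
san.provision("lun-0042", "hitachi-usp")
san.migrate("lun-0042", "ibm-shark")   # e.g., high-cost -> low-cost storage
print(san.resolve("lun-0042"))
```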

How do you measure your success? Our 80 diverse businesses have been growing anywhere from 15% to 20% [per year] in their demands on technology -- more servers, higher transaction volumes -- yet, even as we've absorbed that demand, we've been able to keep our costs flat. So in a growing environment with more complexity and more demand around improving the environment -- better monitoring and a whole bunch of infrastructure improvements -- we're able to keep our costs flat or even decrease them. That's been very powerful for us.
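To see why flat costs against 15%-to-20% annual demand growth is a meaningful result, here is a quick sketch of the implied decline in unit cost. The normalized starting values and the three-year window are assumptions; the growth rates come from the interview.

```python
# Implied unit-cost decline when demand grows 15-20% a year but total cost
# stays flat. Starting values and the 3-year window are assumptions.

cost = 1.0     # normalized flat annual cost
demand = 1.0   # normalized demand (servers, transaction volumes, etc.)

for growth in (0.15, 0.20):
    d = demand
    for _ in range(3):
        d *= 1 + growth
    unit_cost = cost / d
    print(f"At {growth:.0%} growth: unit cost falls {1 - unit_cost:.0%} in 3 years")
```

Holding spend flat while demand compounds at those rates cuts the cost per unit of work by roughly a third to more than 40% over three years.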
