Simon Crosby, the godfather of Xen, on virtualization, security and wimpy private clouds

Bromium is a well-funded startup that promises to tap some little-used inherent strengths of Xen virtualization to secure public clouds, opening the possibility of greater cost savings for businesses that will be able to trust more data to these services.

According to one of its founders, Simon Crosby, isolating functions and establishing a trusted core to hardware systems can create public cloud environments able to meet the scrutiny of regulators concerned about the safety of data.


Because Bromium is still in stealth mode, Crosby is purposely vague about some of this, but he does indicate that the technology exists to package secure systems that can be deployed within public networks and that can assure customers that privacy of data will be maintained.

Network World Senior Editor Tim Greene recently talked to Crosby about this. Here is an edited transcript of that conversation.

How do you feel about leaving behind dealing with Xen day-to-day?

Well, I didn't say I'd left it. It's an open-source code base, and everything we do at Bromium is based on everything we've ever learned how to do well, which is develop software and deliver better systems relative to open source. So open source is at the heart of everything we do at Bromium, without exception. Ian [Pratt, the father of Xen and co-founder of Bromium] still remains chairman of Xen.org, and we are still very active in the Xen world. It was hard leaving behind the products we had built, specifically in that category XenServer and XenClient, but Xen remains extremely productive as a technology, and it's going into incredible places. It's very interesting.

What do you mean by incredible places?

Every time I peel the cover off some new widget that's being delivered, I find it. It's gone deeply into the science world, and lots of appliances are being built with Xen-based virtualization. It's everywhere in the cloud, in places I never would have imagined, some of which I'm not even allowed to tell you about. Xen has really dramatically transformed the whole cloud business and I think continues to do so.

Why can't you talk about some of the places Xen has been deployed?

Bromium does a lot of interesting things in a world that you might think of as security related, but that I think are actually more related to trust, or being trustworthy. Many of the people we have dealt with -- certainly the people we were dealing with in the federal government when we built XenClient -- run deeply secure systems that they won't even tell me about because I have no security clearances. So often the conversations are quite one-way, and they're always with somebody named Bob even though they all look different. It's remarkable that open source has provided a fantastic vehicle for delivering technologies into communities where trust is absolutely fundamental, and there they seem to prefer the open-source methodology because everything is in the open. Then they can get their own hands on it, and they don't have to believe anybody. They don't have to believe me or anybody else. They can put their own eyes on the code, and particularly in the case of XenClient the core security modules were written by contributors from federal security agencies, people you would never normally expect to do this work.

Xen is still the smallest, still the most mature [virtualization] platform that's ever been built. We can always make it smaller and make it more secure.

Smaller is always better. In general, the systems that people are dealing with today, with XenServer or even with what VMware does, are small systems, but where they become larger is courtesy of all the device drivers they have to lug around with them, because they end up running all the hardware. In general that is a problem that you have to deal with. So Hyper-V is small, but given all the device-driver infrastructure it becomes bigger. Getting these things smaller and more invisible is far better from a goal perspective. Ultimately what you want to be able to do is embed the hypervisor within the platform in some way, so that you deal with a finite set of hardware and you don't have to carry a whole ton of drivers around. XenClient does that for a relatively limited [hardware compatibility list]. But yes, absolutely, getting things smaller and faster and leaner is always the goal. A counterexample would be, say, Windows, which is 60 million lines of code, right? If you simply assume your vulnerability is proportional to the number of lines of code, then you want to get it down.

By the way, KVM has the same challenge, which is in general that it is as big as Linux. The KVM driver itself is tiny; it's very elegant. It's just that when you implement KVM you have Linux running underneath it. Now that brings with it its own challenges.

What do you see then as the best model for dealing with security in virtual environments?

I think that when we look back in five years we will actually figure out that the core value of hardware virtualization is security -- or rather, better trust and better isolation -- and not all of the grandiose use cases we've come up with for virtualization today. So even in the cloud, the primary use case for virtualization will, in five years or so, be security, and security through isolation. Right now I think we're in a woeful state. ... It's absolutely the case that there is no Fortune 500 company out there that has not been compromised, and it is really scary what's going on out there. And I think it's mostly because for the past 10 years or so we've been enjoying the benefits of doing wonderful things while other people have been focused on how to derail that. And we're behind.

Can virtualization help in the security effort?

To be absolutely clear, virtualization is an isolation technology, and I think we're starting to see the first cases of virtualization being used as a security technology in a couple of ways. One will be to create a highly secure cloud system which can be used to deliver multilevel secure systems. Intel recently announced its DeepSAFE technology with McAfee, a Type 1 hypervisor that loads early and whose sole purpose is to secure the runtime. So you start to see the specific use of virtualization for security on clients. I think it will eventually be the same on server systems, too. Obviously you've got to get the server hypervisor to learn new things.

What exactly do you mean by isolation?

I'll talk about it in the context of the desktop, which is 60 million beautiful lines of code from Microsoft, and every single website I've ever visited is a different domain of trust. And yet they're all cohabiting those 60 million lines of code. And that's just the problem, because the structures that we use within an operating system to isolate different domains of trust from one another are very coarse and often pretty easy to compromise. For example, when a website downloads an ActiveX control that gains a privilege, it's very easy to extend those privileges across those two domains of trust. All you have as abstractions are maybe processes or user identifiers.

These are extremely coarse, and arguably every single website or every single application I ever touch is a different domain of trust and must be respected that way. The problem we have in general is that we have too many trust domains cohabiting large blobs of relatively porous code. Therein lies the opportunity for somebody to cross from the open, public, insecure world to the private world. Maybe it's an exaggeration, but I am the biggest threat to the enterprise, because every day I walk in with my milieu wrapped around me, which is all of my friends and all the people I like to talk to and a whole bunch of enterprise tasks to do. And when one of my friends says to me to open an attachment, bang, the guy is now in the enterprise.

The challenge is that we have centralized. We have these large blobs of code that don't do the job of isolating realms of trust, and in addition we've made very poor assumptions about how users behave in the context of security. What ought to scare us all witless is how tolerant we are of the invasion into our personal privacy zones in our consumer identities, on Facebook and everything else. But we bring those behaviors into the enterprise with us every day. So when I get the email from my friend saying happy birthday, I'm going to click on it, and we're done. Users, no matter how much you train them, are going to make mistakes. Users go for fun instead of functionality. Security generally limits functionality, and that makes users want to get out of the secure world into something like Dropbox. I can't send a big PowerPoint by my Exchange server? I'm going to have to send it by Dropbox. In general everybody is using the cloud; they just don't know about it. That's a woeful state. The only reason that is the case is that we're so poor at architecting trust. I think Microsoft is going down the right path, by the way. In Windows 8 they're doing a much, much better job, but it's still pretty bad.

Are you proposing end devices that can support different security domains depending on what they are communicating with?

The same arguments apply server side. Everybody who logs onto the same Web server is in the same context of some process. Some of us are attackers and some of us are just people who want to do our banking transactions. The problem is not a particular cloud problem; it's a client-server problem: we are incredibly poor at isolating units of computation which ought to be isolated from each other, because they have different trust relationships with the provider or don't trust each other. So that is the problem that Bromium is going after. It's not a security company in the sense that it finds the bad guys. I think we're useless at that in general, and the industry is useless at it. And that, by the way, is nothing more than a restatement of a well-known result of computer science, which is that it is not possible for one program to decide whether another program is good or bad. We need to just face up and get out of the stupid game of trying to decide whether a piece of code is an attacker or not. Blacklisting? It's done. Over. We should get out of it. It's easy enough for the bad guys to change their code before you can get any new signature out, so we need to just admit blacklisting is done. Whitelisting doesn't go far enough. You know that the code you know about is fine. But it doesn't say how trusted code -- that is, well-intentioned code -- behaves when it is combined with untrustworthy data. That's a very challenging problem.

Virtualization technology can help a lot there, because first, if the trusted components of a system, like the hypervisor, are only a couple of hundred thousand lines of code, that is a far smaller vulnerability footprint. Second, we need to architect systems knowing that users will make mistakes. We are the vectors of attack, and we must be able to protect the system even when the user makes a mistake. And third, we have to be able to deal with horrible things like zero days. We have to know that there are vulnerabilities in our code, and even when our code lets us down -- because we are just human after all and we have written bad code -- we must be able to make concrete statements about the trustworthiness of the remaining systems and whether or not they have been lost or compromised. It's an absolutely fundamental requirement; we have to. In the specific context of cloud systems, there's no excuse for server systems to be sold anymore without TPM (Trusted Platform Module) hardware subsystems, so you are able to reason about the security of the code base. There is no excuse for any block of data in the cloud not to be encrypted. You can encrypt it at wire speed, and there is no excuse ever for the cloud provider to manage the key. So what should happen is that when you run an application in the cloud, you should provide it with the key, and only in the context of the running application, as the data comes off some storage service, is it decrypted, and it goes out re-encrypted on the fly. That way if somebody compromises the cloud provider's interface, or if someone walks into the cloud provider and walks off with a hard disk, you are OK. And there is no reason that people should not do this.
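The customer-held-key pattern Crosby describes -- the provider stores only ciphertext, and the key exists in the clear only inside the running application -- can be sketched roughly as follows. This is a minimal illustration using AES-GCM from the Go standard library; the `seal`/`open` function names and the in-memory round trip are invented for the example, not anything Bromium has described.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts a data block with the tenant's key using AES-GCM.
// The cloud provider only ever stores the resulting ciphertext.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so each stored block is self-describing.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open decrypts a block inside the running application -- the only
// place the key ever exists. A wrong key fails authentication.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("sealed block too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	// Tenant-held AES-256 key: supplied to the application, never to the provider.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	stored, err := seal(key, []byte("customer record")) // what the provider sees
	if err != nil {
		panic(err)
	}
	recovered, err := open(key, stored) // decrypt on the fly as data comes off storage
	if err != nil {
		panic(err)
	}
	fmt.Println(string(recovered)) // prints "customer record"
}
```

A stolen disk or a compromised provider interface yields only the sealed blobs, which is exactly the property Crosby is arguing for.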

All of these technologies are there. There is no excuse for server vendors not to put this on every server. My advice to every enterprise is do not buy a server without a TPM. And do not use a hypervisor that doesn't use it. We need to use all of the capabilities that are in the hardware to make the world more secure. People should beat the heck out of their vendors until they do a better job of it -- hypervisor vendors, server vendors and everything else.
