Simon Crosby, the godfather of Xen, on virtualization, security and wimpy private clouds

03.11.2011

Are you proposing end devices that can support different security domains depending on what they are communicating with?

The same arguments apply server side. Everybody who logs onto the same Web server is in the same context of some process. Some of us are attackers and some of us are just people who want to do our banking transactions. The problem is not a particular cloud problem; it's a client-server problem: we are incredibly poor at isolating units of computation that ought to be isolated from each other, because they have different trust relationships with the provider or don't trust each other. That is the problem that Bromium is going after. It's not a security company in the sense that it finds the bad guys. I think we're useless at that in general, and the industry as a whole is useless at it. And that, by the way, is nothing more than a restatement of a well-known result of computer science: it is not possible for one program to decide whether another program is good or bad. We need to face up to that and get out of the stupid game of trying to decide whether a piece of code is an attacker or not. Blacklisting? It's done. Over. We should get out of it. It's easy enough on any system for the bad guys to change their code before you can get any new signature out, so we need to just admit that blacklisting is done. Whitelisting doesn't go far enough. The code that you know about is fine -- you know that it's fine. But it says nothing about how trusted code -- that is, well-intentioned code -- behaves when it is combined with untrustworthy data. That's a very challenging problem.

Virtualization technology can help a lot there. First, the trusted components of a system, like the hypervisor, ought to be a couple of hundred thousand lines of code, which is a far smaller vulnerability footprint. Second, we need to architect systems knowing that users will make mistakes. We are the vectors of attack, and we must be able to protect the system even when the user makes a mistake. And third, we have to be able to deal with horrible things like zero-days. We have to know that there are vulnerabilities in our code, and even when our code lets us down -- because we are just human, after all, and we have written bad code -- we must be able to make concrete statements about the trustworthiness of the remaining systems and whether or not they have been compromised. It's an absolutely fundamental requirement; we have to. In the specific context of cloud systems, there's no excuse for server systems to be sold anymore without TPM (Trusted Platform Module) hardware subsystems, so that you are able to reason about the security of the code base. There is no excuse for every block of data in the cloud not to be encrypted. You can encrypt it at wire speed, and there is no excuse, ever, for the cloud provider to manage the key. What should happen is that when you run an application in the cloud, you provide it with the key, and only in the context of the running application, as the data comes off some storage service, is it decrypted -- and it goes back out re-encrypted on the fly. That way, if somebody compromises the cloud provider's interface, or if someone walks into the cloud provider and walks off with a hard disk, then you are OK. And there is no reason that people should not do this.

All of these technologies are there. There is no excuse for server vendors not to put this on every server. My advice to every enterprise is: do not buy a server without a TPM, and do not use a hypervisor that doesn't use it. We need to use all of the capabilities that are in the hardware to make the world more secure. People should beat the heck out of their vendors until they do a better job of it -- hypervisor vendors, server vendors and everyone else.

I think many of the excuses for building private clouds are wimpy, too. People want to build private clouds because they don't want to lose control. By the way, there's always a good reason for not wanting to lose control. One of them is: it's my job. The other is that the regulatory frameworks within which we work today are really articulated in terms of technologies that were cool 20 years ago, and you can't really state anything to a regulator about the data if you can't find the hard disk. So how is the guy supposed to allow the data out of the data center? People will continue to build private clouds and spend a bunch of money on servers they don't need, when it would be much better to use shared resources that providers could run for you at much better cost -- we'd simply move to an opex-based equation instead of a capex-dominated one. They could do it in a heartbeat if we could actually sort out the regulatory frameworks for it, and if we could just get the vendors to do the obvious things in terms of adopting security technologies.