GM security chief on vulnerability disclosures

April 27, 2006
It's not every day that the Chief Information Security Officer (CISO) at the world's largest automaker gets to present a keynote talk at a hacker convention. So when General Motors Corp. CISO Eric Litt was asked to do precisely that at the European Black Hat Convention in Amsterdam earlier this year, he used the opportunity to reach out to the hacker community. His goal: to give hackers his view of the problems large corporations face when dealing with software vulnerabilities -- and with the manner in which they are disclosed and remedied. Litt spoke Wednesday with Computerworld about those same issues. Excerpts from the interview follow:

Why is this issue of vulnerability disclosure practices so important to you? If you are a CISO, you really are stuck in the middle between a bunch of different constituents. You have the researchers and the academic folks, and then you have the software vendors -- and we have to deal with the cards we are dealt. Somebody releases off-the-shelf software and it has vulnerabilities in it. If those vulnerabilities don't get plugged, I have to deal with the fact that I have vulnerable code in my environment. Then we have people out there trying to figure out how to hack into an environment or exploit a vulnerability, and they may be doing it for ethical or unethical reasons. And I have to try to protect my environment.

In your opinion, what should responsible vulnerability disclosure and remediation practices be about? I broke the problem into a bunch of different viewpoints when I did the Black Hat talk. If you take a look at the exploiter's view of the world, what's in it for them? What motivates them? Fame, fortune, curiosity and creativity. They want attention, they want money. If you look at the ethical researcher's world, they are motivated by the same things. The differentiator is what they do with the information they get. So as I sit here as the CISO of a large company, don't I want things to be discovered? Absolutely, because I want to make sure vulnerabilities are plugged. Don't I want people to be rewarded for the work they have done? Absolutely. If they are not rewarded on the clean side, they'll be rewarded on the dirty side. People will always find a way to get rewards.

So what is responsible disclosure? Suppose there's a vulnerability in some platform and you discover it right now and you go tell the world about it. Some researchers would say that's exactly what you should [do], because otherwise the vendor won't address it. And I say, 'Wait a minute. Time out. You are now telling people how I can be compromised, and that's a big problem.' [On the other hand], you discover something and you tell vendor XYZ that there's a vulnerability in their product, and they do nothing for 200 days -- they simply are not responsive. What do you expect the researcher to do? So we haven't created an environment among the vendor, the ethical researcher and the business consumer that is synergistic and that we can all benefit from. I think it is doable, but I'm not sure anybody is really taking on that challenge.

How should vendors be responding to vulnerabilities that are discovered in their products? In an ideal world there wouldn't be any vulnerabilities and they wouldn't have to disclose anything. But that is not the world we live in. Really critical vulnerabilities must be plugged immediately, whatever 'immediately' might be. On the other hand, what is critical? I think what you are seeing in the industry today is that most vendors are trying to be very conservative in their ratings of vulnerabilities. What they are really trying to do is limit the exposure generated by their having had a vulnerability. As a vendor, if you call everything critical, then you've covered your bases. You've said: here's the vulnerability, here's the fix for it, and you need to apply it right away.

How is this affecting you? One of the challenges we are having as a group is that vendors don't tell us everything they have fixed. As an example, when a product vendor releases a patch, they say it is critical and addresses a vulnerability in XYZ service. At the same time, the patch addresses vulnerabilities in three other services -- but we don't know that when we review the release notes associated with the patch. And we say it's not really critical, because we have other mitigating controls that can address the vulnerability. So we are not going to make it a fire drill, critical patch deployment -- we are going to treat it as medium criticality in our environment. What we don't know is that three other holes have been plugged -- and, by the way, we don't have mitigating controls for one or more of those, and we may get bitten.
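
To make the triage problem Litt describes concrete, here is a minimal sketch in Python. The service names and the split between what the vendor discloses and what the patch actually fixes are hypothetical, invented purely for illustration; this is not GM's actual process.

```python
# Hypothetical patch triage: rate a patch's local priority from the
# vulnerabilities the vendor discloses, given our mitigating controls.

# The vendor's release notes mention only one fixed service...
disclosed_fixes = {"xyz_service"}
# ...but the patch actually closes holes in three other services too.
actual_fixes = {"xyz_service", "svc_a", "svc_b", "svc_c"}

# Services we can already protect by other means (firewalled off,
# disabled, etc.) in our own environment.
mitigated = {"xyz_service", "svc_a"}

def local_priority(fixed_services: set, mitigated_services: set) -> str:
    """Treat a patch as critical only if it fixes a hole we cannot
    cover with existing mitigating controls."""
    return "critical" if fixed_services - mitigated_services else "medium"

print(local_priority(disclosed_fixes, mitigated))  # medium  -- what we decide
print(local_priority(actual_fixes, mitigated))     # critical -- the real risk
```

Triaging on the disclosed fix alone, the patch looks safe to schedule at medium priority; with full disclosure, it would have been deployed as a fire drill.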

So what would you rather see happen? It's very context-dependent. In my position, of course, I'd like to know everything [relating to a vulnerability]. But is that reasonable? And if you were in a vendor's shoes, could you do that? As a vendor, who can you trust and who can't you trust [with vulnerability news]? Can you trust every CISO out there, every CIO, every CTO? The answer is clearly no. So how do you differentiate? If you go tell the federal government something and don't tell me about it, then I get mad at you. Or if you go and tell the press something and don't tell me about it, I get mad at you. I think enough information should be released so that people can make a reasonable assessment of how vulnerable they are. But we don't want to provide information that helps unethical people compromise systems before those issues can be addressed.

How is all of this forcing you to respond? We don't want to have our environment continuously in turmoil from being forced to patch constantly, in rapid fashion, without the ability to validate that these patches are not going to hurt us. If we roll something out and then have to fix it because we blew up our own environment, we have perhaps done worse than if we had done nothing at all. So we need to have time. CISOs need a strategy to buy themselves time so they can do due diligence.

What's your strategy for doing this? If you think about it, many exploits try to load code onto a device for a variety of reasons -- to directly compromise a system, to take control of it over time, or to use it as a bot in the future. So a technology that, for example, would prevent automated, non-authenticated deployment of executable software on a device could help prevent those types of exploits. If you can't put the code on my machine, that thing can't do a thing to me. And this can happen not just at the client level, but at the server level and in the network infrastructure as well.
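
The control Litt sketches here is essentially application allowlisting: only software authorized through an authenticated deployment process is permitted to run. The following is a minimal user-space illustration of that idea, assuming a hash-based allowlist; real products enforce this in the operating system kernel, and the digest shown is just a placeholder.

```python
# Sketch of hash-based executable allowlisting (illustrative only).
import hashlib

# Digests of approved executables, e.g. populated by an authenticated
# software-deployment process. The value below is the SHA-256 of an
# empty file, used purely as a placeholder.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_authorized(path: str) -> bool:
    """Return True only if the file's digest is on the allowlist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES

# A dropper that writes a new, unapproved binary onto the machine would
# fail this check, so its payload never executes -- regardless of how
# the code arrived on the device.
```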

Microsoft has drawn a lot of criticism for its security failures. How do the other major vendors compare? I think Microsoft is an easy target, so people pick on Microsoft all the time. You can complain about Microsoft all you want, but you also have to recognize they have made significant investments, and I think they have made significant progress. They are still not where we want them to be, but they are significantly better. Then you start looking at the other people and say, 'What kind of a job are they doing?' There was one vendor that released 82 or 83 patches very recently. Not stellar, right? I am glad they were released, but on the other hand, you are talking about core business systems here. With that many changes in the software, did they all happen to be discovered at the same point in time, or were they just held back? And how long was I vulnerable without knowing I was vulnerable? So Microsoft is not the only one with this problem. And quite frankly, I think some of the other vendors are in denial, and they are the ones that worry me the most.