Are we feeling safer yet?

02.08.2005
By Pete Lindstrom

Last week, Mike Lynn, a researcher formerly working for Internet Security Systems Inc. (ISS), gave a presentation at the Black Hat conference on the internal workings of Cisco Systems Inc.'s Internetworking Operating System (IOS) and exploit techniques that, in short, rocked the security world, created quite a legal stir and caused some security pundits to accuse the usual characters (big business) of being "thugs."

This commentary isn't about whether Cisco and ISS acted like thugs or whether Mike Lynn broke the law. That's a very interesting discussion, but not for practical-minded enterprises that need to understand the nature of the information that's now in wide release. The real question for enterprise security professionals should be, "Are we safer based on this new information?"

As is always the case, Lynn and those rallying behind his cause claim that presentations like his make the world a safer place. I am going to go out on a limb and suggest a definition for "safer" that simply means we are less likely (either collectively or as individual entities) to suffer a loss now that this information is available. So the true test for anyone wanting to provide this information is whether the net effect is a higher or lower risk of loss; higher risk equals less safe, and lower risk equals a safer environment.

In assessing the risk, it's important to fully evaluate both the external threat environment and the vulnerability of the target. In cases of disclosure, threats (potential attackers) and vulnerabilities (target weaknesses) are meant to move in opposite directions: the higher risk associated with the increased threat after disclosure should be more than offset by the reduced vulnerability level, creating a lower-risk, safer environment.
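To put some (entirely hypothetical) numbers on that trade-off, here is a minimal sketch in Python of the risk calculus described above; the probabilities, the impact figure and the before/after split are assumptions made up for illustration, not measurements.

    # Hypothetical illustration of the disclosure trade-off described above.
    # Every number here is invented for the example, not measured.

    def risk(threat: float, vulnerability: float, impact: float) -> float:
        """Risk as the product of threat level, vulnerability level and impact."""
        return threat * vulnerability * impact

    IMPACT = 1_000_000  # assumed loss, in dollars, if a router is compromised

    # Before disclosure: low threat, moderate vulnerability everywhere.
    before = risk(threat=0.05, vulnerability=0.6, impact=IMPACT)

    # After disclosure: the threat rises for everyone, but only the shops
    # that actually remediate see their vulnerability fall.
    after_patched = risk(threat=0.30, vulnerability=0.05, impact=IMPACT)
    after_unpatched = risk(threat=0.30, vulnerability=0.6, impact=IMPACT)

    print(f"before disclosure:     {before:>10,.0f}")
    print(f"after, patched shop:   {after_patched:>10,.0f}")
    print(f"after, unpatched shop: {after_unpatched:>10,.0f}")

In a model like this, disclosure only makes an environment "safer" when the drop in vulnerability outweighs the jump in threat -- which is exactly the question the rest of this column turns on.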

In the case of Lynn's research and any other "how-tos" on vulnerabilities and exploits, the threat increases when more of the likely attackers gain access to information they didn't previously have and could not have developed on their own. The effect is to support their efforts toward developing exploits to compromise systems. With the publication now available to every likely attacker with Internet access, the threat is certainly increased. We see evidence of this increased threat routinely in the security advisories announcing new vulnerabilities, and it's borne out often in the form of worms like Blaster and Sasser.

Since the increased threat level is obvious -- in fact, in Lynn's case, there are already reports out of DefCon of hackers working "around the clock" to create new exploits against Cisco -- the question becomes, how can Lynn's information be leveraged to reduce the vulnerability of Cisco's routers enough so that the overall risk is decreased?

And the answer is, it really can't be for the Internet overall. It can only reduce the risk for people who haven't been updating their Cisco firmware. You see, the Lynn information only provided new details about IOS as "proof" of the possibility of remote-code execution on that system. That is, many bugs once considered mere denial-of-service issues must now be treated as remote-root bugs on the IOS platform.

So, if you haven't been updating your firmware and/or using a separate management channel for your routers, then you may be able to reduce your risk by doing those things now. Any high-security environment that already has these practices firmly in place sees only an increased risk (via the threat) but presumably feels confident enough about its protection to withstand it.
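If you want to know which camp you fall into, a quick inventory audit is enough. Here is a minimal Python sketch of the idea; the data model, the device names and the 12.3 version cut-off are all assumptions made up for the example, not a real tool or a real Cisco baseline.

    # Toy audit of a router inventory: flag devices that have not been kept
    # up to date or that are still managed over the production network.
    # The inventory contents and the version cut-off are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Router:
        name: str
        ios_version: tuple       # e.g. (12, 4) for IOS 12.4
        out_of_band_mgmt: bool   # True if a separate management channel is used

    MIN_VERSION = (12, 3)        # hypothetical "current enough" firmware level

    def needs_attention(router: Router) -> list:
        issues = []
        if router.ios_version < MIN_VERSION:
            issues.append(f"firmware {router.ios_version} is older than {MIN_VERSION}")
        if not router.out_of_band_mgmt:
            issues.append("managed in-band; no separate management channel")
        return issues

    inventory = [
        Router("edge-1", (12, 1), out_of_band_mgmt=False),
        Router("core-1", (12, 4), out_of_band_mgmt=True),
    ]

    for router in inventory:
        for issue in needs_attention(router):
            print(f"{router.name}: {issue}")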

Which brings us to the concept of inevitability. One argument about information disclosure is that it's inevitable that the information will be made available to the "bad guys" and not the "good guys," and thus the bad guys would have an advantage. If that were the case, we would expect to see it in practice, yet in the history of the Internet there has been meager evidence of real-world exploits against production systems using unknown vulnerabilities. Think about it -- it's much easier to compromise people, of which there is ample evidence in the annals of social engineering techniques, than it is to attempt to hack into a system without getting caught or at least leaving a trail.

Although there is no proof, I agree that it's possible, even likely, that bad guys are compiling their own knowledge base of vulnerabilities and maybe even exploiting them. However, their practice and our own good-guy practices aren't related (right?), and so there is no reason to believe that we are finding the same vulnerabilities that they are (there are many to choose from, on many different platforms).

That means we are wasting our time with the "comfort food" of known-vulnerability protection while the bad guys are laughing at the histrionics and continuing to exploit us. After all, it's the unknown vulnerability that we're really worried about, right?

But let"s assume for a minute that we really are finding the same vulnerabilities that the bad guys would have found. Then the discovery and disclosure process would be a good thing for the overall Internet, right? I don"t think so. We"ve been fooling ourselves about disclosed vulnerabilities for so long that we"ve given ourselves the crutch of patches while we ignore the true threat -- the bad vulnerabilities that still may be out there. If the threat is real, then we will find out about it very quickly and we will develop means of protection, some of which might not exist today.

But we've known for some time how to secure our environments: attack-surface reduction through stronger configuration management and monitoring. A secure environment doesn't rely on protection against new vulnerabilities for its strength; it hardens configurations, segregates activities, validates trusted components and monitors all activity.
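As one small example of what "configuration management and monitoring" can look like in practice, here is a Python sketch that compares today's device configurations against an approved baseline and flags any drift; the directory layout and file naming are assumptions for the example, not a prescribed tool.

    # Minimal sketch of configuration monitoring: keep a hash of each device's
    # approved configuration and flag anything that has drifted from it.
    # The "baselines" and "current" directory layout is assumed for illustration.

    import hashlib
    from pathlib import Path

    BASELINE_DIR = Path("baselines")  # known-good configs, reviewed and approved
    CURRENT_DIR = Path("current")     # configs pulled from the devices today

    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_drift() -> None:
        for baseline in sorted(BASELINE_DIR.glob("*.cfg")):
            current = CURRENT_DIR / baseline.name
            if not current.exists():
                print(f"{baseline.name}: no current config collected")
            elif digest(current) != digest(baseline):
                print(f"{baseline.name}: configuration drift detected")

    if __name__ == "__main__":
        check_drift()

The particular script matters less than the principle: a hardened, monitored environment gets its strength from knowing exactly what its systems should look like, not from racing each new advisory.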

The dirty little secret of many security vendors is that they don't believe the threat is real and therefore must manufacture it themselves by publishing new vulnerabilities that can be exploited by script kiddies. They are the ones that find most vulnerabilities. I believe the threat is real, but the current process of vulnerability discovery and disclosure is a security facade that creates a lot of distracting script-kiddie hacking noise.

It's absolutely necessary to address this noise in today's environment, but only because we've made it so. I would rather have researchers, vendors and enterprises spend all of their time and resources focused on real threats rather than the ones we conjure up to make us feel good.

They say knowledge is power, and it seems pretty clear that security professionals really want to feel a bit more in control. That's why we rail against big-business "thugs" like Cisco or our favorite target, Microsoft. But it may be that we are all recommending tanks against an enemy armed with bows and arrows simply because we are afraid of friendly fire from our own machine guns -- the smart guys like Lynn and other vulnerability researchers. We don't really know whether their software is "secure enough" to withstand the true threat, because we ensure that it isn't. Don't fool yourselves into thinking there is a "total security" nirvana. Everything is relative, even with security.

We like to convince not-so-security-minded individuals of the threat, and we often state that we'd rather know about a vulnerability than not know about it. So would I. But we can't know about all vulnerabilities, so we really ought to operate on the basis of reality: there are many latent vulnerabilities out there that could be discovered by bad guys and exploited regardless of what we do with our "formal" process. The real threat is still the one you don't know about. Plan your security around that philosophy, and your defenses will be much stronger.