The full disclosure debate

September 30, 2005
By Roger A.

As the new InfoWorld security columnist, I've not backed away from controversy. I have intentionally picked hot topics in order to generate reader interest and feedback. And nothing generates more debate than the topic of full disclosure.

Full disclosure is the idea that all security bugs found, whether by the vendor or a third party, should be disclosed in their entirety in a public forum as soon as possible, whether or not the vendor is notified, and whether or not a reasonable defense is possible. The thinking behind this is that full disclosure forces the vendor to address the problem faster than they normally would and helps administrators to prepare defenses.

Years ago I was a strong advocate on the full disclosure side. Anyone who didn't believe in full disclosure was an enemy of my utopian world and was helping to perpetuate bad coding. But lately I've been rethinking my position.

What changed? Well, my collective personal experience over the last 19 years. Full disclosure advocates claim that all defects should be publicly shared to benefit the common good. If an exploit is known and not shared, then the vendor might be slower to fix the hole. This statement is valid and true in most cases: Nothing focuses a vendor's attention like the whole world reading about the exploit and hackers looking to take advantage of it.

Practically, if a hole has been discovered by someone, it has probably been "discovered" by lots of other people who aren't as vocal. Some of those people are bound to be black hat hackers, who will use the holes to exploit systems.

If the vendor does not publicly reveal the hole, the people who know about it are free to exploit it while consumers remain clueless. Fortunes can be stolen, private information accessed, and secrets revealed. But if the hole is publicly disclosed, administrators have an opportunity to react and put up defenses to counter the exploit, even before the vendor has had a chance to patch the hole.

I still believe most of that line of thinking, but the practical reality of history has challenged my original beliefs. Here's why:

First, most fortunes are stolen using disclosed vulnerabilities. Forget the nebulous theory that black hat hackers use undisclosed vulnerabilities to steal data and money. They can, and they do, but the overwhelming majority of black hats use publicly disclosed vulnerabilities, misconfigurations, and other low-hanging fruit. Why invent something new when you can use publicly available tools against publicly available exploits?

Second, consider user responses. Research paper after research paper shows that a large percentage of computers remain unpatched more than a year after a patch is released. The admins who are going to patch their systems do so relatively quickly, within the first month after a patch's release, and that group is less than half of the admins out there. The rest don't patch until much later, often not until after a successful exploit causes damage.

Some computers are never patched. Sniff the Internet and you'll see Code Red exploits coming from vulnerable IIS 4 servers (the patch was first released in June 2001), scans for blank SQL passwords, and scans for Apache Web server exploits from 5 years ago. Rarely does a large financial theft result from a zero-day exploit. Almost all occur from aged exploits with published patches available.
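To make that concrete, here is a minimal sketch of the kind of passive sniffing described above, assuming the Python scapy library and sufficient privileges to capture traffic. The "GET /default.ida?" string is the request the Code Red worm sent to spread between IIS servers; everything else (names, the single-port filter) is illustrative rather than a complete intrusion-detection rule.

```python
# Watch port 80 traffic for the characteristic Code Red probe.
# Requires scapy (pip install scapy) and capture privileges (e.g. root).
from scapy.all import sniff, IP, TCP, Raw

CODE_RED_MARKER = b"GET /default.ida?"  # request string used by the Code Red worm

def flag_code_red(pkt):
    # Only inspect TCP packets with a payload headed to a web server port.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if pkt[TCP].dport == 80 and CODE_RED_MARKER in bytes(pkt[Raw].load):
            print(f"Possible Code Red probe from {pkt[IP].src}")

# Capture on the default interface until interrupted; don't keep packets in memory.
sniff(filter="tcp port 80", prn=flag_code_red, store=False)
```

Even a toy monitor like this, left running on an Internet-facing network, will still log probes for vulnerabilities whose patches have been available for years.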

My biggest problem with full disclosure is that it leads to exploitation on a worldwide scale. Prior to the public disclosure of a vulnerability, the only ones exploiting it are the people who discovered it. That list probably includes fewer than a dozen people (I'm purely guessing on this point), and in many cases, maybe just one person. If they decide to use the exploit, they can compromise only a limited number of machines. They have to do it manually and be careful about detection. If they do automate the process or use a worm, the chances of being discovered are high, and their zero-day exploit becomes publicly known and subsequently patched.

So, prior to the public disclosure of the vulnerability, the hacking is limited. But once publicly disclosed, the world learns about the flaw. Every interested hacker begins to use it. One or more groups write and publish public exploit code. Somebody makes a worm, and within three days every computer hooked to the Internet containing the vulnerable code becomes a victim. If a patch isn't available, most admins don't know how to launch a valid defense. They just remain vulnerable.

So, prior to the public disclosure, the number of victim computers on any given night is small and finite. After full public disclosure, we have millions of victims. The combined cleanup effort is millions of times higher after the announcement than before.

But if the exploit finder notifies the vendor and gives it a reasonable amount of time to research and create a patch, listening administrators at least get a chance to protect themselves. The bulk of the workload becomes patching, not repairing systems and removing malware.

I don"t think that full disclosure advocates are my enemy. My only foe is the malicious hackers that are responsible for harming legitimate users and computers. But responsible disclosure versus full disclosure seems more reasonable every day.