When the 'solution' is worse than the problem

03.05.2006
If my school-age son lies about a mistake he's made, it reflects poorly on me. After all, it's my job to teach him to own up to his actions, to be more careful with fragile objects, not to leave all his homework to the last day of vacation -- and to make it right, if not now, then the next time.

It's a shame, then, that it's rare to find a good understanding of that same principle among computer operations or development managers.

The scene replays all over the corporate world: It's time to wrap up the findings from a network or application security test, and the assessor -- a tester or perhaps an independent consultant -- says that there are unpatched security vulnerabilities. After an uncomfortable silence throughout the conference room, someone from the team quietly says that that's terrible, just terrible. Is the in-house team unnerved by the risk of losing data? Getting 0wn3d?

No, it's awful news because the team will lose their bonus or perhaps even be penalized if the group hired to detect bugs or vulnerabilities actually finds some.

Inevitably there's a request to change the report -- either redefining terms to hide the risk under lower ratings, or triaging the vulnerabilities so that none of them qualify as an imminent risk.

Sometimes the response is even more hostile towards an outside security consultant; clients will claim that the findings are bogus, the assessor is incompetent, or that the methods were unusual and substantively incompatible with the organization's risk management.

The findings and report are quickly modified before the internal auditors or officers can read them, or buried as deep and fast as possible.

In one recent case I've been told about, internal legal counsel reviewed security audit findings before relaying them to the company officers, specifically so that the report would be privileged communication and not discoverable if the company were sued. It's dishonest and counterproductive -- and, yes, childish -- behavior.

But in the bigger picture, it's the kind of misrepresentation that requires more and bigger lies to sustain as time goes on, growing eventually from a bump in the IT road into a quagmire of legal and financial woes.

Assuming the assessor is competent and the findings are real, why should anyone be punished for discovering flaws?

Many security problems are caused by a mismatch of security controls or by completely innocent mistakes.

On one hand, you might find an online bank's intrusion detection system tuned down because a web application's transaction validation causes chatter on the network -- an instance in which two good security controls unintentionally butt heads.

On the other hand, when the frustrated developer can't get a banking applet in the same system to work until her network admin buddy wipes the firewall rules clean with a bidirectional "allow all," someone needs to light up the fire pit and break out the barbecue sauce. The two situations are not equivalent, and a sane workplace wouldn't treat them as such.

Clearly, a bug, a vulnerability, or any other error condition is not prima facie evidence of incompetence or wrongdoing.

True, any IT or development group should reward good performance and weed out the bad.

It's perfectly reasonable to dig into flaws and vulnerabilities to see if they were caused by incompetence or malice.

But the superficial act of punishing discovery is no substitute for conducting introspective root-cause analysis.

Without understanding vulnerabilities -- or any error condition or bug for that matter -- actual causes go undiscovered and unaddressed, administrators and developers don't learn from their errors, and quality invariably and inevitably declines.

Everyone makes mistakes, and a healthier organization might even have some fun with them -- giving a "fat finger" award for the most interesting configuration error, or taking a developer out for a plate of spaghetti when a convoluted or difficult bug is finally traced back: served con funghi for missing specification changes, alla puttanesca for ignoring the spec altogether.

My son understands the meaning when I serve him an extra slab of cheese for a lame excuse -- it's all part of teaching him that it's better to admit and learn than to hide and lie. A tongue-in-cheek acknowledgement lets him know that while he may get a ribbing, he shouldn't fear repercussions for admitting the truth.

But burying the findings of a security assessment -- or, worse yet, breeding an organizational culture of fear around the discovery and reporting of security vulnerabilities -- has a much more insidious long-term effect than reducing quality.

Demoralization and dishonesty aside, it's downright dangerous because of the urge to modify the audit findings and reports. And as we've seen, reacting poorly to a small problem is an excellent way to create a large one that'll reach far beyond the IT department. Keep in mind that in the current regulatory environment, many IT security controls are closely connected to, and in some instances covered by, rules and regulations such as Sarbanes-Oxley or HIPAA.

For example, a public financial company's firewall configurations are now considered financial controls akin to any process or accounting application that keeps the right funds in the right place.

Burying, lying, or otherwise misrepresenting the state of these controls to a company officer whose signature represents their proper function to the SEC is not a good thing.

Misrepresentation of controls could cascade to other areas of regulatory noncompliance, causing assertions such as a SOX compliance letter or a SAS-70 audit report to become suspect. Worse, I've seen a fearful but conscientious IT organization change the report that goes to the auditors or officers of the company, but send unmodified vulnerability information to development or operations so the problem can be quietly fixed.

That, my geeky friends, is what our financial counterparts refer to as keeping double books.

Once that road is taken, it can be extremely difficult to go back and set things right.

Doing so may require pushing technical changes into production outside the change control systems, surreptitiously modifying reports, and telling deeper lies if someone with an overly honest streak starts asking questions.

Tossing ethics onto the heap with morale and honesty, one has to realize that -- much as some development and IT managers may look upon outsourcing as their way of forming a shell company -- cooking the IT change control records is nowhere near as refined an art as keeping the financial books on a slow simmer.

It's a safe bet that any substantive discrepancies in the IT records will lead to a thorough roasting, whether they're discovered next week or pulled off the shelf years later after a security breach.

The solution depends on the organization, but the principle is clear: Don't shoot the messenger.

More pointedly, IT managers should pay attention to bad news that's accompanied by information useful for fixing the problem(s), and reward forthright honesty.

I'm sure lazier organizations or those tied to inflexible technical methodologies will continue to focus on the placement of blame immediately following the discovery of unexpected risk.

However, they would do well to learn from the professional tone of more mature and constructive organizations, in which honesty is rarely a career-ending mistake.

In short, firms that play hide-and-blame need to grow up.