Do ratings make sense for open source?

08.08.2005
By Neil McAllister

Some years ago, I installed an application on a plain-vanilla Sun Solaris server. I read the instructions, copied the required files to the server's hard drive, ran the install script, answered a few questions about my configuration, and ... nothing. The script bombed out with an error message a few seconds later. Subsequent attempts proved no more fruitful.

When I contacted one of the application's developers, he said it would be difficult to diagnose my specific circumstances, but suggested that "a great many users prefer to install the software by hand." Would I like instructions for the manual process?

Faced with the alternative, I agreed that doing it by hand was my "preference," too. And secretly I began to dread what would come next, given that even the install script didn't work properly.

Situations like this are an IT manager's nightmare. Software that doesn't perform as expected, is difficult to configure, or is fraught with hidden bugs can set a project back by weeks. The problem is that it's virtually impossible to find out how a highly touted software package performs under real-world conditions without making the leap yourself.

Helping to reduce that uncertainty is the goal of the Business Readiness Rating (BRR), a newly proposed standard cosponsored by the Carnegie Mellon West Center for Open Source Investigation, Intel, O'Reilly CodeZoo, and SpikeSource.

BRR groups numeric ratings for attributes such as functionality, quality, security, documentation, and community into an aggregate score that ranks the software's overall readiness on a scale of one to five. In short, it's a system not unlike the one that the InfoWorld Test Center uses to rate the products it reviews.
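
To make the mechanics concrete, here is a minimal sketch of how such an aggregate might be computed. The categories and weights below are my own illustrative assumptions, not the weightings the BRR specification actually defines.

    # Rough sketch of a BRR-style aggregate score.
    # The categories and percentage weights are illustrative assumptions,
    # not the official BRR categories or weightings.
    WEIGHTS = {
        "functionality": 25,
        "quality": 20,
        "security": 20,
        "documentation": 15,
        "community": 20,
    }  # weights sum to 100

    def readiness_score(ratings):
        """Combine per-category ratings (each 1 to 5) into one 1-to-5 score."""
        return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS) / 100

    # Example: a project strong on functionality but weak on documentation.
    print(readiness_score({
        "functionality": 5, "quality": 4, "security": 4,
        "documentation": 2, "community": 4,
    }))  # prints 3.95

The point is simply that a handful of one-to-five category ratings collapse into a single number an evaluator can compare at a glance.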

The BRR consortium doesn't intend to rate every open source project itself. Instead, BRR is meant for everybody. Independent reviewers can adopt it as their rating metric, if they choose. Developers might even post evaluations of their own code, in effect letting prospective users know if a particular release isn't ready for mission-critical deployment.

If BRR catches on, I can foresee all kinds of interesting uses for it. Imagine, for example, searching online open source project repositories with the added constraint that you're only interested in projects with a BRR rating of four or above. Just that step alone could narrow down your options considerably, streamlining the decision-making process and reducing the likelihood of nasty surprises.
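
Mechanically, that kind of search is nothing more than a threshold filter. The sketch below assumes a made-up repository listing; the project names and scores are invented for illustration.

    # Hypothetical repository search filtered by BRR score (sample data is made up).
    projects = [
        {"name": "alpha-cms", "brr": 4.2},
        {"name": "beta-queue", "brr": 2.8},
        {"name": "gamma-db", "brr": 4.7},
    ]

    ready = [p for p in projects if p["brr"] >= 4]
    print([p["name"] for p in ready])  # ['alpha-cms', 'gamma-db']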

There"s just one problem. That installation ordeal I mentioned earlier wasn"t for an open source package. On the contrary; it was an enterprise-class e-commerce package from a top-tier vendor -- and it was junk. It cost nearly a half-million dollars, but you couldn"t even install it without a call to tech support.

What"s more, my experience was hardly unique. Why point the finger at open source, when it has no monopoly on bugs, sloppy coding, poor documentation, and low reliability? Anybody who"s worked with commercial enterprise software knows that it has its share of problems. Creating a rating system for open source projects can serve a useful purpose, but if it ignores the commercial competition it could also have a negative result, by helping to "ghettoize" open source.

A rating says to potential users: Watch out. Think twice. Double check. Get the facts.

That"s all good advice for any IT project. But if promoting open source is the goal, is it really the best message to lead with? Or will it just give that slipshod e-commerce vendor I mentioned more ammunition for its next sales pitch?