Accessing the web of databases

02.05.2006
I've just posted the fourth installment (http://www.infoworld.com/4109) in my new series of Friday podcasts. It's an interview with Kingsley Idehen, CEO of OpenLink Software.

OpenLink's (http://www.openlinksw.com/) flagship product is a universal database and application server, Virtuoso (http://www.infoworld.com/699), which I last wrote about in 2003.

I convened the interview mainly to discuss Virtuoso's recent transition to open source (http://www.openlinksw.com/blog/~kidehen/?id=951), but our wide-ranging conversation helped me clarify a theme that's been central to my own work, and will dominate the next phase of the Internet's evolution. The Web is becoming a database -- or, more precisely, a network of databases. All of the trends that inform this column -- including Web services, REST (Representational State Transfer), AJAX (Asynchronous JavaScript and XML), and interpersonal as well as interprocess collaboration -- can be usefully refracted through that lens.

I've always regarded the Web as a programmable data source as well as a platform for the document/software hybrid that we call a Web page. Early on, programmable access to Web data entailed a lot of screen scraping. Nowadays it often still does, but it's becoming common to find APIs that serve up the Web's data. If you want to remix the InfoWorld metadata explorer (http://www.infoworld.com/4110), for example, as Mike Parsons did, you can fetch its data directly as XML.
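
Here's a minimal sketch of that style of access in Python. The feed URL and the element names are placeholders invented for illustration; a real remix would point at whatever XML the site actually serves.

# Fetch a site's XML data feed and pull out a couple of fields for remixing.
# The endpoint URL and the <article>/<title>/<topic> elements are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/metadata/articles.xml"  # hypothetical endpoint

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for article in tree.getroot().findall("article"):
    title = article.findtext("title", default="")
    topic = article.findtext("topic", default="")
    print(f"{topic}: {title}")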

Free text search is an even more widely available mode of access. Nearly every site provides that service or outsources it to Google or another engine.

And, of course, sites that act as database front ends support canned queries, the results of which may (if you're lucky) be accessible in machine-friendly formats such as RSS.
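
Consuming such a canned query is just as straightforward when the results come back as RSS. In this sketch the query URL is again an invented placeholder; the item, title, and link elements are standard RSS 2.0.

# Run a canned query whose results are exposed as an RSS feed, then list the items.
# The query URL is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

QUERY_URL = "http://example.com/search?topic=web-services&format=rss"

with urllib.request.urlopen(QUERY_URL) as response:
    rss = ET.parse(response)

for item in rss.getroot().iter("item"):
    print(item.findtext("title", default=""), "->", item.findtext("link", default=""))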

What you can't typically do, though, is create mashups by running ad hoc queries against remote Web data. There are good reasons to think that it's just crazy to export open-ended query interfaces over the Web. No responsible enterprise DBA would permit such access to the crown jewels. But there are all kinds of data sources -- or what Idehen likes to call data spaces -- and a range of feasible and appropriate access modes.

Consider the data space that is my blog. I maintain the data as XML and provide open-ended query access by way of XPath. Want to extract the set of Python code fragments from my corpus? Be my guest; it's just a query on the URL-line. Want to repurpose that data? Go for it -- the output of that query is well-formed XHTML that displays in the browser but is also software-friendly.
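
To make that concrete, here's a hedged sketch of the URL-line idea. The service URL, the name of the xpath parameter, and the class='python' convention for tagging code fragments are assumptions made for illustration, not a description of the actual interface.

# Send an XPath expression on the URL-line, get well-formed XHTML back, and
# query the result again locally. Base URL, parameter name, and markup
# convention are all hypothetical.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://example.com/blog/search"           # hypothetical query service
XPATH = "//pre[@class='python']"                  # hypothetical markup convention

url = BASE + "?" + urllib.parse.urlencode({"xpath": XPATH})

with urllib.request.urlopen(url) as response:
    doc = ET.parse(response)                      # result is well-formed XHTML

# Because the result is XML, it can be filtered again on the client side.
ns = {"xhtml": "http://www.w3.org/1999/xhtml"}
for fragment in doc.getroot().findall(".//xhtml:pre", ns):
    print("".join(fragment.itertext()))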

If you're clever, you can probably write an XPath query that will stall or crash my service. If you do, one minor node of an emerging network of Web databases will drop off the grid until I notice the problem and restart it. But it won't ruin your day or mine. And as we gain more experience with these modes of access, we'll learn how to make them more resilient to attack.

The holistic view of that network should be our focus. In Idehen's view, you'll use something like SPARQL -- a query language for the semantic Web -- to traverse a graph of interlinked sites, and to merge interesting sources into a virtual collection. Then you'll dispatch queries to each member of that collection. They'll offer a range of query styles ranging from free text search to iteration over simple key/value pairs (accessed by way of RSS or Atom) to tree traversal (XPath, XQuery) and relational query (SQL). I think he's got it exactly right.