The Internet Archive's Wayback Machine gets new data center

March 25, 2009
The Internet Archive announced Wednesday that it has a new computer behind its library of 151 billion archived Web pages. The machine fits in a 20-foot-long outdoor metal cargo container filled with 63 servers that offer 4.5 million gigabytes (4.5 petabytes) of data storage capacity and 1TB of memory.

The Internet Archive has been taking a snapshot of the World Wide Web every two months since 1997, and the images are made available through the Wayback Machine, a Web site that gets about 200,000 visitors a day, generating about 500 hits per second against the 4.5-petabyte database.
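A bit of back-of-envelope arithmetic connects those two figures; the hits-per-visitor ratio derived below is my own inference from the article's numbers, not something the Archive reports.

```python
# Back-of-envelope check on the reported traffic figures.
# VISITORS_PER_DAY and HITS_PER_SECOND come from the article;
# the hits-per-visitor ratio is derived, not reported.
VISITORS_PER_DAY = 200_000
HITS_PER_SECOND = 500
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

hits_per_day = HITS_PER_SECOND * SECONDS_PER_DAY    # 43,200,000
hits_per_visitor = hits_per_day / VISITORS_PER_DAY  # ~216

print(f"{hits_per_day:,} database hits per day")
print(f"~{hits_per_visitor:.0f} database hits per visitor")
```

If those numbers hold, each visit fans out into a couple of hundred lookups against the archive, which fits a site where rendering a single historical page can touch many stored objects.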

"It may be the single largest database in the world, and it's all in a shipping container. I think of the shipping container as a single machine or expression made up of many smaller machines," said Brewster Kahle, digital librarian and co-founder of the Internet Archive, the nonprofit organization that runs the Wayback Machine site.

For the past 13 years, the Internet Archive has been growing rapidly, most recently by about 100TB of data per month. Until last year, the site had been using a more traditional data center filled with 800 standard Linux servers, each with four hard drives. The new Sun Modular Datacenter that powers it now sits on Sun's campus in Santa Clara, Calif., and houses eight racks holding 63 Sun Fire x4500 servers with dual- or quad-core x86 processors running Solaris 10 with ZFS. Each server, a unit nicknamed a "Thumper," packs an array of 48 1TB hard drives.
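Some rough consolidation arithmetic (mine, not the Archive's or Sun's) shows what that migration means in terms of drive spindles:

```python
# Consolidation math from the figures quoted in the article.
# The totals and the ratio are my own derived numbers.
old_servers, old_drives_each = 800, 4    # previous Linux data center
new_servers, new_drives_each = 63, 48    # Thumpers in the container

old_spindles = old_servers * old_drives_each  # 3,200 drives
new_spindles = new_servers * new_drives_each  # 3,024 drives

print(f"before: {old_spindles:,} drives in {old_servers} servers")
print(f"after:  {new_spindles:,} drives in {new_servers} servers")
print(f"~{old_servers / new_servers:.0f}x fewer machines for a similar spindle count")
```

In other words, the container packs roughly the same number of disks as the old 800-server room into about a twelfth as many machines, with ZFS pooling the 48 drives inside each Thumper.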

"The only thing needed besides [the shipping container] are the network connections, a chilled water supply and electricity," said Dave Douglas, Sun's chief sustainability officer. "Customers using this tend to be people running out of data center space and need something quickly or need a data center in remote area where mobility is key."

The nonprofit Internet Archive, which is based in the Presidio in San Francisco, uses an algorithm that repeats a Web crawl every two months in order to add new Web page images to its database. The algorithm first performs a broad crawl that starts with a few "seed sites," such as Yahoo's directory. After snapping a shot of the home page, it then moves to any linked pages within the site until there are no more pages to capture. If there are any links on those pages, the algorithm automatically opens them and archives that content as well.
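For illustration only, the sketch below implements that breadth-first idea in miniature Python. It is a toy under stated assumptions: the seed URL, page limit and politeness delay are invented for the example, and the Archive's production crawler additionally handles robots.txt, deduplication and scheduling at a vastly larger scale.

```python
# Illustrative breadth-first crawl in the spirit described above.
# A toy sketch, not the Internet Archive's actual crawler.
import time
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=50, delay=1.0):
    """Breadth-first crawl from a seed URL, snapshotting each page once."""
    queue, seen, archive = deque([seed]), {seed}, {}
    while queue and len(archive) < max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # skip unreachable or malformed URLs
        archive[url] = html  # the "snapshot" of this page
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)  # resolve relative links
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                queue.append(link)
        time.sleep(delay)  # politeness delay between requests
    return archive

if __name__ == "__main__":
    pages = crawl("https://example.com", max_pages=5)
    print(f"archived {len(pages)} pages")
```

The queue makes the crawl breadth-first: the seed's own links are all captured before the crawler descends into pages they point to, mirroring the broad-then-deep pattern the Archive describes.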