How to Implement Next-Generation Storage Infrastructure for Big Data

April 16, 2012
Everyone is talking about Big Data analytics and the business intelligence marvels that come with it these days, but before organizations can leverage that data, they have to figure out how to store it. Managing data stores at the petabyte scale and beyond is fundamentally different from managing traditional large-scale data sets. Just ask Shutterfly.

Shutterfly is an online photo site that differentiates itself by letting users store an unlimited number of images, all kept at their original resolution and never downscaled. The company also says it never deletes a photo.

"Our image archive is north of 30 petabytes of data," says Neil Day, Shutterfly senior vice president and chief technology officer. He adds, "Our storage pool grows faster than our customer base. When we acquire a customer, the first thing they do is upload a bunch of photos to us. And then when they fall in love with us, the first thing they do is upload a bunch of additional photos."

To get an idea of the scale we're talking about, one petabyte is equivalent to 1,000 terabytes, or 1 million gigabytes. The data collected by NASA's Hubble Space Telescope comes to a bit more than 45 terabytes, and a single terabyte could hold about 17,000 hours of audio compressed at 128 kbit/s.
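The arithmetic behind these comparisons is easy to check. Here is a minimal Python sketch, assuming decimal (SI) units and a 128 kbit/s stream, a typical MP3 bitrate:

```python
# Sanity-check the scale figures above, using decimal (SI) units:
# 1 PB = 1,000 TB = 1,000,000 GB.
TB_PER_PB = 1_000
GB_PER_PB = 1_000_000

# How much audio fits in one terabyte at 128 kbit/s (typical MP3 bitrate)?
terabyte_bits = 1_000_000_000_000 * 8  # 1 TB expressed in bits
bitrate_bps = 128_000                  # 128 kbit/s in bits per second

seconds = terabyte_bits / bitrate_bps
hours = seconds / 3600

print(f"1 PB = {TB_PER_PB:,} TB = {GB_PER_PB:,} GB")
print(f"1 TB of 128 kbit/s audio ~ {hours:,.0f} hours")  # ~17,361 hours
```

The roughly 17,000-hour figure only works out if the bitrate is in kilobits per second, not kilobytes; at 128 kilobytes per second, a terabyte would hold only about 2,170 hours.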

"Petabyte-scale infrastructures are just an entirely different ballgame," Day says. "They're very difficult to build and maintain. The administrative load on a petabyte or multi-petabyte infrastructure is just a night and day difference from the traditional large-scale data sets. It's like the difference between dealing with the data on your laptop and the data on a RAID array."