Petascale storage may trickle down to you

06.11.2006
Discussions about supercomputer performance almost always center on processing speed -- how many gazillion operations per second can be performed by the giant machines. Makers and users of supercomputers also like to brag about things like the number of processors, the amount of memory and the bandwidth available for moving data about.

Such metrics are important determinants of how much work the machines can do. Less often focused on, but becoming critically important, are questions of storage: How much disk capacity do the computers have? How fast can data be written to and read from storage? How easily and quickly can an application be restarted when a disk fails? How can file systems be scaled up to efficiently handle petabytes of information? How the heck can you find something when your system has 30,000 disks?

Those questions and more will become the focus of the Petascale Data Storage Institute (PDSI), which was recently founded by computer scientists at three universities and five of the U.S. Department of Energy's national laboratories with a five-year, US$11 million DOE grant. 'The overall goal is to make storage more efficient, reliable, secure and easier to manage in systems with tens or hundreds of petabytes of data spread across tens of thousands of disk drives, possibly used by tens of thousands of clients,' says Ethan Miller, a computer science professor at the University of California, Santa Cruz.

That system may not much resemble the one used by your accounting department, but the computer scientists at the institute say -- and the vendor sponsors are hoping -- that new technologies from petascale storage research will trickle down to commercial users.

'The use of high-performance computer clusters in many commercial applications, [such as] oil and gas, semiconductors and biotechnology, is growing substantially,' says Garth Gibson, a principal investigator for the PDSI and a professor at Carnegie Mellon University. He adds that companies are increasingly using supercomputers to boost revenues. 'High-performance computing is not so much about cost reduction as it is about improving the quality of products,' Gibson says.

Disk dilemmas

Storage systems have the unfortunate quality of not scaling well. Here are some of the problems that PDSI researchers will try to solve:

-- Disk access times have not kept pace with disk capacity. In 1990, a computer could read an entire hard drive in under a minute. Now it takes three hours or so to read the largest disks. 'It's only going to get worse, and it will take longer and longer to recover from a disk failure,' Miller says.

-- As the number of disks in a system increases, so does the probability that one will fail in any period of time. Right now, big systems at the national laboratories fail once or twice a day, but with multi-petabyte systems, that rate could increase to a failure every few minutes (a rough calculation follows this list).

-- When a disk does fail, the drives that must rebuild the lost data onto a replacement have to work even harder, increasing the chances that one of them will fail, too.
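
A back-of-the-envelope calculation, sketched below in Python, shows why both trends bite. The drive capacities, transfer rates, per-drive mean time between failures (MTBF) and drive counts are illustrative assumptions, not figures supplied by the institute.

    # Rough arithmetic behind the first two problems above. All drive numbers
    # here are illustrative assumptions, not figures from the article.

    def full_read_hours(capacity_bytes: float, transfer_mb_per_s: float) -> float:
        """Hours needed to stream an entire drive at its sustained transfer rate."""
        return capacity_bytes / (transfer_mb_per_s * 1e6) / 3600.0

    def hours_between_failures(per_drive_mtbf_hours: float, num_drives: int) -> float:
        """Expected time between failures somewhere in a population of independent
        drives: the per-drive MTBF divided by the number of drives."""
        return per_drive_mtbf_hours / num_drives

    # Capacity has grown far faster than transfer rate, so scan time balloons:
    print(f"{full_read_hours(40e6, 1.0) * 3600:.0f} s")    # ~40 MB drive at ~1 MB/s: under a minute
    print(f"{full_read_hours(500e9, 50.0):.1f} h")         # ~500 GB drive at ~50 MB/s: roughly three hours

    # More drives mean proportionally less time between failures:
    print(f"{hours_between_failures(1e6, 30_000):.0f} h")  # a failure every day or two
    print(f"{hours_between_failures(1e6, 300_000):.1f} h") # ten times the drives, a tenth of the interval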

Applications at the national labs -- for example, simulations of the aging of nuclear weapons -- can run for months. They generate huge amounts of data, in part because they periodically copy the contents of memory to disk as 'checkpoints' in case a disk or processor fails. Researchers will look for faster checkpoint/restart methods, better fault-tolerance technologies and more efficient file systems.
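
The checkpointing idea itself is straightforward. The sketch below, with made-up file names and a toy simulation loop, shows the basic pattern of saving state periodically and resuming from the most recent checkpoint; production codes at the labs do this through parallel file systems rather than a single local file.

    import os
    import pickle

    CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file name
    SAVE_EVERY = 1000           # steps between checkpoints (assumed)

    def run_simulation(total_steps: int) -> dict:
        # Resume from the last checkpoint if one exists, otherwise start fresh.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                step, state = pickle.load(f)
        else:
            step, state = 0, {"value": 0.0}

        while step < total_steps:
            state["value"] += 1.0          # stand-in for one simulation step
            step += 1
            if step % SAVE_EVERY == 0:
                # Write to a temporary file, then rename, so a crash mid-write
                # never leaves a corrupt checkpoint behind.
                with open(CHECKPOINT + ".tmp", "wb") as f:
                    pickle.dump((step, state), f)
                os.replace(CHECKPOINT + ".tmp", CHECKPOINT)
        return state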

One promising approach that's now coming into use at the national labs is a technology called object storage, by which clients can access storage devices directly without going through a central file server. Object storage devices have processors attached to them so that lower-level functions, such as space management, can be handled by the devices themselves. And because data objects contain both data and metadata, it's possible to apply fine-grained, highly intelligent controls for security and other purposes. What's more, object-based storage systems tend to be much more scalable than traditional ones.
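
In rough outline, and with hypothetical class and method names rather than any real object-storage API, the division of labor looks like this: each object carries its own metadata, and each device checks access and manages its own space.

    from dataclasses import dataclass, field

    @dataclass
    class StorageObject:
        """An object bundles data with its own metadata, so security and placement
        decisions can be made per object rather than per volume."""
        object_id: int
        data: bytes
        metadata: dict = field(default_factory=dict)   # e.g. owner, ACL, checksum

    class ObjectStorageDevice:
        """A device with its own processor: it handles low-level space management
        itself instead of relying on a central file server."""
        def __init__(self):
            self._objects = {}

        def put(self, obj: StorageObject, capability: str) -> None:
            # Fine-grained access check made possible by per-object metadata.
            if capability != obj.metadata.get("write_cap"):
                raise PermissionError("capability does not authorize this write")
            self._objects[obj.object_id] = obj

        def get(self, object_id: int) -> bytes:
            return self._objects[object_id].data

    # A client talks to the device directly; a metadata server (not shown) would
    # only hand out object IDs and capabilities, never move the data itself.
    osd = ObjectStorageDevice()
    osd.put(StorageObject(1, b"simulation output", {"write_cap": "token-123"}), "token-123")
    print(osd.get(1))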

Researchers will also work on protocols and APIs, especially those related to Linux. They will help develop extensions to Posix, the portable operating system interface for Unix, to enable more effective use of file systems in highly parallel computer clusters. Researchers will also work with The Open Group and the Internet Engineering Task Force to make the Network File System protocols for file access more capable in highly parallel systems.
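
The article doesn't spell out which extensions will be proposed, but the workload that motivates them can be illustrated with ordinary Posix calls: thousands of clients each writing a disjoint slice of one shared file. The Python sketch below is only that illustration, with assumed record sizes and file names.

    import os

    RECORD_SIZE = 4096   # assumed fixed record size per client

    def write_my_slice(path: str, rank: int, payload: bytes) -> None:
        """Client number `rank` writes its record at offset rank * RECORD_SIZE."""
        assert len(payload) <= RECORD_SIZE
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
        try:
            os.pwrite(fd, payload.ljust(RECORD_SIZE, b"\0"), rank * RECORD_SIZE)
        finally:
            os.close(fd)

    # With thousands of clients doing this at once, the file system has to
    # coordinate file sizes, caches and locks; that coordination overhead is the
    # kind of problem the protocol and API work is meant to ease.
    for rank in range(4):
        write_my_slice("shared_output.dat", rank, f"data from rank {rank}".encode())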

The PDSI will explore a number of emerging technologies, such as phase-change RAM, Miller says. PRAM, recently announced by Samsung Electronics Co., offers the speed of dynamic RAM with the nonvolatility of flash memory. Miller says it's the perfect place to put metadata because it can be accessed much more quickly than if it were on disk, thereby making object storage systems much faster.
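
One way to picture that, with purely illustrative names, is a two-tier store in which the metadata index lives in fast, byte-addressable memory standing in for PRAM, while only the bulk data lives on disk.

    # Two-tier sketch of the idea; all names and structures are illustrative.

    class PramMetadataIndex:
        """Small metadata records kept in fast nonvolatile memory."""
        def __init__(self):
            self._entries = {}                 # file name -> (object ID, attributes)

        def insert(self, name, object_id, attrs):
            self._entries[name] = (object_id, attrs)

        def lookup(self, name):
            return self._entries[name]         # served from fast memory, no disk seek

    class DiskObjects:
        """Bulk object data kept on (slow) disks."""
        def __init__(self):
            self._data = {}

        def write(self, object_id, payload):
            self._data[object_id] = payload

        def read(self, object_id):
            return self._data[object_id]       # the only step that touches a disk

    # Opening a file touches only the fast metadata tier; reading touches the disk once.
    index, disk = PramMetadataIndex(), DiskObjects()
    disk.write(42, b"checkpoint data")
    index.insert("run-007/ckpt", 42, {"owner": "miller"})
    object_id, _ = index.lookup("run-007/ckpt")
    print(disk.read(object_id))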

Miller says PRAM might also be used to store indexes used by search engines, greatly accelerating them as well. That increased speed may prove to be of interest to businesses such as oil companies that have huge stores of private data but lack the enormous resources of a company like Google Inc.

Few corporations will ever have systems the size of those at the national labs, with tens of thousands of disks, says Miller. But even desktop systems, which will have more and more disk drives over time, will experience some of the challenges the PDSI will address.

'I can't tell you yet which ones they will be,' he says. 'But problems at the high end have a nasty habit of trickling down to the low end.'