Supercomputing turns toward productivity

06.03.2006

Plan Ahead

To exploit the parallelism inherent in software as fully as possible, IBM Research, among others, is trying to let pieces of a program run out of order, with their results reassembled on the fly, in a technique it calls speculative multithreading. Al Gara, chief architect for IBM's Blue Gene, calls it a potential breakthrough, but as the number of threads increases, the components that must eventually be reassembled into the final result grow exponentially. "You need a combination of hardware [and] software to guarantee correctness," he says, "with an assist from the compiler."
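The scheme Gara describes can be caricatured in a few lines of Python. The sketch below is an invented illustration, not IBM's design: iterations of a dependent loop are launched in parallel on *predicted* inputs, then a sequential commit pass keeps each result whose prediction held and re-executes the rest in order. The names `speculative_chain` and `step` are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def step(x):
    # Stand-in for one iteration's work; each iteration
    # normally needs the previous iteration's result.
    return x * 2 + 1

def speculative_chain(seed, n):
    """Run a dependent chain of n iterations speculatively.

    Returns (results, redone), where redone counts how many
    iterations had to be squashed and re-executed because their
    predicted input turned out to be wrong.
    """
    predictions = [seed] * n  # naive value prediction for every iteration
    with ThreadPoolExecutor() as pool:
        speculated = list(pool.map(step, predictions))

    # Commit phase: walk in program order, keeping speculative results
    # whose predicted input matched the true one, redoing the rest.
    results, actual_input, redone = [], seed, 0
    for i in range(n):
        if predictions[i] == actual_input:
            results.append(speculated[i])       # speculation succeeded
        else:
            results.append(step(actual_input))  # misspeculation: re-execute
            redone += 1
        actual_input = results[-1]
    return results, redone
```

The sequential commit pass is the crux of Gara's point: every speculative result must be validated against the true execution order before it can be kept, and doing that cheaply at scale is where the hardware, runtime, and compiler all have to cooperate.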

The Defense Advanced Research Projects Agency has a program called High Productivity Computing Systems aimed at doubling the productivity of scientific computers every 18 months until at least 2010. In response, several academic researchers are working on languages and compilers for parallel and high-performance computing.

For example, Ken Kennedy, director of the Center for High Performance Software Research at Rice University, is developing a system that uses a library of components to generate high-performance compilers for specific scientific domains, such as biological computing or signal processing.
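In spirit, the approach can be gestured at with a loose Python caricature (this is not Kennedy's actual system; the names `LIBRARY` and `specialize` are invented): a library holds pre-tuned variants of each operation, and the generated compiler picks a variant based on what it knows about the data at a call site.

```python
def matvec_dense(A, x):
    # General matrix-vector product.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_diagonal(diag, x):
    # Tuned variant: a diagonal matrix needs only n multiplies.
    return [d * xi for d, xi in zip(diag, x)]

# Component library: (operation, known property of the data) -> variant.
LIBRARY = {
    ("matvec", "dense"): matvec_dense,
    ("matvec", "diagonal"): matvec_diagonal,
}

def specialize(operation, known_property):
    """Stand-in for the generated domain compiler: choose the tuned
    variant when the data's property is known, else fall back."""
    return LIBRARY.get((operation, known_property),
                       LIBRARY[(operation, "dense")])

# At "compile time" the property is known, so the fast path is chosen:
mv = specialize("matvec", "diagonal")
result = mv([2, 3], [10, 20])  # never touches the off-diagonal zeros
```

The payoff is that a scientist writes the generic call while the domain-specific compiler, armed with the library's knowledge, substitutes the architecture- and domain-tuned implementation.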

He's also exploring ways to "parallelize" Matlab, a favorite of scientists and engineers, so that Matlab arrays can be distributed across a parallel machine. Matlab would then be far easier to use than that old standby Fortran, which must be carefully crafted for specific computer architectures, Kennedy says.