IT struggles with climate change

06.02.2006

So it seems you must beg, borrow and steal computer resources for this work.

Heimbach: We have to take the cycles where we can find them. But even for the machines that are available, if we really wanted to go to the actual [spatial] resolutions that we need, we probably would not be able to fit those problems on those machines. Give us any machine, and we can immediately fill it with an interesting problem, and we'll still have the feeling we are limited.

Hack: Climate and weather applications... push high-performance computer technology. A decade ago, global climate applications benefited from the extraordinary memory bandwidth of proprietary high-performance architectures, like the parallel vector architectures from Cray and NEC. As scientific computing migrated toward commodity platforms, interconnect technology, in terms of both bandwidth and latency, became the limiting factor on application performance, and it continues to be a performance bottleneck.

Is the Internet adequate for connecting you to the supercomputer centers you use around the U.S.?

Heimbach: Transferring several terabytes of data from NASA Ames [Research Center] to MIT simply cannot be done in a reasonable time. As of a year ago, we were limited by the 100Mbit/sec. bandwidth of the network that connects our department to the outside world. The best sustained rates that could be achieved were on the order of 55Mbit/sec. That would bring us to a transfer time of 1.7 days per 1TB of data.
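The 1.7-days-per-terabyte figure follows directly from the sustained rate Heimbach quotes. A minimal back-of-the-envelope check, assuming a decimal terabyte (10^12 bytes):

```python
# Transfer-time arithmetic for 1 TB at a sustained 55 Mbit/sec.
# Assumption: 1 TB = 10**12 bytes (decimal), as is conventional for storage.

DATA_BYTES = 10**12          # 1 TB of model output
RATE_BITS_PER_SEC = 55e6     # 55 Mbit/sec sustained throughput

seconds = DATA_BYTES * 8 / RATE_BITS_PER_SEC  # bytes -> bits, divide by rate
days = seconds / 86_400                       # 86,400 seconds per day

print(f"{days:.1f} days per TB")  # ~1.7 days, matching the figure above
```

The same arithmetic shows why the full 100Mbit/sec. link would still need nearly a day per terabyte even at perfect utilization.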