US to spend hundreds of millions on ultrafast supercomputers

27.06.2006
The U.S. government is planning to spend hundreds of millions of dollars over the next several years to develop huge supercomputers with power beyond anything available today. The aim is to address the most challenging problems facing science, as well as national security and industry.

Once completed, these systems will be capable of sustained petascale computing speeds, meaning quadrillions of calculations per second. For a sense of scale, the leading machines on the current Top500 supercomputer list reach only multiple TFLOPS (trillions of floating-point operations per second). The latest Top500 list, which is updated twice a year, is due out tomorrow.
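To make the unit arithmetic concrete: one petaflop is a thousand teraflops, or 10^15 operations per second. The short Python sketch below is purely illustrative; the 207 TFLOPS figure is the Qbox result described later in this article.

    # Illustrative arithmetic only: relating teraflops and petaflops.
    TERA = 10**12   # 1 TFLOPS = one trillion floating-point operations per second
    PETA = 10**15   # 1 PFLOPS = one quadrillion floating-point operations per second

    sustained_tflops = 207                         # the Qbox run on BlueGene/L, cited below
    ops_per_second = sustained_tflops * TERA
    print(f"{ops_per_second:.3g} operations/second")   # ~2.07e+14
    print(f"{ops_per_second / PETA:.3f} PFLOPS")       # ~0.207, about a fifth of a petaflop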

But PFLOPS (or "petaflop") systems are coming. Earlier this month, Seattle-based Cray Inc. said it had signed a contract worth US$200 million to deliver a PFLOPS-capable system to the U.S. Department of Energy's (DOE) Oak Ridge National Laboratory. That system, based on Advanced Micro Devices Inc. processors, will be built in phases of ever-increasing speeds, and is due to be completed in 2008.

The National Science Foundation (NSF) this month began seeking proposals for a supercomputer that could cost as much as $200 million. And in July, the Defense Advanced Research Projects Agency (DARPA), which funded the research that led to the Internet, will award two supercomputer development contracts expected to cost several hundred million dollars.

The scale of the computing power on its way will be so enormous that "we have to change the way we do computational science to really take advantage of these machines," said Dimitri Kusnezov, head of the DOE's Advanced Simulation and Computing program, which operates the world's most powerful supercomputer, IBM's BlueGene/L. That system, with more than 131,000 IBM PowerPC processors, was ranked No. 1 when the Top500 list was last updated in November.

The DOE's BlueGene/L broke a record this month when it ran a scientific code called Qbox at a sustained 207 TFLOPS. While the system benchmarks higher on test codes, achieving high levels of performance with a real-world application is a more difficult task because of the complexity and size of the code, according to those involved with the project.