Supercomputing turns toward productivity

06.03.2006
To IT managers, high-stakes supercomputing may seem like the land-speed record: a freak show, amusing but hardly relevant. Oh, a car broke Mach 1? And a defense lab has a 280 TFLOPS computer? Cool. Now let's get back to work.

However, supercomputing specialists are wrestling with problems that will affect everyday IT within the next two to five years. Essentially, improvements in processor speed have outstripped improvements in data movement: for some time now, the limiting factor in high-performance computing has been how quickly data can be moved to and from the processors. Indeed, the cylindrical shape of the iconic Cray supercomputers was an effort to shorten the distances data must travel.

Because supercomputing is the sharp end of the technology spear, these data-flow problems -- still manageable in most corporate data centers -- are quickly reaching critical mass in the world's top research facilities. Breakthroughs are needed, and experts acknowledge that answers are elusive.

Backdrop: Multicore, Clusters

There are two key factors in today's supercomputing tumult: multicore chips and the rise of "cluster" supercomputers composed of hundreds or thousands of humble Intel-style CPUs.

Multicore chips place more than one processor core on a single integrated circuit. Dual-core PCs are already common, and experts believe this Moore's Law-driven progression will continue so that by 2010, your garden-variety chip will house 64 cores. With each core running four threads at once, a single chip could execute 256 threads simultaneously.
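
To make the arithmetic concrete, here is a minimal C++ sketch (assuming a C++11 compiler). The 64-core, four-thread figures are the article's projection, not measurements, and hardware_concurrency() simply reports whatever hardware thread count the runtime can detect on the machine it happens to run on:

    #include <iostream>
    #include <thread>

    int main() {
        // Hardware threads the runtime can detect on this machine
        // (cores x threads per core); may report 0 if unknown.
        unsigned detected = std::thread::hardware_concurrency();
        std::cout << "Hardware threads detected: " << detected << "\n";

        // The article's 2010 projection: 64 cores, each running 4 threads.
        const unsigned cores = 64;
        const unsigned threads_per_core = 4;
        std::cout << "Projected threads per chip: "
                  << cores * threads_per_core << "\n";  // prints 256
        return 0;
    }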