CPUs rev new engines

Oct. 28, 2004
By Gary H.

In 1971, Intel Corp. introduced its first microprocessor chip, the 4004. The 4-bit processor chugged along at a mere 104 kHz. In the 33 years since then, processor clock speeds -- and performance -- have doubled about every 18 months.

Today, however, it's becoming more difficult and expensive to boost the speed of processors while keeping them cool. Chip designers use many techniques to wring more throughput from a processor chip without increasing its clock rate. Those techniques include multithreading, instruction-branch prediction and clever uses of cache. But the most promising approach is to put more than one processing engine on a chip.

In 2001, IBM Corp. introduced the first mainstream "dual-core" chip, the Power4, for its IBM eServer pSeries and iSeries servers. Early this year, Sun Microsystems Inc. shipped its UltraSparc IV with two cores for its Sun Fire V series servers, and Hewlett-Packard Co. unveiled its own dual-core PA-RISC 8800 processor. Advanced Micro Devices Inc. responded last summer by demonstrating an x86-based, 64-bit, dual-core Opteron processor. Intel Corp. subsequently announced plans to ship its Itanium 2-based, dual-core Montecito CPU in 2005.

The chip makers say that within two years, most processor chips -- from desktop systems on up -- will have two or more processing units. The reasons for this are compelling. A dual-core chip might provide twice the performance of a single-core chip at a much lower cost than two single-core chips. Communication between two processors is faster when they're on one chip, and cache sharing can make processing more efficient. Dual-core processors also use less space, consume less power and generate less heat than separate processors do.

Reality check

Vendors claim that multicore chips are well suited for transaction processing and for database and scientific applications. In practice, though, adding a second core falls short of doubling performance.

"It"s probably fair to say that the realistic range is 40 percent to 80 percent faster," says Kevin Krewell, editor in chief of the "Microprocessor Report" newsletter and an analyst at In-Stat/MDR in San Jose. They"re less effective on single-application machines and for applications whose instructions can"t be broken into parallel streams, he adds.

While the number of transistors on a chip is still doubling every 18 months, how that extra capacity is used is about to change. "This is the end of the clock-speed race," Krewell says. "As more transistors are available, do you go for higher instructions per cycle? Most people think we have come close to the limit of what can be done there." So those extra transistors are used to build another processing engine -- and to enable multithreading, in which multiple instruction streams, or threads, execute in parallel. Indeed, earlier this month, Intel scrapped its plan to boost the speed of its Pentium 4 chip from 3.6 GHz to 4 GHz in favor of enlarging on-chip cache.
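As a minimal sketch of what "multiple instruction streams executing in parallel" means at the software level, the Java fragment below splits a summation over an array across two threads; on a dual-core chip, each thread can run on its own core. The class and the data are hypothetical, purely for illustration.

    // Minimal sketch: two threads sum the halves of an array in parallel.
    // On a dual-core processor, each thread can execute on its own core.
    public class TwoThreadSum {
        public static void main(String[] args) throws InterruptedException {
            final long[] data = new long[10_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            final long[] partial = new long[2];        // one result slot per thread
            Thread t0 = new Thread(() -> {             // first instruction stream
                for (int i = 0; i < data.length / 2; i++) partial[0] += data[i];
            });
            Thread t1 = new Thread(() -> {             // second instruction stream
                for (int i = data.length / 2; i < data.length; i++) partial[1] += data[i];
            });

            t0.start(); t1.start();   // both streams run at the same time
            t0.join();  t1.join();    // wait for both to finish
            System.out.println("sum = " + (partial[0] + partial[1]));
        }
    }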

Vendors are working on designs that go beyond two cores, but they face a few challenges. First, at current semiconductor circuit sizes of 130 and 90 nanometers, putting more than two cores on a chip is difficult. But chips with four or more cores will become common as the industry moves to 65 nm technology.

Sun is already working on such a chip. The 90 nm Niagara chip, due in 2006, will support Solaris and hold eight cores. Niagara is intended to be "Web-facing, the first tier in the server room," where it might, for example, handle 32 user searches at once -- one per hardware thread, with each core running four threads -- says Marc Tremblay, a chief architect for processors at Sun.
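What "32 user searches at once" could look like in code is sketched below: a fixed pool of 32 worker threads, one per hardware thread on an eight-core chip running four threads per core. This is only an assumption about how such a server might be written, not Sun's software; SearchServer, handleSearch and the queries are placeholders.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch: a pool of 32 worker threads serving search
    // requests, one per hardware thread on an eight-core, four-thread chip.
    public class SearchServer {
        private static final int HARDWARE_THREADS = 32;   // 8 cores x 4 threads each
        private final ExecutorService pool =
                Executors.newFixedThreadPool(HARDWARE_THREADS);

        // handleSearch stands in for whatever work a real query needs
        void submit(final String query) {
            pool.execute(() -> handleSearch(query));
        }

        private void handleSearch(String query) {
            System.out.println("searching for: " + query);
        }

        public static void main(String[] args) {
            SearchServer server = new SearchServer();
            for (int i = 0; i < 32; i++) {
                server.submit("query-" + i);   // 32 searches in flight at once
            }
            server.pool.shutdown();
        }
    }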

Another problem with multicore chips is software, says Krewell. To use that many processors efficiently on one die, the operating system must perform a fair amount of work. "Windows XP scales reasonably well in four-way and eight-way systems, but it's not going to apply so well to 16- or 32-way systems," he says.

And even with dual-core processors, software licensing issues could trip up early adopters, says Krewell. Many vendors license software by the processor. Until now, each generation of CPU chips has increased processor clock speeds, often doubling performance without affecting software costs. If the next generation doubles performance by adding a second processor core, software licensing costs could double as well.
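The arithmetic behind that concern is simple, as the back-of-the-envelope sketch below shows; the four-socket server and the $40,000 fee are made-up numbers, not prices from any vendor.

    // Back-of-the-envelope sketch with made-up numbers: what per-core
    // licensing does to a four-socket server once each socket is dual-core.
    public class LicenseCost {
        public static void main(String[] args) {
            int sockets = 4;              // hypothetical server
            int coresPerSocket = 2;       // dual-core chips
            long feePerUnit = 40_000;     // assumed license fee, not a real price

            long perSocket = sockets * feePerUnit;                    // 160000
            long perCore   = sockets * coresPerSocket * feePerUnit;   // 320000 -- twice as much

            System.out.println("licensed per processor: $" + perSocket);
            System.out.println("licensed per core:      $" + perCore);
        }
    }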

As dual-core processor chips become the norm over the next two years, software vendors' attempts to charge per core could backfire, says Krewell. "Does it make open-source software, like MySQL, more attractive, and does it cause a shift in corporate buying to open-source packages that are much more flexibly priced?" he asks. That's a question users are likely to answer over the next 24 months.