Multicore chips pose next big challenge for industry

20.03.2009

That means mainstream applications have to be written differently to take advantage of the additional cores. The work is hard to do and creates the potential for new types of software bugs. One of the most common is the "race condition," where the result of a calculation depends on the separate parts of a task completing in a certain order; if they do not, errors can result.
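
The textbook case is two threads updating a shared counter with no synchronization. The sketch below is not from the article; it uses POSIX threads in C, with arbitrary loop bounds, only to make the failure concrete.

```c
/* A minimal race-condition sketch: two threads increment a shared counter
   without synchronization, so the final total depends on how their
   read-modify-write steps happen to interleave. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared state, unprotected */

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read, add, write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but unsynchronized updates often lose counts. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Because the increments from the two threads can overwrite one another, the printed total is usually short of the expected value, and it changes from run to run.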

A few parallel programming tools are available, such as Intel's Parallel Studio for C and C++. Other vendors in the space are Codeplay, Polycore Software and Cilk Arts. There is also a new C-based parallel programming model called OpenCL, being developed by The Khronos Group and backed by Apple, Intel, AMD, Nvidia and others.
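
To give a feel for the model, here is a minimal OpenCL C kernel. The kernel name, arguments and scaling operation are illustrative choices, not anything shown at the conference, and the host-side code that compiles and launches the kernel is omitted.

```c
/* OpenCL C kernel sketch: each work-item handles one array element,
   so the OpenCL runtime can spread the work across available cores
   (or a GPU) without the programmer managing threads directly. */
__kernel void scale(__global float *data, const float factor)
{
    size_t i = get_global_id(0);    /* index of this work-item */
    data[i] = data[i] * factor;
}
```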

But many of the tools available are still works in progress, participants at the Multicore Expo said. Software compilers need to be able to identify code that can be parallelized, and then do the job of parallelizing it without manual intervention from programmers, said Shay Gal-on, director of software engineering at EEMBC, a nonprofit organization that develops benchmarks for embedded chips.
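
Today the hint usually still has to come from the programmer. The sketch below uses an OpenMP pragma, a standard separate from any tool mentioned at the show, to mark the kind of loop such compilers would need to recognize on their own: each iteration writes only its own element, so the iterations can safely run on different cores.

```c
/* A loop with independent iterations, parallelized via a programmer-supplied
   OpenMP hint. Build with -fopenmp (GCC/Clang); without it, the pragma is
   ignored and the loop simply runs serially. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* set up some input data */
        a[i] = i * 0.5;
        b[i] = i * 2.0;
    }

    /* Each iteration touches only its own element of c, so the
       work can be split across cores with no locking. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```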

Despite the lack of tools, some software vendors have found it relatively easy to create parallel code for simple computing jobs, like image and video processing, Gwennap said. Adobe has rewritten Photoshop in a way that can assign duties like magnification and image filtering to specific x86 cores, improving performance by three to four times, he said.

"If you are doing video or graphics, you can take different sets of pixels and assign them to different CPUs. You can get a lot of parallelism that way," he said. But for more complex tasks, it is difficult to find a single approach for identifying a sequence of computations that can be parallelized and then dividing them up.