IT on a chip

Jan. 16, 2007
Hardware performance is about much more than clock speed and raw processing power these days, thanks to embedded functions that are helping do things from improving security to virtualizing servers.

Chip makers, including Intel Corp. and Advanced Micro Devices Inc. (AMD), are ushering in a new era in processor design by adding hardware-enabled features to their wares. The goal is to either replace functions that have traditionally been done via software or, more often, significantly improve the operation of the software.

As an added bonus, those hardware-assisted processor functions improve overall system performance without increasing the heat generated, the vendors claim, allowing corporations to keep a lid on utility costs and reduce the need for exotic cooling strategies.

"This is something that has been coming for a long time," says Rick Sturm, president at Enterprise Management Associates. "It's the natural course of evolution, and an affordable and rational thing to do to put some of this functionality down on the chip level."

As computer platforms and overall system management increase in complexity, IT professionals are demanding that systems have 100 percent availability, subsecond response times and instant problem resolution, Sturm says. Those goals are no longer strictly the purview of any one area -- silicon, software or human intervention -- but are now being addressed by taking advantage of advances on all fronts.

"IT is strangling" from the costs of operations, Sturm says. "We're spending so much money on management that it is preventing us from innovating and addressing the needs of business."

Early customers

The Charlotte Observer, North Carolina's largest daily newspaper, in December began migrating some of the publication's most important applications to a virtualized environment.

The paper is moving its Oracle-based circulation system database to servers that have Intel's new quad-core Xeon processors with baked-in, hardware-enabled virtualization technology. Also being placed on these same virtualized servers is the paper's editorial content workflow system.

Geoff Shorter, IT infrastructure manager at The Observer, says he found out during his testing phase how these new servers can run virtualization at near-native speeds. The database used for the test prepares subscription renewal notices and determines which accounts need to be billed, how much to bill and for what period of time.

Mike Grandinetti, chief marketing officer for virtualization software provider Virtual Iron, says virtualization often results in overall hardware performance penalties, ranging from 10 percent to 50 percent. But when using chip-enabled virtualization, that penalty can drop to 4 percent or less.

That's indeed what Shorter's group found. "Virtual Iron will tell you their overhead is between 1 percent and 3 percent, but a 3 percent difference on a 10-minute [database run] is not noticeable," Shorter says. "It's just like native. The driving force for going to a virtualization strategy was cost, but we've tested it, and performance is also a driving factor."
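
To put those percentages in wall-clock terms, consider a rough sketch (the overhead figures below are the ranges quoted above, not measurements from The Observer's environment):

```python
# Rough illustration: how virtualization overhead translates into wall-clock
# time on a 10-minute batch job. Percentages are the ranges quoted in the
# article, not figures measured at The Observer.

NATIVE_RUN_MINUTES = 10.0

def virtualized_runtime(native_minutes: float, overhead_pct: float) -> float:
    """Estimated run time in minutes given an overhead percentage."""
    return native_minutes * (1 + overhead_pct / 100.0)

for label, overhead in [("software-only, low end", 10),
                        ("software-only, high end", 50),
                        ("hardware-assisted", 3)]:
    runtime = virtualized_runtime(NATIVE_RUN_MINUTES, overhead)
    extra_seconds = (runtime - NATIVE_RUN_MINUTES) * 60
    print(f"{label}: {runtime:.1f} min (+{extra_seconds:.0f} s)")
```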

Shorter estimates he can run seven to 12 virtual servers per single-core processor node on existing systems. As the newspaper transitions to quad-core systems over the next year, he expects to be able to support around 30 virtual servers per physical node.
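
Those consolidation ratios also lend themselves to a quick sizing estimate. The sketch below applies Shorter's per-node figures to a hypothetical fleet of 100 virtual servers; the fleet size is illustrative, not a number from the newspaper:

```python
# Back-of-the-envelope consolidation estimate using the ratios Shorter cites:
# roughly 7 to 12 VMs per single-core node today, about 30 per quad-core node
# later. The 100-VM fleet below is a hypothetical figure for illustration.

import math

def nodes_needed(total_vms: int, vms_per_node: int) -> int:
    """Physical nodes required to host total_vms at a given density."""
    return math.ceil(total_vms / vms_per_node)

TOTAL_VMS = 100  # hypothetical fleet size

for label, density in [("single-core node (conservative)", 7),
                       ("single-core node (optimistic)", 12),
                       ("quad-core node (projected)", 30)]:
    print(f"{label}: {nodes_needed(TOTAL_VMS, density)} physical nodes")
```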

Jason Lochhead, principal architect at managed hosting provider Data Return LLC, says the company is already seeing benefits from hardware-assisted virtualization within the server infrastructure it offers its customers.

A year ago, Data Return introduced its Infinistructure utility computing platform intended to allow customers to maximize server utilization and more economically create on-demand compute resources through the use of server virtualization. Using Hewlett-Packard servers based on AMD Opteron processors, Data Return has been able to create hundreds of virtual server instances for customers within its data centers in Dallas and Pleasanton, Calif.

"We don't have as much wasted hardware capacity and have lowered power and cooling bills by consolidating these physical servers with the use of virtualized machines," Lochhead says. "It's much cheaper, particularly when you're talking about adding servers for redundancy rather than performance."

The hardware-assisted virtualization capability within the AMD Opteron processors allows Data Return to run many more varieties of operating systems in both 32-bit and 64-bit versions on the same base hardware, he says. In the future, additional hardware-assisted abilities within Opteron are expected to include memory translation and virtualized access to I/O devices, he says.

"We're enthusiastic about it," Lochhead says. "When we were first going down this road, virtualization was pretty new, and customers were a little leery of accepting it. But when someone like AMD comes out and says they are putting these technologies into hardware, it's a vote of confidence."

What the future holds

Michael Cote, an analyst at RedMonk, says the addition of hardware-assisted functions that replace or augment software capabilities will continue this year and next, as mainstream microprocessor manufacturers attempt to differentiate their product lines.

In addition, Cote says, "These capabilities will continue to increase as more IT professionals gain a greater understanding of what is available and the potential benefits."

In most cases, rather than fully replacing traditional systems management software applications within a corporation, the new hardware-assisted capabilities will make that software operate more efficiently. Kevin Unbedacht, senior platforms strategist at Altiris, a provider of IT asset management software and services, says that Intel's new Active Management Technology (AMT) is a good example.

Altiris' software has traditionally been able to analyze only those systems that are on and running an operating system. If a system is off, or not operating properly, the Altiris software can't collect a full inventory analysis.

By using the AMT capability embedded within the chip set of vPro systems, however, the Altiris tracking and inventory software can detect systems even when they are off or not operating properly.

In addition, flash memory inside the vPro chip set stores system information each time the PC is booted, providing up-to-date information on the system status. The out-of-band alerts enabled by AMT can allow an IT department to make a single dispatch call, instead of the two that have been traditionally required for analysis and repair, he says.

The end result, Unbedacht says, is a hardware/software combo that can proactively monitor IT infrastructure instead of reacting only when something is wrong.
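
For administrators curious whether a desktop is even exposing that out-of-band interface, a minimal check is possible because AMT's management engine answers on its own network ports (TCP 16992 for HTTP and 16993 for HTTPS) independently of the host operating system. The sketch below is only a reachability probe, not a replacement for a management console such as the Altiris tools, and the host name is a hypothetical example:

```python
# Minimal sketch: probing for an AMT out-of-band interface on a client.
# Intel AMT serves a management web service on TCP 16992 (HTTP) and 16993
# (HTTPS) from the chip set's management engine rather than the host OS,
# so it can answer even when the operating system is down. This checks
# reachability only; it performs no authentication or inventory.

import socket

AMT_PORTS = (16992, 16993)

def amt_reachable(host: str, timeout: float = 2.0) -> bool:
    """Return True if any standard AMT port accepts a TCP connection."""
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    # Hypothetical host name for illustration.
    print(amt_reachable("desktop-042.example.com"))
```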

In addition, having basic management capabilities hardwired into silicon will make it simpler for new entrepreneurial systems management companies to add product offerings that can rapidly be adopted by IT professionals and integrated in enterprise-level applications, RedMonk's Cote says.

For its part, Intel calls its effort Embedded IT, and is attacking the problem with a variety of new or planned capabilities. Competitor AMD has similar efforts within its Trinity and Torrenza programs.

Measuring success: Not so fast

The biggest boost to processor performance in the last two years has been the move to multicore processors. The migration from single-core to dual-core processors within the x86 market provided direct performance gains of 80 percent or more, and the first quad-core processors from Intel are providing another 50 percent improvement, says Nathan Brookwood, an analyst at Insight 64. How much hardware-assisted features or embedded IT will add to performance is debatable; the real measure of their worth will be determined by how the efforts improve such things as manageability.

"The ultimate test is whether it works for the IT professional for their specific application," Brookwood says. "Things like embedded IT are really designed to increase functionality rather than performance."

Markus Levy, an analyst who serves as president of The Multicore Association and the Embedded Microprocessor Benchmark Consortium, says the move to embed more hardware-assisted features will undoubtedly bring performance gains. But measuring any specific gain is a new challenge that industry groups are only beginning to address.

Increasing the clock speed of microprocessors has provided only minimal performance gains in the past few years as processor manufacturers have hit the wall in the trade-off between speed and the heat generated by the chips. Even the addition of multiple cores within processors running at lower clock speeds to reduce heat is expected to see diminishing returns as those chips move to eight or more cores, Levy explains.
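
Levy's point about diminishing returns can be illustrated with Amdahl's law, a standard formula he does not cite here but which captures the trade-off: if even a modest fraction of a workload stays serial, each doubling of the core count buys less.

```python
# Illustration of the diminishing returns Levy describes, using Amdahl's law.
# The 10 percent serial fraction is an assumption chosen to show the shape of
# the curve, not a figure from the article.

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Theoretical speedup over one core when serial_fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

SERIAL_FRACTION = 0.10  # assumed for illustration

for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(cores, SERIAL_FRACTION):.2f}x")
```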

In traditional architectures the use of additional cores will not necessarily help applications that require specific optimization, he says, adding to the need for hardware-enabled assists. "When you are trying to do a specific function like security acceleration, adding another processor core can be an expensive piece of hardware, compared to enabling that capability by using only 100,000 or so gates inside the existing chip," Levy says.

Determining the level of performance enhancement that is associated with those hardware-assisted hooks and accelerators is a task the industry is just beginning to tackle.

"We're going to have to have benchmarks that are specifically tailored toward the use of those features," Levy says. "It is also going to require that we think of performance in a different way. It is going to be pretty challenging to develop a benchmark suite that will work on everybody's platform as they become increasingly custom."

Management by hardware

The past year has seen the advent of hardware-assisted features within mainstream x86-based microprocessors from Intel and AMD. Even as those chip vendors have turned to multicore implementations as the primary source of performance gains, they are adding hardwired features to their processors and associated chip sets.

These features were previously left solely to software or were not addressed at all.

"We are looking hard at what technologies are right to be moved into silicon and placed within our platforms as opposed to technologies that need to stay in software," says Margaret Lewis, director of commercial solutions at AMD. "As a result, we are on the brink of a lot interesting new concepts in performance. It's no longer simple. In many cases, it won't be necessarily be how fast you complete a task, but how satisfied you are with the result."

AMD's Trinity platform is intended to allow processors to handle virtualization, security and management. One of the first commercialized efforts has been technology originally developed under the code name Pacifica, to allow hardware to more easily run multiple operating systems.

Also introduced in the past year was AMD's Torrenza platform. Torrenza uses AMD's existing interconnect technology to allow third parties to create application-specific coprocessors that can work alongside AMD processors in multisocket systems.

For its part, Intel's embedded IT capabilities include its already released Virtualization Technology, which, like AMD's Pacifica, provides a hardware-enabled ability to more effectively create virtualized infrastructure installations. Also introduced by Intel is Active Management Technology (AMT), embedded in the chip sets of its vPro client systems. AMT allows IT managers to remotely access networked computing equipment -- even machines that lack a working operating system or hard drive or that have been turned off.
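
On a Linux host, whether the processor advertises either vendor's virtualization extensions can be checked from the kernel's CPU feature flags. The sketch below looks for the "vmx" flag (Intel Virtualization Technology) or the "svm" flag (AMD's Pacifica-derived extensions); note that a flag being present does not guarantee the feature has been enabled in the BIOS.

```python
# Minimal sketch: check whether a Linux host's processor advertises the
# hardware virtualization extensions discussed above. The kernel exposes CPU
# feature flags in /proc/cpuinfo: "vmx" marks Intel Virtualization Technology,
# "svm" marks AMD's equivalent. Presence of a flag does not mean the feature
# is switched on in the BIOS.

def hardware_virt_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none advertised"

if __name__ == "__main__":
    print(hardware_virt_support())
```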

Also in the works from Intel is I/O Acceleration Technology, a network accelerator that can break up the data-handling job among all the components in a server, including the processor, chip set, network controller and software. The distributed approach reduces the workload on the processors while accelerating the flow of data, Intel says.

Intel's Trusted Execution Technology, originally code-named LaGrande Technology, is a set of hardware extensions to processors and chip sets that enhances security. The technology is designed to prevent software-based attacks and to protect the confidentiality and integrity of data stored or created on a client PC.

Darrell Dunn is a freelance reporter based in Fort Worth, Texas, with 20 years of experience covering business technology and enterprise IT. Contact him at darrelldunn@sbcglobal.net.