IBM Servers

IBM Tidies Up Its Server Lineup

28.04.2008
P&T lead story: Merging the System i and System p product lines

In a lengthy commentary, Illuminata analyst Gordon Haff has weighed in on how to assess the merger of the two System i and System p server families into the Power Systems line, which IBM announced three weeks ago. We reproduce his analysis here.

Insight number one: the POWER6 processor is now available across IBM's portfolio, from the top of the range to the bottom. Haff rightly stresses, however, that this announcement is not just about hardware, not just about processors, chips, and new, faster servers. For one thing, it is accompanied by substantial organizational changes within IBM's Systems and Technology Group (STG). For another, two server lines are being merged into one. In addition, Big Blue is expanding its range of virtualization products. Finally, the announcement also covers changes to product names, logos, and more.

Taken together, the Illuminata analyst argues, this reorientation changes the way IBM runs the STG division's business more fundamentally than any realignment since 2000. Back then the company gathered its disparate server lines under the roof of a single organization and a shared research and development structure, a restructuring marketed under the name "eServer".

Haff then sets out to analyze the processor technologies, system software offerings, and virtualization options that IBM provides in its portfolio.

The Power Systems Hardware

The hardware lineup of the new Power Systems line is more streamlined than that of earlier generations, Haff says. As evidence he points out that all servers now come with POWER6 processor modules as standard. That was different in the POWER5 generation: there IBM also offered quad-core modules (QCM), which ran at lower clock rates but carried more cores than the standard dual-chip modules, as well as multi-chip modules (MCM) used in its high-end servers.

Haff raises the interesting point that more and more customers see blade systems as a genuine alternative to the rackmount servers that have been common so far, even at sites with relatively small server installations. IBM has already responded with an offering for small and midsize businesses: the BladeCenter chassis, in this case the BladeCenter S, targets exactly this customer segment.

The biggest change by far, according to Haff, comes from merging the System i line, formerly the AS/400, with System p, the Linux/AIX machines. The consolidation does not arrive as a sudden break; Big Blue has been softening the boundaries between these system families for a long time, and both have used Power Architecture CPUs for years. Now, however, both lines sit under the single roof of the Power Systems family, which, unlike before, carries identical prices and uses identical peripheral components at identical prices. In the past, component prices for System i machines were kept higher out of product-line politics.

IBM has also cleaned up its naming. "AIX 5L", introduced in 2001, was meant to indicate the version of IBM's Unix and its affinity to Linux, but many read it instead as a reference to the POWER5 architecture. Big Blue has therefore dropped the digit and the letter from the AIX name. The same goes for the System i operating system, which has been renamed several times over its history. Originally christened "OS/400", it became "i5/OS", with the 5 this time, unlike in "AIX 5L", actually referring to the POWER5 platform the former AS/400 machines ran on. Later the software was rebranded under the "System i" label. With the merger of the System i and System p platforms into the Power Systems family, the operating system is now simply called "i". Interestingly, IBM had considered going back to the old "OS/400" name, a bit of retro nostalgia, but was reportedly talked out of it quite firmly.

As part of the restructuring, IBM has also made a fresh attempt to bring order to its virtualization offerings: "PowerVM", the successor to the "Virtualization Engine", is Big Blue's answer to a changing market. Virtualization is not only becoming more important to enterprises; it is also getting more complicated and complex, and more and more contenders are entering the field with virtualization products. IBM responds with features such as the PowerVM option to move a running logical partition (LPAR, in essence a virtual machine) from one server to another without first shutting the LPAR down. "Live Partition Mobility" is thus IBM's answer to VMware's "VMotion".

IBM has also complemented the LPAR concept with "AIX Workload Partitions" (WPARs), which are operating-system containers. These too can be relocated dynamically, although Haff takes the view that, in principle, a WPAR does have to be brought down first.

As part of this product reshuffle, IBM is also reorganizing the software side. Alongside virtualization and the operating-system and integration layers, the company now lists availability, security, energy consumption (under the "EnergyScale" umbrella), and management, which essentially means IBM's "Systems Director", as the organizing categories of its Power Systems software. Here too the main goal is a comprehensible, consistent nomenclature. The Unix cluster product long saddled with the unwieldy name "High Availability Cluster Multi-Processing" (HACMP) simply becomes "PowerHA for AIX" and "PowerHA for Linux"; its AS/400 and System i counterpart, the "High Availability Solutions Manager" (HASM), analogously becomes "PowerHA for i".

The Rackmount Lines

The mainstream commercial Power Systems family running on the POWER6 processor now consists of the Power 520, Power 550, Power 570, and Power 595, plus two blade servers, the BladeCenter JS12 and JS22. (See Table 1.)

Also under the Power Systems umbrella are several "specialty" servers, such as the Power 575, which is positioned as a heavyweight compute node, whether employed for pure technical high performance computing (HPC) or for more business-oriented analytic tasks. Blue Gene tilts further toward the pure-science end, with a huge number of small PowerPC-based nodes running Linux. The QS21 Cell blade, built around the Cell Broadband Engine initially developed for game consoles, can be used as a component of a "hybrid cluster" in which Cell handles the heavy-duty floating point while a more general-purpose processor handles other tasks. Because these are specialized HPC products, we won't discuss them further in this note.

The Power 520 Express, to give the system its full name, is the new entry point to IBM's rackmount Power lineup. The 520 comes with between one and four POWER6 cores and is pitched as a distributed application server, a small database or consolidation server, or, in the case of the i Edition, a complete business solution with an integrated database and application server. The Power 550, which can be configured with between two and eight POWER6 cores, targets a similar set of uses, albeit at higher scale points. Both servers are available in AIX, Linux, and i Editions (i.e., base operating-system configurations) but can run any combination of OSs using virtualization.

The modular midrange System p 570 was the first IBM server to get decked out with the POWER6. The Power 570 unifies this server and the System i 570. There's no charge to upgrade to the new flavor.

The Power 570 is constructed from up to four discrete building-block "drawers," each with two dual-core POWER6 processors. Multiple drawers can be interconnected by cables to create a larger unified SMP server. The advantage of this modular approach is that it lets customers "pay as they grow," incrementally purchasing capacity as they need it rather than up front. It's not a new idea: the high end of IBM's System x lineup takes a similar approach in the x86 realm, and antecedents go back to Data General and Sequent in the mid-1990s.

Although it has a surface similarity to these other designs, the 570 is cut from different cloth. With the 570, calls to remote memory take only about 25 to 50 percent longer than the best-case local memory access; the exact number depends on other factors, including the number of building blocks installed. This isn't quite a "flat" or purely uniform memory access time, but it's as good as, or better than, many systems using hardwired, inside-the-box interconnects, and significantly superior in both relative and absolute terms to any other modular design on the market.
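
To make the 25-to-50-percent figure concrete, here is a minimal back-of-the-envelope sketch in Python. The absolute latency and the share of remote accesses are invented assumptions; only the penalty range comes from the text above.

```python
# Illustrative only: the 25-50 percent remote-access penalty comes from the article;
# the absolute latency and the access mix below are made-up assumptions.

def effective_latency_ns(local_ns, remote_penalty, remote_fraction):
    """Average memory latency when a share of accesses goes to a remote drawer."""
    remote_ns = local_ns * (1.0 + remote_penalty)
    return (1.0 - remote_fraction) * local_ns + remote_fraction * remote_ns

local_ns = 100.0                          # hypothetical best-case local access time
for penalty in (0.25, 0.50):              # the article's 25-50 percent range
    for remote_share in (0.25, 0.50):     # hypothetical fraction of remote accesses
        avg = effective_latency_ns(local_ns, penalty, remote_share)
        print(f"penalty {penalty:.0%}, remote share {remote_share:.0%}: "
              f"average {avg:.0f} ns vs. {local_ns:.0f} ns local")
```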

The Power 595 is the true "Big Iron" of the lineup, the one that gets to trumpet world-beating benchmark prowess. It can be configured with up to 64 cores running at up to 5 GHz (a new top-end speed grade for POWER6 with this announcement) via 8-core processor books. Each processor book holds 16 to 32 DIMMs; that's up to a whopping 4 TB of 533 MHz DDR2, or 1 TB of 667 MHz DDR2, in a fully configured Power 595 system.

The Power 595 is constructed out of building blocks consisting of four POWER6 chips, with their associated L3 cache, fully connected into an 8-core node. (In the past, IBM has packaged such nodes into a multi-chip module (MCM), but not in this case.) Up to 8 nodes are then fully connected to form up to a 64-core SMP system. A nice feature of this connection topology is that total bandwidth increases faster than compute capacity. Of course, coordination traffic increases as well. However, IBM says that its tests show that even a 64-core Power 595 will saturate memory bandwidth before it bottlenecks on the communication associated with keeping memory accesses coherent (as can sometimes happen on larger SMP servers).
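
The claim that total bandwidth grows faster than compute capacity follows from simple counting: in a fully connected topology the number of point-to-point links grows roughly with the square of the node count, while cores grow linearly. A small sketch, with a purely hypothetical per-link bandwidth figure:

```python
# Counting argument behind "bandwidth grows faster than compute": in a fully
# connected topology, links scale as n*(n-1)/2 while cores scale linearly with n.
# The per-link bandwidth figure is a placeholder, not an IBM specification.

def fully_connected_links(nodes):
    return nodes * (nodes - 1) // 2

CORES_PER_NODE = 8          # an 8-core node, as described for the Power 595
LINK_BW_GBPS = 10.0         # hypothetical bandwidth per point-to-point link

for nodes in (2, 4, 8):
    links = fully_connected_links(nodes)
    cores = nodes * CORES_PER_NODE
    print(f"{nodes} nodes: {cores:3d} cores, {links:2d} links, "
          f"{links * LINK_BW_GBPS / cores:.2f} GB/s of link bandwidth per core")
```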

This quantity of CPU and memory hardware (and equally hefty I/O capabilities) translates into equally impressive performance metrics at the system level. IBM has often capped its high-end systems announcements with new TPC-C high-water marks that purport to model some improbably large OLTP environment. This time it has turned in a two-tier SAP Sales and Distribution (SD) Standard Application Benchmark that bests the next-closest number of simulated benchmark users, a December 2006 HP Integrity Superdome result, by about 11 percent.

However, these days the point of such large servers isn't merely to handle mammoth workloads. That a system like the Power 595 is focused more on doing many things well (rather than just one thing) reflects that system horsepower has grown faster than the ability, need, and desire of single applications to use it. Sure, some big databases, ERP systems, and technical apps still need the biggest boxes available when a task requires too much coordination to run efficiently on a more distributed cluster of smaller systems. But, more and more, big systems are primarily about consolidating tasks in one place, where they can be controlled from a single point and their resources resized as the business needs of the moment dictate.

The Blades

With the POWER6 generation, IBM has better aligned its Power Architecture blades with its rack- and frame-oriented lineup. Whereas the previous JS21 blade was built around the PowerPC processor, the quad-core BladeCenter JS22 Express and the new dual-core BladeCenter JS12 Express use the same POWER6 processor as the rest of the servers.

The JS21 had some significant wins in the high performance computing arena (such as MareNostrum at the Barcelona Supercomputing Center), but it only saw significant commercial use in fairly narrow niches. For example, some of the retail applications that IBM deploys run on AIX (while others run on Linux or Windows), so having a Power blade lets IBM integrate applications running on disparate architectures and operating systems in a single chassis, an approach the company calls "Store in a Box."

The JS22 has seen broader commercial use. In part this reflects that the JS22 simply has more processor oomph, which makes it better suited for workload consolidation using virtualization, a general Power Systems strength and target, as well as capable of handling heavier-weight business applications. It also reflects the greater commonality with the rackmount server line that is often used in conjunction with blade servers. The reality is that IT shops are far less tolerant of deploying what they consider "one-offs" that are in some way not part of a mainstream offering. The addition of POWER6 helps to more clearly cement Power blades as a standard part of the product line.

With the latest announcement, IBM has also moved down-market with its new single-socket JS12 offering. This reflects the major midmarket and distributed-site push that IBM is making with BladeCenter, and it shows up on a couple of fronts in addition to the blade itself.

One is the addition of the i operating system to blades. To appreciate this in context, it's worth remembering that the value proposition of i has often been the integration of disparate applications at midmarket companies. This included not only various business apps written for i5/OS but even Windows applications that ran on an Integrated xSeries Adapter (IXA), basically an x86 server lashed to an iSeries server. (i5/OS V5R4 also added the ability for x86-based IBM servers to integrate with System i storage using an iSCSI connection.)

IXA was never an ideal solution, however. It was a special piece of hardware and tended to lag the technology in standard off-the-shelf x86 servers. Blades provide an alternative, and now more familiar, path to such heterogeneous integration. One can run i, AIX, and Linux applications on JS21 or JS22 blades, and, using PowerVM, even on the same blade in different LPARs. Windows and x86 Linux apps can run, in the same BladeCenter chassis, on standard Intel- or AMD-powered blades.

BladeCenter really has become the integration point, not just for various x86-oriented workloads but now equally for environments that want or need to run on POWER. The BladeCenter S chassis is optimized for midmarket and smaller customers, who tend to use smaller configurations and often locate their IT outside datacenters. Integration through blade infrastructure should be a great match for the same midmarket customers for whom System i, and the AS/400 before it, has held such appeal.

POWER6

POWER6 boosts performance considerably over both the PowerPC and the POWER5+. Much higher processor frequency is part of the story; the Power 595 can be configured with up to a 5 GHz speed grade, a new high-water mark for mainstream CMOS processors. POWER6 also has particularly sophisticated Simultaneous Multi-Threading (SMT) that makes each physical core appear to the operating system as two logical CPUs. The goal is to keep the execution units in a processor as busy (and therefore as productive) as possible by giving them more than one thread to churn on. POWER6 tracks how threads use shared resources like cache slots and Global Completion Table entries, and adjusts their allocations accordingly.
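
A toy model can illustrate why feeding a core two hardware threads keeps its execution units busier. The stall fractions below are invented, and treating the two threads' stalls as independent is a simplifying assumption, not a description of POWER6 internals:

```python
# Toy model of why SMT keeps execution units busier. Assume each thread, run alone,
# would stall (e.g., on cache misses) a given fraction of the time. With two
# hardware threads, the core idles only when both threads stall at once
# (treating stalls as independent, a simplifying assumption, not a POWER6 spec).

def core_utilization(stall_fraction, threads):
    return 1.0 - stall_fraction ** threads

for stall in (0.3, 0.5):
    single = core_utilization(stall, 1)
    smt2 = core_utilization(stall, 2)
    print(f"stall fraction {stall:.0%}: single-thread utilization {single:.0%}, "
          f"SMT-2 utilization {smt2:.0%}")
```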

IBM has also added specialized execution units to accelerate specific types of operations, and further augmented the chip’s reliability features. Notable changes in this area include:

AltiVec (VMX) vector instructions. POWER6 inherits from the PowerPC the AltiVec (VMX) vector instructions that it uses to accelerate floating-point code. This unit is a big part of the Power’s success in HPC applications.

Decimal Floating Point. Another application-specific execution unit accelerates decimal floating-point operations. This one is for the commercial folks who have to work with dollars and cents using a format called Binary Coded Decimal (BCD). This format makes calculations exact, an important thing when dealing with money. The problem has been that the associated software algorithms are relatively expensive in compute time at the huge scales at which businesses and financial institutions perform them. POWER6 accelerates these operations in hardware.
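
A minimal illustration of why exact decimal arithmetic matters for money, using Python's software decimal module as a stand-in for the kind of computation POWER6 accelerates in hardware (the module is not IBM's implementation):

```python
# Binary floating point cannot represent most decimal fractions exactly,
# while decimal arithmetic can. Python's software decimal module is used
# here only to illustrate the behavior; it is not IBM's implementation.

from decimal import Decimal

binary_total = sum([0.10] * 3)                    # 0.1 has no exact binary form
decimal_total = sum([Decimal("0.10")] * 3)

print(binary_total)                      # 0.30000000000000004
print(decimal_total)                     # 0.30
print(binary_total == 0.3)               # False
print(decimal_total == Decimal("0.30"))  # True
```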

Instruction Retry. Beyond performance, POWER6 also brings in a venerable mainframe reliability feature: instruction retry, dubbed "processor recovery" in this incarnation. The retry operation can occur on a different processor from the one on which the instruction was originally executed. Thus, in the event of a hard error, another processor (such as a Capacity on Demand reserve processor) can substitute for the failed one.

Especially from the perspective of the larger systems in the family, however, just as significant as the features within the core is the way the cores communicate with each other and with the rest of the system. As with POWER5, POWER6 processors have integrated memory controllers, a much-touted feature of AMD's Opteron and an approach that Intel is moving to with its QuickPath Interconnect. This approach has consistently delivered very fast memory access times, a key ingredient in application performance, even with memory that is physically distant from the processor making the request.

One significant POWER6 architectural change is a new two-tier interconnect architecture and coherency protocol, in addition to other changes aimed at increasing bandwidth and lowering latency. For example, the mechanisms that keep the processor caches coherent with main memory now include an advanced heuristic that knows how memory is allocated and how it is configured. This allows some actions to be kept local to a group of processors and memory, thereby avoiding clogging remote parts of the system with unnecessary traffic. (Thus, in a Power 595, coherence traffic can often be kept local to an 8-core node, so it doesn't need to go onto the off-node interconnect.)

PowerVM

One major virtualization trend we've observed over the past couple of years is that even companies with very deep capabilities in some particular approach have come to the realization that one size truly does not fit all. Thus Sun went from trumpeting hardware partitions (and later Solaris 10 containers) as the ultimate answer to Life, the Universe, and Everything to treating them as part of a toolkit. On the client side, we've seen Citrix evolve from all-Presentation-Server-all-the-time to a much broader application delivery strategy. And we've seen IBM go from "LPARs is the answer. What's your question?" to matching a variety of technologies to differing sets of requirements.

One example of this is the bright spotlight that IBM now shines on z/VM, long a red-headed stepchild compared to its z/OS sibling in the mainframe space. However, it's also reflected in the complementary set of virtualization capabilities that IBM now offers on its Power line.

The first is PowerVM: roughly speaking, the set of functions that used to go by "Advanced Power Virtualization," plus some recent enhancements. (As with PowerHA, PowerVM is both an umbrella term for a related set of capabilities and part of the name of specific products.)

The foundations of PowerVM are micro-partitions, as many as ten per POWER6 core, up to a maximum of 254 per server. Relative to the largely software-based approaches of the x86 world, the controlling hypervisor is in firmware and tightly ingrained at the lowest levels of the system. The virtualization control logic works with processors, I/O cards, service processors, operating systems, systems management tools, and other components to coordinate the allocation of system resources.
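
The arithmetic implied by those two figures, sketched briefly; the core counts below are just example configurations:

```python
# Arithmetic from the figures above: up to 10 micro-partitions per POWER6 core,
# capped at 254 per server. The core counts are illustrative examples only.

PER_CORE_LIMIT = 10
PER_SERVER_CAP = 254

def max_micropartitions(cores):
    return min(cores * PER_CORE_LIMIT, PER_SERVER_CAP)

for cores in (4, 16, 64):   # e.g. a small 520, a mid-size 570, a full 595
    print(f"{cores:2d} cores -> up to {max_micropartitions(cores)} micro-partitions")
```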

Reflecting its Big Iron roots, IBM's approach to partitioning and virtualization initially focused on optimizing and consolidating workloads within a single server. However, with the increased buzz around what is often called "dynamic infrastructure" or similar names, IBM has shifted its attention to virtualization that spans multiple machines, and to integrating virtualization with management tools such as its Systems Director.

With POWER6, IBM introduced Live Partition Mobility, the ability to move a running LPAR from one physical server to another. IBM says the observed "pause" for the shift is about two seconds, within the TCP/IP timeout window, so network clients won't even notice that the LPAR is suddenly running on another server. Although software-based approaches such as VMware's make similar claims, their reality is highly load-dependent. (We've heard anecdotal x86 user stories suggesting that, when there are a lot of memory writes going on, it can be difficult to complete the transfer.) To use Partition Mobility, the servers must all be on the same network subnet, and all of the I/O resources in the LPAR must be virtualized through a VIOS (Virtual I/O Server). Unlike "pure" VM approaches, IBM allows I/O resources either to be virtualized or, for physical devices, to be exclusively controlled by an LPAR; but for purposes of mobility, only the first option works.
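
The load dependence of software-based live migration is easiest to see in a sketch of the generic pre-copy technique such products use: each round copies the pages dirtied during the previous round, so the migration converges only if the guest dirties memory more slowly than the network can copy it. This is a generic illustration, not IBM's firmware mechanism or any vendor's actual code, and all rates are invented:

```python
# Generic sketch of pre-copy live migration, not IBM's firmware implementation:
# each round copies the pages dirtied during the previous round; migration
# converges only if the workload dirties memory more slowly than the network
# can copy it. All numbers are illustrative assumptions.

def precopy_rounds(memory_gb, copy_rate_gbps, dirty_rate_gbps,
                   stop_gb=0.1, max_rounds=30):
    remaining = memory_gb
    for rounds in range(1, max_rounds + 1):
        copy_time = remaining / copy_rate_gbps   # seconds to send current dirty set
        remaining = dirty_rate_gbps * copy_time  # pages dirtied while copying
        if remaining <= stop_gb:                 # small enough to pause and finish
            return rounds
    return None                                  # never converged

for dirty in (0.2, 0.5, 1.2):   # GB/s of memory written by the guest
    result = precopy_rounds(memory_gb=16, copy_rate_gbps=1.0, dirty_rate_gbps=dirty)
    print(f"dirty rate {dirty} GB/s:",
          f"converges after {result} rounds" if result else "does not converge")
```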

PowerVM comes in Express, Standard, and Enterprise Editions. Express is the free "try it, you'll like it!" offering; it's limited to 3 LPARs in total. Standard enables the full 10-LPARs-per-core level of virtualization and adds more advanced functions like Multiple Shared Processor Pools, the balancing of processing power among partitions assigned to shared pools. Enterprise adds Live Partition Mobility as well.

As of AIX 6.1, IBM also has Live Application Mobility, the ability to move workload partitions (WPARs) from an instance of AIX running on one server to an instance running on another. The transfer is performed using the Workload Partitions Manager (WPM), a new tool that integrates IBM's previous WLM (workload manager) with the fruits of its Meiosys acquisition. WPM provides a single graphical console for managing system and application WPARs across systems, including creating and removing them, starting and stopping them, and relocating them. Mechanically, the transfer works by checkpointing the application to disk and then restarting it on a different server; the application is frozen while the transfer takes place (with the result that network connections to that application would likely time out and drop during the transfer). Although Live Application Mobility isn't as flexible as Live Partition Mobility, neither HP nor Sun offers the ability to easily move workload containers.
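
A conceptual sketch of that checkpoint-and-restart flow, with hypothetical names and a trivial state format; it illustrates the general technique, not IBM's WPAR or WPM tooling:

```python
# Conceptual sketch of the checkpoint/restart flow described above: freeze the
# application, write its state to storage both hosts can see, then restart it
# elsewhere. All names and the state format are hypothetical illustrations.

import json

def checkpoint(app_state, path):
    """Freeze point: persist the application's state to shared storage."""
    with open(path, "w") as f:
        json.dump(app_state, f)

def restart(path):
    """On the target server: reload the state and resume where it stopped."""
    with open(path) as f:
        return json.load(f)

state = {"orders_processed": 1042, "next_order_id": 1043}
checkpoint(state, "/tmp/wpar_demo_checkpoint.json")   # application is frozen here
resumed = restart("/tmp/wpar_demo_checkpoint.json")   # ...and resumes on another host
print(resumed)
```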

The WPAR style of virtualization, often called containers or operating-system virtualization, has proven popular for uses where minimizing overhead is a priority. Hosting providers are the canonical example of this use case, although we've also spoken with enterprise customers who have likewise gone the container route to minimize their use of hardware resources for virtualization. For example, with WPARs there's only one copy of the operating system for the entire server or LPAR, no matter how many containers you fire up. For the same reason, you only have to patch that one copy of the operating system. The tradeoff is that you can't mix operating-system types or versions using WPARs, don't get the same degree of fault isolation, and don't get the same heavy-duty, under-high-load ability to transparently transfer a running workload from one server to another.

Finally, IBM offers PowerVM Lx86. Formerly called the IBM System p Application Virtualization Environment (System p AVE or, informally, pAVE), it uses Transitive's QuickTransit to let 32-bit x86 Linux applications run atop Linux on a Power system. IBM's interest in running Linux on its Power-based servers goes back a few years, but, for a variety of reasons, it never really hit its stride. Now, though, IBM is leveraging LPARs to promote the platform for Linux server consolidation. If consolidating a web farm, for example, most of the applications, such as the Apache web server, would likely have corresponding native Power Linux binaries available. However, there will often be some code or third-party applications without Power equivalents. That's where Lx86 comes in, providing a way to run these "completers" as dynamically translated x86 Linux applications.

Conclusion

As IBM never fails to remind those who want to listen, and indeed those who do not, Power has been making great revenue and market share strides over the past few years. Indeed Power, especially running AIX, has been the most consistent star in the Systems and Technology Group firmament. That's no small feat given that IBM has quite a stable of strong and successful products. It's quite the turnaround: set the Wayback Machine to the mid-1990s and it wasn't clear how serious IBM was about even staying in the RISC processor game. That was then. This is now. The Power landscape can't be a particularly pleasant one for competitors to view. That's not to say that the likes of HP and Sun don't have their own strongholds and their own competitive counters. But the strength of the Power lineup overall, both its servers and its supporting software, certainly doesn't lend itself well to direct frontal assault. Thus, for example, we see Sun focusing on running Solaris on x86 and concentrating its UltraSPARC efforts on far more aggressively multithreaded designs.

The task of competitors isn't made any easier by the fact that IBM has cribbed and incorporated so many of their plays. IBM's unified Power, combined with virtualization, is cut from a similar cloth as the multi-OS-on-Itanium strategy that HP once promoted so strongly. And rather than continuing to disparage the operating-system containers that were the arrowhead of Sun's virtualization strategy, IBM developed its own flavor, and then one-upped Sun by adding the ability to migrate them from one server to another.

Yes, the merging of i and p and the related organizational changes are no easy tasks. As a product manager in a past life, I look at the vast roto-tilling of names, trademarks, product and option numbers, order channels, and backroom procedures evident in these recent announcements, and I can only imagine the screams of "But it can't be done that way!" that must have echoed within the walls of IBM's Austin, Somers, Poughkeepsie, and Rochester locations.

But the results in evidence are salutary. Power Systems now sits atop the Unix hill, planning how to add to the lands under its domain.

Matt Eastwood, Group VP, IDC Enterprise Platform Research

I would say that this isn't any different from what HP is doing with Integrity, where the same hardware supports multiple operating systems including Linux, HP-UX, OpenVMS, etc. With IBM it is the same thing, with a focus on fewer HW platforms, but IBM will continue to invest and innovate in the i5/OS ecosystem. AIX and i5 users are very different, and you will see that IBM's focus on 'enterprise' and 'business' segments from a go-to-market perspective will take this into consideration.

Jean Bozman

The IBM System i and IBM System p were hardware-defined -- with different models for each product line.

Now, IBM is re-defining its customer communities as "Business Systems" customers in the mid-market (SMB) and "Enterprise Systems" customers in large companies with large datacenters. Both groups of customers will deploy systems based on IBM POWER processors.

The key is that the business needs of the mid-market and enterprise customers are seen as very different -- in terms of the business solutions deployed, the way they're deployed (more reliance on channel partners for midmarket/SMB), and the level of IT skill-sets present in those sites (less for midmarket/SMB, and more for the enterprise).

For example: Large datacenters want to consolidate the System i and System p workloads onto a single POWER platform, running IBM's AIX Unix, System i's i5/OS operating system, and Linux (Red Hat and Novell SUSE distributions). This consolidation, which leverages the built-in virtualization on POWER processors, reduces operational costs (opex) through improved server utilization and reduced power/cooling costs due to greater efficiency in deployment. However, SMB organizations are more focused on the deployment of the business solution itself, and on its ease of use over time.

Further, IBM announced in January that it would partner on a WW basis with specific channel partners in specific vertical markets--to provide a highly granular approach to supporting business solutions for SMB/mid-market customers -- such as flower-market vendors in Paris and labor unions in New York.

Jean S. Bozman, Research Vice President, IDC Enterprise Platforms Group
