Morphing the mainframe

30.01.2006
At Bank of New York, the mainframe is still king. Nearly three quarters of all transactions are processed on big iron, and 20 percent to 25 percent of the remaining transactions rely on the mainframe for at least some business processes. "The mainframe today is still the platform that we are able to drive to the highest level of utilization," says Edward Mulligan, managing director of the technology services division at Bank of New York Co.

That's slowly changing. Like many companies, the bank conducts most software development projects on Windows or Unix servers. These distributed systems are more open, offer more-agile software architectures and are less costly to run and maintain than mainframes, Mulligan says. And distributed systems are increasingly offering traditional mainframe benefits, such as availability, scalability and server utilization. Mainframe technologies, ranging from channel architectures to virtualization, have migrated down to distributed systems and have begun to mature. "Most of the server solutions available today are morphing to become more like a mainframe," Mulligan says.

But the mainframe is also becoming more like distributed systems. Designs are evolving to incorporate technologies such as Fibre Channel, InfiniBand, Unix and Java. The success of those efforts will determine whether the mainframe will survive as a distinct platform or simply be absorbed into the world of distributed computing.

Robert DiAngelo, vice president and CIO at MIB Group Inc., says he doesn't trust distributed systems with his high-end applications for insurance fraud detection. "I'm in an environment that's easy to maintain, very secure, highly reliable," he says of his IBM z890 midrange system. DiAngelo is redeploying his applications in a three-tier architecture that includes Java, WebSphere and DB2. But the entire architecture, plus his development and quality-assurance testing environments, is consolidated into a single logical partition on the mainframe. Everything fits into a cabinet in his data center. "This is a lot easier to manage than 80, 90 or 200 servers that are spread out," says DiAngelo. MIB Group is a poster child for IBM's strategy of promoting the mainframe as a consolidation platform, although DiAngelo acknowledges that he's "out in front" of most organizations in taking this approach.

As mainframe technologies trickle down to distributed systems, those systems are getting better at hosting mainframe-class applications. Meanwhile, IBM, Unisys Corp. and others are moving to more open, industry-standard technologies. Distributed systems based on Unix and Windows are eroding the low end of the mainframe installed base. The mainframe still firmly holds its edge in complex environments. But the battle for the midrange -- applications of up to 1,000 MIPS, where the majority of mainframe applications fall -- has already begun.
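To make the consolidation scenario concrete, here is a minimal sketch of the kind of Java/WebSphere/DB2 middle tier MIB Group describes, where the application logic and the database sit in the same logical partition. The hostname, database, table names and credentials are hypothetical placeholders, and the snippet simply assumes an IBM DB2 JDBC driver is on the classpath; it is not MIB Group's actual code.

```java
// Hypothetical data-access class for a three-tier Java/WebSphere/DB2 stack.
// Connection details and table/column names are illustrative placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ClaimLookup {
    // When DB2 runs in the same logical partition as the application server,
    // this "network" hop never has to leave the box.
    private static final String URL = "jdbc:db2://localhost:50000/SAMPLEDB";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "appuser", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT claim_id, risk_score FROM claims WHERE risk_score > ?")) {
            stmt.setInt(1, 80);  // flag only high-risk claims
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("claim %s scored %d%n",
                            rs.getString("claim_id"), rs.getInt("risk_score"));
                }
            }
        }
    }
}
```

Nothing in a tier like this changes whether it runs against a DB2 subsystem in the same partition or against a remote server farm; what changes is how many boxes have to be patched, secured and kept running.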

Unless the relatively high costs of mainframe hardware and software become more competitive, and unless more-agile software architectures, such as .Net and J2EE, can be successfully deployed on mainframe systems at scale, the mainframe could eventually be eased out of corporate IT. "IBM mainframes are going to become marginalized to the high end if IBM can't significantly reduce the cost," says Dale Vecchio, an analyst at Gartner Inc.

Adoption of industry-standard technologies is key to the mainframe's survival. IBM has based its strategy on Java, WebSphere and Unix/Linux and positioned the zSeries mainframe as a consolidation platform. Last July, IBM also released its System z9, which reflects an investment of more than US$1 billion and includes innovations such as an encryption processor and support for up to 54 processors and 60 logical partitions. "That's an enormously impressive technology. They doubled everything except the price," says Gary Barnett, an analyst at Ovum Ltd. in London. While mainframes are incorporating additional open architectures, they're also likely to continue to be technology leaders, says Chander Khanna, vice president and general manager at Blue Bell, Pa.-based Unisys. "They are at the top of the waterfall. I don't foresee that changing," he says.

At the hardware level, distributed systems have incorporated industry-standard versions of technologies with mainframe roots, such as Fibre Channel, InfiniBand and IBM's Chipkill error-correction technology, which is used in memory for high-availability systems. "Every big server now has dynamic partitioning, a channel architecture -- things like InfiniBand -- and they all have 64-bit support and large memory," says John Abbott, an analyst at The 451 Group in New York. While IBM says proprietary channel architectures such as Ficon and Escon have advantages, Bank of New York's Mulligan would rather have standardized I/O. "An imaging application we have and a storage device we'd like to leverage are not supported cleanly by IBM's Ficon architecture," he says. "You end up buying these esoteric boxes that emulate the protocols."

But for MIB Group's I/O-intensive application, channel performance is more important than using open-standards hardware. Today, InfiniBand can't drive the number of concurrent channels DiAngelo needs. "We need that back-end channel capacity, and that's something the mainframe does very well," he says.

Still, proprietary I/O hardware is costly. "You pay a hell of a lot to get those channels in place," Abbott says. Most mainframe applications would do just as well with InfiniBand and off-the-shelf adapters, he adds.

That's the direction that IBM is moving in, says Guru Rao, an IBM fellow and chief engineer for the eServer line. While the mainframe is the system most capable of handling complex environments, he says, "a high-value system cannot provide only unique technologies. It has to be able to exploit and leverage high-volume capabilities in the industry." IBM already offers some support for Fibre Channel, and the next-generation mainframe will also support InfiniBand, Rao says. That evolution to support standards-based, commodity hardware architectures is necessary if IBM is to narrow the price gap separating the mainframe and distributed systems.

Mainframe vendors have also struggled with proprietary processor designs, which can't compete on price with high-volume Intel chips. The IBM plug-compatible mainframe market all but disappeared as the costs of keeping up soared. Bull and Unisys have both thrown their lot in with Intel (although Unisys says it will continue to offer some designs of its own), but IBM is taking a middle road. Its Power architecture is used in gaming systems, and IBM says it plans to leverage the economies of scale generated from those volume products to develop a more competitive, "higher value" version of the Power5 for the mainframe. "We are going to provide the same benefit to the mainframe ... as we have for the iSeries," Rao says. The zSeries processor will include "elements of the Power5 architecture," but the chip set will remain unique, he adds. IBM's efforts are bringing costs down by about 20 percent per year, says Abbott. However, the price/performance improvements for x86-based systems have been in the 30 percent to 45 percent range, he says.
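To see how quickly those differing improvement rates compound, here is a small back-of-the-envelope calculation. The normalized starting costs, the five-year horizon and the 35 percent figure used as a midpoint of the x86 range are illustrative assumptions, not reported numbers.

```java
// Rough illustration of how annual price/performance gains compound:
// about 20% per year for the mainframe vs. roughly 35% per year for x86
// (midpoint of the 30%-45% range); starting costs are normalized to 1.0.
public class PricePerformanceGap {
    public static void main(String[] args) {
        double mainframeCost = 1.0;   // cost per unit of work, normalized
        double x86Cost = 1.0;
        for (int year = 1; year <= 5; year++) {
            mainframeCost *= 1 - 0.20;
            x86Cost *= 1 - 0.35;
            System.out.printf("year %d: mainframe %.2f, x86 %.2f, gap %.2fx%n",
                    year, mainframeCost, x86Cost, mainframeCost / x86Cost);
        }
    }
}
```

Even under these rough assumptions, the relative cost gap more than doubles within four years, which illustrates why a 20 percent annual improvement on its own doesn't narrow the mainframe's price disadvantage against x86 systems.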

The emergence of virtualization technology on the x86 and Itanium architectures and the evolution of tools such as VMware are increasing utilization levels for distributed systems, but they still fall short of the mainframe's capabilities. "To be realistic, we still have a ways to go before [distributed systems] can achieve the complete richness that the mainframe environment offers," says Barnett. But the gradual maturation of nonmainframe virtualization technology could eventually make it possible to migrate larger workloads of 1,000 MIPS and higher off the mainframe, he says.

"Virtualization is the key for bringing the mainframe and open-systems technologies together," says The 451 Group's Abbott. IBM offers its Virtualization Engine, which Rao says will increasingly be used to optimize resources across systems. "In our view, the way to deal with customers' complexity is to use a virtualization engine that will run not only on IBM platforms but also on other leading platforms," he says.

Another type of virtualization -- hardware emulation -- is also making it easier to move mainframe application environments by abstracting them from the underlying hardware. Paris-based Bull offers virtualization that enables its GCOS 8 mainframe operating system and the applications on top of it to run unchanged on its Intel-based NovaScale 9000 hardware, says Joe Alexander, Bull's director of strategy and planning. For now, however, high-end customers will have to wait for faster chip sets and performance improvements to the emulation software.

Similar technology for z/OS and OS/360 is available from Platform Solutions Inc. in Sunnyvale, Calif. The Fujitsu Ltd. spin-off sells a system that, when used with the vendor's virtual I/O subsystem, can support z/OS, as well as Linux, Unix and Windows partitions, on one x86-based system. "We bring the characteristics of the mainframe and the execution of the operating system to commodity hardware," says Michael Maulick, president and CEO of Platform Solutions.

As a method of balancing workloads, IBM's approach with Virtualization Engine sounds a lot like grid computing -- an activity that's largely being driven in the open-systems arena today. "Even if IBM has an edge, it's not going to last with so much activity going on elsewhere," Abbott predicts.

The bottom line is that the hardware platform is becoming less and less relevant, says Unisys' Khanna. "It's more of what's in the operating environment and what's in the middleware," he says.

Mainframe operating systems, while proprietary, retain key advantages in several areas. "The operating system provides the efficiency, isolation, the address spaces, the encryption, and supports an efficient clustering model," says Rao. The mainframe operating system is also the most trusted platform for doing key management, he says. Buffer overflows on z/OS are unheard of, says MIB Group's DiAngelo, adding that "it's a hell of a lot easier to secure one box."

But the biggest issue remains what to do with the more than 40 years of mainframe code -- much of it tightly woven into the mainframe operating system and hardware architecture -- that needs to play in a world of distributed computing and Web services. Mainframe users are sitting on more than a trillion dollars' worth of legacy mainframe code, says Rao. Bank of New York, says Mulligan, is "dealing with tens of millions of lines of code." And that amount of code couldn't be ported in his lifetime, he says.

Sidebar

The mainframe mind-set

The mainframe's legendary reliability and availability can't be attributed solely to state-of-the-art technologies. They also stem from a cultural mind-set that grew up around the data center, says Robert DiAngelo, CIO at MIB Group. "The disciplines needed to manage data processing are very well defined and controlled," he says.

Alan Walker, vice president at Sabre Holdings Corp. in Southlake, Texas, agrees. "We've spent years building a culture around [the mainframe]," he says.

In contrast, the culture around distributed systems, which grew up from departmental computing initiatives within individual business units, has been much less disciplined. Modernization efforts that don't take culture into account are destined to fail. "Some of the biggest failures in IT history have been associated with package migration. You must change the culture, not just the software," says Gartner analyst Dale Vecchio.

Sabre had to address that before rewriting its fare-search application and migrating it to an open-systems environment. "When you deal with tens of thousands of transactions per second, you can't reboot," Walker says. A staff that manages open systems doesn't always understand the best practices required to maintain that kind of uptime.

A successful migration to open systems can't happen unless those mainframe values migrate as well, Walker says. "Outages aren't caused by the operating system or hardware," he says. They're produced when the programmer or operations staff does something wrong. "Open-systems [staff] may run a little looser," but veteran mainframe programmers and staff have been indoctrinated not to make those errors, Walker adds.

The key is to infuse that culture into the entire staff. That starts by bringing the open-systems staff into the mainframe world -- literally. "I'm taking my guys up there, doing bonding with the veteran mainframe guys," Walker says. DiAngelo's mainframe staffers are also learning Java and doing the migration on the mainframe.

Robert Rosen, president of the IBM mainframe user group Share, sees a trend of organizations hiring experienced mainframe staffers to run distributed data centers. "They realize that they need that kind of discipline," he says.