Modernizing mainframe code

April 24, 2006
By some estimates, the total value of the applications residing on mainframes today exceeds US$1 trillion. Most of that code was written over the past 40 years in Cobol, with some assembler, PL/I and 4GL thrown into the mix. Unfortunately, those programs don't play well with today's distributed systems, and the amount of legacy code at companies such as Sabre Holdings Corp. in Southlake, Texas, makes a rewrite a huge undertaking. "We're bound by our software and its lack of portability," Sabre Vice President Alan Walker says of the 40,000 programs still running on IBM Transaction Processing Facility (TPF) and other mainframe systems.

With a shortage of Cobol programming talent looming in the next decade and a clear need for greater software agility and lower operating costs, IT organizations have begun to make transition plans for mainframe applications. The trick lies in figuring out which applications to modernize, how to do it and where they should reside.

Applications fall into one of three groups based on scale, says Dale Vecchio, an analyst at Gartner Inc. Applications under 500 MIPS are migrating to distributed systems. "These guys, they want off," Vecchio says. As organizations begin peeling away smaller applications, they may move to a packaged application; port the application to Unix, Linux or Windows; or, in some cases, rewrite the applications to run in a .Net or Java environment, he says.

In the 1,000-MIPS-and-up arena, the mainframe is still the preferred platform. Applications between 500 and 1,000 MIPS fall into a gray area where the best alternative is less clear. An increasingly common strategy for these applications is to leave the Cobol in place while using a service-oriented architecture (SOA) to expose key interfaces that insulate developers from the code.

"If you expose those applications as a Web service, it's irrelevant what that application was written in," says Ian Archbell, vice president of product management at tool vendor Micro Focus International PLC in Rockville, Md. "SOA is just a set of interfaces, an abstraction."

"SOA at least allows you to break the dependency bonds," says Ron Schmelzer, an analyst at ZapThink LLC in Waltham, Mass.

Cobol isn't going away, but it's also not moving forward. While the Cobol code base on mainframes is projected to increase by 3 percent to 5 percent a year, that's mostly a byproduct of maintenance, says Gary Barnett, an analyst at Ovum Ltd. in London.

"No one is learning [Cobol] in school anymore, and new applications aren't being built in Cobol anymore," says Schmelzer. "Cobol is like Latin."

Vendors such as Micro Focus have abandoned the idea of evolving the Cobol language for distributed application development. "Micro Focus is not about a better Cobol compiler," says Archbell. Instead, its approach is to "embrace and extend," he says. "We expose things like aggregated CICS transactions as JavaBeans, Web services, or .Net or C# code. It's wrappering." But with so much legacy code, that process won't take place overnight. "It could take 20 years," Archbell says.
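The "wrappering" Archbell describes amounts to a thin facade: the legacy transaction is left untouched while callers see only a modern, typed interface. A minimal sketch in Java, with the bridge into the host stubbed out (the class, method and transaction names here are hypothetical illustrations, not Micro Focus APIs):

```java
// Hypothetical facade exposing a legacy CICS-style transaction through
// a plain Java interface; the underlying Cobol program is unchanged.
public class FareQuoteService {

    // Stand-in for the bridge into the legacy transaction; in a real
    // deployment this would call the host via a connector, and this
    // canned reply mimics a fixed-format legacy response.
    private String invokeLegacyTransaction(String txnId, String payload) {
        return "FARE:" + payload + ":0420.00";
    }

    // The modern interface that callers (or a Web-service layer) see.
    public double quoteFare(String origin, String destination) {
        String reply = invokeLegacyTransaction("FQTE", origin + "-" + destination);
        String[] fields = reply.split(":");          // unpack the legacy record
        return Double.parseDouble(fields[2]);        // typed result for callers
    }

    public static void main(String[] args) {
        FareQuoteService svc = new FareQuoteService();
        System.out.println(svc.quoteFare("DFW", "JFK"));  // prints 420.0
    }
}
```

A Web-service or JavaBean layer would then sit on top of a facade like this one, which is why, as Archbell notes, the language behind the interface becomes irrelevant to consumers.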

Sabre still has more than 10,000 MIPS of applications on mainframes, and Walker plans to migrate everything off over the next few years. The company's TPF-based fare-searching application, used by Travelocity.com LP and travel agents, has been rewritten to run as a 64-bit Linux program on four-way Opteron servers.

Sabre migrated the back-end data to 45 servers running MySQL that each contain fully replicated data. The new system is more flexible and "pretty cheap" compared with the mainframe, Walker says. He questions the conventional wisdom that all high-end applications need to stay on mainframes, noting that the search application was in the thousands of MIPS. "It's pretty obvious that you don't need mainframes to do large-scale transactions," he says, pointing to the successes of eBay Inc. and Amazon.com Inc.
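Because each of those MySQL servers holds a full copy of the data, any read can be served by any box, so the search tier only needs a way to spread queries across replicas. A toy round-robin selector sketches the idea (server names and counts here are illustrative, not Sabre's actual configuration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy read-router for a fully replicated database tier: since every
// replica holds the complete data set, reads can be balanced freely.
public class ReplicaRouter {
    private final List<String> replicas;
    private final AtomicLong counter = new AtomicLong();

    public ReplicaRouter(List<String> replicas) {
        this.replicas = replicas;
    }

    // Round-robin: each call hands back the next replica in turn.
    public String nextReplica() {
        int idx = (int) (counter.getAndIncrement() % replicas.size());
        return replicas.get(idx);
    }

    public static void main(String[] args) {
        ReplicaRouter router = new ReplicaRouter(List.of("db01", "db02", "db03"));
        for (int i = 0; i < 4; i++) {
            System.out.println(router.nextReplica()); // db01, db02, db03, db01
        }
    }
}
```

Full replication trades storage for read throughput and simplicity: adding a replica adds search capacity without any partitioning logic, which suits a read-heavy workload like fare searching.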

Barnett points out that very few of his clients have been successful at completely rewriting large-scale applications.

In Sabre's case, it's worth noting that the application was CPU- and memory-intensive and that competitive pressures would have forced a rewrite anyway. "We solved a larger problem," which was the need to generate hundreds of results instead of the 10 to 20 the TPF system could deliver per search, Walker says.

Simply rewriting millions of lines of code to deliver the same features not only wouldn't cut it financially at The Bank of New York Co., but also would require a lifetime of work, says Edward Mulligan, executive vice president of the technology services division. A gradual transition to packaged applications might help such businesses, says Ovum's Barnett. "Eighty percent of core business processes in banks are the same. In 10 years, it will make little sense to have your own, unique homegrown savings program," he says.

Mulligan has been migrating some smaller applications, freeing up expensive mainframe capacity. The big reason: cost. When the vendor of his problem management software refused to bring licensing in line with equivalent packages in the Windows arena, he migrated to a cheaper Windows version. The total operating costs of running applications on the mainframe can be "easily" 10 times that of a Unix or Windows architecture, says Sabre's Walker.

While IBM has begun offering sub-capacity, usage-based pricing, few third-party vendors of mainframe software have followed suit. "Vendors who don't embrace flexible pricing are accelerating the decline in their business," says Barnett.

At Sabre, Walker plans to continue to migrate off the mainframe, which he says is simply too expensive.

In-place upgrade

Bob DiAngelo, vice president and CIO at MIB Group Inc., is already facing that challenge. His company relies on an I/O-intensive application used to detect insurance fraud for more than 500 insurers in North America. DiAngelo says it was impossible to hire anyone to support MIB Group's IBM mainframe applications, originally written in 1969 in assembler with a back-end VSAM database. So a few years ago, he received approval to re-engineer the system. The IT team is developing the new system in Java based on a three-tiered architecture using WebSphere, MQSeries and DB2. But the new system, now halfway complete, doesn't run on Unix or Windows hardware. It, along with the systems still in production and the development and quality assurance testing environments, all run within a single logical partition on a 210-MIPS uniprocessor IBM zSeries 890 with a z/OS Application Assist Processor (zAAP) that handles the Java workload.

The new Java code runs on the zAAP. Keeping that workload off the general-purpose mainframe processor keeps CPU-based licensing for third-party applications from rising while boosting total system capacity to 366 MIPS. But DiAngelo doesn't have a lot of third-party software to worry about. He says declining mainframe operating costs have allowed the company to grow from an 80 MIPS system to the 210 MIPS box plus the zAAP processor while total costs remained "relatively stable."

Walker isn't convinced. "We could run Java code in a z9, but it would make it the world's most expensive Java CPU," he says.

Barnett agrees -- partially. "If you have Java or workloads that need high-speed access to mainframe data, running it on a mainframe partition is a viable choice," he says. "But ... for generic Linux or Java workloads, it still isn't an obvious consolidation platform."

IBM is hoping that others will follow MIB Group's example. "IBM is pushing one box, multiple architectures," says Gartner's Vecchio. Guru Rao, an IBM fellow and chief engineer for eServer, says consolidating a three-tiered architecture on the mainframe when data resides there makes sense because communications between the front and back end don't have to go over a latency-prone TCP/IP network. On the mainframe, he says, "you can communicate with each of these spaces using instructions as opposed to TCP traffic."

DiAngelo acknowledges that rewriting applications isn't always practical. "Doing a rip-and-replace is a big thing," he says of the five-year project. "There are things you can't afford to re-engineer, and they will probably always sit in the place where they were developed."

The transition also requires more horsepower for an application that performs up to 300 I/Os per transaction and handles up to 130,000 transactions per day. "Java requires more CPU power than assembler, [and] as you move from proprietary VSAM to a generalized database system, you lose efficiencies. With WebSphere, MQSeries and DB2, you have to crank the dial up," DiAngelo says.
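Back-of-the-envelope, the figures DiAngelo quotes imply a substantial aggregate load: 130,000 transactions at 300 I/Os each is 39 million I/Os a day, or roughly 450 per second averaged over 24 hours, with peaks well above that. A quick sizing calculation from those numbers:

```java
// Rough I/O sizing from the figures quoted in the article
// (worst-case per-transaction I/Os times daily transaction volume).
public class IoLoad {
    public static void main(String[] args) {
        long txPerDay = 130_000;   // transactions per day (upper bound)
        long iosPerTx = 300;       // I/Os per transaction (upper bound)

        long iosPerDay = txPerDay * iosPerTx;        // 39,000,000 I/Os/day
        double iosPerSecond = iosPerDay / 86_400.0;  // ~451 I/Os/s on average

        System.out.printf("%d I/Os/day, %.0f I/Os/s average%n",
                          iosPerDay, iosPerSecond);
    }
}
```

Averages like this understate the real provisioning target, since transaction volume clusters in business hours, but they make clear why DiAngelo has to "crank the dial up" when layering WebSphere, MQSeries and DB2 over that workload.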

Another question is whether that strategy will scale for applications beyond a few hundred MIPS in size, says Vecchio. On the high end, IT must move to SOA because there are no other options, he says. "The hope for mainframe customers is that WebSphere and Java can perform with the same quality of service that they have come to expect from CICS, IMS and Cobol," he says.

Publicis Groupe SA moved entirely off an MVS mainframe and onto open systems. The advertising agency deployed high-density Hewlett-Packard Co. blade servers and VMware Inc. partitions to increase utilization levels. It migrated the primary application -- a financial reporting system that included client billing, ERP and reporting and amounted to 80 percent of the mainframe workload -- to PeopleSoft. Other applications were either ported or retooled entirely, says CIO Christian Anschuetz. "It was a Herculean effort, to be sure," he says of the four-year project.

His main motivation was cost. The mainframe was "extraordinarily expensive" and not agile enough for the organization's needs, Anschuetz says, and "the licensing costs associated with the development tools were just astronomical." Publicis has reduced its operating costs by 10 percent a year.

Even after considering the management costs of a distributed system and the cost of the Intel servers needed to replace the mainframe, the total cost of ownership was still "dramatically lower," Anschuetz says.

He says he did have concerns about moving off the mainframe. "I remember someone telling me we shouldn't get rid of the mainframe, it's five 9s, and you're going to be running this Windows junk," Anschuetz says. "The reality is that [our distributed systems] are up all of the time, and our actual [mean time between failures] is tremendous."

When it comes to dealing with legacy applications, there are no across-the-board answers, says Robert Rosen, president of Share, a Chicago-based IBM mainframe user group. "Where you get into trouble is when you try to force-fit a solution," he says. "Taking the best of both worlds, that's the key."

Sidebar

Growth inhibitors

Data center managers cited software costs as the largest inhibitor to increasing their use of mainframes.

Hardware costs -- 1 percent

IBM software costs -- 15 percent

Third-party software costs -- 47 percent

Base: 100 data center conference attendees

Sidebar

The Cobol brain drain

Colleges aren't cranking out Cobol programmers anymore, and skills availability is one of the top three concerns in mainframe shops, says Dale Vecchio, an analyst at Gartner.

Some organizations say they are already having trouble hiring Cobol programmers. "It's difficult to find people to support it," says Bob DiAngelo, vice president and CIO at MIB Group. That's one reason why his company is migrating to a new application architecture built around Java and WebSphere.

Meanwhile, the ranks of experienced programmers are also thinning. "Many Cobol developers are entering retirement now ... so it's challenging around staffing," says Edward Mulligan, executive vice president of the technology services division at The Bank of New York.

But Gary Barnett, an analyst at Ovum, says IT needn't panic.

"There is no skills crisis," he argues. While there is a shortage of highly trained mainframe programmers, many existing Cobol applications are very stable and don't require much maintenance. Plus, tools are evolving to allow a single developer to maintain more of the code than was possible just a few years ago. Ovum predicts that the amount of Cobol code in use will grow 3 percent to 5 percent annually through 2010, but that mostly involves maintenance work, Barnett says. Most new projects are moving to more modern application architectures.

Organizations that can't find local talent can also outsource. "India provides a very elastic supply of Cobol developers," Barnett says, and others can be cross-trained. "Once you have a proficient programmer, training them on Cobol is not an arduous process." It's true that the ranks of legacy programmers are declining, but, says Barnett, "I don't see it as a major concern for the foreseeable future."