I'll make do with my not-so-virtual world

January 30, 2006
As I listened to the webcast from a sister state agency describing what cool new things it was doing, I couldn't help talking back. "Why do technical people always try to make it sound like they are doing something much bigger and better than anyone else? Why are they bragging about implementing videoconferencing? Most of us have already done that. And what was that about Web services? Do they even know what the term means?"

It didn't seem so, since the alleged Web service was a Web-based security gateway for authenticating other agencies' personnel before letting them access the agency's databases. That's a service you access via the Web, but it's no Web service. The fellow talking was rubbing me the wrong way. I have an aversion to people who spout undefined acronyms and make things sound more complicated than they are, and he was doing both of those things.

Then he said that his agency had "virtualized" its environment and suggested that other agencies look into EMC's VMware. This was too much for me. If I'd been in the meeting, I would have spoken up. I like to work in a collaborative environment, where you ask people to help you understand a problem, not tell them that you've solved it and they should listen up. Chances are, you've solved only a small piece of it and could learn a lot by hearing other people's ideas.

Afterward, I called a couple of colleagues and asked whether they were using virtualization for anything beyond disk management and whether they had any security concerns about it. None of them was doing much with it. Maybe I don't hang out in the right crowd. What's all the hype about? I wondered.

Virtualization is nothing new, of course. The concept goes back to the '60s, when IBM led the way with the development of the virtual machine and the virtual machine manager (VMM). A VMM is software that lets you run several instances of an operating system on a single piece of hardware, with each instance running in its own virtual machine alongside its companion applications. The VMM controls all interaction between each virtual machine and the hardware (resources, drivers and so on), so you can run a variety of applications on the same server without worrying about incompatibilities or resource conflicts.
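Just to make the idea concrete, here's a toy sketch in Python. It's nothing like real hypervisor code, and the names (Host, VirtualMachine, VMM) are mine, made up for illustration; the point is only to show the VMM sitting between the guests and the one physical box, handing each guest its own isolated slice of the hardware:

    # Toy illustration of the VMM concept -- not real hypervisor code.
    from dataclasses import dataclass, field

    @dataclass
    class Host:                 # the one physical server
        cpus: int = 8
        memory_mb: int = 16384

    @dataclass
    class VirtualMachine:       # one guest OS plus its companion apps
        name: str
        os: str
        cpus: int
        memory_mb: int
        apps: list = field(default_factory=list)

    class VMM:
        """Mediates every guest's access to the physical host."""

        def __init__(self, host):
            self.host = host
            self.guests = []

        def create_vm(self, name, os, cpus, memory_mb):
            # Refuse to overcommit memory; each guest gets its own slice.
            used = sum(g.memory_mb for g in self.guests)
            if used + memory_mb > self.host.memory_mb:
                raise RuntimeError("host memory exhausted; cannot place guest")
            vm = VirtualMachine(name, os, cpus, memory_mb)
            self.guests.append(vm)
            return vm

    # Two unrelated workloads share one box without stepping on each other.
    vmm = VMM(Host())
    web = vmm.create_vm("web01", "Windows Server 2003", cpus=2, memory_mb=4096)
    ids = vmm.create_vm("ids01", "Linux", cpus=2, memory_mb=2048)
    web.apps.append("intranet web app")
    ids.apps.append("intrusion detection sensor")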

Virtualization has been improved upon countless times in the decades since. My own experience with it includes managing disk arrays under HP-UX in a "virtual" way, which let me reallocate disk space on the fly. It was a great boon to systems administration. Another concept on the rise today is grid computing, which essentially virtualizes the whole environment so that bandwidth, storage and CPU cycles can be provisioned, deployed and reallocated on the fly. It's a fascinating concept, but I don't know anyone who has implemented virtualization to that level.

Do we need it?

I tried to think of reasons why I might want to use virtualization. The obvious benefits involve cost savings, since virtualization can maximize the utilization of systems and ease systems administration. For example, the approach that IBM is promoting would give administrators centralized control of desktop PCs. Applications would be delivered via Citrix from blade servers virtualized with VMware. In the case of my agency, though, this sort of effort to move computing power off the desktop seems like a waste, since we have a significant investment in desktop systems.

IBM is promoting this vision with a television commercial in which a systems administrator faints dead away because he is overwhelmed trying to manage a huge server farm. The solution, according to the ad, is to buy IBM blade servers. But how many blades would you need to support several thousand virtual machines? The idea is to save big bucks by trading in your server support personnel costs for sophisticated hardware. But that hardware will cost you big bucks as well, and you still need support personnel.

Of course, my real concern is security. One consideration is that you can run an untrusted application in an isolated sandbox, or "jail." That sounds like a good thing to do, but unfortunately, I can't readily think of an application we would apply it to. And when I think of disaster recovery, virtualization looks like a loser. When you're running several applications on a single server, you lose all of them when that server goes down. I'd rather run more servers, each housing its own critical business application, so that when a server fails, I have to restore just one application, not five or six. I've already got the servers, and if I have any money to spend, I'd rather invest it in improved server management tools.
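To be clear about what I mean by a jail: the principle is easy enough to approximate even without a hypervisor. The few lines of Python below run a hypothetical untrusted script with an empty environment, a throwaway working directory and hard resource limits. It's only an illustration of the idea, not a substitute for a real jail or a virtual machine, and the script path and limits are invented:

    # Crude illustration of the "jail" idea (Unix-only; not a real sandbox).
    import resource
    import subprocess
    import tempfile

    def limit_resources():
        # Runs in the child just before exec: cap CPU time and memory.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                # 5 CPU-seconds
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)   # 256 MB

    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            ["/usr/bin/python3", "/tmp/untrusted_script.py"],  # hypothetical program
            cwd=scratch,                 # throwaway working directory
            env={},                      # no inherited environment
            preexec_fn=limit_resources,  # apply the resource caps
            capture_output=True,
            timeout=30,
        )
        print(result.returncode)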

We've invested in Microsoft Windows Server 2003 on Dell hardware, with Windows XP on the desktop. We manage the environment with Active Directory, among other tools. Our network and security monitoring tools are stand-alone applications running on Linux.

We use a layered approach to security that includes router access-control lists; firewalls; intrusion detection; security policies; Active Directory; an aggressive program of weekly patching of servers and desktops; round-the-clock updating of our antivirus, antispam and antispyware controls; and Veritas backup and restore tools. Our environment hums.

We have not had a single incident of a worm or virus attacking our environment in over a year. So, what's the problem? We don't really have one. After listening to that webcast, I got sidetracked by the whole idea of virtualization and what it really means. I don't think it means anything to us.

What do you think?

This week's journal is written by a real security manager, "C.J. Kelly," whose name and employer have been disguised for obvious reasons. Contact her at mscjkelly@yahoo.com, or join the discussion in our forum: computerworld.com/forums. To find a complete archive of our Security Manager's Journals, go online to computerworld.com/secjournal.