Server virtualization

February 13, 2007
In today's complex IT environments, server virtualization simply makes sense. Redundant server hardware can rapidly fill enterprise datacenters to capacity; each new purchase drives up power and cooling costs even as it saps the bottom line. Dividing physical servers into virtual servers is one way to restore sanity and keep IT expenditures under control.

With virtualization, you can dynamically fire up and take down virtual servers (also known as virtual machines), each of which fools an operating system (and any applications that run on top of it) into thinking the virtual machine is actual hardware. Running multiple virtual machines can fully exploit a physical server's compute potential -- and provide a rapid response to shifting datacenter demands.

The concept of virtualization is not new. As far back as the 1970s, mainframe computers were running multiple instances of an operating system at the same time, each independent of the others. Only recently, however, have software and hardware advances made virtualization possible on industry-standard, commodity servers.

In fact, today's datacenter managers have a dizzying array of virtualization solutions to choose from. Some are proprietary, others are open source. For the most part, each is based on one of three fundamental technologies; which one produces the best results depends on the specific workloads to be virtualized and their operational priorities.

Full virtualization

The most popular method of virtualization uses software called a hypervisor to create a layer of abstraction between virtual servers and the underlying hardware. VMware and Microsoft Virtual PC are two commercial examples of this approach, whereas KVM (Kernel-based Virtual Machine) is an open source offering for Linux.

The hypervisor traps CPU instructions and mediates access to hardware controllers and peripherals. As a result, full virtualization allows practically any OS to be installed on a virtual server without modification, and without being aware that it is running in a virtualized environment. The main drawback is the processor overhead imposed by the hypervisor, which is small but significant.

In a fully virtualized environment, the hypervisor runs on the bare hardware and serves as the host OS. Virtual servers that are managed by the hypervisor are said to be running guest OSes.
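
To see what trapping and mediating look like in practice, here is a minimal sketch in C against /dev/kvm, the userspace interface of the KVM project mentioned above. The "guest" is just a few bytes of machine code that writes one character to an I/O port and halts; error handling is omitted, and a real hypervisor is vastly more elaborate.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void) {
        /* Guest code (16-bit real mode):
           mov $0x3f8,%dx; mov $'V',%al; out %al,(%dx); hlt */
        const uint8_t guest_code[] = {
            0xba, 0xf8, 0x03, 0xb0, 'V', 0xee, 0xf4,
        };

        int kvm  = open("/dev/kvm", O_RDWR);
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);

        /* Give the VM one page of "physical" memory; copy the code in. */
        void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, guest_code, sizeof guest_code);
        struct kvm_userspace_memory_region region = {
            .slot = 0,
            .guest_phys_addr = 0x1000,
            .memory_size = 0x1000,
            .userspace_addr = (uint64_t)(uintptr_t)mem,
        };
        ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

        /* One virtual CPU, plus the shared structure KVM uses to tell
           us why the guest stopped running. */
        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);
        int runsz  = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
        struct kvm_run *run = mmap(NULL, runsz, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpufd, 0);

        /* Point the vCPU at the guest code. */
        struct kvm_sregs sregs;
        ioctl(vcpufd, KVM_GET_SREGS, &sregs);
        sregs.cs.base = 0;
        sregs.cs.selector = 0;
        ioctl(vcpufd, KVM_SET_SREGS, &sregs);
        struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
        ioctl(vcpufd, KVM_SET_REGS, &regs);

        /* The trap-and-mediate loop: run the guest until it does
           something we must emulate, handle it, then resume. */
        for (;;) {
            ioctl(vcpufd, KVM_RUN, NULL);
            if (run->exit_reason == KVM_EXIT_HLT)
                break;                      /* guest executed hlt */
            if (run->exit_reason == KVM_EXIT_IO && run->io.port == 0x3f8)
                putchar(*((char *)run + run->io.data_offset));
        }
        printf("\nguest halted\n");
        return 0;
    }

Each time the guest touches the emulated serial port, control returns to the host process, which decides what the I/O means. That handoff, repeated for every privileged operation, is the mediation described above and also the source of the overhead.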

Para-virtualization

Full virtualization is processor-intensive because of the demands placed on the hypervisor to manage the various virtual servers and keep them independent of one another. One way to reduce this burden is to modify each guest OS so that it is aware it is running in a virtualized environment and can cooperate with the hypervisor. This approach is known as para-virtualization.

Xen is one example of an open source para-virtualization technology. Before an OS can run as a virtual server on the Xen hypervisor, it must incorporate specific changes at the kernel level. Because of this, Xen works well for BSD, Linux, Solaris, and other open source operating systems, but is unsuitable for virtualizing proprietary systems, such as Windows, which cannot be modified.
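
The cooperation happens through "hypercalls": instead of executing a privileged instruction and relying on the hypervisor to trap it, the modified kernel calls the hypervisor explicitly. The following toy sketch illustrates the pattern with a table of operations that boot code points either at native instructions or at hypercall wrappers; the names are loosely modeled on Xen's hypercalls, but this is an illustration, not real kernel code.

    #include <stdio.h>

    /* A table of low-level operations; boot code points it either at
       native privileged instructions or at hypercall wrappers. */
    struct pv_cpu_ops {
        void (*write_cr3)(unsigned long root);  /* switch page tables */
        void (*halt)(void);                     /* idle the CPU */
    };

    static void native_write_cr3(unsigned long root) {
        (void)root;
        printf("native: mov to %%cr3 (privileged; traps under full virt)\n");
    }
    static void native_halt(void) { printf("native: hlt instruction\n"); }

    /* Under Xen, the same operations become explicit requests. */
    static void xen_write_cr3(unsigned long root) {
        printf("paravirt: mmu_update hypercall, new root = %#lx\n", root);
    }
    static void xen_halt(void) { printf("paravirt: sched_op yield hypercall\n"); }

    static struct pv_cpu_ops pv_ops;

    int main(void) {
        int on_xen = 1;  /* pretend boot-time detection found a hypervisor */
        if (on_xen)
            pv_ops = (struct pv_cpu_ops){ xen_write_cr3, xen_halt };
        else
            pv_ops = (struct pv_cpu_ops){ native_write_cr3, native_halt };

        /* The rest of the kernel never knows which variant it got. */
        pv_ops.write_cr3(0x1000);
        pv_ops.halt();
        return 0;
    }

Because the guest asks rather than being caught, the hypervisor skips the expensive business of intercepting and decoding instructions.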

The advantage of para-virtualization is performance. Para-virtualized servers, working in conjunction with the hypervisor, are nearly as responsive as unvirtualized servers. The gains over full virtualization are attractive enough that both Microsoft and VMware are working on para-virtualization technologies to complement their offerings.

OS-level virtualization

Still another way to achieve virtualization is to build in the capability for virtual servers at the OS level. Solaris Containers are an example of this, and Virtuozzo/OpenVZ does something similar for Linux.

With OS-level virtualization, there is no separate hypervisor layer. Instead, the host OS itself is responsible for dividing hardware resources among multiple virtual servers and keeping the servers independent of one another. The obvious limitation is that all the virtual servers must run the same OS (though each instance has its own applications and user accounts).
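
As a toy illustration of the idea (and emphatically not how Virtuozzo/OpenVZ or Solaris Containers are implemented), the following C program uses a Linux UTS namespace, available since kernel 2.6.19, to give a child process its own private hostname while both processes share the same running kernel. It must be run as root.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[64 * 1024];

    /* Runs inside the "container": same kernel, private hostname. */
    static int child(void *arg) {
        (void)arg;
        sethostname("virtual-server-1", strlen("virtual-server-1"));
        char name[64];
        gethostname(name, sizeof name);
        printf("inside the container: hostname = %s\n", name);
        return 0;
    }

    int main(void) {
        /* CLONE_NEWUTS gives the child its own copy of the hostname
           state; the rename above stays invisible to the host. */
        pid_t pid = clone(child, child_stack + sizeof child_stack,
                          CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid < 0) { perror("clone (are you root?)"); return 1; }
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof name);
        printf("on the host OS:       hostname = %s\n", name);
        return 0;
    }

The real products extend this kind of kernel-enforced partitioning to processes, users, filesystems, and network resources, which is why no hypervisor layer is needed.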

What OS-level virtualization loses in terms of flexibility, it gains in native-speed performance. In addition, an architecture that uses a single, standard OS across all the virtual servers can be easier to manage than a more heterogeneous environment.

Easier but harder

Unlike mainframes, PC hardware wasn't designed with virtualization in mind -- software alone had to shoulder the burden, until recently. With the latest generation of x86 processors, AMD and Intel have added support for virtualization at the CPU level for the first time (AMD-V and Intel VT, respectively).

Unfortunately, the two companies' technologies were developed independently, which means they are not code-compatible, although they offer similar benefits. By taking over the management of virtual server access to I/O channels and hardware resources, hardware virtualization support relieves the hypervisor of its most demanding babysitting chores. Besides improving performance, it allows operating systems, including Windows, to run unmodified in para-virtualized environments.

CPU-level virtualization doesn't kick in automatically; virtualization software has to be written specifically to support it. Because the benefits of these technologies are so compelling, however, virtualization software of all types is expected to support them as a matter of course.
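
In practice, that support starts with detection: before using the extensions, software probes the CPU. Here is a sketch, assuming GCC's <cpuid.h> helper and the feature bits published in the two vendors' CPUID documentation:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Intel VT: CPUID leaf 1, ECX bit 5 ("VMX"). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("Intel VT (VMX) is available\n");

        /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 ("SVM"). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD-V (SVM) is available\n");

        return 0;
    }

KVM, for instance, loads only on processors where one of these bits is present; software that supports both must then branch into vendor-specific code, since the two instruction sets are incompatible.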

A virtual toolbox

Each method of virtualization has its advantages, depending on the situation. A group of servers all based on the same operating platform would be a good candidate for consolidation via OS-level virtualization, but the other technologies have benefits as well.

Para-virtualization represents the best of both worlds, especially when deployed in conjunction with virtualization-aware processors. It offers good performance coupled with the capability of running a heterogeneous mix of guest operating systems.

Full virtualization takes the greatest performance hit of the three methods, but it offers the advantage of completely isolating the guest OSes from each other and from the host OS. It is a good candidate for software quality assurance and testing, in addition to supporting the widest possible variety of guest OSes.

Full virtualization solutions offer other unique capabilities. For example, they can take "snapshots" of virtual servers to preserve their state and aid disaster recovery. These virtual server images can be used to provision new server instances quickly, and a growing number of software companies have even begun to offer evaluation versions of their products as downloadable, prepackaged virtual server images.

It's important to remember that virtual servers require ongoing support and maintenance, just like physical ones. The increasing popularity of server virtualization has fostered a burgeoning market of third-party tools ranging from physical-to-virtual migration utilities to virtualization-oriented versions of major systems management consoles, all aimed at easing the transition from a traditional IT environment to an efficient, cost-effective virtualized one.