Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware

In data centers large and small, the move to server virtualization seems as unstoppable as the waves crashing on the beach in Hawaii's Waimea Bay. But for almost as long as the virtualization tide has been rising, only one vendor has offered the features, interoperability, and stability necessary to bring virtual servers out of the skunkworks and into daily production. That is no longer the case.

From the beginning, VMware has been the king of x86 server virtualization, hands down. VMware's feature set, reputation, and pricing all reflect that fact. But where there used to be little competition, you'll now find a select group of challengers that have brought a wealth of enterprise features to their virtualization solutions and begun to give VMware a run for its money.


In order to accurately gauge just how close this race has become, we invited Citrix, Microsoft, Red Hat, and VMware to the Advanced Network Computing Lab at the University of Hawaii, and we put their server virtualization solutions to the test. We compared Citrix XenServer, Microsoft Windows Server 2008 R2 Hyper-V, Red Hat Enterprise Virtualization, and VMware vSphere by virtually every measure, from ease of installation to hypervisor performance, and all of the management capabilities in between.

We tested each solution on the same hardware, with the same real-world network topology, running the same tests on the same virtual machines. We ran real-world and synthetic Linux and Windows performance benchmarks, and we performed subjective management and administration tests. We looked at host configuration, VM templating and cloning, updates and patching, snapshots and backups, and scripting options, and we examined advanced features such as load balancing and high availability.

The results showed that all four solutions combine very good hypervisor performance with rich sets of management tools. But the solutions are not all equal in either performance or management. Although VMware is no longer the only game in town, choosing an alternative certainly involves trade-offs.

VMware still has advanced capabilities that the others lack. VMware also offers a level of consistency and polish that the other solutions don't yet match. The rough edges and quirks in Citrix, Microsoft, and Red Hat aren't showstoppers, but they demonstrate that these alternatives all have hidden costs to go along with their (potentially) lower price tags.

Virtualization shoot-out: The test bed

The fine folks at Dell were kind enough to lend us a bunch of high-end gear to run all of our tests. We requested blade servers for a variety of reasons. Primarily, we wanted the ease of setup and configuration offered by the blade chassis, which consolidates the power, network, and remote management into a single unit. We chose the two-socket blades for our test servers, as these are more representative of production hypervisor configurations than other CPU densities.

We were equipped with a Dell PowerEdge M1000e chassis with two Dell PowerConnect M8024 10G switch modules and a PowerConnect M6220 gigabit switch module. The storage tasks were easily handled by a Dell EqualLogic PS6010XV 10G SAN array, and we used four Dell PowerEdge M710 blades to run the hypervisors. Each M710 was equipped with two six-core Intel Xeon 5645 "Westmere" CPUs running at 2.40GHz, accompanied by 96GB of DDR3 RAM, dual-port Intel X520 10G Ethernet mezzanine adapters, and built-in dual-port gigabit NICs. Each server also had Dell's redundant SD-based flash devices for embedded installations and a pair of 72GB SAS drives in a RAID 1 configuration for hypervisors that required traditional installation.

For backline duties, we used two Dell PowerEdge M610 Intel Nehalem-based blades. These blades were not part of the actual test, but were used to provide supporting services such as Microsoft Active Directory, DNS, and DHCP. Suffice it to say, we were very well outfitted on the hardware front.

Virtualization shoot-out: World's fastest hypervisor

The test plan was straightforward: Take a look at Windows and Linux server performance on the physical hardware, then on an otherwise quiescent hypervisor, as well as several more times on a hypervisor under increasing load levels. Metrics included CPU, RAM, network, and storage I/O performance, time and interruption (if any) during VM migrations, speed and agility in VM template creation and deployment, and overall handling of a few disaster scenarios, such as the abrupt loss of a host and failover to an alternate site.

The benchmarks themselves were based on synthetic and real-world tests. They provide a general picture of hypervisor performance, but as with many facets of virtualization, there's no good way to accurately forecast how any workload will perform under any virtualization solution apart from running the actual workload.

The Linux tests were drawn from my standard suite of homegrown tests. They are based on common tools and scenarios, and they're measured by elapsed time to complete. These included converting a 150MB WAV file to MP3 using the LAME encoder on Linux, as well as using bzip2 and gzip to compress and decompress large files. These are single-threaded tests that are run in series, but with increasing concurrency, allowing performance to be measured with two, four, six, eight, and twelve concurrent test passes running. By running these tests on a virtual machine with four vCPUs (virtual CPUs), we were able to measure how well the hypervisor handled increasing workloads on the VM in terms of CPU, RAM, and I/O performance, as all files were read from and written to shared storage.
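The concurrency pattern described above can be sketched in a few lines of shell. This is a minimal sketch, not the actual test suite: gzip stands in for the LAME, bzip2, and gzip workloads, and testfile.bin is a placeholder input generated on the fly rather than the real 150MB WAV or large compression targets.

```shell
#!/bin/sh
# Sketch of the loaded-VM benchmark pattern: launch N copies of a
# single-threaded job in parallel and record the elapsed wall time.
# In the real tests, inputs and outputs lived on shared iSCSI storage;
# here everything stays in the current directory.

# Placeholder input file (32MB of random data).
dd if=/dev/urandom of=testfile.bin bs=1M count=32 2>/dev/null

for n in 2 4 6 8 12; do
    start=$(date +%s)
    i=0
    while [ "$i" -lt "$n" ]; do
        # Each concurrent pass writes its own output so jobs don't collide.
        gzip -c testfile.bin > "out.$i.gz" &
        i=$((i + 1))
    done
    wait                      # block until all concurrent passes finish
    end=$(date +%s)
    echo "concurrency=$n elapsed=$((end - start))s"
done
```

Because each pass is single-threaded, running more passes than the VM's four vCPUs (the eight- and twelve-way runs) deliberately oversubscribes the guest, which is what exposes how well the hypervisor schedules CPU, RAM, and I/O under pressure.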

The Windows tests were run with SiSoftware's Sandra. We chose to focus on a few specific benchmarks, primarily based on CPU and RAM performance, but also including AES cryptography, which plays a significant part in many production workloads.

Again, all tests were conducted on the same physical hardware, with the same EqualLogic PS6010XV iSCSI array for storage, and on identical virtual machines built under each solution. All the Windows tests were run on Windows Server 2008 R2, and all the Linux tests were run on Red Hat Enterprise Linux 6 -- with the exception of Microsoft Hyper-V. Because Hyper-V does not support Red Hat Enterprise Linux 6, we used RHEL 5.5, which may have had a minor impact on Hyper-V's Linux test results.

The performance test results show the four hypervisors to be closely matched, with no big winners or losers. The main differences emerged in the loaded hypervisor tests, where XenServer's Windows performance and Hyper-V's Linux performance both suffered. Overall, VMware vSphere and Microsoft Hyper-V turned in the best Windows results [see table], while vSphere, Red Hat Enterprise Virtualization, and Citrix XenServer all posted solid Linux numbers [see table]. The crypto bandwidth tests, where XenServer and vSphere proved three times faster than Hyper-V and RHEV, showed the advantages of supporting the Intel Westmere CPU's AES-NI instructions.

  • Microsoft Hyper-V shined when running a Linux VM in isolation, but wasn't as consistent as the others at maintaining Linux performance when loaded with multiple active VMs.
  • Hyper-V certainly held its own in the bzip2 file compression tests, even when the hypervisor was stressed by multiple VMs.
  • Citrix XenServer often turned in the best raw Windows performance, but didn't always maintain it under a wide load. (The Sandra Whetstone benchmark measures floating point processing performance.)
  • Microsoft Hyper-V and VMware vSphere were the most consistent performers when running Windows VMs.
  • Citrix XenServer and VMware vSphere support the Intel Westmere CPU's AES-NI instructions; Microsoft Hyper-V and Red Hat Enterprise Virtualization don't. It makes a big difference.
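That last point is easy to verify from inside a guest. A hedged example, assuming a Linux VM: the "aes" entry in the flags field of /proc/cpuinfo indicates that AES-NI is exposed, and a hypervisor can mask the flag from guests even when the underlying Westmere CPU supports it.

```shell
#!/bin/sh
# Check whether AES-NI is exposed to this (physical or virtual) machine.
# On Linux, the "flags" line of /proc/cpuinfo includes "aes" when the
# AES-NI instructions are available to this OS instance.

if grep -q -w aes /proc/cpuinfo; then
    echo "AES-NI exposed to this machine"
else
    echo "AES-NI not exposed"
fi
```

Running this inside a guest on each hypervisor shows at a glance which platforms pass the instruction set through, which is what drove the threefold crypto-bandwidth gap in our results.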