Xen's New Virtualization App Has Its Wrinkles

Quoting Linux Insider

Xen has had a relatively rough road since it began as a research project at the University of Cambridge. Early releases of the open source virtualization package were quite buggy, yet highly touted by major players in the Linux field, which has led many to view the project skeptically.

Initial packaging of Xen into the Fedora Core 4 and 5 releases didn't help matters when it became clear that it was, at best, difficult to run and, at worst, simply broken right out of the box. Later releases have made significant usability and functional improvements, and the next release will officially include support for Windows guests -- but it still lacks the comprehensive management framework offered by VMware. Make no mistake: Xen works, but it is still in its infancy as an enterprise virtualization solution.

Demonstrating Xen's enterprise potential is Virtual Iron 3.1, which, like XenEnterprise and Enomalism, seeks to leverage the open source model to provide a viable alternative to VMware at a significant cost savings.

Before turning to Xen, Virtual Iron had spent two years developing a homegrown hypervisor technology aimed not at consolidating many virtual servers onto a single physical server, but at allowing a single virtual server to run across multiple physical servers. Although this was certainly a worthwhile concept, the pace of processor development and the progress of clustering technologies were beginning to render it outdated before it even matured.

Pumping Virtual Iron

The upcoming release of Virtual Iron 3.1 lacks many of the advanced features of VMware's Virtual Infrastructure Server 3, but it does demonstrate that VMware's competition is not terribly far behind. In some ways, in fact, the competition is actually ahead: Virtual Iron 3.1 supports as many as 16 CPUs and 96 GB of RAM per virtual machine, compared with VMware's current limits of four CPUs and 8 GB of RAM.

Moreover, Virtual Iron extends Xen with enhanced memory management that allows 32-bit and 64-bit guests to run side by side, full virtualization that allows guest operating systems to run completely unmodified (the current Xen release requires guest OSes to be modified to run in a Xen environment), and significant work to increase the I/O performance of guest OSes. These features will be present in the forthcoming Xen 3.1 release, but Virtual Iron is offering them now, together with GUI (graphical user interface) management tools.

Virtual Iron 3.1 is a pure Java application that can find a home on a Windows or Linux server, and it ships as a binary GUI install wizard. The setup is minimal, and this first-built server is the equivalent of VMware's VirtualCenter, with one important difference: it serves as the deployment system for the host servers as well as providing the management tools.

Multiple NICs Suggested

When installing Virtual Iron 3.1, it's recommended to build the server with several network interfaces. One of these network interface cards will serve the management network, which should be constructed as an isolated network segment connecting all virtualization host servers and the management server. This is because, by default, the Virtual Iron 3.1 server acts as a DHCP/PXE boot server, making deployment of virtualization hosts generally as easy as turning on a new server on that segment.

When the hosts PXE-boot, they run a highly modified Linux kernel with no console, so there's no need for a KVM (keyboard, video, mouse) switch on those servers: there's nothing to see and no way to access the system other than through the Virtual Iron management console. Disks local to these servers are available, as are any NICs and HBAs (host bus adapters) supported by the Virtual Iron kernel.

In the testing I was able to conduct in Virtual Iron's labs, this included Emulex and QLogic 2 Gb FC (Fibre Channel) HBAs, SATA and SCSI disks, and Intel and Broadcom NICs.

Once booted, these servers are visible from within Virtual Iron's Java-based management application, which lays out hosts and virtual servers in an easily digestible hierarchy. The interface is quirky -- every action must be followed by a click of the Commit button, which becomes annoying after a while -- and the flow stutters in places, but it's otherwise functional.

Room to Grow

Creating a virtual server entails essentially choosing the number of CPUs (central processing units), setting the RAM size, and specifying the disk resources to be used, much as in VMware. Prior to the 3.1 release, however, disk resources had to be either an FC LUN (logical unit number) or a local disk resource; no virtual disk support existed. With 3.1, vDisks conforming to Microsoft's VHD standard are supported, making deployment easier.
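Virtual Iron exposes these choices through its GUI rather than through code, so the following is purely a conceptual sketch: roughly how the same parameters -- vCPU count, RAM size, and a disk image -- map onto a Xen guest defined with libvirt's Python bindings. The guest name, paths, and resource figures are invented for illustration, and libvirt itself is an assumption here, not part of Virtual Iron's tooling.

```python
import libvirt

# Hypothetical resource choices, analogous to what the Virtual Iron GUI asks for.
GUEST_XML = """
<domain type='xen'>
  <name>demo-guest</name>
  <memory>2097152</memory>          <!-- 2 GB, expressed in KiB -->
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/demo-guest.img'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("xen:///")    # connect to the local Xen hypervisor
dom = conn.defineXML(GUEST_XML)   # register the guest definition
dom.create()                      # power the guest on
conn.close()
```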

On the downside, there's no iSCSI SAN (storage area network) or NFS (network file system) support, so if you're lacking a Fibre Channel SAN, you're forced to use local disk, which precludes the use of the LiveMigration, LiveRecovery, and LiveMaintenance features.

All of these features are predicated on shared storage and on the ability to shift running virtual servers from one host to another, akin to VMware's VMotion. In practice, LiveMigrate is very similar, with the guest OS migrating with nearly no operational interruption and no reboot required.
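For readers unfamiliar with the mechanics, here is a minimal sketch of what a live migration of a Xen guest between two hosts sharing storage looks like through libvirt's Python bindings. It is illustrative only: the host and guest names are invented, and Virtual Iron drives LiveMigration through its own console rather than through libvirt.

```python
import libvirt

# Connections to the source and destination hosts (hypothetical hostnames).
src = libvirt.open("xen+ssh://hostA.example.com/")
dst = libvirt.open("xen+ssh://hostB.example.com/")

dom = src.lookupByName("demo-guest")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over;
# only a brief pause occurs at the final switch-over.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```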

LiveRecovery will handle the abrupt failure of a host server by booting the VMs (virtual machines) that were running on that server on another hardware node. LiveMaintenance is simply a quick way to initiate LiveMigrations of all servers on a single hardware node to other nodes in order to bring down a server for maintenance.

In addition, LiveCapacity will dynamically migrate VMs between hosts to distribute the overall load evenly among all hardware resources, much as VMware's DRS (Distributed Resource Scheduler) does. All of these features worked in my copy of the 3.1 beta.
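To make the idea concrete, below is a deliberately simplified, hypothetical sketch of the kind of policy a feature like LiveCapacity (or VMware's DRS) implements -- watch per-host load and migrate a VM off the busiest host when the spread grows too wide. This is not Virtual Iron's actual algorithm, only an illustration of the concept; all names and thresholds are made up.

```python
# Hypothetical rebalancing heuristic; host/VM data and threshold are invented.
THRESHOLD = 0.20  # act when the busiest host is 20 points hotter than the idlest

def pick_migration(hosts):
    """hosts maps host name -> {'load': float 0..1, 'vms': {vm_name: vm_load}}.
    Returns (vm, source, destination) or None if the cluster is balanced."""
    busiest = max(hosts, key=lambda h: hosts[h]["load"])
    idlest = min(hosts, key=lambda h: hosts[h]["load"])
    if hosts[busiest]["load"] - hosts[idlest]["load"] < THRESHOLD:
        return None  # already balanced enough
    # Move the smallest VM on the busiest host (a crude choice; real
    # schedulers weigh many more factors).
    vm = min(hosts[busiest]["vms"], key=lambda v: hosts[busiest]["vms"][v])
    return vm, busiest, idlest

cluster = {
    "hostA": {"load": 0.85, "vms": {"web1": 0.30, "db1": 0.55}},
    "hostB": {"load": 0.35, "vms": {"web2": 0.35}},
}
print(pick_migration(cluster))  # -> ('web1', 'hostA', 'hostB')
```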

Ready for Prime Time?

So what's lacking? Polish, performance, and the little bits around the edges. The console interaction provided by Virtual Iron 3.1 is fair for Windows guests but quite sloppy for Linux guests running X11. This is rather surprising: mouse tracking under Windows is far superior. Of course, most Linux guests won't be running X11, which mitigates the problem somewhat.

Also missing are VM snapshot support and basic backup tools. Coupled with the lack of iSCSI and NFS support, the very basic network configuration options, questionable I/O (input/output) performance, and the package's obvious wet-behind-the-ears feel, these omissions may make it a bit of a hard sell for production use.

Still, Rome wasn't built in a day, and I believe the lack of these features is more reflective of "haven't gotten there yet" than of "won't get there"; it certainly seems that Virtual Iron is well on its way to becoming a true competitor in the virtualization world. If the next release -- slated for the first quarter of this year -- manages to address these issues, the company may find the market wide open, especially because, at US$499 per processor, a full Virtual Iron 3.1 license costs a fraction of a comparable VMware license. In short, if Virtual Iron can keep up this pace, it's definitely a contender.

Read or comment on the original here.

Published Thursday, January 04, 2007 4:20 PM by David Marshall