Server Virtualization Performance - Love The Benefits, But Beware The Pitfalls

Quoting Processor.com

Server virtualization has been a hot topic for a few years now. The concept continues to excite IT managers with the possibility of running multiple OSes on one system. But in the midst of the hype, it’s easy to overlook how issues such as CPU overhead can seriously impact server performance. Before you commit to server virtualization, the pitfalls and remedies deserve some exploration.

The Pitfalls

Dr. Michael Salsburg, director of the Computer Measurement Group, says virtualization technology vendors claim they can drive I/O capacities up to wire speed, but they do not discuss the amount of CPU power needed to do that. Salsburg says, “Workloads that are data-intensive may utilize far more of your CPU power than you expect. Future hypervisors, working with the processor and HBA/NIC vendors, will drive down this CPU overhead, but that is later on their roadmap.” He says high CPU overhead will cause erratic and degraded performance.
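Salsburg's point lends itself to a rough empirical check. The sketch below (a hypothetical example in C on Linux; the file name, block size, and data volume are arbitrary choices) measures how much CPU time a bulk-write workload consumes relative to wall-clock time. Disk I/O is used here for simplicity, but the same approach applies to a network socket. Running the same binary on bare metal and inside a guest gives a crude view of the overhead he describes, though much hypervisor overhead is charged outside the guest, so wall-clock throughput should be compared as well.

```c
/* io_cpu_cost.c - rough sketch: how much CPU time does bulk I/O consume?
 * Run the same binary on bare metal and inside a guest; a large gap in
 * CPU seconds per byte written hints at the hypervisor overhead Salsburg
 * describes. Illustrative only; sizes and paths are arbitrary choices. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>

#define BLOCK (64 * 1024)
#define TOTAL (256LL * 1024 * 1024)   /* 256 MiB; adjust as needed */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double cpu_seconds(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);   /* user + system time of this process */
    return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
         + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
}

int main(void) {
    char buf[BLOCK];
    memset(buf, 0xA5, sizeof buf);
    int fd = open("scratch.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    double t0 = now(), c0 = cpu_seconds();
    for (long long done = 0; done < TOTAL; done += BLOCK)
        if (write(fd, buf, BLOCK) != BLOCK) { perror("write"); return 1; }
    fsync(fd);
    close(fd);
    double wall = now() - t0, cpu = cpu_seconds() - c0;

    printf("wrote %lld MiB: %.2f s wall, %.2f s CPU (%.1f%% of wall)\n",
           TOTAL >> 20, wall, cpu, 100.0 * cpu / wall);
    unlink("scratch.bin");
    return 0;
}
```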

Alex Vasilevsky, founder and chief technology officer of Virtual Iron Software (www.virtualiron.com), says that to date, the virtualization of the x86 architecture has been accomplished in two ways: full virtualization and paravirtualization. Vasilevsky says that while paravirtualization offers important performance benefits, it also requires modification of the operating system, which may impact application certifications.

Vasilevsky says, “Full virtualization, on the other hand, relies on sophisticated but fragile software techniques to trap and virtualize the execution of certain sensitive, ‘non-virtualizable’ instructions in software via binary patching. With this approach, critical instructions are discovered at run-time and replaced with a trap into the VMM to be emulated in software.” He says that while fully functional, these techniques incur a large performance overhead (as much as 20 to 40%). This, Vasilevsky says, becomes a problem in the areas of system calls, interrupt virtualization, and frequent access to privileged resources.
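The “non-virtualizable” instructions Vasilevsky mentions are the classic x86 violations of the Popek-and-Goldberg trap requirement: instructions such as SMSW, SGDT, and POPF touch or reveal privileged state yet execute in user mode without faulting, so a VMM that simply waits for traps never gets control. A minimal demonstration in C with inline assembly (x86 only; on recent CPUs with UMIP the kernel may emulate the instruction, but the program still sees no fault):

```c
/* sensitive.c - why classic trap-and-emulate fails on plain x86.
 * SMSW copies the low word of CR0 (a privileged control register) into a
 * general register, yet it executes in user mode WITHOUT faulting. A VMM
 * relying purely on traps never gets control, which is why binary-patching
 * VMMs had to find and rewrite such instructions at run time.
 * Build: gcc -o sensitive sensitive.c   (x86/x86-64 only) */
#include <stdio.h>

int main(void) {
    unsigned long msw = 0;
    /* Runs silently in ring 3 on x86: no #GP, no trap into a VMM. */
    __asm__ volatile("smsw %0" : "=r"(msw));
    printf("SMSW read machine status word: 0x%lx (PE bit = %lu)\n",
           msw & 0xFFFF, msw & 1);
    return 0;
}
```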

Vasilevsky says the successor to full virtualization and paravirtualization is native virtualization. He says that with native virtualization, the VMM can efficiently virtualize the x86 instruction set by handling the sensitive, “non-virtualizable” instructions using a classic trap-and-emulate model in hardware rather than software. He notes, “Native virtualization has just become available on the market in the last nine months. While it is a new approach, it offers considerable benefits to users in performance and ease of implementation. It also protects the investment in existing IT infrastructure. This new approach is worthy of consideration for those planning their next steps in server virtualization.”
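The hardware support behind native virtualization is Intel VT-x and AMD-V, and whether a given processor offers it is reported through CPUID. A small sketch using GCC's <cpuid.h> follows; the feature bits shown (CPUID leaf 1, ECX bit 5 for VMX; leaf 0x80000001, ECX bit 2 for SVM) are the documented ones, though firmware can still disable the feature:

```c
/* hvcheck.c - does this CPU offer hardware-assisted virtualization?
 * Intel VT-x is advertised in CPUID leaf 1, ECX bit 5 (VMX);
 * AMD-V in CPUID leaf 0x80000001, ECX bit 2 (SVM).
 * Build with GCC/Clang on x86: gcc -o hvcheck hvcheck.c */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) supported");
    else if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) supported");
    else
        puts("No hardware virtualization extensions reported");
    /* Note: the BIOS/UEFI can disable these features even when the bit is
     * set; on Linux, also check the vmx/svm flags in /proc/cpuinfo. */
    return 0;
}
```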

Problems & Remedies

Salsburg says two problems come to mind: security and management. He says a typical three-tier security model (with the Web tier isolated from the application tier, which is isolated from the database tier) cannot be deployed on a single consolidated server today using current hypervisors. He says, “If one tier is infected and this brings down the hypervisor, you have not sufficiently isolated one tier from another.”

Regarding management, Salsburg says consolidating many OS images on a single server may reduce operating costs for the hardware but not for the various OS images. “In addition, virtualization will spawn many more OS images, due to the simplicity of setting them up. The hypervisor vendors are working on better management, but their solutions do not today scale up to an enterprise-level management structure.”

Carla Safigan, director of enterprise product management at SWsoft (www.swsoft.com), says it's important to consider the applications and the deployment goals in order to match them to the appropriate virtualization technology, whether virtual machine technology (such as VMware [www.vmware.com], Xen [www.xensource.com], or Virtual Iron) or OS virtualization (such as SWsoft Virtuozzo). Safigan says, “New issues are created through virtualization, such as virtual machine sprawl due to the ease of deploying a new virtualized server, as compared with setting up a new physical server.”

Safigan says virtualization is a technology shift, a process change. She notes, “Once organizations move forward in their deployment, they often realize that no one virtualization technology is perfect for every need. For that reason, many organizations are deploying virtual machines for test and development because the big advantage to this technology is the ability to load many different operating systems on the same server. In the same organizations, they are using OS virtualization for high I/O and production applications because it enables density of up to hundreds of virtual environments per physical server.”
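Virtuozzo's implementation is proprietary, but the reason OS virtualization achieves such density is that every virtual environment shares one running kernel rather than booting its own. The idea can be sketched with Linux namespace primitives; this is a simplified, hypothetical illustration of the kernel-sharing principle, not Virtuozzo's code, and it requires root:

```c
/* uts_container.c - the kernel-sharing idea behind OS virtualization.
 * The child gets its own UTS namespace (hostname view) while sharing
 * the parent's kernel; full OS virtualization isolates far more state
 * (processes, filesystems, network) on the same principle.
 * Build: gcc -o uts_container uts_container.c   Run as root. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg) {
    (void)arg;
    /* Visible only inside the child's private UTS namespace. */
    sethostname("virtual-env-1", strlen("virtual-env-1"));
    struct utsname u;
    uname(&u);
    printf("child sees hostname:  %s\n", u.nodename);
    return 0;
}

int main(void) {
    /* CLONE_NEWUTS gives the child its own hostname/domainname view;
     * both processes keep running on the same kernel. */
    pid_t pid = clone(child, child_stack + sizeof child_stack,
                      CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid < 0) { perror("clone (need root?)"); return 1; }
    waitpid(pid, NULL, 0);

    struct utsname u;
    uname(&u);
    printf("parent still sees:    %s\n", u.nodename);
    return 0;
}
```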

Vasilevsky says companies that plan to virtualize their x86 infrastructure need the right tools and expertise to manage this virtual infrastructure. He notes, “The best server virtualization solutions have built-in capabilities such as Live Capacity and Live Migration (transparent workload migration) that enable users to optimize virtual server utilization across a shared pool of resources.” He says with these types of tools, users can take advantage of policy-driven management capabilities that continuously sample performance data from every server and every virtual server to automatically relocate running OSes and applications from one physical server to another (without losing any state). He adds, “This streamlines the management of the data center greatly while also reducing the potential for error.”
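Virtual Iron's policy engine is likewise proprietary, but the shape of the policy-driven management Vasilevsky describes is easy to sketch: periodically sample per-host utilization and, when a host exceeds a threshold, select a migration target. In the toy simulation below, the host names, loads, and 80% threshold are all invented for illustration, and the migration itself is reduced to a print statement; a real engine would read live telemetry and invoke a live-migration API:

```c
/* rebalance.c - toy sketch of one pass of a policy-driven rebalancing
 * loop. Not Virtual Iron's algorithm: hosts, loads, and the threshold
 * are invented for illustration. */
#include <stdio.h>

#define NHOSTS 3

struct host { const char *name; double cpu_load; };  /* 0.0 .. 1.0 */

int main(void) {
    struct host hosts[NHOSTS] = {
        { "host-a", 0.91 },   /* over threshold */
        { "host-b", 0.35 },
        { "host-c", 0.50 },
    };
    const double threshold = 0.80;

    /* One sampling pass; a real engine repeats this on a timer. */
    for (int i = 0; i < NHOSTS; i++) {
        if (hosts[i].cpu_load <= threshold)
            continue;
        /* Pick the least-loaded destination host. */
        int dst = -1;
        for (int j = 0; j < NHOSTS; j++)
            if (j != i && (dst < 0 || hosts[j].cpu_load < hosts[dst].cpu_load))
                dst = j;
        printf("policy: %s at %.0f%% > %.0f%%; live-migrate a VM to %s (%.0f%%)\n",
               hosts[i].name, hosts[i].cpu_load * 100, threshold * 100,
               hosts[dst].name, hosts[dst].cpu_load * 100);
    }
    return 0;
}
```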

Read the original article at Processor.com.

Published Friday, August 03, 2007 5:18 AM by David Marshall