Virtualization Technology News and Information
Navigating the System Virtualization Maze -- Part 2

Quoting Sys Admin 

The virtualization maze is not getting any easier for systems managers, as there are options aplenty and plenty of marketing to go along with them. Last month, in part 1 of this column, I provided the first half of my map through the virtualization maze. It included discussion of the benefits of virtualization, the various types of system virtualization, and the data you need to make an informed virtualization technology decision.

As a brief review, the questions that I suggested should be answered before any virtualization decisions can be made fall into several broad categories. Here I expand the questions with information on why they are important to your overall virtualization strategy.

  • What problem are you trying to solve with system virtualization? As with all projects, make sure you know the problem and that the project solves the problem. (This is obvious, except when it is forgotten.) -- Virtualization does not solve "all" system problems, and sometimes creates problems. Be sure you end up solving more problems than you create.
  • What hardware needs to be virtualized (specific CPU architectures, other components)? -- Hardware requirements can eliminate some virtualization solutions.
  • What operating systems need to be virtualized? -- Again, these requirements can limit the solutions available.
  • What applications will run within the virtualized environments? -- Application support is one of the biggest areas of risk in system virtualization.
  • What are the performance criteria of the current systems and applications? -- Virtualization decreases available resources on the target platform.
  • What are the expected future performance requirements (i.e., resource use growth)? -- Future growth could mean today's perfect fit is tomorrow's can't fit.
  • Do you have "good" systems administration methods in place today (e.g., patching, backups, security)? -- All forms of system virtualization end up creating more instances of operating systems or applications to administer. This could multiply current administration issues.
  • How many systems, operating systems, and applications would be virtualized? -- The smaller the environment, the less worthwhile the time, effort, and money needed to virtualize it.
  • What is the infrastructure around the virtualization (networking and storage)? -- Concentrating resource use into fewer systems can increase the use of external resources (like networking and storage). Is the infrastructure ready to handle increased load?
  • What conversions would be needed between physical and virtualized systems (known as "P to V" conversions)? -- These answers can limit the solution choices.
  • Will movement of a virtualized entity back to a physical entity (known as "V to P" conversion) be needed? -- For example, what if you want to stop using a virtualization technology?
  • What are the business requirements of the resulting environment (uptime, DR, resource flexibility, supportability, manageability)? -- Without business drivers, funding is difficult to come by. If a project doesn't end up solving business needs, future projects will be more difficult to get funded.
  • What are the requirements around managing resources and limiting their use? -- Some virtualization technologies allow very fine-grained resource management and limits, and some don't.
In part 2 of this column, I'll start from these data points and discuss how to analyze that information to determine the best virtualization solution for your environment. I'll also look at the darker aspects of the technologies and their deployment. By the end of this column, you should have a complete picture of the virtualization options, the choices to be made, and all of the gory details that can get you into or out of the virtualization maze.

Questions and Answers

The questions asked above are all about the problem you are trying to solve via virtualization. The answers to these questions can drive both the types of virtualization that could be brought to bear on your problem and the evaluation criteria for selecting the best virtualization solution. For example, consider this scenario:

"We have 40 older UltraSPARC III-based Sun systems, mostly running home-grown applications and using high amounts of CPU but low amounts of disk and network I/O. We want to reduce the number of systems in our datacenter (and thus reduce power, cooling, and data center space use) and decrease system administration time. Resource growth is 10% per year, we patch systems manually, and if we virtualize these systems we shouldn't need to go back to physical systems. We have no strict uptime or DR requirements."

In this scenario, only one hardware platform (SPARC) and one operating system (Solaris) need to be virtualized. This should be a good fit for using Solaris containers on multiple, fast, modern Sun systems. But further consideration should be given to application support (do the applications run well in containers?) and to how the applications would be moved to the new systems. The new systems would have to be sized to handle the current load plus 10% growth per year for the expected life of the facility.
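The sizing arithmetic above amounts to a compound-growth capacity check. A minimal sketch follows; the load figures and per-host capacity are hypothetical placeholders, not numbers from the scenario, and real sizing should start from measured utilization data:

```python
import math

# Capacity check for consolidating many old servers onto fewer new hosts.
# All workload figures below are hypothetical; substitute measured data.

def required_capacity(current_load, growth_rate, years):
    """Load after compound growth of `growth_rate` per year for `years` years."""
    return current_load * (1 + growth_rate) ** years

# Say the 40 old systems together average 120 "CPU units" of load today,
# and the new hosts must last 5 years at 10% annual growth.
future_load = required_capacity(120.0, 0.10, 5)
print(f"Load to size for: {future_load:.1f} units")  # ~193.3 units

# If each new host supplies 48 units of capacity (with headroom reserved):
hosts = math.ceil(future_load / 48)
print(f"Hosts required: {hosts}")  # 5 hosts instead of 40 servers
```

The point of the exercise is that 10% annual growth compounds: over five years it adds roughly 60% to today's load, which is easy to underestimate when sizing by eye.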

Another consideration of all virtualizations is downtime. If N systems are virtualized onto one, then all N are affected when the system has to be taken down. Individual operating systems or virtual environments can be rebooted independently, but a systemic failure or scheduled downtime has a much larger impact after virtualization.
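The amplified impact of host downtime can be made concrete with simple arithmetic. The figures in this sketch are illustrative, not drawn from the article:

```python
# Downtime amplification after consolidation: one host outage now affects
# every guest on that host. Figures below are illustrative only.

def app_downtime_hours(guests_per_host, outage_hours):
    """Total application-hours of downtime caused by one host outage."""
    return guests_per_host * outage_hours

# Before virtualization: a 2-hour outage on one box affects one application.
print(app_downtime_hours(1, 2))   # 2 application-hours

# After consolidating 10 systems onto one host, the same 2-hour outage costs:
print(app_downtime_hours(10, 2))  # 20 application-hours
```

This is why scheduled-maintenance windows and failure domains deserve fresh scrutiny after consolidation, even when each guest can still be rebooted independently.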

Of course, there are many other scenarios, and many possible solutions. But never lose track of the option to not virtualize. As powerful a tool as virtualization is, it is not a tool that solves all problems. Throughout this column are discussions of the pros and cons of virtualization and the issues that can steer your virtualization decisions.

Other Metrics to Consider

Beyond the needs of your site, there are other metrics to weigh when considering a virtualization product. As well as the "usual suspects" of pricing, support, and licensing terms, there are some virtualization-specific aspects.

  • The time and effort it takes the technology to provision a new virtual instance (operating system, application).
  • Likewise, the time and effort to provision an application within that instance.
  • The time and effort it takes to move an application from its physical machine to its virtual environment ("P to V").
  • If applicable, the time and effort it takes to move a virtual entity between systems.
  • The time it takes to boot, shut down, copy, or replicate a virtual entity.
  • The non-monetary costs of virtualization, including training, deployment effort, maintenance effort, monitoring, and management effort not only for the virtualization technology but for the virtualized entities as well.
  • The hardware (servers, storage, networking) supported by the virtualization technologies is usually a subset of the hardware supported by the individual applications and operating systems you want to virtualize. Compare what you have to what is supported to minimize hardware costs. The supported resources may also be limited compared to native hardware, such as the number of CPUs a virtual machine can see, or the amount of memory a virtual machine can use.
  • The impact on operating system and application licensing for use within the virtualization technology. This aspect could have a major effect on the cost of the virtualization solution. Will you have to pay more for your operating systems and applications than you did when they were on separate systems? For example, an application that is licensed per-CPU could end up requiring all of the CPUs in the system to be licensed, rather than those just dedicated to the application's virtual environment.
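The per-CPU licensing trap in the last bullet is easy to quantify. A sketch with hypothetical prices and CPU counts; actual licensing terms vary by vendor and must be checked against your contracts:

```python
# Per-CPU application licensing before and after consolidation.
# The price and CPU counts are hypothetical; check your vendor's terms.

PRICE_PER_CPU = 2_000  # hypothetical license cost per CPU

# Physical: the application ran on its own 2-CPU server.
physical_cost = 2 * PRICE_PER_CPU

# Virtualized worst case: the host has 16 CPUs, and the vendor counts
# every CPU in the box, even though the application's virtual
# environment is capped at 2 CPUs.
worst_case_cost = 16 * PRICE_PER_CPU

print(f"Physical:    ${physical_cost:,}")    # $4,000
print(f"Virtualized: ${worst_case_cost:,}")  # $32,000, an 8x increase
```

Some vendors do license by the CPUs visible to the virtual environment rather than the whole machine, so this line item can swing a consolidation business case in either direction.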


The Future of Virtualization

In the future, application deployment could move from its current state of time-consuming complexity to a simple plug-and-play model. Some steps in this direction have already been taken. Consider the VMware Player and the "virtual appliances" that can run in it. These appliances are pre-built images of installed applications, typically built by the company that wrote the application.

Deploying one of these appliances (and thus an application in an appliance) is a simple matter of downloading an image and loading it into VMware. If all application vendors moved to this model, and if there were a universal virtual appliance format to allow appliances to run in the myriad virtualization environments, the world of systems administration would be a much happier place. Applications would be hardware independent and trivial to move, upgrade, patch, and so on. The need for hardware/software appliances to ease deployment and management would greatly diminish as software appliances become the mainstream solution. Granted, these are big "ifs", but there are signs that such a future is possible.

Another future direction that at this point seems certain is the integration of virtualization into mainstream operating systems. The Xen community and its many branches have a good start on this, and the project appears to be headed toward being a de facto virtualization standard on Unix/Linux systems. Red Hat, SUSE, and Solaris are all in the midst of integrating Xen into their kernels. SUSE has already released SUSE Linux Enterprise 10, which includes integrated Xen, but overall Xen integration into kernels is in its early stages.

More hardware support for virtualization is a certainty as CPU vendors vie for the virtualization business. Sun has been using partitioning for a long time but recently added LDoms on the T1 CPU. Both Intel and AMD have announced future improvements to their CPUs to accelerate virtualization. The next step in this process appears to be "nested page tables", which will allow the system to maintain a per-virtual-machine page table view. This should greatly speed memory management. Beyond that step, it appears that support for accelerated I/O virtualization will be added.

If you are currently using Solaris 10, you should consider adding at least one zone to each Solaris 10 system you are deploying, then deploying all applications and building all user accounts inside that zone. Be sure that the applications are supported within a zone, as not all are. The result of this approach is that security and manageability are increased on that system, and because of the forthcoming mobility of zones, you'll be able to detach that zone from one system and attach it to another for upgrades and many other purposes. Of course, keeping the zone on shared storage will make these activities easier.

Read the entire article here.

Published Thursday, March 29, 2007 6:11 AM by David Marshall