Making Sense of Virtualization

Quoting Enterprise OpenSource Magazine

Virtualization addresses a number of today's data center challenges and offers a variety of benefits, including improved hardware utilization, operational efficiency, and data center agility. However, many customers and their technology partners are becoming increasingly frustrated with the proprietary and expensive nature of the available virtualization software solutions. Fortunately, a new wave of virtualization-related technologies is emerging to address these challenges and improve the economics of virtualization.

These emerging solutions are enabling a more dynamic IT infrastructure that helps transform the static, hard-wired data center into a software-based dynamic pool of shared computing resources. They provide simplified management of industry-standard hardware and enable today's business applications to run on virtual infrastructure without modification. Using centralized policy-based management to automate resource and workload management, the solutions deliver "capacity on demand" with high availability built in.

Virtualization 101
Despite the increased need for virtualization and the constant industry discussion around it, many IT professionals still have difficulty grasping the terminology and comprehending the many choices of hypervisors and hardware that make up the complicated virtualization landscape.

Originally part of mainframe technology, virtualization isn't a new concept. It's been applied to various technology problems throughout computing history and is now receiving renewed interest as an approach for managing standardized (x86) servers, racks, and blade systems.

Virtualization lets administrators focus on service delivery by abstracting hardware and removing physical resource management. It decouples applications and data from the functional details of the physical systems, increasing the flexibility with which the workloads and data can be matched with physical resources. This enables administrators to develop business-driven policies for delivering resources based on priority, cost, and service-level requirements. It also enables them to upgrade underlying hardware without having to reinstall and reconfigure the virtual servers, making environments more resilient to failures.

At the core of most virtualization software solutions is a "virtual machine monitor" or "hypervisor" as it's sometimes called. A hypervisor is a very low-level virtualization program that lets multiple operating systems - either different operating systems or multiple instances of the same operating system - share a single hardware processor. A hypervisor is designed for a particular processor architecture such as x86. Each operating system appears to have the processor, memory, and other resources all to itself. However, the hypervisor actually controls the real processor and its resources, allocating what's needed to each operating system in turn. Because an operating system is often used to run a particular application or set of applications in a dedicated hardware server, the use of a hypervisor can make it possible to run multiple operating systems (and their applications) on a single server, reducing overall hardware costs.
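
To make the time-sharing idea concrete, here is a toy Python sketch that round-robins a single "processor" across a couple of hypothetical guests. It is purely illustrative - the guest names and work amounts are made up, and no real hypervisor schedules this way in user-space Python.

    # Illustrative only: a toy round-robin loop that time-slices one CPU
    # across several guest workloads, the way a hypervisor allocates the
    # real processor to each operating system in turn.
    from collections import deque

    class Guest:
        def __init__(self, name, work_units):
            self.name = name
            self.remaining = work_units  # abstract units of work left to run

        def run(self, quantum):
            executed = min(self.remaining, quantum)
            self.remaining -= executed
            print(f"{self.name}: ran {executed} units, {self.remaining} remaining")

    def schedule(guests, quantum=10):
        ready = deque(guests)
        while ready:
            guest = ready.popleft()
            guest.run(quantum)           # one time slice on the shared CPU
            if guest.remaining > 0:
                ready.append(guest)      # still needs CPU; back in the queue

    schedule([Guest("linux-guest", 25), Guest("windows-guest", 15)])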

Server Virtualization versus Data Center Virtualization
Server virtualization is the masking of server resources from server users. The technology can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization, and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization is also seen as a likely requirement for both utility computing, in which computer processing power is treated as a utility that clients can pay for as needed, and grid computing, in which an array of computer processing resources, often in a distributed network, is used for a single application.

While first-generation technologies were limited to working on a single machine or with small clusters of machines, data center virtualization manages the utilization and sharing of many machines and devices including server, storage, and network resources. This enables enterprises to automate numerous time-intensive manual tasks such as provisioning new servers, moving capacity to handle increased workloads, and responding to availability issues. In this environment, any application can run on any machine or be moved to any other machine without disrupting the application or requiring time-consuming SAN or network configuration changes. With these capabilities companies can transform the data center into a manageable and dynamic pool of shared computing resources, enabling IT to rapidly respond to changing business demands and dramatically reduce the costs of managing and operating the data center.
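
As a rough sketch of the kind of policy-based placement decision described above, the Python snippet below assigns a new virtual server to the host with the most spare capacity. The host names, capacity units, and "most headroom" policy are hypothetical illustrations, not any vendor's actual algorithm.

    # Hypothetical placement policy: put each new virtual server on the host
    # with the most free capacity. Capacities are abstract units.
    def place(vm_name, vm_demand, hosts):
        # hosts maps host name -> (capacity, used)
        candidates = [(name, cap - used) for name, (cap, used) in hosts.items()
                      if cap - used >= vm_demand]
        if not candidates:
            raise RuntimeError(f"no host has {vm_demand} free units for {vm_name}")
        host, _free = max(candidates, key=lambda c: c[1])
        cap, used = hosts[host]
        hosts[host] = (cap, used + vm_demand)
        return host

    hosts = {"host-a": (32, 20), "host-b": (32, 8), "host-c": (16, 4)}
    print(place("web-01", 8, hosts))  # -> host-b, the host with the most headroom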

What About Xen?
Xen is a new Open Source hypervisor that is quickly being embraced as an industry standard. It supports the execution of multiple guest operating systems with very efficient levels of performance and resource isolation. Xen lets different operating systems such as Windows and Linux share the same server, and lets development and test systems run at the same time on the same hardware. It has a broad ecosystem that includes all the major processor manufacturers, server companies, and operating system providers. These companies are working together to deliver enterprise-class virtualization functionality based on industry standards. Besides driving innovation and building new solutions around the Xen standard, this ecosystem has also formed an extended testing team, further driving quality improvements.
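
For a sense of what running a guest under Xen looked like, the xm toolstack of the era read guest configuration files written in Python syntax. The minimal sketch below shows the general shape of such a file; the guest name, kernel path, disk image path, and resource sizes are placeholders.

    # A minimal Xen guest configuration (Python-syntax file read by the xm
    # toolstack). Paths, names, and sizes are placeholders.
    name   = "demo-guest"                              # name shown by "xm list"
    kernel = "/boot/vmlinuz-2.6-xen"                   # paravirtualized guest kernel
    memory = 256                                       # MB of RAM for the guest
    vcpus  = 1                                         # virtual CPUs
    disk   = ["file:/var/xen/demo-guest.img,xvda,w"]   # file-backed disk, writable
    vif    = [""]                                      # one default network interface
    root   = "/dev/xvda ro"                            # root device passed to the kernel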

Open Source technologies like Xen have a history of providing improved functionality, better performance, and lower total cost of ownership than proprietary technologies. Since Xen is free, it's rapidly making its way into commercial offerings and end-user solutions. And as virtualization solution costs come down, it becomes feasible to deploy virtualization to every server throughout an enterprise IT infrastructure. History shows that Open Source offerings, once generally accepted, tend to catch up with their proprietary counterparts quickly. Not since the Linux and Apache Open Source projects has such a large Open Source community and ecosystem formed so quickly. Although the current proprietary offerings have a few years' head start on Xen, the gap is expected to close quickly. The project and ecosystem have reached critical mass, and the Xen hypervisor is emerging as the de facto standard. The tidal wave of innovation has begun.

Understanding Native Virtualization
Also relatively new to the market is what's known as "native virtualization," a method that improves on previous implementations by capturing the benefits of the other approaches without their performance and management challenges. Previously, when deciding on a virtualization implementation, companies chose between operating system (OS) virtualization, full virtualization, and paravirtualization.

Native virtualization is similar to full virtualization in how it supports a partitioned server running disparate guest operating systems "as is." This includes support for 32- and 64-bit applications and operating systems running concurrently. It also preserves investment in current certified software stacks, eliminating the need to change or upgrade operating systems to run on the latest hardware. Although native virtualization is similar to full virtualization, there are major differences that improve efficiency and manageability. Unlike full virtualization, native virtualization doesn't rely on binary translation to emulate non-virtualizable x86 instructions. Instead it uses hardware virtualization assistance on the latest processors from Intel (Intel VT) and AMD (AMD-V) to permit each guest operating system to run at full processor speed. Native virtualization also doesn't require a complete instance of a host operating system to be installed and maintained. Instead, it uses small standalone virtualization services software running in a service partition to communicate with the hypervisor. Removing the complete host operating system greatly simplifies maintenance and management, since there's no host operating system or host-based virtualization software to install and maintain.
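
On Linux, one quick (if rough) way to see whether a server exposes these hardware-assist extensions is to look for the "vmx" (Intel VT) or "svm" (AMD-V) CPU flags. The small Python check below is only a sketch: it reports what the processor advertises, and firmware settings can still disable the feature.

    # Rough Linux-only check for the hardware virtualization extensions
    # mentioned above: "vmx" indicates Intel VT, "svm" indicates AMD-V.
    def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
        flags = set()
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return "Intel VT (vmx)"
        if "svm" in flags:
            return "AMD-V (svm)"
        return None

    print(hardware_virt_support() or "no hardware virtualization extensions reported")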

It's a common misperception that hardware-assisted virtualization minimizes the role and value of virtualization software. It's actually just the opposite. The new processors from Intel and AMD add other capabilities that greatly simplify and improve virtualization software performance. Without virtualization software, such as the Xen hypervisor and other virtualization services and virtualization management capabilities, you have only a standard server that can run one operating system.

Native virtualization leverages these hardware-assisted virtualization extensions to support virtualization software in an integrated and seamless fashion improving the efficiency, performance, and security of virtual servers. By providing a new privilege layer for virtual servers, and supporting key virtualization functions in hardware, this technology will simplify virtual server development and maintenance, improve interoperability with legacy operating systems, enhance security and reliability, and reduce the cost and risk of implementation. These extensions to the chip architectures will help commercial vendors deliver products that reduce the cost and risk of implementing server virtualization solutions and increase the reliability, availability, and security of applications running in virtual partitions.

Previously, companies chose from three differing proprietary approaches when deciding on virtualization for x86-based processors. One of the approaches is known as "full virtualization." Here the hypervisor provides a fully emulated x86-based virtual server where unmodified operating systems can run. Another implementation is "OS virtualization," in which a host operating system (single kernel image) multiplexes one operating system kernel to look like multiple operating system instances. A third approach is "paravirtualization" (partial virtualization), which uses slightly modified/customized versions of the operating system kernel to replace non-virtualizable x86 instructions with virtualization APIs. All three of these proprietary approaches have advantages and disadvantages as they pertain to performance, efficiency, management, and maintainability. Challenges include lack of standards, performance overhead and degradation, and the need to modify operating systems in some cases. There's also the complex management and excessive administration overhead that come from maintaining virtualization software.

Choosing the Right Path
With the emergence of new virtualization technologies, the challenge for users is making sense of what's available in the virtual world and creating an environment that delivers the promise of improved performance, reliability, and total cost of ownership while preserving investments in their existing software stack.

The benefits of standards-based products are well known and well understood. Customers benefit from "vendor choice," which reduces upfront and ongoing capital expenditures. With standards in place, IT managers can also tap a large pool of available professionals with required skill sets (e.g., Linux, J2EE, etc.). This reduces personnel costs and improves productivity. Other benefits include increased agility, flexibility, and interoperability. Industry standard solutions promote common approaches and architectures for business applications, making it easier to integrate new applications and functionality into core business processes and architectures. This interoperability promotes application agility and allows for a rapid response to changing business conditions.

It's difficult and rare to find an off-the-shelf product that delivers a total solution or precisely matches the features and requirements a business needs. It's usually necessary to integrate different software products and system management tools from different vendors, and integration is made easier by standard interfaces and protocols. A standards-based infrastructure also leads to a more stable environment because industry standards are typically backed by an ecosystem of vendors who support the standard and evolve it conservatively so as not to cause major disruption. A standardized environment increases the reliability of an infrastructure and reduces the time to repair it, because support staff have fewer products to master and can start from well-known, documented capabilities.

Time To Virtualize
Virtualization is not only helping control costs and deliver the agility, manageability, and utilization that IT leaders covet, but it's also becoming a necessity in enterprises to control and maintain everyday activities. The single most popular use and often the initial application of server virtualization software is partitioning, which lets administrators put multiple virtual servers, each with its own unique operating system instance, on a single physical server. By doing this, IT administrators can consolidate their physical infrastructure, preserve their investment in existing operating systems and applications, and get more from their hardware investments. At the same time, more mature users of virtualization are getting additional business value from applying it as part of their provisioning, business continuity, and capacity management strategies.
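
A back-of-the-envelope calculation shows why partitioning pays off. The utilization figures in the sketch below are hypothetical, not measurements.

    # Hypothetical consolidation math: 20 physical servers averaging 10% CPU
    # utilization, consolidated onto hosts run at a 60% target, need roughly
    # ceil(20 * 0.10 / 0.60) = 4 machines.
    import math

    def hosts_needed(server_count, avg_utilization, target_utilization):
        return math.ceil(server_count * avg_utilization / target_utilization)

    print(hosts_needed(20, 0.10, 0.60))  # -> 4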

Virtualization is being overwhelmingly accepted in the market, and this has caused the number of virtualization vendors to rise. Although weighing the benefits of the many vendors and options may seem daunting, IT organizations should simply look for solutions that leverage these advances and new technologies to further improve the ROI of virtualization. The rest will fall into place.

Read the original here.

Published Tuesday, October 17, 2006 11:18 AM by David Marshall