Virtualization in the Enterprise: Trends in 2009

What do virtualization executives think about 2009?  A VMBlog.com Series Exclusive.

Contributed by Adam Hawley, Director of Product Management for Oracle VM 


Virtualization has been part of the enterprise data center for a long time, mostly on mainframes and proprietary hardware.  For industry-standard hardware, 2008 was the year in which virtualization made the jump from workstations onto servers in the data center, but generally for less critical or non-production workloads.  That’s changing: in 2009, virtualization should become commonly deployed to host mission-critical workloads, with VMs much larger than those typically seen in the past.  This is both a result of advances in virtualization software and hardware and a reflection of user demands, as seen in several key trends:

Trend number 1: Larger, more mission critical applications will get virtualized on industry standard servers in the enterprise production data center.

Two things are driving this trend.  First, virtualization solutions for industry-standard servers are now available that are more scalable than in the past, particularly with regard to I/O but also in terms of CPU and multi-threading support, so performance on larger workloads is far better.  For example, Xen-based hypervisor architectures, with support for paravirtualized guest operating systems, more streamlined I/O channels, and more efficient clock timer management, impose far less overhead than earlier architectures, which is incredibly important for supporting, say, I/O-intensive databases that typically have large numbers of processes.
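To make that concrete, here is a minimal sketch (not from this article) of checking which guests on a Xen host are running paravirtualized versus fully virtualized, using the libvirt Python bindings.  The connection URI and the exact OSType() strings are assumptions that can vary by libvirt and Xen version.

    # Sketch: list running Xen guests and report paravirtualized vs. fully
    # virtualized, plus each guest's vCPU and memory allocation.
    # Assumes the libvirt Python bindings and a local Xen host.
    import libvirt

    conn = libvirt.open('xen:///')           # connect to the local Xen hypervisor
    try:
        for dom_id in conn.listDomainsID():  # IDs of running domains
            dom = conn.lookupByID(dom_id)
            state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
            # OSType() is typically "linux" for Xen paravirtualized guests and
            # "hvm" for fully virtualized ones (strings vary by version).
            kind = 'fully virtualized' if dom.OSType() == 'hvm' else 'paravirtualized'
            print("%s: %s, %d vCPUs, %d MB" % (dom.name(), kind, vcpus, mem_kb // 1024))
    finally:
        conn.close()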

Industry-standard hardware has increased in horsepower as well, making consolidation of several large VMs on one server practical.  There are now eight-socket, quad-core industry-standard servers with up to 512GB of memory available on the market.  That is a lot of horsepower, and more than adequate for virtualizing several fairly large and heavily loaded production workloads that previously could not be consolidated.
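To put that horsepower in perspective, here is a quick back-of-the-envelope sizing sketch, not a published sizing guideline: the host figures match the eight-socket, quad-core, 512GB class of server mentioned above, while the per-VM profile and headroom are illustrative assumptions.

    # Rough consolidation sizing with illustrative numbers.
    host_cores = 8 * 4            # 8 sockets x 4 cores per socket
    host_mem_gb = 512
    headroom = 0.20               # keep ~20% free for the hypervisor and load spikes

    vm_profile = {"vcpus": 4, "mem_gb": 32}   # a fairly large production VM (assumed)

    vms_by_cpu = int(host_cores * (1 - headroom) / vm_profile["vcpus"])
    vms_by_mem = int(host_mem_gb * (1 - headroom) / vm_profile["mem_gb"])

    print("CPU-bound limit: %d VMs, memory-bound limit: %d VMs" % (vms_by_cpu, vms_by_mem))
    print("Practical density: about %d large VMs per host" % min(vms_by_cpu, vms_by_mem))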

The second factor driving this is user demand for increasingly dense server consolidation, in order to become more “green” and further reduce expenses associated with power, cooling, and real estate.  So how do you further increase your hardware, power, and cooling efficiency?  You start to look at the servers hosting databases and applications that were not previously practical to virtualize but that can be now, given hardware and software advances.

Trend number 2:  Virtual and physical worlds start to be managed together from the same tools, but users will also want an integrated, top-to-bottom view of their application stack.

Given the increasing virtualization of larger and more mission-critical workloads, user demand for a well-integrated solution for managing the physical and virtual worlds together will also be on the rise, and solutions for this will become increasingly available, in contrast to 2008, when virtualization management was generally provided separately from the tools for managing physical servers.

But just managing VMs and physical servers from the same tools is ultimately not enough: users also need to manage the stack top to bottom in one integrated solution, with the ability to see everything from the top-level application down through the middleware, database, OS, virtualization layer, and hardware in a single view, so that they have a complete picture of their data center.

Virtualization brings agility to the data center, but the value of that agility is wasted if the operator (or, indeed, the automation doing things like dynamic resource allocation) does not have high-quality context for making smart choices: it is not useful to be able to drive fast if you have to drive blind.  In 2009, users may be increasingly frustrated at only being able to see the status of the virtualization layer from its own management tool.  While solutions for top-to-bottom management that integrate both the workload and the virtualization will only just start to become available, demand will grow in 2009 for a more integrated approach that includes virtualization in its more mission-critical role.

Trend number 3:  Increasing focus on leveraging virtualization to deploy workloads faster, easier, and more reliably in the enterprise.

As virtualization becomes more widespread and better understood as a platform, 2009 will show increasing interest in leveraging virtualization up the stack to make application and workload deployment faster, easier, and more reliable (less error-prone).

There is much talk and debate about “cloud computing,” and I think it can fairly be said that, whatever your opinion, the excitement reflects the demand of users to quickly, easily, and reliably deploy applications and workloads into an environment where they don’t have to worry much about the details of what is under the covers.  Enterprise users want this for their internal data centers too.  Whether you call this the “private cloud” or not, the key point is that demand for fast, easy, and reliable workload deployment on top of virtualization will grow in the enterprise in 2009.

Users will likely want to push as much of the complexity of deploying workloads as possible onto their application and virtualization vendors.  Few vendors have everything they need to do this today, since it requires not only the virtualization platform and the enterprise software running inside those VMs but also, crucially, an operating system to include in the “appliance” or “template” solution that gets distributed.  In the absence of any of the three critical components (virtualization layer, OS, or applications), the ability of any vendor to provide a complete solution is limited.

New standards such as the DMTF Open Virtualization Format (OVF) aim to help with some key parts of this: OVF describes how a virtual machine (VM) or a collection of VMs can be packaged and described in a standardized way, so that any virtualization solution knows how to deploy them.  OVF should be a very useful part of making workload deployment easier in 2009 as products begin to adopt it.
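As a rough illustration of what a standardized descriptor buys you, the sketch below reads an OVF file with nothing more than Python’s standard XML library and lists the virtual systems and disk files it references.  The file name is hypothetical, and the element and attribute names reflect the OVF envelope format as commonly published, which may differ between draft and final versions of the standard.

    # Sketch: inspect an OVF descriptor (.ovf is plain XML).
    import xml.etree.ElementTree as ET

    def local(name):
        """Strip the XML namespace, leaving the local element/attribute name."""
        return name.rsplit('}', 1)[-1]

    def attr(elem, name):
        """Look up an attribute by its local (namespace-free) name."""
        for key, value in elem.attrib.items():
            if local(key) == name:
                return value
        return None

    tree = ET.parse('appliance.ovf')            # path is illustrative
    for elem in tree.iter():
        if local(elem.tag) == 'File':            # files (e.g. disk images) referenced
            print('file:', attr(elem, 'href'))
        elif local(elem.tag) == 'VirtualSystem':
            print('virtual system:', attr(elem, 'id'))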

------

2009 is the year when the perspective of the virtualization user changes from “How can I implement it?” to “How can I leverage it to make the rest of my data center better?”: How can I make it easier to deploy, manage, and support my mission-critical application workloads?  Answering that question is where the promises of virtualization start to become powerfully realized.

About the Author

Adam Hawley is the Director of Product Management for Oracle VM.  He has more than 15 years of experience in product management of virtualization, system management, and operating system software.  Prior to Oracle, Mr. Hawley held product management positions at Sun Microsystems, HP, and NCR.

Published Wednesday, December 17, 2008 6:29 AM by David Marshall
Comments
Interphase - December 17, 2008 1:25 PM

You make some great points, especially as they pertain to management of the physical and virtual components from one “single pane of glass” within the data center.  This is critical!  But it will need to be taken one step further.  Having the ability to manage these resources on a daily basis is a good start, but having the ability to manage them in the face of a disaster, from that same pane of glass, is another area where I think we’re going to see additional advances.

Lew Smith

www.interphasesystems.com
