VMware's Diane Greene recently sat down with Redmond magazine Editor Ed Scannell to talk about the reasons for the growing industry acceptance of virtualization technologies, what it's like to be one of the few companies to successfully fend off Microsoft in a strategically important market, and the prospect of a thriving third-party market for virtualization.
Redmond: How would you characterize the era we're now entering with virtualization technology?
Greene: I would say virtualization has become very much mainstream. In the late 1960s and early 1970s, IBM developed it for mainframes, but it kind of died out. The problem with x86-based processors has been that they were not designed with virtualization in mind whatsoever. There was research done at Stanford by some of VMware's founders around the idea that virtualization could provide isolation for mainstream applications. That's why we founded VMware, really, to bring that to industry-standard systems. I think we developed some important innovations that allowed virtualization to work on industry-standard systems by taking advantage of the extensive support for distributed computing. When we introduced it, we did so as a way to run Linux on Windows in order to get a lot of people to start using it on the desktop. Then, as we started partnering with the server vendors, IBM in particular, they had some large servers where the partitioning aspect of virtualization allowed them to deliver compelling solutions to customers, and so server consolidation took off. It has now moved well beyond that to where people see the power of virtualization to the degree that it's causing an entire industry refresh. You can do all sorts of systems infrastructure functionality in a new and more powerful way.
How soon before we get to the point where we have virtualization for everyone?
Virtualization is definitely headed toward ubiquity. At VMworld [in September] we announced our embedded hypervisor, the ESX3i, and many of the major x86-based hardware vendors announced they will ship servers with an embedded ESX server in them. Anything that's virtualized has more flexibility, better utilization, and stronger reliability and security properties. I'd say there's still some hardware-assist work to be done. We estimate that about 90 percent of applications today belong in virtual machines. Once the final hardware assist is there from the processor and peripheral vendors, all applications will run in virtual machines. What it gives you is this single way to manage your software and manage it completely separately from your hardware.
There's some industry talk about the eventual emergence of a complete virtualization system. What's your vision for that?
Once you have a comprehensive virtual infrastructure in place where you buy servers already virtualization-enabled, where you're running a VMware infrastructure, then you can have hot-pluggable machines. So if you're running out of capacity you can add servers, and through VMware, or some virtual infrastructure, the system will automatically detect that you just added new resources, bring them all online and make them available for applications. With things like our VMotion technology you can automatically move running applications around. Or if you want to take something out of the system to service it, the system will automatically move the applications off with no interruptions, because you have a fully distributed system infrastructure running. A virtual infrastructure really takes all your hardware resources, server, storage and network, and pulls them all together so you can run them as a single system.
So this idea of hot-pluggable virtualization, how far away are we from seeing it on a more widespread basis?
Well, it works today, and we have many customers running over 50 percent of their servers with VMware infrastructure. We have some that run it on 100 percent. You're asking how far away we are from everyone running that sort of virtualized infrastructure? Well, I always tend to be optimistic about adoption, but it always happens more slowly than you'd expect. It'll be sooner rather than later. I get nervous about making predictions these days because now they call them 'forward-looking statements.'
Do you envision a virtualization software stack emerging around a set of industry standards?
Absolutely. In fact, I think we're making progress there. We announced right around VMworld the Open Virtual Machine Format [OVF] that's backed by many hardware vendors and all the virtualization [software] vendors, including Microsoft and XenSource. So right there is a virtual machine that can be self-describing, managed and manipulated, and that contains an operating system and applications. I think this is a big step forward. We work actively with the DMTF [Distributed Management Task Force Inc.], which is a standards group for APIs, formats and protocols for virtual-machine management. So in terms of what there will be for a stack, there's the core virtualization, where the hardware will just come virtualization-enabled. Then you have a full virtual infrastructure that takes that virtualization layer and exposes it to the software in a way that increases the reliability, availability, security, capacity and utilization. Then, on top of that, you'll see vertical solutions, like solutions around desktop hosting, virtual desktop infrastructure, or a solution around how to manage, test and develop through virtualization. What virtualization is making possible is an ability to truly automate the management of the software.
What's the biggest obstacle to establishing meaningful standards in the virtualization market?
There, too, we're starting to make some really good progress. Any standards process I've ever seen has a slower pace than the pace of technology innovation just because it's bringing together a number of different companies all moving at a different pace with different priorities.