Virtualization Technology News and Information
Six Virtualization Predictions for 2010

Contributed article by Jon Toor, VP Marketing, Xsigo Systems

Virtualization technology is truly a game-changer for the IT world, as VMworld 2009 demonstrated: the event was exceptionally dynamic, in both the technology and the market conditions. Enthusiasm there reflected an eagerness to put virtualization to work, setting the stage for 2010. For the coming year, that will mean an acceleration of really cool implementations, many new use cases, new best practices, and the melding of various virtualization technologies into cohesive solutions. With virtualization now stretching end-to-end in the data center -- encompassing servers, I/O, storage, and even management -- here are a few predictions of what's to come.

1) Virtualization transforms the fluffy "cloud" into a solid solution

A year ago, many IT managers perceived cloud computing to be aptly named: the concept seemed fluffy and lacking in substance. Since then, the technologies have come together to create a much more tangible story. What's emerged is a multi-layered approach. Enhanced virtualization software now manages VMs across larger environments. New hardware elements virtualize the I/O to deliver far more flexible interconnects. The coming year will bring tools to further integrate virtual I/O and virtual machine management, making the "cloud" seem even more cohesive. But the combination is already proven in enterprise environments, delivering the most valuable advance of all to IT managers: success blueprints for solutions that achieve unprecedented efficiency.

2) No more sacred cows: all applications will be fair game for virtualization

In the recent past, some applications were perceived as off-limits to virtualization (Exchange, anybody?). That was a reasonable restriction back in the Wild West days, when virtual machines duked it out for limited processor and I/O resources. But those days are gone. Nehalem processors combine more compute power with far more I/O bandwidth capacity, eliminating I/O bottlenecks at that level. Virtual I/O provides more bandwidth to the server (up to 80Gb/server with Xsigo) and the ability to guarantee bandwidth to specific virtual machines through hardware-enforced QoS on both storage and networking traffic. The combination makes it far more feasible to run applications on VMs that would otherwise require a dedicated machine.
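To make the QoS idea concrete, a bandwidth guarantee boils down to an admission-control rule: a new per-VM guarantee is accepted only if the sum of all guarantees still fits within the link's capacity. The sketch below models only that concept; the function names and VM figures are hypothetical, and only the 80Gb/server capacity comes from the text above.

```python
# Illustrative sketch of admission control behind bandwidth guarantees.
# Names and per-VM numbers are hypothetical, not Xsigo's actual interface;
# the 80 Gb/server capacity figure is cited in the article.

LINK_CAPACITY_GBPS = 80  # per-server virtual I/O bandwidth cited above

def can_guarantee(existing_guarantees, requested_gbps, capacity=LINK_CAPACITY_GBPS):
    """Admit a new per-VM bandwidth guarantee only if the total of all
    guarantees still fits within the link capacity."""
    return sum(existing_guarantees) + requested_gbps <= capacity

# Example: three VMs already hold guarantees of 20, 10, and 10 Gb.
vms = [20, 10, 10]
print(can_guarantee(vms, 30))  # 40 + 30 = 70 <= 80 -> True
print(can_guarantee(vms, 50))  # 40 + 50 = 90 >  80 -> False
```

Once admitted, the guarantee is enforced in hardware, so a noisy neighbor cannot starve a critical VM of I/O.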

3) Virtualization resurrects "autonomic" computing: the return of 2001: A Space Odyssey

When IBM launched "autonomic computing" in 2001, it was more science fiction than computer science. But that's changing now. With systems becoming virtualized end-to-end, self-healing is becoming a reality. Critical to this is virtual I/O, which makes it easy to create redundant I/O paths everywhere and to instantly move those paths to another server when issues occur. In addition, I/O attributes can automatically follow virtual machines as they migrate among servers. IT managers who enable Xsigo's real-time monitoring get immediate, 24x7 reporting when I/O faults are detected -- something no Cat 5 cable can provide. In the coming year we'll see software that recognizes alert conditions -- across servers and I/O together -- and proactively recommends corrective action. The combination is a much smarter, more resilient infrastructure than was possible when IBM announced its 2001 initiative.
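The redundant-path idea described above can be sketched in a few lines: each server holds an active and a standby I/O path, and a fault on the active path triggers an immediate switch. This is a minimal conceptual model under assumed names (the "io-director-A/B" labels are hypothetical), not any vendor's implementation.

```python
# Minimal, hypothetical model of redundant I/O path failover.
# Path names are illustrative only, not a real product's identifiers.

class RedundantIOPath:
    def __init__(self, primary, standby):
        self.paths = [primary, standby]
        self.active = 0  # index of the path currently carrying traffic

    def on_fault(self, failed_path):
        """Fail over to the other path when the active one reports a fault;
        a fault on an already-inactive path changes nothing."""
        if self.paths[self.active] == failed_path:
            self.active = 1 - self.active
        return self.paths[self.active]

link = RedundantIOPath("io-director-A", "io-director-B")
print(link.on_fault("io-director-A"))  # traffic moves to io-director-B
print(link.on_fault("io-director-A"))  # already failed over; stays on io-director-B
```

The self-healing layer the article predicts would sit on top of logic like this, correlating such faults across servers and I/O and recommending corrective action.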

4) Compute density makes a quantum leap thanks to virtualization

Space requirements and server costs are about to take a giant step... down. Server virtualization was great for consolidating servers, but the conventional wisdom used to be that you'd consolidate onto a big server with lots of room for memory and I/O. Two years ago, many IT managers believed that compact 1U servers made no sense to virtualize. Well, at this year's VMworld event, VMware ran all of its booth demos on servers that each consumed just half of a 1U space. Each had 48GB of RAM and up to 64 I/O connections (with just two PCI slots consumed). The result: this year the demos required just one rack, versus 14 racks at last year's event. And they ran flawlessly. That represents a massive savings in space, power, and capital expense.
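A quick back-of-the-envelope calculation shows what that density implies. The half-1U form factor and 48GB figure come from the text above; the 42U rack height is an assumed common standard, not a figure from the article.

```python
# Back-of-the-envelope density math based on the figures in the article.
# The 42U rack height is an assumption (a common standard), not from the text.

RACK_UNITS = 42            # assumed standard rack height
SERVERS_PER_U = 2          # each demo server occupied half of a 1U space
RAM_PER_SERVER_GB = 48     # per-server RAM cited above

servers_per_rack = RACK_UNITS * SERVERS_PER_U
ram_per_rack_gb = servers_per_rack * RAM_PER_SERVER_GB

print(servers_per_rack)  # 84 servers in a single rack
print(ram_per_rack_gb)   # 4032 GB of RAM in that rack
```

At that density, a fully populated rack holds dozens of virtualization hosts, which is consistent with the 14-to-1 rack reduction the article reports.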

5) Server I/O gets much simpler... and much more complex

Connectivity consolidation has arrived just in the nick of time. Without it, we'd soon be drowning in complexity. A few years ago, most server connectivity was 1G Ethernet and 4Gb Fibre Channel. Now it's those, plus 10G Ethernet, CEE (a variant of 10G, but not the same thing!), 8Gb FC, iSCSI, and FCoE. And looking forward, 40G Ethernet will again use a different connector from 10G. Managing all of this variation at the server level would be a nightmare. Virtual I/O greatly simplifies the situation. Servers are wired just once, with two high-speed cables to each server. The aggregation layer (the I/O Director) becomes the point where all the various I/O types are connected, and they all share the same cables to the servers. It's much simpler to manage complexity on one I/O device that's shared by 100 servers than to manage 100 servers individually.
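The cabling arithmetic makes the point concrete. The two-cables-per-server and 100-server figures come from the text; the per-server NIC and HBA counts for the conventional case are assumed typical values, not figures from the article.

```python
# Illustrative cable-count comparison for the consolidation described above.
# The conventional per-server NIC/HBA counts are assumed typical values,
# not figures from the article.

SERVERS = 100

# Conventional wiring: assume 4 Ethernet NICs + 2 Fibre Channel HBAs per server.
conventional_cables = SERVERS * (4 + 2)

# Virtual I/O: two high-speed cables per server to the I/O Director.
virtual_io_cables = SERVERS * 2

print(conventional_cables)  # 600 cables to run, label, and troubleshoot
print(virtual_io_cables)    # 200 cables, with all I/O types sharing them
```

Under those assumptions the cable count drops by a factor of three, and every future media change (8Gb FC, FCoE, 40G Ethernet) lands at the I/O Director instead of at each server.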

6) Virtual I/O becomes mainstream

Technologies require time to mature and to gain acceptance. September 2009 marked the fifth anniversary of Xsigo as a company, and the second anniversary of Xsigo's virtual I/O product launch. The virtual I/O concept, an unheard-of idea back in 2007, is now a vibrant market space with multiple vendors, vigorous debate over competitive advantages, and customers who have proven the benefits. All of which is essential to making the jump to the mainstream. In this year's "technology hype cycle" curves, Gartner showed virtual I/O on the "mature solution" part of the curve, which is where you'd expect to see a technology that's poised to take off. Virtual I/O has become mature at the exact moment when cloud computing initiatives demand the benefits it provides. And that's the formula for widespread adoption in 2010.

About the Author

Jon Toor, VP Marketing, Xsigo Systems

As Vice President of Marketing, Mr. Toor brings over 20 years of storage experience to Xsigo. Prior to Xsigo, he served at ONStor as Vice President of Marketing. Before ONStor, Mr. Toor was Senior Director of Marketing at Maxtor, leading the marketing department of the company's Network Systems Group, a startup NAS vendor. Prior to that, Mr. Toor served for two years as Vice President of Marketing for Micropolis, a developer of hard disk drives. He also worked at Quantum as Director of Marketing, managing the strategic direction of the company's enterprise storage products, and at Seagate, where he served as an engineering manager. Mr. Toor holds a B.S. in Mechanical Engineering, a B.A. in Economics, and an MBA, all from Stanford University.

Published Thursday, December 03, 2009 5:35 AM by David Marshall