What do virtualization executives think about 2009? A VMBlog.com Series Exclusive.
Contributed by Dr. Stephen Herrod, CTO, VMware
Top 10 Predictions for Virtualization in 2009
As the woes of the world economy extend into 2009, governments and businesses will be increasingly forced to do more with less. This is one of the key value propositions of virtualization, and as a result, virtualization is increasingly at the top of the list of strategic priorities for organizations worldwide.
Many organizations begin their virtualization efforts with server consolidation to save money immediately on hardware expenses as well as on power, cooling, and facilities. Moving into 2009, organizations that started their virtualization journeys with server consolidation projects will extend virtualization to desktops, storage, and networking, as well as to more flexible and economical approaches to business continuity, security, and application service level agreements.
For virtualization’s money-saving capabilities as well as its transformative effect on the industry, we have another exciting year ahead. Here are the top 10 trends in virtualization that I believe are worth watching in 2009:
1. Virtualization of the Enterprise Desktop Breaks Out.
The “desktop dilemma” – i.e., the business choice between providing thick or thin clients for employees – will begin to be solved in 2009. Thick clients, including fully loaded personal computers (PCs) and laptops, give employees a rich set of applications in their desktop environment, but can be a management challenge since those applications are distributed across thousands of PCs that must be provisioned, updated, patched, and secured individually. Thin clients are less expensive, more secure, and easier to manage, but traditionally have not delivered the richness, flexibility, or compatibility of thick clients. Most businesses therefore provide thin clients only for employees, such as call center staff, who can be productive in this more restrictive environment. New virtualization-based approaches will solve this dilemma by combining the benefits of both – delivering rich, personalized virtual desktops to any device (thick or thin), while simplifying management and securing endpoints by hosting the virtual desktops in the datacenter. Virtualization is the essential platform for efficient, manageable desktops in an increasingly mobile world. In addition, better remote display protocols and use of the local machine’s compute resources will deliver an even better user experience, and the combination of online and offline modes will enable use when employees are traveling or lack access to a high-speed network.
2. Storage Becomes Truly Virtualization-Aware.
Storage is a critical building block in the virtual datacenter, and new advances in virtual storage will dramatically increase the flexibility, speed, resiliency, and efficiency of the virtual datacenter in 2009. New virtual storage solutions will automate handoffs between the virtualization platform and the storage infrastructure, simplify storage operations, and maximize efficient use of your storage infrastructure. In particular, look for solutions that offer native array support for common storage operations on virtual machines such as replication and migration; thin provisioning and de-duplication capabilities to optimize storage usage – which is particularly important for the desktop use case; and virtual machine-based storage (virtual storage arrays) solutions.
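To see why thin provisioning matters so much for the desktop use case, consider a back-of-envelope calculation. The sketch below is purely illustrative – the VM counts and disk sizes are hypothetical assumptions, not VMware-published figures – but it shows the shape of the savings when storage is consumed only as data is actually written:

```python
# Illustrative back-of-envelope math for thin provisioning in a virtual
# desktop deployment. All figures are hypothetical assumptions.

def provisioned_capacity_gb(num_vms, allocated_gb_per_vm):
    """Capacity needed if every VM's full virtual disk is pre-allocated."""
    return num_vms * allocated_gb_per_vm

def thin_capacity_gb(num_vms, actual_used_gb_per_vm, headroom=1.2):
    """Capacity needed when storage is consumed only as data is written,
    plus a 20% safety margin for growth."""
    return num_vms * actual_used_gb_per_vm * headroom

vms = 1000      # hypothetical desktop VM count
allocated = 40  # GB promised to each virtual disk
used = 12       # GB actually written per desktop, on average

full = provisioned_capacity_gb(vms, allocated)
thin = thin_capacity_gb(vms, used)
print(f"Fully provisioned: {full:,.0f} GB")
print(f"Thin provisioned:  {thin:,.0f} GB "
      f"({100 * (1 - thin / full):.0f}% less)")
```

De-duplication compounds these savings further for desktops, since hundreds of virtual desktops typically share nearly identical operating system and application images.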
3. Virtualization of High-End Applications Becomes Mainstream.
Going forward, a combination of hardware and software advances will remove any remaining performance concerns over running the highest-end and most mission-critical applications in virtual environments. New chip advances – for example, Intel Extended Page Tables (EPT) and AMD Rapid Virtualization Indexing (RVI) – are particularly good news for memory-intensive applications and high-performance computing. In addition, the ability to purchase more and more applications as pre-packaged virtual machines, and improvements in the licensing and support policies offered by Independent Software Vendors (ISVs), will continue to drive the trend toward the virtualization of any and all applications.
4. Orchestration of Virtualization across Datacenters Arrives.
Global companies will increasingly use their virtualization platform to federate compute capacity dynamically across multiple datacenters. British Telecom (BT), for example, is building a next-generation, cloud computing-ready infrastructure. BT’s virtualization platform pools business processes, applications, IT infrastructure, user access, and the network into a self-healing, automated, service-oriented infrastructure with integrated service-level management and built-in business continuity. The system provides dynamic geo-balancing across BT’s datacenters in North America, South America, the UK, Europe, Asia, and Australasia. At the user level, it enables virtual desktops to follow users as they travel. At the enterprise level, it enables workloads to be automatically redistributed to meet capacity needs and to take advantage of eco-friendly locations where electricity can be tapped at much lower cost. This level of datacenter orchestration will become increasingly common, driven at first by disaster recovery needs and the need to instantly migrate workloads from one site to another in the event of a failure. We are now seeing the first signs of follow-the-sun virtual machine migration and orchestrated use of secondary and off-premise datacenters for peak loads. This will ramp naturally into the enablement of cloud computing, with cloud services providing more capabilities for importing and exporting industry-standard virtual machines to provide additional compute capacity on short notice.
5. Networking Becomes Fully Virtualization-Aware.
In September 2008, Cisco and VMware announced a collaboration to deliver joint datacenter solutions designed to improve the scalability and operational control of virtual environments; as part of this, the Cisco Nexus 1000V distributed virtual software switch is expected to become an integrated option in VMware Infrastructure. In parallel, the two companies are collaborating to integrate VMware Virtual Desktop Infrastructure (VDI) solutions with Cisco Application Delivery Networking solutions to improve the performance of virtual desktops delivered across wide-area networks (WANs). This collaboration heralds breakthrough advances for networks across desktop and server use cases. Networking vendors are optimizing for virtualized network traffic. Remote display protocols are becoming more effective. Networking management tools will see through the virtualization layer to monitor and manage at the virtual machine level. And other vendors, following Cisco’s lead, will begin shipping software-based network switches.
6. Virtualization Arrives in Smart Phones.
The benefits of virtualization will be extended to mobile phones. Ultra-thin hypervisors – a thin layer of software embedded on a mobile phone that decouples the applications and data from the underlying hardware, optimized to run efficiently on low-power-consuming and memory-constrained mobile phones – will both enable handset vendors to accelerate time to market as well as pave the way for innovative applications and services for phone users. Today, handset vendors spend significant time and effort getting new phones to market due to the use of multiple chipsets, operating systems, and device drivers across their product families. The same software stack does not work across all the phones and, therefore, must be ported separately for each platform. Virtualization will enable vendors to deploy the same software stack on a wide variety of phones without worrying about the underlying hardware differences. In addition to enabling vendors to more quickly develop rich applications for mobile phones, virtualization will also enable end users to run multiple profiles – for example, one for personal use and one for work use – on the same phone. This will improve both the security and cost-effective utility of mobile phones as communication and computing devices.
7. Virtualization-Focused Security Solutions Become More Common.
McAfee, Symantec, and Trend Micro demoed new virtualization-focused security solutions at VMworld 2008 in September, leading a growing trend in the security world. Traditional firewall, Intrusion Detection System (IDS), and virus detection offerings are now shipping as virtual machines. Customers are increasingly utilizing trusted platform modules (TPMs) to attest to embedded hypervisors. And VMware VMsafe, a set of application programming interfaces (APIs) designed to allow third-party products to add security to virtual machines without having to run an agent inside each one, will continue to drive security advances for virtualized environments. For example, to reduce overall CPU utilization and increase antivirus performance, VMsafe enables vendors to deploy a single instance of an antivirus application per physical host, instead of requiring one for each virtual server.
8. Management Tools Increase Focus on the Virtual Datacenter.
Today, products provide management functionality for a wide range of virtualization management operations, including virtual machine discovery and configuration management, monitoring, performance management, provisioning, and resource management. These products, combined with standardized, hardware-independent virtual machine containers that can be easily changed, moved, and manipulated, have helped some VMware customers automate many IT processes and increase datacenter management productivity by two to three times compared to physical environments. Going forward, additional APIs and integration technologies (e.g., user interface plug-in architectures) that facilitate the integration of management functions into virtualization platforms will enable end-to-end management processes spanning heterogeneous datacenter environments, a wide variety of application stacks, and both physical and virtual use cases. This is coming quickly, as leaders such as BMC, CA, HP, and IBM have all announced products in this space.
9. Requirements of Green Datacenters Drive Virtualization Further.
Power and cooling remain top datacenter issues. “Upward-spiraling infrastructure demands and increasing energy costs mean that the energy proportion of IT costs could double by 2012,” said a recent Gartner research report (“U.S. Data Centers: The Calm Before the Storm,” 25 September 2007). “By 2011, more than 70% of U.S. enterprise data centers will face tangible disruptions related to floor space, energy consumption and/or costs.” Server consolidation through virtualization is one of the best ways to reduce power usage, as well as greenhouse gas emissions. Many VMware customers are able to run 15 or more virtual machines on a single server, increasing server utilization from 10-15% (the average rate for non-virtualized servers) to 70-80%. With fewer physical servers, VMware customers cut energy consumption by 70-90%, making for greener IT and radically reduced CO2 emissions. Each server removed saves around 7,000 kilowatt hours (kWh) of power and eliminates four tons of CO2 per year, equivalent to taking 1.5 cars off the road or planting 55 trees. Going forward, customers will leverage virtualization for even greater power savings through dynamic management of resources. When a cluster of virtual machines needs fewer resources, VMware Distributed Power Management (DPM) consolidates workloads and puts hosts in standby mode to reduce power consumption. When resource requirements of workloads increase, VMware DPM brings powered-down hosts back online to ensure service levels are met.
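The consolidation arithmetic above is easy to apply to your own environment. The sketch below uses the per-server figures cited in this section (roughly 15 VMs per virtualized host, and about 7,000 kWh and four tons of CO2 saved per server removed each year); the workload count is a hypothetical example:

```python
# Back-of-envelope consolidation savings, using the per-server figures
# cited above. The workload count (300) is a hypothetical example.

def consolidation_savings(workloads, vms_per_host=15,
                          kwh_per_server=7000, co2_tons_per_server=4):
    # Before: one physical server per workload.
    # After: workloads packed onto ceil(workloads / vms_per_host) hosts.
    hosts_after = -(-workloads // vms_per_host)  # ceiling division
    removed = workloads - hosts_after
    return {
        "servers_removed": removed,
        "kwh_saved_per_year": removed * kwh_per_server,
        "co2_tons_avoided": removed * co2_tons_per_server,
    }

print(consolidation_savings(300))
```

For 300 workloads, consolidation at 15:1 leaves 20 physical hosts and removes 280 servers, which at these per-server figures corresponds to nearly two million kWh saved per year.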
10. Cloud Providers Utilize Virtualization for More Open, Compatible Offerings.
The IT industry is moving toward a vision of cloud computing, and virtualization is the infrastructure on which it is being built. There is momentum on two fronts. In the enterprise itself, datacenters are starting to evolve into highly automated private clouds, where pooling compute resources on a virtualization platform lets IT operate, in essence, like a single giant computer. At the same time, the outsourcing of compute capacity over the Internet to public clouds, or cloud services providers, is becoming a just-in-time reality. Standards are key to the success of public clouds – standards that allow compatibility at the virtual machine layer for easier entry to and exit from the cloud, and standards that enable applications to be migrated in and out of public clouds without modification. In 2009, these advances will accelerate, enabling companies large and small to safely tap compute capacity inside and outside their firewalls – how they want, when they want, and as much as they want – to ensure quality of service for any application they run, internally or as an outsourced service when additional capacity is required.
About the Author
Dr. Stephen Herrod is responsible for VMware's new technologies and for technology collaborations with customers, partners, and standards groups. He has led the VMware ESX group through numerous successful releases. Prior to joining VMware, Stephen was Senior Director of Software at Transmeta Corporation, co-leading development of its "Code Morphing" technology. Stephen holds a Ph.D. and a master's degree in Computer Science from Stanford University, where he worked with VMware's founders on the SimOS machine simulation project.