What do Virtualization and Cloud executives think about 2010? Find out in this VMblog.com series exclusive.
Contributed Article by Patrick Ancipink, Director of Product Marketing for NetQoS, a CA Company
2010: A Virtualization Management Odyssey
Application performance will take a noticeable hit in 2010 as management process and technology lag behind aggressive infrastructure virtualization. Immature management practices and visibility gaps will become increasingly apparent to application end users. As a result, organizations will demand network management capabilities that span both the physical and virtual environments to accurately monitor the performance of networked applications.
Most IT organizations to date have found it fairly easy to virtualize less-critical application components - such as test environments, antivirus packages and domain controllers - with relatively little management pain or degradation in application performance. In 2010, organizations will want to replicate that success by virtualizing their more complicated application, database and Web components, thereby putting more critical application performance at risk.
A problem with virtualizing more sensitive application components is that virtualization can mask critical performance data, such as response times, that network professionals and other members of the IT organization rely upon to deliver high service levels at the lowest possible cost. IT teams' visibility should expand in step with the pace of virtualization adoption, not lag behind it.
In addition, as production applications are migrated to virtual infrastructures, the boundaries between server and network administrators will continue to blur in 2010. Because virtualization can be very easy to deploy, the network team is often minimally involved in the initial phases of virtualization projects. In 2010, this situation will bring many cross-functional questions to a head, including: Is the server administrator applying the same processes and configurations on the virtual switch that their network counterpart would have applied on a physical switch? Where do the server and network teams draw the lines of demarcation in service provisioning responsibilities? How do we give the NOC better visibility into virtual hosts and machines without piling on more tools?
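The configuration-parity question above can be made concrete. The sketch below, a hypothetical illustration only, diffs a virtual switch's settings against the baseline the network team enforces on equivalent physical switches; the setting names and values are invented for the example and are not tied to any real vSwitch or switch-vendor API.

```python
# Hypothetical sketch: detect drift between a virtual switch's settings
# and the network team's reference configuration for a physical switch.
# Keys and values are illustrative assumptions, not a real switch schema.

def config_drift(reference, actual):
    """Return {setting: (expected, found)} for every mismatched setting."""
    drift = {}
    for setting, expected in reference.items():
        found = actual.get(setting)
        if found != expected:
            drift[setting] = (expected, found)
    return drift

# Baseline the network team enforces on physical access switches (assumed).
physical_baseline = {
    "vlan": 120,
    "mtu": 9000,
    "spanning_tree": "enabled",
    "port_security": "enabled",
}

# Settings the server team applied to the virtual switch (assumed).
vswitch_config = {
    "vlan": 120,
    "mtu": 1500,               # jumbo frames never enabled
    "spanning_tree": "enabled",
                               # port_security never configured
}

for setting, (want, got) in config_drift(physical_baseline, vswitch_config).items():
    print(f"{setting}: expected {want}, found {got}")
```

Even a simple parity check like this surfaces the kind of drift (MTU mismatches, missing security settings) that otherwise shows up only as a hard-to-diagnose application performance problem.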
While some of these issues are cultural and political, a new class of virtualization-specific management tools is becoming available that promises to address the visibility problems. However, these tools generally limit their visibility to virtual machine health or to application delivery from the perspective of the virtual host to end users. The result is that neither the network nor the server staff gets a full view of multi-tier application delivery coupled with the network's ability to deliver those applications.
Given this scenario, IT organizations in 2010 will search for network management systems that provide a true end-to-end view of how applications are traversing the network and being delivered to end users. This includes a service assurance platform that integrates the virtual machine infrastructure with the physical realm. This integration should include minimally invasive methods to measure response times and analyze traffic between physical-to-virtual and virtual-to-virtual application tiers. While these metrics alone can help network managers visualize how the application is being delivered, they should be coupled with device health and packet analysis to speed troubleshooting and obtain the full picture of how well the entire infrastructure is delivering applications across the network.
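To make the response-time measurement above concrete, here is a minimal sketch of an active probe that times a TCP connect plus one request/response round trip against an application tier. It is an illustration only: the host, port, and probe payload are assumptions, and a production monitor such as the platforms discussed here would typically observe real user traffic passively rather than inject synthetic requests.

```python
# Minimal sketch of timing one request/response against an application
# tier over TCP. Host, port, and payload are illustrative assumptions.

import socket
import time

def probe_response_time(host, port, payload=b"PING\n", timeout=2.0):
    """Return (connect_seconds, total_seconds) for one probe request."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        connected = time.monotonic()   # TCP handshake complete
        sock.sendall(payload)
        sock.recv(4096)                # wait for the tier's reply
    done = time.monotonic()
    return connected - start, done - start
```

Running the same probe from both sides of a physical-to-virtual boundary separates network latency (the connect time) from tier processing time (the remainder), which is exactly the breakdown needed to decide whether a slowdown lives in the network or in the virtualized application component.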
High levels of server virtualization should not result in a loss of end-to-end visibility now that the network team's main role is assuring service delivery. IT organizations in 2010 will find that they are able to measure response times traversing virtual machine hosts, collect traffic detail from the virtual and physical infrastructures, and quantify the health of virtual machines and switches. The critical point is ensuring virtual and physical performance information can be collected with minimal intervention, customized for use by different IT groups, and integrated within a single management context. After all, virtualization is supposed to make everyone's life easier, and managing application delivery is no exception.
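The "single management context" idea can be sketched as well. The fragment below, a hypothetical illustration whose field names are invented rather than drawn from any real NetQoS product or API, folds records from separate virtual and physical collectors into one shared schema so that different IT groups can query one unified view.

```python
# Hypothetical sketch: normalize measurements from separate virtual and
# physical collectors into one shared record format. All field names and
# sample values are illustrative assumptions.

def normalize(source, raw):
    """Map a collector-specific record onto a shared schema."""
    return {
        "source": source,              # "virtual" or "physical"
        "element": raw["name"],        # device, VM, or switch name
        "metric": raw["metric"],
        "value": raw["value"],
        "unit": raw.get("unit", ""),
    }

# Sample records from each collector (invented for illustration).
virtual_records = [
    {"name": "vm-web-01", "metric": "cpu_ready", "value": 4.2, "unit": "%"},
    {"name": "vswitch-0", "metric": "packet_drops", "value": 17},
]
physical_records = [
    {"name": "core-sw-1", "metric": "packet_drops", "value": 3},
]

unified = ([normalize("virtual", r) for r in virtual_records]
           + [normalize("physical", r) for r in physical_records])
```

With one schema, a NOC operator can compare packet drops on a virtual switch against the physical core switch in the same query, rather than juggling one tool per domain.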
About the Author
Patrick Ancipink is Director of Product Marketing for NetQoS, a CA Company. He has been working in the network and systems management industry for seventeen years. NetQoS provides network performance management software and services that help service providers, government agencies, and large enterprises improve application delivery across complex network infrastructures.