What do virtualization executives think about 2009? A VMBlog.com Series Exclusive.
Contributed by Daniel Heimlich, Vice President, Netuitive
Netuitive’s State of the Union on Virtualization: More Automation, Trends and Predictions for Virtualization in 2009
The search for higher capacity utilization, greater system flexibility and lower operating costs put thousands of companies on the path to server virtualization in 2008, and early indicators point to this trend continuing in 2009. But organizations are just beginning to learn that managing system performance and isolating problems in virtual environments is significantly more complex than it is for physical systems. Trying to understand the hundreds of simultaneously moving parts in a typical virtual deployment, while deciphering their influence on one another, is a monumental challenge. Trying to further correlate their effects on the physical environment is all but impossible.
While policy-based monitoring tools are capable of understanding infrastructure availability, they lack performance visibility into the behavior of virtual resources. With industry surveys showing that 70% of IT managers have little or no confidence in their existing monitoring tools for physical servers, how can you successfully monitor virtual ones using the same approach?
The bottom line is that enterprises will never be able to scale their virtual (and physical) data center environments without automated performance management.
Why? Because tracking all of the constantly changing metrics that describe the performance of the physical infrastructure, and its impact on service quality, already defies human analysis. Just think about the solely reactive incident and problem management processes that are already the norm for most IT shops. Virtualization is going to push this challenge beyond the breaking point.
Compounding the difficulty of maintaining performance in virtual environments is the speed at which problems can develop and spread. Since users can’t understand the interrelationships within the virtual infrastructure, many simply decide to blindly allocate or reallocate resources when problems arise, which can exacerbate the issues. Technology that automates management of both physical and virtual environments is the next logical step in virtual machine (VM) management and I believe innovation in this area is going to explode next year and beyond.
IT is supporting a mind-boggling number of increasingly complex and unpredictable user and technology interactions. Without the restrictions imposed by siloed, proprietary infrastructure platforms, performance has become difficult to manage and predict. To restore predictability and bring performance consistency to virtual environments, virtualization management technology will adapt in the following ways:
Self-Learning Capability: Heuristics and behavior analysis have been used in the IT security realm for years. Real-time behavior analysis provides the same benefit of self-learning anomaly detection in the data center. Rather than trying to model constantly changing performance variables, self-learning performance management will analyze behavior in real-time and correlate infrastructure performance quickly to application performance and vice versa.
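As a rough illustration of the self-learning idea, a minimal anomaly detector can learn a metric's recent behavior and flag samples that deviate sharply from it. This is a hypothetical sketch for illustration only, not Netuitive's actual algorithm; the window size and sigma threshold are assumed values.

```python
from collections import deque
import math

class RollingBaseline:
    """Sketch of self-learning anomaly detection: learn a metric's recent
    behavior from a rolling window and flag values that deviate sharply.
    (Illustrative only; not any specific product's implementation.)"""

    def __init__(self, window=60, threshold_sigmas=3.0):
        self.window = deque(maxlen=window)   # recent samples (the "baseline")
        self.threshold_sigmas = threshold_sigmas

    def observe(self, value):
        """Return True if `value` is anomalous relative to the learned baseline."""
        anomalous = False
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # avoid divide-by-zero on flat metrics
            anomalous = abs(value - mean) > self.threshold_sigmas * std
        self.window.append(value)  # keep learning from new data
        return anomalous

# Steady CPU utilization around 40-42%, then a sudden spike to 95%
baseline = RollingBaseline()
flags = [baseline.observe(40 + (i % 3)) for i in range(30)]
spike = baseline.observe(95)
```

The point of the sketch is that no fixed threshold was ever configured: the detector judges the spike against what it has learned, which is the behavior-analysis approach described above.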
Automated Threshold Management: As part of the self-learning capability, thresholds will become time sensitive and adaptive. Performance management tools for virtual environments need to be able to learn and build behavior profiles for servers, VMs, resource pools and the applications themselves.
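To make the "time sensitive" part concrete, one simple way to sketch adaptive thresholds is to keep a separate learned baseline per hour of day, so that nightly batch load is not judged against daytime limits. The class, the margin factor and the sample values below are all hypothetical.

```python
from collections import defaultdict

class TimeSensitiveThreshold:
    """Sketch of adaptive thresholding: a separate learned baseline per hour
    of day, so the same value can be normal at 3 p.m. and anomalous at 3 a.m.
    (Hypothetical illustration, not a specific product's implementation.)"""

    def __init__(self, margin=1.5):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)
        self.margin = margin  # threshold = learned mean for that hour * margin

    def learn(self, hour, value):
        self.sums[hour] += value
        self.counts[hour] += 1

    def threshold(self, hour):
        if self.counts[hour] == 0:
            return None  # nothing learned yet for this hour
        return (self.sums[hour] / self.counts[hour]) * self.margin

    def breaches(self, hour, value):
        t = self.threshold(hour)
        return t is not None and value > t

# Learn a week of samples: quiet at 3 a.m. (~10% CPU), busy at 3 p.m. (~60%)
profile = TimeSensitiveThreshold()
for _ in range(7):
    profile.learn(3, 10.0)
    profile.learn(15, 60.0)

# 40% CPU breaches the learned night-time threshold but not the afternoon one
night_alarm = profile.breaches(3, 40.0)
day_alarm = profile.breaches(15, 40.0)
```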
End-to-end Performance Visibility: With so many moving parts in a virtual infrastructure, it can be nearly impossible to isolate the cause of an application performance issue. Is a performance problem because of the VM, host server, cluster, network or application? Next-generation technology will bring critical visibility into the health of each component while accounting for the health of the overall environment.
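One way to picture the isolation problem is to score each layer by how far its current metric sits from its own learned baseline, and point at the worst offender. The function, metric names and baseline numbers here are invented for illustration.

```python
def isolate_layer(metrics, baselines):
    """Sketch of cross-layer fault isolation: score each layer (VM, host,
    network, application) by its deviation from a learned baseline, measured
    in standard deviations, and return the worst offender. Purely illustrative."""
    scores = {}
    for layer, value in metrics.items():
        mean, std = baselines[layer]
        scores[layer] = abs(value - mean) / (std or 1e-9)
    worst = max(scores, key=scores.get)
    return worst, scores

# Hypothetical learned baselines: (mean, standard deviation) per layer
baselines = {
    "vm_cpu": (40.0, 5.0),
    "host_cpu": (55.0, 5.0),
    "network_latency_ms": (2.0, 0.5),
    "app_response_ms": (120.0, 20.0),
}
# The application looks slow, but the real outlier is network latency
metrics = {"vm_cpu": 42.0, "host_cpu": 57.0,
           "network_latency_ms": 9.0, "app_response_ms": 180.0}
worst, scores = isolate_layer(metrics, baselines)
```

The app response time is elevated, but normalizing each deviation against that layer's own behavior points past the symptom to the network, which is the kind of component-level visibility described above.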
Proactive Capacity Planning: Performance management tools will need to offer resource allocation and capacity planning before deployment as well as in production. The value of virtualization is flexibility and resource optimization, so tools that deliver that optimization from the onset are the ones that will become the most valuable.
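A very simple form of proactive capacity planning is trend extrapolation: fit a line to historical utilization and estimate how long until capacity is exhausted. This is a toy sketch under assumed numbers; real planners also model seasonality and workload placement.

```python
def forecast_exhaustion(samples, capacity):
    """Sketch of trend-based capacity planning: fit a least-squares line to
    historical utilization and return the number of future periods until the
    trend crosses `capacity`, or None if usage is flat or shrinking.
    (Illustrative only.)"""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    # Periods from the start of the series until the trend line hits capacity
    return (capacity - intercept) / slope

# Hypothetical storage pool growing ~2 TB per month toward a 100 TB ceiling
usage_tb = [60, 62, 64, 66, 68, 70]
periods_until_full = forecast_exhaustion(usage_tb, 100)
```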
The benefits of virtualization are compelling, but successful adoption depends on having the right virtualization platform, administration skills and management tools. If done correctly, virtualization can help consolidate hardware, standardize heterogeneous environments and reduce the training needed to manage different types of servers. This allows organizations to flip the budget allocation from just “keeping the lights on” to building a much more efficient datacenter.
All of this represents a new way to look at managing IT: Today, we have specialized hardware, software, monitoring and performance analysis systems. Tomorrow, and beyond, we’ll manage all of these systems holistically, made possible through a first step of applying automated performance management. Smart VM management will be the name of the management game in 2009.
About the Author
Daniel Heimlich is Vice President of Netuitive (http://www.netuitive.com/), an innovator of self-learning performance management technology. Daniel and the leadership team have helped Netuitive grow from a start-up in 2002 to the industry leader with more than 250 customers today. A proven executive with multi-disciplinary experience, Daniel brings more than fifteen years of experience in high-tech. Previously, Daniel worked at Citrix.