
Welcome to
Virtualization and Beyond
The Virtualization Automation Journey
By Michael Thompson,
Director, Systems Management Product Marketing, SolarWinds
Virtualization is now the default configuration in the data center.
In fact, we have passed the point where more than half of server workloads are
virtualized, and that share is predicted to reach 86 percent by 2016. However,
while most organizations have started down the virtualization path, there are
still pitfalls to avoid and best practices to implement in each of the three
phases of what can be called the virtualization journey: the path toward
maximizing what virtualization can do for your business.
Let's take a closer look at each of these phases as well as
the common challenges to look out for.
Initial Implementation
Getting started with virtualization isn't much of an
obstacle for most companies these days. With an abundance of skilled
practitioners and educational content available to the typical system
administrator, all that is really needed to get going is a purchase order. But
while it may be easy to start implementing, this phase can also be the hardest
to get past.
Why? Because creating and performing basic management of VMs seems so easy,
one of the key pitfalls is to enter this stage without a plan. Without a
well-thought-out set of guidelines, procedures, golden images, and maintenance
and monitoring plans at the outset, you can get caught in a vicious cycle of
continuously addressing fire drills that consume your time to the point where
any proactive optimization gets squeezed out.
The best time to think through how you want to manage and monitor
the entire VM lifecycle is when you are just getting started. Of course, you
will learn and adjust based on your experience, but it is still much better to
be adjusting from a set of baseline policies than trying to rein in a
situation where everyone has taken a unique approach.
This is where getting the right data can be the difference
between success and failure. Catching virtualization-related problems before
they impact systems, seeing configuration problems and identifying resource
allocation mismatches all largely depend on the ability to readily get the
appropriate information on a real-time basis. So, make sure part of your plan
is having a system to get that data.
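To make that concrete, here is a minimal Python sketch of the kind of threshold check such a system might run against live metrics. The metric names, thresholds and the collect_vm_metrics() helper are illustrative assumptions, not any particular product's API.

```python
# A minimal sketch of the data-gathering check a monitoring plan should include.
# collect_vm_metrics() is a stand-in: in practice, it would call your
# hypervisor's SDK or a monitoring platform instead of returning canned data.

CPU_READY_WARN_PCT = 5.0   # assumed threshold: sustained CPU ready above 5%

def collect_vm_metrics(vm_name: str) -> dict:
    # Placeholder sample data; replace with a real API call for your platform.
    sample = {"app01": {"cpu_ready_pct": 7.2, "ballooned_mb": 0},
              "db01":  {"cpu_ready_pct": 1.1, "ballooned_mb": 512}}
    return sample[vm_name]

def check_vm(vm_name: str) -> None:
    m = collect_vm_metrics(vm_name)
    if m["cpu_ready_pct"] > CPU_READY_WARN_PCT:
        print(f"{vm_name}: CPU ready {m['cpu_ready_pct']:.1f}% -- possible host CPU contention")
    if m["ballooned_mb"] > 0:
        print(f"{vm_name}: {m['ballooned_mb']} MB ballooned -- memory pressure on the host")

for name in ["app01", "db01"]:     # placeholder inventory
    check_vm(name)
```

In practice, you would wire the helper to your platform's API and run checks like these on a near-real-time schedule.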
Optimization
After surviving the initial phase of implementation and
achieving a virtual environment that operates in a relatively stable state,
the next phase is to optimize the virtual infrastructure. This can be
optimization across multiple dimensions, including hardware and software
utilization, application performance or staff efficiency. In this phase,
capacity planning, VM sprawl management and a broader view of alignment with
other domains become critical.
Managing VM sprawl is one of the most basic optimization
strategies and is all about reclaiming resources that are currently being
wasted. A key best practice for managing sprawl is having a system that
regularly scans the virtual environment for orphaned or abandoned VMs and
snapshots, and that leverages historical data to determine whether the
resources (e.g., vCPU or memory) allocated to each VM are appropriate for the
workload.
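As a rough illustration, here is a minimal Python sketch of such a scan, assuming an inventory export that includes each VM's last power-on time, snapshot age and historical peak CPU usage. The field names and thresholds are assumptions for the example, not any product's schema.

```python
# A minimal sprawl scan over an assumed inventory export: flags idle VMs,
# stale snapshots and VMs whose historical peak usage suggests over-allocation.
from datetime import datetime, timedelta

NOW = datetime(2015, 6, 1)  # illustrative "current" date

inventory = [
    {"name": "app01", "last_powered_on": NOW - timedelta(days=3),
     "snapshot_age_days": 45, "vcpus": 8, "peak_cpu_pct": 12.0},
    {"name": "test07", "last_powered_on": NOW - timedelta(days=120),
     "snapshot_age_days": 0, "vcpus": 2, "peak_cpu_pct": 0.0},
]

for vm in inventory:
    idle_days = (NOW - vm["last_powered_on"]).days
    if idle_days > 90:
        print(f"{vm['name']}: idle for {idle_days} days -- candidate for reclamation")
    if vm["snapshot_age_days"] > 30:
        print(f"{vm['name']}: snapshot is {vm['snapshot_age_days']} days old -- consider consolidating")
    # Assumed right-sizing rule: peak never exceeded ~25% of allocated vCPU.
    if vm["vcpus"] > 1 and vm["peak_cpu_pct"] < 25.0:
        print(f"{vm['name']}: peak CPU {vm['peak_cpu_pct']}% on {vm['vcpus']} vCPUs -- consider right-sizing")
```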
At a more macro level, application performance optimization
can be done by looking at the full stack, from the application through the
virtualization infrastructure and down to the storage hardware. Are
performance-critical business applications running on VMs aligned to host and
datastore resources that will prevent bottlenecks? If IOPS are key, are those
applications and their datastores leveraging higher-performance storage
arrays?
While this seems like common sense, it can be difficult to understand
all the relationships and to maintain resource alignment in a very dynamic
environment, so all of this must be carefully evaluated.
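One way to keep that evaluation tractable is a simple, repeatable alignment check. The following Python sketch assumes you can export each VM's observed IOPS and the tier of its backing datastore; the names, tiers and IOPS cutoff are illustrative assumptions.

```python
# A minimal alignment check: IOPS-hungry workloads should sit on the fast
# tier, and quiet workloads should not occupy premium storage.
datastore_tier = {"ds-ssd-01": "fast", "ds-sata-02": "capacity"}

vms = [
    {"name": "oltp-db",  "datastore": "ds-sata-02", "avg_iops": 4200},
    {"name": "file-srv", "datastore": "ds-ssd-01",  "avg_iops": 80},
]

IOPS_FAST_TIER = 1000   # assumed cutoff for "IOPS-critical" workloads

for vm in vms:
    tier = datastore_tier[vm["datastore"]]
    if vm["avg_iops"] >= IOPS_FAST_TIER and tier != "fast":
        print(f"{vm['name']}: {vm['avg_iops']} IOPS on the '{tier}' tier -- move to a faster array")
    elif vm["avg_iops"] < IOPS_FAST_TIER and tier == "fast":
        print(f"{vm['name']}: only {vm['avg_iops']} IOPS on the fast tier -- reclaimable premium storage")
```

Rerunning a check like this on a schedule catches the drift that inevitably occurs as VMs migrate across hosts and datastores.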
Automation
The current culminating phase in the virtualization maturity
process is automating your virtual environment. Companies that have been
successful with automation typically start with a strong foundation of
workflows, best practices and policies that are already well documented and
tested. With that foundation in place, automation can drive the speed and
agility that begin to distinguish IT services and business results from those
of competitors.
From there, a typical automation progression
would be:
- Instrument the environment: Automation is only
as good as the data provided as input.
- Baseline: There is always a need to identify
trends and determine what is normal.
- Semi-automate: As an intermediate step, many
companies set up automation but have an administrator "push a button" to kick
off each action, providing an extra layer of insurance that the automation
will do the right thing before the decision is turned over to a machine (see
the sketch after this list).
- Fully automate.
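To illustrate the semi-automated step, here is a minimal Python sketch of an approval gate that proposes an action but waits for an administrator's confirmation. detect_snapshot_overage() and delete_snapshot() are hypothetical stand-ins for your platform's monitoring and action APIs.

```python
# A minimal approval gate: the system proposes a remediation, but an
# administrator confirms it before it runs. Both helpers are placeholders.
FULLY_AUTOMATED = False   # flip once the gated workflow has earned trust

def detect_snapshot_overage():
    # Placeholder: would normally compare live data against the baseline.
    return [("test07", "pre-upgrade-snap", 45)]  # (vm, snapshot, age in days)

def delete_snapshot(vm: str, snap: str) -> None:
    print(f"deleting snapshot '{snap}' on {vm}")  # placeholder action

for vm, snap, age in detect_snapshot_overage():
    proposal = f"Delete snapshot '{snap}' on {vm} ({age} days old, baseline is 30)"
    if FULLY_AUTOMATED:
        delete_snapshot(vm, snap)
    else:
        answer = input(f"{proposal} -- proceed? [y/N] ")   # the "push a button" step
        if answer.strip().lower() == "y":
            delete_snapshot(vm, snap)
```

Flipping the FULLY_AUTOMATED flag (or its equivalent in your orchestration tooling) is the final step, taken only after the gated version has proven itself.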
Because this sequence can involve a lot of effort, focusing on
applications that demand a high level of speed and agility, rather than
attempting to implement automation across an entire environment, will improve
your success rate. Aiming the initial automation implementation at the
highest-ROI opportunity can also help win business buy-in for automating the
next application.
Wherever you are in the virtualization automation journey,
there is value in moving to the next step. If you are stuck at any one stage,
the good news is that you certainly don't have to blaze a new path to get to
the next level. Given the maturity of the server virtualization market, there
are a lot of tools and resources available to help you get where you are going,
including best practices documentation, user groups and vendor-provided
information.
Good luck!
##
Make sure to also read "On-Premises versus Cloud-based Storage," "Virtualization Security on the Front Lines" and "In the New Wild West of Storage, the Virt Admin is Sheriff"
About the Author
Michael Thompson, Director, Systems Management Product Marketing, SolarWinds.
Michael has worked in the IT management
industry for more than 14 years, including leading product management teams and
portfolios in the storage and virtualization/cloud spaces for IBM. He holds a
master's degree in business administration and a bachelor's degree in chemical engineering.