
Virtualization and Cloud executives share their predictions for 2014. Read them in this VMblog.com series exclusive.
Contributed article by Andrew Hillier, co-founder and CTO, CiRBA
Software Efficiency, Multi-Hypervisor, and Software-Defined Challenges to Be Seen in 2014
In 2013, organizations turned their attention to the problem of over-provisioning and explored the levers that enabled them to move to the next level of operational maturity and efficiency within virtualized infrastructures and, of course, the cloud.
In 2014, the drive to efficiency will continue, but it will not focus solely on reducing hardware spend. Significant savings will also be had by increasing the density of VMs running on expensive operating system and application licenses. Many organizations have already adopted per-host or per-processor licensing models for this very reason, but have not yet optimized VM density to realize the savings. Some forward-thinking organizations have already placed a big focus on this, but the coming year will see it happen on a much broader scale as it becomes clear that a lot of money can be saved simply by moving VMs around.
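To make the arithmetic behind this concrete, here is a minimal sketch of how VM density drives per-host license spend. All of the numbers (VM count, license cost, densities) are invented for illustration, not drawn from the article:

```python
# Hypothetical illustration of per-host licensing math (the VM count,
# license cost, and densities below are assumptions for the example).

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Hosts required to run the VMs at a given density (ceiling division)."""
    return -(-total_vms // vms_per_host)

TOTAL_VMS = 400
LICENSE_COST_PER_HOST = 10_000  # e.g., a per-host OS or database license

for density in (10, 20, 25):  # average VMs packed per licensed host
    hosts = hosts_needed(TOTAL_VMS, density)
    cost = hosts * LICENSE_COST_PER_HOST
    print(f"{density:>2} VMs/host -> {hosts:>2} hosts, ${cost:,} in licenses")

# Doubling density from 10 to 20 VMs per host halves the number of
# licensed hosts, and therefore the license bill, with no new hardware.
```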
This software-efficiency theme will also extend to the hypervisors themselves, which have long been a sore point from a cost perspective. Many organizations will seek to reduce the unit cost of hosting workloads, and at the same time avoid vendor lock-in, by considering alternative hypervisors. Hyper-V environments will become more commonplace, and KVM will start to gain traction as it rides in on the coattails of cloud technologies like OpenStack. As with any adoption cycle, these trends will start in dev/test environments but will have an accelerated path to production as the ecosystems build out around them and management vendors throw more weight behind them. Organizations that remain flexible and use automation tools to optimize workload placements will continue to see efficiency gains.
Additionally, as more and more infrastructure components become "software-defined" in 2014, organizations will realize that the more degrees of freedom there are to define things through software, the more difficult it becomes to figure out how to define them. We have had a glimpse of this already with virtualization, which is really another name for software-defined servers. Although it created the ability to place VMs on different servers and flexibly define their resource allocations, it ended up causing a bit of a mess as VMs were inevitably placed on the wrong hosts and given the wrong sizes. In recent years management software has emerged to control this flexibility; by analyzing all of the operational metrics and constraints, it became possible to optimize placements and allocations, driving up efficiency and reducing operational risk. The need for this kind of approach will expand to a broader scale, and from it will emerge software to define the software-defined data center.
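As a rough illustration of the kind of analysis such placement software performs, here is a toy first-fit-decreasing bin-packing sketch under an assumed CPU-only constraint. It is not CiRBA's actual algorithm; real placement engines weigh many more metrics and constraints:

```python
# Toy placement optimizer: pack VMs onto the fewest hosts that satisfy
# a single CPU-demand constraint, using first-fit decreasing.

from dataclasses import dataclass, field

@dataclass
class Host:
    capacity: float            # usable CPU (e.g., GHz) per host
    used: float = 0.0
    vms: list = field(default_factory=list)

def place(vm_demands: dict[str, float], host_capacity: float) -> list[Host]:
    hosts: list[Host] = []
    # Placing the largest VMs first tends to pack tighter.
    for name, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        target = next((h for h in hosts if h.used + demand <= h.capacity), None)
        if target is None:
            target = Host(capacity=host_capacity)
            hosts.append(target)
        target.used += demand
        target.vms.append(name)
    return hosts

demands = {"web1": 4.0, "web2": 3.5, "db1": 8.0, "batch": 6.0, "cache": 2.5}
for i, h in enumerate(place(demands, host_capacity=12.0), start=1):
    print(f"host{i}: {h.vms} ({h.used}/{h.capacity})")
```

Run on these made-up demands, the sketch packs five VMs onto two fully utilized hosts instead of the three or more that naive placement might use, which is the density effect described above.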
##
About the Author
Andrew Hillier is co-founder and CTO of CiRBA. For more information, visit www.cirba.com or follow @CiRBA on Twitter.