
Virtualization and Cloud executives share their predictions for 2013. Read them in this VMblog.com series exclusive.
Contributed article by Alex Rosemblat, Product Marketing Manager for Virtualization Management at Dell
2013: The Year of Hardware and Management Software "Convergence"
An oft-repeated theme at the beginning of many virtualization presentations is how the abstraction of hardware has led to the "breaking down of silos" in the IT organization. Virtualization, in effect, pools server, storage and networking resources to achieve increased efficiency and to provide an application that needs 1 CPU and 3 GB of memory with that exact prescription. However, due to unforeseen changes, misconfiguration or mismatches between the hardware provided and an application's requirements, the underlying resources allocated to a virtual machine do not always match the VM's needs. These inefficiencies cut into performance and, as a result, force data centers to spend more on certain hardware components to ensure that no weak links slow down their VMs. At the same time, virtualization has added "virtual" infrastructure layers to the application delivery path, creating additional failure points that can impact performance if not monitored.
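To make the mismatch concrete, here is a minimal illustrative sketch in Python; the VM entries and field names are invented for illustration and do not reflect any real inventory API. It simply compares what each application requires against what its VM was provisioned:

```python
# Hypothetical inventory: what each application needs vs. what the VM was given.
vms = [
    {"name": "app01", "req_cpu": 1, "req_mem_gb": 3, "alloc_cpu": 1, "alloc_mem_gb": 3},
    {"name": "app02", "req_cpu": 2, "req_mem_gb": 8, "alloc_cpu": 4, "alloc_mem_gb": 4},
]

for vm in vms:
    over_cpu = vm["alloc_cpu"] - vm["req_cpu"]         # excess capacity = wasted spend
    short_mem = vm["req_mem_gb"] - vm["alloc_mem_gb"]  # shortfall = likely swapping
    if over_cpu > 0:
        print(f"{vm['name']}: {over_cpu} vCPU over-provisioned (wasted spend)")
    if short_mem > 0:
        print(f"{vm['name']}: {short_mem} GB memory short (performance risk)")
```

Even this toy check shows both failure modes at once: app02 wastes CPU it will never use while starving for the memory it actually needs.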
Converged infrastructure has been seen as a remedy for these efficiency and performance problems for many years, but previous solutions have been viewed as too expensive or too niche. As more organizations continue to grow their virtualized infrastructure in breadth and scope, the increasingly tailored needs of specialized projects such as a VDI or cloud implementation, along with the desire for top performance, will make 2013 the year of data center "convergence" for systems management and hardware.
Hardware Convergence - Creating a "data center in a box"
With the combined efforts of established hardware vendors, such as Dell, and a whole slew of start-ups funded by big-name venture capitalists, IT departments will begin to invest more readily in converged infrastructure (CI) hardware solutions. These will come in two flavors at opposite ends of the pricing spectrum. On one extreme, high-performance applications will appear on high-end CI platforms. On the other, small companies and individual departments of larger companies will find buying a "data center in a box" more cost-effective than a best-of-breed hardware strategy. The move to CI will also be supported by the recognition that "reinventing the wheel" through manual hardware selection for each kind of IT project is time-consuming, can be unnecessarily expensive and, worse, doesn't always yield the right results. With a large number of converged infrastructure solutions coming to market as tailored "recipes" for certain kinds of projects, it will be cheaper, faster and easier to go with a CI platform in many cases.
Management Software Convergence - Unified Monitoring
Virtual objects such as VMs and datastores have become links in the chain from the application to the actual hardware, drastically increasing the number of nodes that can fail. These interconnections require monitoring systems to evaluate all infrastructure areas simultaneously and correlate issues to identify exactly where a problem is occurring and why. Yet most existing management systems are niche, best-of-breed solutions that cover only a single infrastructure area such as network or storage. In the increasingly virtualized data center, having a multitude of non-integrated systems to monitor each hardware or software area in isolation will simply not work. For example, an issue in storage may be caused by increased memory usage that leads to swapping to the SAN. A storage engineer will see only the increased disk utilization, but will be unable to identify where the issue is coming from, and will be equally powerless to fix it.
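As a rough illustration of the cross-silo correlation described above, the sketch below lines up a host's swap rate (a memory-silo metric) with a datastore's latency (a storage-silo metric) over the same intervals. The metric names and sample values are hypothetical; the point is only that a strong correlation redirects the storage engineer from the array back to memory pressure:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 5-minute samples gathered from two different silos:
host_swap_mbps   = [0, 1, 2, 40, 55, 60, 58, 3]   # memory silo: VM swap rate
datastore_lat_ms = [4, 5, 5, 38, 47, 52, 50, 6]   # storage silo: SAN latency

r = pearson(host_swap_mbps, datastore_lat_ms)
if r > 0.8:
    print(f"correlation {r:.2f}: disk latency tracks swap activity; "
          "root cause is likely memory pressure, not the array")
```

A siloed storage console sees only the second series; a unified monitor that can compute this kind of relationship sees the cause.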
Management software in a virtualized data center will have
to gain visibility into all infrastructure areas, both virtual and physical,
and correlate the performance of each infrastructure area with the actual
performance of an application. The real competitive advantage for such a product will derive from its ability to communicate what the issues are, and how to fix them, in a fashion digestible to the specific audience noticing the problem. For example, a DBA whose slow database performance is being caused by a storage issue will not understand what the exact problem is within the storage array. However, arming this person with the knowledge that the issue originates in storage, and providing high-level information to relay to the storage engineer, who can then drill into the storage-level detail, will be a catalyst for solving issues across disciplines. Developing this clarity will require extensive domain knowledge and a great deal of usability design as virtualization projects become increasingly complex and mix previously siloed infrastructure even further.
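One way to picture that audience-specific messaging is the sketch below, which renders the same root-cause finding differently for a DBA and for a storage engineer. The alert structure and wording are assumptions for illustration, not a description of any shipping product:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    affected_app: str
    root_domain: str   # e.g. "storage", "network", "memory"
    detail: str        # low-level detail for the domain specialist

def render(finding: Finding, audience: str) -> str:
    """Tailor the same root-cause finding to the reader's discipline."""
    if audience == "dba":
        # High-level: enough to hand off the problem, no array internals.
        return (f"{finding.affected_app} is slow because of a "
                f"{finding.root_domain} issue; escalate to the "
                f"{finding.root_domain} team with this summary.")
    if audience == "storage_engineer":
        # Full detail for the specialist who can actually fix it.
        return f"{finding.affected_app}: {finding.detail}"
    return f"{finding.affected_app}: {finding.root_domain} issue detected."

f = Finding("orders-db", "storage", "LUN 12 queue depth saturated on array SP-B")
print(render(f, "dba"))
print(render(f, "storage_engineer"))
```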
###
About the Author
Alex Rosemblat is the Product Marketing Manager for Virtualization Management at Dell.
Alex joined Dell through the acquisition of VKernel, and has over eight years of experience with enterprise software and related technologies through IT consulting, product management and pre-sales engineering at Symantec and Epic Systems Corporation. He holds an undergraduate degree in Commerce, specializing in IT, from the University of Virginia and an MBA from the MIT Sloan School of Management.