Virtualization and Cloud executives share their predictions for 2016. Read them in this 8th Annual VMblog.com series exclusive.
Contributed by Brian Morin, Senior Vice President, Global Marketing, Condusiv Technologies
Resolution for the New Year: Use a Software Solution to Solve VM Performance Problems
The New Year offers an opportunity to throw out the old and bring in the new. If you've been having application performance problems in a virtual environment, then I'm guessing your "old" approach to boosting virtual machine (VM) performance has been based on spindles or flash: expensive hardware solutions.
But we now know that a hardware strategy simply masks performance problems without addressing their root cause. Hardware turns out to be a resource-waster, and here's why:
The single biggest killer of application performance in a virtual environment is small, fractured, random I/O. Processing this kind of I/O puts a huge strain on the system, because it takes far more I/O operations, and far more bandwidth between VM and storage, to move the same amount of data. The result is an I/O stream characterized by fragmentation rather than efficiency, and the most I/O-intensive applications suffer for it.
But when IT administrators respond to the resulting dampened performance by simply bringing in more expensive storage and server hardware, they spend more than they need to, because I/O inefficiencies at the Windows OS and hypervisor layers rob that hardware of optimal performance. While flash does process inefficient I/O profiles faster than disk, it's no cure for I/O inefficiency; much of the investment in flash gets squandered on unnecessary cycles.
If you're ready for a change in 2016, there's a much better solution: software. Hardware can process I/O, but it can't optimize it; I/O reduction software can, because it directly targets the small, fractured, random I/O that hurts performance so much. The same approach also makes a compelling case for server-side DRAM caching.
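To make the caching half of that idea concrete, here is a minimal, purely illustrative sketch, not how any vendor's product (Condusiv's included) is actually implemented: a small LRU cache held in server-side DRAM answers repeat reads before they ever reach the storage layer, so only misses generate real I/O. The names and sizes (DramReadCache, BLOCK_SIZE, CACHE_BLOCKS, read_from_storage) are assumptions chosen for the example.

```python
from collections import OrderedDict

BLOCK_SIZE = 4096      # assumed block size for this toy example
CACHE_BLOCKS = 1024    # roughly 4 MB of DRAM set aside for caching

class DramReadCache:
    """Tiny LRU read cache: hot blocks are served from memory,
    so only cache misses generate real I/O to storage."""

    def __init__(self, read_from_storage, capacity=CACHE_BLOCKS):
        self.read_from_storage = read_from_storage   # callable: block_id -> bytes
        self.capacity = capacity
        self.blocks = OrderedDict()                  # block_id -> bytes, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)        # mark as most recently used
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1                             # only misses reach the back end
        data = self.read_from_storage(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)          # evict the least recently used block
        return data

if __name__ == "__main__":
    # Stand-in for a slow disk read, so the example is self-contained.
    def fake_storage_read(block_id):
        return bytes(BLOCK_SIZE)

    cache = DramReadCache(fake_storage_read)
    for block_id in [1, 2, 3, 1, 2, 3, 1, 2, 3]:     # a small "hot" working set
        cache.read(block_id)
    print(f"storage reads avoided: {cache.hits}, issued: {cache.misses}")
```

Even this toy version shows the principle at work: repeat reads against a hot working set are absorbed in DRAM, and the storage layer only sees the first touch of each block.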
Let's put some numbers behind these claims. Research has shown that when virtualized organizations sequentialize the I/O stream from their VMs and serve the worst-offending I/O from server-side DRAM, they can cut I/O to storage by 50 percent and increase application performance by as much as 300 percent on existing hardware.
If you're ready to make this
performance-enhancing switch in the New Year, here are some benefits that you
can expect from today's I/O reduction software:
- Set-and-forget convenience. The transparent software is built from the ground up to operate with almost no overhead, since it draws only on resources that are already available.
- Optimizes writes and reads. The software increases I/O density and sequentializes writes, while also optimizing reads by using available server-side DRAM as the first caching tier in the infrastructure (a conceptual sketch of the write-coalescing side follows this list).
- Cuts latency times. DRAM can't match flash for capacity, but it is much quicker than a dedicated PCIe or SSD cache. As little as 4GB of available memory consistently cuts latency in half, because DRAM is so fast and the caching algorithms at the VM layer are application-aware. By absorbing small, random I/O, the DRAM tier also relieves I/O pressure on storage.
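As promised above, here is a conceptual sketch of the write side: buffering small, scattered writes in memory and flushing them as one larger, offset-ordered batch, so storage sees fewer and more sequential operations. This is only an illustration under assumed names (WriteCoalescer, write_to_storage, flush_threshold); real I/O reduction products do this work down at the Windows OS or hypervisor layer, not in application code.

```python
class WriteCoalescer:
    """Illustrative only: buffer small, random writes in memory and flush them
    as one larger batch, ordered by offset, so storage sees fewer, more
    sequential writes."""

    def __init__(self, write_to_storage, flush_threshold=64):
        self.write_to_storage = write_to_storage   # callable: (offset, data) -> None
        self.flush_threshold = flush_threshold     # flush after this many buffered offsets
        self.pending = {}                          # offset -> latest data for that offset

    def write(self, offset, data):
        self.pending[offset] = data                # a later write to the same offset replaces the earlier one
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Issue buffered writes in ascending offset order: fewer, larger,
        # more sequential operations reach the storage layer.
        for offset in sorted(self.pending):
            self.write_to_storage(offset, self.pending[offset])
        self.pending.clear()

if __name__ == "__main__":
    issued = []
    coalescer = WriteCoalescer(lambda off, data: issued.append(off), flush_threshold=4)
    for off in [7, 2, 9, 2]:                       # small, random writes; offset 2 is written twice
        coalescer.write(off, b"x")
    coalescer.flush()                              # force the final flush for the demo
    print(issued)                                  # [2, 7, 9]: ordered, duplicate offsets merged
```

The design choice to sort and merge before flushing is what turns a stream of tiny random writes into a handful of larger sequential ones, which is exactly the I/O profile that both disk and flash handle most efficiently.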
In short, unlike a hardware approach,
I/O reduction software makes certain that the most
performance-inhibiting I/O won't degrade your infrastructure. By
switching to software, you can protect your company's existing investment in
hardware without needing to pile on more spindles and flash. What's more, if you invest in a new storage system down the road, whether SSD or HDD, you'll be able to reap its maximum benefit. You can also finally rectify the performance bottlenecks you've grown accustomed to living with. Best of all, I/O reduction software eliminates the IT administrator's need to carve out and dedicate scarce DRAM for caching; it simply leverages whatever memory is available. Now that's something to celebrate in the New Year.
##
About the Author
Brian Morin is Senior Vice President,
Global Marketing, of Condusiv Technologies. Prior to Condusiv, Brian served in
leadership positions at Nexsan that touched all aspects of marketing, from
communications to demand generation, as well as product marketing and
go-to-market strategies with the channel. With 15+ years of marketing expertise, Brian has spent recent years at the forefront of revenue marketing models that leverage automation for data-driven decision making.