
Welcome to Virtualization and Beyond
Living Like It's 1999 or 2016?
By James Honey, Sr. Product Marketing Manager, SolarWinds
Flash Storage!
Have I got your attention?
Unless you've been living outside the solar system, you know flash storage has
been a hot topic of conversation in the IT world for a while now. A couple of
years ago when I was a product manager for enterprise solid state drives, I attended
regular meetings with various flash vendors who would show great stuff on their
roadmaps, ranging from more capacity and better performance to, of course, lower
costs. Today, we are seeing most of that come to fruition, with 15TB drives on
the horizon, sub-$1/GB costs, and more options for using flash in the data center.
Not only is this changing the face of storage in the data center, but it's also
changing roles and business decisions.
However, something is
missing, and that something is more conversation about best practices for
deploying flash devices.
To lay the groundwork, let's
take a quick walk down memory lane.
If you've been in IT for a
while, you probably remember the processor speed wars of the late 1990s and
early 2000s. Almost every day there was another processor with a higher clock
speed: 450 MHz, 500 MHz, 700 MHz and so on. Moore's law was in high
gear and CPU cycles were available everywhere. This led to server sprawl,
wasted resources and sloppy practices, because there was plenty of processing
power to go around.
Then virtualization came
onto the scene and consolidation of server resources started to happen. I
remember some of the earliest virtualization solutions I was a part of and the stir
they caused: "Wait, I can put, like, multiple applications on a single server and
they run separately from each other? Isn't that, like, mainframe?" It was an
exciting time to be sure, but organizations had to start adjusting how they
planned, deployed and managed their servers and applications. Server resources
(CPU and memory) were now shared, and best practices were needed to strike a
balance: enough resources to cover everything, but not so much excess that
capacity went to waste.
Looking at the current flash landscape, I see this same cycle repeating.
Right now, in most situations, just adding a flash device will generate an
incredible jump in performance for most data centers. Even without adjusting
settings, a flash device will usually deliver much better performance than a
hard drive-based solution. But what happens when you get questions like, "Adding the flash device has helped
decrease the run time of that report from 12 hours to 4, but how can we get it down
to 1?" or "This application is running great on that flash device; we need
the same kind of performance for this new application, so can we put it on
there, too?"
As the saying goes, "If you build it, they will come." Or in
this case, "If you have available performance, they will use it." Management
will want to make sure that any investment is maximized and optimized.
With all that in mind, let's cover a few best practices that
should always be a part of a flash device deployment:
- Understand ALL the bottlenecks: Storage has always been the end of the line when it comes to blame
for application performance problems, and most of the time, that is rightfully
so. However, as we all know, there are different levels of "bottlenecks" in the
data path. Having a clear, consistent picture of all the bottlenecks in your data path is needed to maximize any
flash investment.
- Understand the differences between "fresh out of the box" performance and "steady state" performance: As we all know, flash storage is fast, very fast. But how
consistent is that speed? As flash fills up with data, there is a performance
drop-off due to various processes going on at the drive level. Knowing this and
understanding where your array's thresholds are is key to long-term success (see
the sketch after this list for one rough way to probe for steady state).
- Understand your application needs: This may seem obvious, but all too often people fall into the trap
of simply answering "faster." Applications can have very different needs, and
clearly understanding those needs in your data center will help optimize how a
flash device is used.
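To make the "fresh out of the box" versus "steady state" distinction a little more concrete, here is a minimal sketch of one way to watch a drive's write performance settle over time. It is a simplified illustration, not an official SNIA-style preconditioning procedure: it assumes the open source fio benchmarking tool is installed, that /dev/sdX is a hypothetical scratch device whose contents you can safely destroy, and that the 60-second rounds and 10 percent tolerance are arbitrary values chosen for the example.

```python
#!/usr/bin/env python3
"""Rough steady-state check for a flash device using fio.

Runs repeated 4K random-write rounds against a scratch device and stops
once the last few rounds report write IOPS within a tolerance band, which
is a crude signal that "fresh out of the box" performance has given way
to steady-state performance.
"""
import json
import subprocess

DEVICE = "/dev/sdX"     # hypothetical scratch device -- all data on it is destroyed
ROUND_SECONDS = 60      # length of each measurement round (illustrative value)
WINDOW = 5              # number of consecutive rounds that must agree
TOLERANCE = 0.10        # rounds must stay within +/-10% of the window average
MAX_ROUNDS = 60         # give up eventually if the drive never settles


def run_round() -> float:
    """Run one timed 4K random-write round with fio and return write IOPS."""
    result = subprocess.run(
        ["fio", "--name=steady-state-probe", f"--filename={DEVICE}",
         "--rw=randwrite", "--bs=4k", "--iodepth=32", "--ioengine=libaio",
         "--direct=1", "--time_based", f"--runtime={ROUND_SECONDS}",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["jobs"][0]["write"]["iops"]


def main() -> None:
    history = []
    for round_no in range(1, MAX_ROUNDS + 1):
        iops = run_round()
        history.append(iops)
        print(f"round {round_no:2d}: {iops:10,.0f} write IOPS")
        recent = history[-WINDOW:]
        if len(recent) == WINDOW:
            average = sum(recent) / WINDOW
            if all(abs(x - average) / average <= TOLERANCE for x in recent):
                print(f"steady state reached near {average:,.0f} write IOPS")
                return
    print("drive never settled within tolerance; consider longer rounds")


if __name__ == "__main__":
    main()
```

Run against a brand-new drive, the early rounds will typically show the inflated fresh-out-of-the-box numbers; the point where the curve flattens is the figure worth using for planning, because that is the performance the device will deliver day in and day out.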
In summary, flash storage is a great technology that is changing
how businesses run and grow. However,
just putting it into your data center without taking a disciplined approach to
planning, deployment, management, and monitoring will create problems down the
road, or at the very least allow the problems you originally set out to solve to
reappear. You might not see them today, because flash performance can mask poor
planning, but believe me, eventually you will.
About the Author
James Honey is a senior product marketing manager
for hybrid IT performance management software provider SolarWinds. He has more than 15
years of experience in the IT industry, focused specifically on storage
technologies and virtualization solutions for environments ranging from SMBs to large enterprises.
His current role includes responsibility for all storage monitoring and
management-related product marketing initiatives, including SolarWinds
Storage Resource Monitor.