By Vinod Mohan, DataCore
Applications are among the most mission-critical elements in today's digital economy, and poor performance results in bad user experiences, leading to productivity and revenue losses for your business. More often than not, sluggish and unresponsive applications can be blamed on underpowered processors, network bottlenecks, or storage inadequacies. In this blog, we will focus on storage: the least understood and arguably the most critical piece of the application performance puzzle.
Some who dabble in storage will tell you flash solves all. But that is
only a fraction of the solution, and only addresses the endpoint.
Beyond Shiny New Objects
Flash, NVMe, and 3D XPoint technologies all have a role in shaping the performance envelope of storage infrastructures and ultimately the response and throughput of the applications running on top of them. Yet anyone who follows the industry recognizes them as transient components, soon to be replaced by even faster, cheaper media and more advanced methods of connectivity.
The four best practices presented below consider not only the endpoint where data rests, but the route it takes there and back, and how its value changes over time. We will discuss data placement, path redundancy, fault domains, and access speeds, and factor in cost tradeoffs, business continuity imperatives, and ongoing modernization challenges to help you keep your applications running at peak performance.
Best Practice #1: Match Storage to the Value of the Data
In the world of applications, data is king. The faster the data processing, the faster the applications run. With increasing I/O activity, even your most performant storage devices tend to become overloaded, consumed by competing demands from low-priority workloads and aging data. For most organizations, adding more expensive storage gear each time this happens is not a viable option. What if you could ensure that only the most valuable data, in other words the most actively used, lands on your premium storage, while infrequently used data is kept on secondary and tertiary storage? I/O demands on your primary storage would then drop, and so would capacity consumption, which in turn keeps the device running faster.
This is where data tiering comes to your aid. Tiering is the process of moving data between storage devices based on how frequently the data is accessed.
- If the data access temperature is high (hot data), keep that data on your fastest tier.
- When data access temperature cools, move that data to secondary storage so you can free up space on your primary storage.
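The hot/warm/cold placement decision can be sketched in a few lines of Python. The tier names and thresholds below are hypothetical, chosen only for illustration; a production tiering engine such as SANsymphony's learns placement from observed access patterns rather than fixed cutoffs:

```python
import time

# Hypothetical tiers, fastest to slowest; thresholds are illustrative only.
def pick_tier(last_access_ts: float, accesses_per_day: float, now: float) -> str:
    """Place hot data on the fastest tier, cold data on the cheapest one."""
    age_days = (now - last_access_ts) / 86400
    if accesses_per_day > 100 and age_days < 1:
        return "nvme"   # hot: frequent, recent access
    if accesses_per_day > 10 or age_days < 7:
        return "ssd"    # warm: still accessed occasionally
    return "hdd"        # cold: rarely touched, keep on low-cost capacity

now = time.time()
print(pick_tier(now - 3600, 500, now))        # accessed heavily in the last hour
print(pick_tier(now - 30 * 86400, 0.1, now))  # untouched for a month
```

Running the same classification periodically over all blocks, and moving any block whose tier changed, is essentially what an auto-tiering engine automates for you.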
DataCore SANsymphony is a software-defined storage (SDS) solution that leverages built-in machine learning to automatically detect data access patterns and determine when and where to place data. Without causing any disruption to the application accessing the data, SANsymphony automatically tiers (moves) hot data to primary storage and warm and cold data to secondary, lower-cost storage. This allows you to save space on your premium hardware, avoid I/O bottlenecks and capacity overload, and improve overall responsiveness. It also saves you money, as you do not have to keep throwing hardware at your capacity and performance problems. Give your valuable data the fast storage it deserves.
Learn more about auto-tiering using DataCore SANsymphony »
Best Practice #2: Ensure an Alternate Data Path for Uninterrupted Access
Storage technology upgrades occur quite frequently out of necessity. The most common is replacing slow storage hardware with modern, high-speed gear. Such storage refreshes typically require planned outages and prolonged periods of data migration.
Both have major repercussions on data access, either interrupting it or slowing it to the point of being unusable. The refresh requires disconnecting the old device from the application's data path and decommissioning it, integrating the new storage, and then pointing the application's host server at it. You also need to factor in the manual oversight of the lengthy data migration between the old and new devices.
To overcome this disruption, DataCore SANsymphony helps you create path redundancy and fault tolerance for your storage infrastructure by:
- Leveraging multi-path I/O techniques such as ALUA (Asymmetric Logical Unit Access) from the operating system or hypervisor
- Synchronously mirroring your data between suitable storage devices even from different manufacturers
- Automatically copying application data in real time to the mirrored device for redundancy
- Automatically failing over to the mirrored copy when it's time to change/refresh the old storage device
- Ensuring there is no data access disruption for the application during the process of hardware upgrade
- Transparently evacuating data from the old device and moving it to the new one (or balancing it across other storage in the pool) once the new and upgraded device is in place
- Automatically switching back the data path and pointing to the new storage device from the failover device
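The mirroring and failover behavior described above can be illustrated with a toy Python sketch. This is only a conceptual model, not DataCore's implementation: every write is sent to both copies before being acknowledged, and reads transparently fall back to the surviving copy when the preferred path goes down, which is what keeps applications online while an old array is pulled out of service:

```python
class Device:
    """Toy block device standing in for a real storage array."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.online = True

    def write(self, lba, data):
        if not self.online:
            raise IOError(f"{self.name} is offline")
        self.blocks[lba] = data

    def read(self, lba):
        if not self.online:
            raise IOError(f"{self.name} is offline")
        return self.blocks[lba]

class MirroredVolume:
    """Synchronous mirror sketch: writes go to every copy; reads try the
    preferred path first and fail over to the surviving copy on error."""
    def __init__(self, primary, mirror):
        self.devices = [primary, mirror]

    def write(self, lba, data):
        acked = 0
        for dev in self.devices:
            try:
                dev.write(lba, data)
                acked += 1
            except IOError:
                pass               # a copy is down; keep the volume writable
        if acked == 0:
            raise IOError("no surviving mirror copy")

    def read(self, lba):
        for dev in self.devices:   # preferred path first
            try:
                return dev.read(lba)
            except IOError:
                continue           # fail over transparently to the mirror
        raise IOError("all paths down")

vol = MirroredVolume(Device("old-array"), Device("new-array"))
vol.write(0, b"app data")
vol.devices[0].online = False      # take the old array out for replacement
print(vol.read(0))                 # still served, from the mirrored copy
```

The application above never sees the outage; it keeps reading the same logical volume while the hardware underneath it changes.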
Throughout this entire project, there is no application downtime. Injection of new technology and data migration happen in real time, without applications or users noticing any change at the back end. Modernize your datacenter while ensuring no performance degradation or downtime.
Access via alternate data path with synchronous mirror when upgrading a storage array
Learn more about synchronous mirroring using DataCore SANsymphony »
Best Practice #3: Leverage Parallelism for I/O Processing
The rise in popularity of big data analytics, IoT, and burgeoning digital business transactions in general places heavy demands on the storage backend. Regardless of the high-speed storage device being used, traditional serial I/O processing has its limitations. Even with multi-core processors, I/O operations typically happen in sequential fashion, one core at a time. This leaves compute workers waiting on I/O workers to process their data, and wastes CPU cycles because the cores are not put to use simultaneously. The bottleneck is magnified in a virtualized environment, where many apps/VMs run on the same host and wait to get their threads processed serially.
DataCore SANsymphony uses patented parallel I/O processing technology in which all available cores of a multi-core processor are leveraged at the same time to process I/O threads in parallel. Compute workers are serviced much quicker with much faster I/O processing (up to 5X faster than serial processing, as seen in our customer environments). This also reduces the cost and complexity of spreading the load over many servers/virtual hosts, since a single multi-core server can now process multiple I/O workers simultaneously.
This results in faster application response, money saved from not having to add hardware to increase throughput, and a higher workload consolidation ratio, allowing you to run more VMs per server and improve operational efficiency. Speed up data processing significantly with parallel I/O.
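The difference between serial and parallel I/O dispatch can be sketched with Python's standard thread pool. This is only an illustration of the concept, not SANsymphony's patented engine; `handle_io` is a hypothetical stand-in for a real storage operation:

```python
from concurrent.futures import ThreadPoolExecutor
import os

def handle_io(request_id: int) -> str:
    # Stand-in for an actual read/write against backend storage.
    return f"request {request_id} done"

# Serial processing: one request at a time, later requests wait in line.
serial_results = [handle_io(i) for i in range(8)]

# Parallel processing: one worker per core services requests concurrently.
with ThreadPoolExecutor(max_workers=os.cpu_count() or 4) as pool:
    parallel_results = list(pool.map(handle_io, range(8)))

# Same answers either way; the parallel path just stops making requests queue.
assert parallel_results == serial_results
```

The point of the sketch is that parallelism changes how the work is scheduled, not what the work produces: results are identical, but requests no longer serialize behind each other.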

Learn more about parallel I/O processing with DataCore SANsymphony »
Best Practice #4: Cache in RAM Where Possible
Well,
we know that throwing cash at the problem is not going to be a
practical solution in the long run. Let us try to throw cache at the
problem and see what happens.
Using flash is great; RAM is even faster. Employing RAM caching the right way, in addition to flash, adds to your performance gains. RAM cache is memory that acts as a buffer for disk I/O: volatile memory that holds data temporarily for faster read and write access, so the application does not have to wait for data to be written to the non-volatile disk and read back. When RAM is used for caching, it increases the speed of data access by orders of magnitude compared to spinning disks and even flash arrays.
When using DataCore SANsymphony (which sits in the data path between the application and backend block storage), you can leverage its DRAM as an L1 cache and increase the speed of I/O reads and writes.
RAM caching using DataCore SANsymphony
- SANsymphony uses write coalescing technology to reorder random writes held in cache and write them to disk in sequential stripes. This avoids waiting on the disk for every write operation and speeds up the overall writing process.
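Write coalescing can be illustrated with a small Python sketch: random writes are absorbed into a RAM buffer and acknowledged immediately, then destaged to disk in address order so the backend sees one sequential pass instead of scattered seeks. The `flush_to_disk` callback here is a hypothetical stand-in for the real backend, and this is a simplified model of the idea, not SANsymphony's actual code:

```python
class CoalescingWriteCache:
    """RAM write-back cache sketch: writes land in memory and a later flush
    destages them to disk sorted by block address."""
    def __init__(self, flush_to_disk):
        self.dirty = {}                # lba -> data; the latest write wins
        self.flush_to_disk = flush_to_disk

    def write(self, lba, data):
        self.dirty[lba] = data         # acknowledged from RAM, not from disk

    def flush(self):
        for lba in sorted(self.dirty):  # reorder into a sequential pass
            self.flush_to_disk(lba, self.dirty[lba])
        self.dirty.clear()

disk_log = []
cache = CoalescingWriteCache(lambda lba, data: disk_log.append(lba))
for lba in [42, 7, 99, 7, 13]:         # random write pattern from the app
    cache.write(lba, b"x")
cache.flush()
print(disk_log)                        # blocks hit disk in ascending order
```

Note that the two writes to block 7 collapse into one on-disk write: coalescing saves not just seeks but also redundant write operations, which is part of why write-back RAM caching pays off.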
DataCore SANsymphony supports up to 8TB of RAM per node for caching, which dramatically increases I/O processing and facilitates faster application response. Cache in on it and accelerate read and write processing.
Learn more about high speed caching with DataCore SANsymphony »
As these four best practices show, there is a lot you can do with your existing infrastructure to improve application performance. By optimizing your hardware resources and employing the right tools and techniques, you can ensure your applications run faster and users do not experience delays. Beyond these best practices, the DataCore SANsymphony software-defined storage solution offers many other capabilities to help you improve performance further. Whether it is balancing loads across devices or setting Quality of Service (QoS) limits on I/O traffic to regulate throughput, you can do a lot more with SANsymphony to turbocharge your applications. Take a test drive of SANsymphony in your environment today!
About the Author
Vinod Mohan is a Senior Product Marketing Manager at DataCore
Software. He has over a decade of experience in product, technology and
solution marketing of IT software and services spanning application
performance management, network, systems, virtualization, storage, IT
security and IT service management (ITSM). In his current capacity at
DataCore, Vinod focuses on communicating the value proposition of
software-defined storage to IT teams, helping them benefit from
infrastructure cost savings, storage efficiency, performance
acceleration, and ultimate flexibility for storing and managing data.
Prior
to DataCore, Vinod held product marketing positions at eG Innovations
and SolarWinds, focusing on IT performance monitoring solutions. An avid
technology enthusiast, he is a contributing author to many popular
sites including APMdigest, VMblog, Cyber Defense Magazine, Citrix Blog,
The Hacker News, NetworkDataPedia, IT Briefcase, IT Pro Portal, and
more.