Why High-Performance Storage Matters

A Contributed Article By Amir Sharif, Sr. Director of Technical Marketing, Violin Memory

Part I:  The High Cost of Squeezing Performance out of Legacy Storage

When deploying a new IT system, the four key questions are:

  • What are the goals and what value is to be created?
  • How do we achieve those goals?
  • What type of systems should be implemented to meet those ends?
  • Does the target system support the value that the initial goal demands?

For instance, your goal may be to reduce the total cost of ownership associated with the thousands of desktops in your enterprise.  You achieve that goal through centralized desktop management.  In the process, value is created by reducing the costs associated with the needed infrastructure and with the human management of that infrastructure.

The current system for achieving this goal is VDI.  The idea is that thousands of virtualized desktops would sit on many servers (versus thousands of laptop or desktop devices).  Centralized infrastructure and standardized, large-scale management tools would then deliver the benefits.  Servers are certainly powerful enough.  Networks are over-provisioned.  How does storage stack up as an essential IT food group?

When it comes to VDI, storage capacity is not the problem:  Disk capacities are getting progressively larger, while clever deduplication and compression schemes are slowing the need for additional capacity.  The real problem to be solved is storage performance for random IO.

Human interaction with desktops generates random IO.  Because of Web 2.0 applications, social networking, and cloud-oriented workloads, user-generated random IO is progressively increasing.  For VDI to deliver on its value, a user's experience with a virtual desktop needs to be on par with that of its physical laptop or desktop brethren.  Therein lies the rub:  Disks are supplying more capacity while user interactions are demanding more random IO performance.

As legacy storage technologies have it, to get additional performance, you add additional disks.  Each disk delivers some 100 - 250 IOPS, depending on the technology.  To get sufficient performance for a large-scale VDI installation - or any other IT system that generates lots of IOs, such as an OLTP system - lots of spinning disks have to be deployed to reach the target performance window.  As such, much more storage capacity is deployed than needed; and with that come additional costs for the hardware itself, software licenses, support contracts, data center floor space, power, cooling, and personnel.  The results are pictured below:

[Figure: capacity deployed versus capacity actually needed when disks are added for performance]

We could declare victory and say that we have met the goal, but what about the value we were trying to create?  To date, very few VDI implementations have been successful, either because the storage subsystem lacked sufficient performance or because the expense of getting sufficient storage performance killed the value of the project.
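To put rough numbers on the overdeployment described above, here is a minimal sketch comparing the disks needed for raw capacity with the disks needed to hit an IOPS target.  Only the 100 - 250 IOPS-per-disk range comes from the text; the per-disk capacity and the workload targets below are assumptions chosen for illustration.

```python
import math

# Assumed characteristics of one nearline SATA spindle (illustrative only;
# the 100 - 250 IOPS-per-disk range comes from the text, the rest is assumed).
DISK_IOPS = 150
DISK_CAPACITY_TB = 2.0

# Assumed workload: what the application actually needs.
TARGET_IOPS = 50_000
NEEDED_CAPACITY_TB = 50.0

disks_for_capacity = math.ceil(NEEDED_CAPACITY_TB / DISK_CAPACITY_TB)
disks_for_iops = math.ceil(TARGET_IOPS / DISK_IOPS)

deployed_tb = disks_for_iops * DISK_CAPACITY_TB
overdeployment = deployed_tb / NEEDED_CAPACITY_TB

print(f"Disks needed for capacity alone: {disks_for_capacity}")
print(f"Disks needed to reach {TARGET_IOPS:,} IOPS: {disks_for_iops}")
print(f"Capacity actually deployed: {deployed_tb:.0f} TB "
      f"({overdeployment:.0f}x what the data requires)")
```

Under these assumed figures, buying for performance deploys roughly an order of magnitude more capacity than the data itself requires, which is the waste pictured in the chart above.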

In the second part of this article, I will examine how an underperforming or overly expensive storage subsystem supporting a large VDI installation (or other business-critical applications) destroys real value.  The analysis is shocking, depressing, or, more likely, both.

Part II:  How Legacy Storage Destroys Value

Legacy storage vendors, wanting to prove value, lean on the $/GB metric to justify their price tag, stating, for example, that their storage system costs $0.06/GB (for large-capacity SATA drives).

As I pointed out in part one of this article, the key storage problem for high-performance enterprise applications is not capacity; rather, it is random IO.  To grasp the gravity of the situation, let's picture a large-scale, 5,000-seat VDI environment based on Microsoft Windows 7.

In a VDI setting, there are four distinct lifecycle periods during which the IOPS load needs to be characterized:

  • User logon
  • Application startup
  • Steady-state usage
  • User logoff

User logon is highly correlated with application startup, generating some 3,800 IOPS.  This heavy IO period happens at the start of the workday and is also known as the Boot Storm.  A capable large-scale VDI system must be able to handle the Boot Storm's IO needs within a reasonable timeframe.  In other words, its performance must be on par with that of a dedicated desktop environment, meaning that logon should take no more than 3 minutes.
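As a quick sanity check on the Boot Storm figure, the sketch below counts how many legacy spindles it takes just to absorb a sustained 3,800 IOPS, using the 100 - 250 IOPS-per-disk range cited in Part I; it says nothing yet about whether those spindles also meet the 3-minute logon target.

```python
import math

# How many legacy spindles it takes just to absorb the Boot Storm's ~3,800 IOPS,
# using the 100 - 250 IOPS-per-disk range cited in Part I.
BOOT_STORM_IOPS = 3_800

for disk_iops in (100, 250):
    disks = math.ceil(BOOT_STORM_IOPS / disk_iops)
    print(f"At {disk_iops} IOPS per disk: {disks} spindles just to keep up with the storm")
```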

The best way to assess our IOPS needs is to model human behavior and users' interactions with the VDI system.  To do this modeling correctly, we need to understand several flows and rates (a simple modeling sketch follows this list):
  • The flow of people logging in, and how that flow rate changes over a business day;
  • The flow of people logging out, and how that rate changes over the same period;
  • The flow and rate of IOs generated by user activity once users are active;
  • The flow of IOs processed by the storage subsystem; the rate at which this IO flow is processed is the storage system's IOPS performance.
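A minimal way to turn those flows into numbers is a small discrete-time model: synthesize a logon curve, shift it to get the logoff curve, accumulate the difference to get active users, and convert the result into IO demand.  Everything below - the curve shapes, the IOs per logon and logoff, and the per-user steady-state IOPS - is an assumption chosen for illustration, not data taken from the histogram that follows.

```python
import numpy as np

# Minimal model of the four flows over one business day, in 10-minute bins.
# All shapes and per-user rates here are assumptions for illustration.
BIN_MIN = 10
bins = np.arange(0, 24 * 60, BIN_MIN)            # minutes since midnight
SEATS = 5_000

def bell(center_min, width_min, total_users):
    """Assumed bell-shaped arrival curve spreading `total_users` around `center_min`."""
    curve = np.exp(-0.5 * ((bins - center_min) / width_min) ** 2)
    return total_users * curve / curve.sum()

logons  = bell(9 * 60 + 20, 70, SEATS)           # flow 1: logons, peaking ~9:20 am
logoffs = bell(9 * 60 + 20 + 510, 70, SEATS)     # flow 2: logoffs, ~8.5 hours later
active  = np.cumsum(logons) - np.cumsum(logoffs) # users on the system in each bin

# Flow 3: IOs generated by activity (assumed per-logon/logoff bursts + steady trickle).
IOS_PER_LOGON, IOS_PER_LOGOFF, STEADY_IOPS_PER_USER = 4_000, 1_000, 2
demand_ios = (logons * IOS_PER_LOGON
              + logoffs * IOS_PER_LOGOFF
              + active * STEADY_IOPS_PER_USER * BIN_MIN * 60)

# Flow 4 is the storage array draining this demand; its rate is the array's IOPS.
demand_iops = demand_ios / (BIN_MIN * 60)
peak = int(np.argmax(demand_iops))
peak_min = int(bins[peak])
print(f"Peak offered load ~{demand_iops[peak]:,.0f} IOPS "
      f"at {peak_min // 60:02d}:{peak_min % 60:02d}")
```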

The following histogram, generated from Active Directory logs of a large enterprise, depicts two flows.  The green curve shows the login pattern (right-hand scale) over a 24-hour period on a typical business day.  The blue curve (left-hand scale) captures the number of active users over the same period.  Although not depicted, there is a logoff curve that is similar in shape to the logon curve but shifted in time by approximately eight and a half hours.

[Figure: login rate (green, right-hand scale) and active users (blue, left-hand scale) over a 24-hour business day]

User logons occur throughout the 24-hour period, but they pick up in earnest slightly before 8:00 am and peak at around 9:20 am.  The maximum number of simultaneous logons is slightly less than 540 users in this 5,000-seat environment.  The number of active users on the same system climbs to a peak of over 4,400 users around 4:40 pm.

This human activity profile generates an IO profile composed of four distinct VDI activities:  user logon, application startup, steady-state usage, and user logoff.  The aggregate IO profile is depicted in the histogram below:

[Figure: aggregate IO profile over the business day]

Let us assume that we are going to build a VDI system using legacy storage technology, with SATA and SAS disks.  In the first case, let's build the system to capacity needs, allocating 50 GB of space per user in this 5,000-seat environment, for a total of 250 TB.  Our storage system profiles for SATA and SAS technologies would be as follows:

[Table: SATA and SAS array profiles when built to 250 TB of capacity]
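The actual profile table depends on specific products; as a stand-in, here is a minimal sketch of what building to capacity implies, using assumed per-disk capacity and IOPS figures rather than any vendor's published numbers (RAID and controller overheads are ignored):

```python
import math

NEEDED_TB = 250.0   # 5,000 seats x 50 GB, per the text

# Assumed per-disk characteristics (illustrative, not vendor specifications).
profiles = {
    "SATA": {"capacity_tb": 2.0, "iops": 150},
    "SAS":  {"capacity_tb": 0.6, "iops": 200},
}

for name, disk in profiles.items():
    disks = math.ceil(NEEDED_TB / disk["capacity_tb"])
    array_iops = disks * disk["iops"]
    print(f"{name}: {disks} disks to hold {NEEDED_TB:.0f} TB "
          f"-> roughly {array_iops:,} aggregate IOPS")
```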

Based on the disk systems' IOPS performance above, our known IO needs, and the simplistic assumption that there are no system timeouts, we can calculate the user wait time at any point in the day.  The simulation starts capturing IO at the system's trough (lowest-usage period), 6:50 am.  The result for the SATA array is as follows:

[Figure: simulated user wait time with the SATA array]

The SAS array, because of its better IOPS performance, generates a less abysmal picture:

[Figure: simulated user wait time with the SAS array]

In both cases, if the 5,000-seat VDI environment is built to capacity, the system will not function because of the storage subsystem's lack of performance:  Neither array can consume IOs fast enough to render the system functional.
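The mechanics behind that conclusion are simple queueing: in any interval where the offered IOs exceed what the array can service, the excess carries forward as a backlog, and the user-visible wait is roughly that backlog divided by the array's IOPS.  A minimal sketch with an assumed demand curve and assumed array sizes (not the SATA and SAS configurations modeled above):

```python
# Minimal backlog/wait-time model: hourly IO demand vs. array IOPS, no timeouts.
# The demand curve and both array sizes are assumed, illustrative values.

def demand_iops(hour):
    """Assumed offered load: quiet overnight, logon surge in the morning, busy daytime."""
    if 8 <= hour < 10:
        return 30_000    # morning logon surge
    if 10 <= hour < 18:
        return 20_000    # steady daytime load
    return 2_000         # overnight trickle

def peak_wait_minutes(array_iops):
    backlog_ios = 0.0
    worst_wait = 0.0
    for hour in range(24):
        offered  = demand_iops(hour) * 3600      # IOs offered during this hour
        serviced = array_iops * 3600             # IOs the array can absorb
        backlog_ios = max(0.0, backlog_ios + offered - serviced)
        worst_wait = max(worst_wait, backlog_ios / array_iops / 60)
    return worst_wait

for label, iops in [("undersized array", 12_000), ("larger array", 25_000)]:
    print(f"{label} ({iops:,} IOPS): peak wait ~{peak_wait_minutes(iops):.0f} minutes")
```

With the undersized array the backlog never drains during business hours, so the peak wait stretches into hours; the larger array clears it but still leaves users waiting well beyond a 3-minute logon target.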

Using the same storage arrays, if we were to increase the SATA capacity by 50X (effectively increasing the IOPS rate by 50X), we would have a system that could digest the IOs, but the peak wait time would still exceed 2 hours.  Increasing the SAS storage capacity by 10X would have our users waiting for approximately 1 hour at peak times.  To get desktop-like performance, where the peak wait time (at logon) is 3 minutes or less, we would have to deploy 140X excess capacity on the SATA array or 17X excess capacity on the SAS array.  This brings us back to the chart in Part I of this article with a more complete picture:

[Figure: excess capacity required for SATA and SAS arrays to reach desktop-like performance]

We started out trying to solve a desktop-sprawl problem in an enterprise and chose VDI as the solution.  In the process, we discovered that we need quite a bit of excess capacity - waste - to get the required performance.  We also learned that if we skimp on performance, we render the workforce idle through long system wait times, creating workforce productivity waste.  As such, legacy storage gives us a Hobbesian choice between CapEx waste (through excess disk capacity, disk sprawl, and excessive power usage) and OpEx waste (through robbing the workforce of its productivity).
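The CapEx half of that choice can be expressed as an effective price per usable gigabyte: if performance forces the purchase of 140X (SATA) or 17X (SAS) more capacity than the data needs, the advertised $/GB is multiplied by the same factor.  In the sketch below, the $0.06/GB SATA figure and both multipliers come from the text above; the SAS $/GB is an assumption.

```python
# Effective cost per *usable* GB once capacity is bought purely to reach an IOPS target.
# The SATA price and both multipliers come from the article; the SAS price is assumed.
arrays = {
    "SATA": {"advertised_per_gb": 0.06, "overprovision": 140},
    "SAS":  {"advertised_per_gb": 0.45, "overprovision": 17},   # $/GB assumed
}

for name, a in arrays.items():
    effective = a["advertised_per_gb"] * a["overprovision"]
    print(f"{name}: ${a['advertised_per_gb']:.2f}/GB advertised x "
          f"{a['overprovision']}x overprovisioning = ${effective:.2f} per usable GB")
```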

Put simply, legacy storage destroys value, especially for high-performance enterprise applications like VDI, OLTP systems, or real-time reporting.  Flash-based storage arrays redefine the storage value curve in a way that legacy storage systems cannot.  In the next installment of this article, I will examine how flash-based memory arrays provide the right high-performance solution for enterprise-grade business critical applications.

Part III:  High Performance Memory Arrays and Value to the Enterprise

In the previous two sections of this article, I discussed the high cost of squeezing performance out of legacy storage for high-performance enterprise applications, and how legacy storage destroys value through high costs or poor performance. 

I also mentioned that flash-based arrays redefine the storage value curve in a way that legacy arrays cannot.  These storage arrays behave like extensions of server DRAM, enabling enterprises to run mission-critical applications in real time while avoiding the latencies and costs associated with legacy storage.  This creates real value for enterprises.

When a legacy storage vendor sells product at an advertised price of $0.06/GB while only a fraction of that capacity is needed, it is using the same trick rental car companies use when they presell a full tank of gas at a "discount."  The rental car company is betting that you will return the car with at least a quarter of a tank.  The result is premium gas prices for the consumer and higher profits for the rental car company.

There is no value in paying for unused storage capacity; the value lies in the price of capacity within a target performance envelope.  And there is something peculiar about performance:  Unless prohibited by legislation or limited by physical law, better performance is never too much.  Agile, competitive enterprises always find ways to exploit a performance advantage for an edge in the marketplace.

With up to 1,000,000 IOPS out of a 3U chassis, flash arrays make any high-performance enterprise application, like VDI or OLTP systems, hum and stay highly responsive.  While these systems may be more expensive than what low-end applications, such as backups, demand, they are actually far more economical when performance can create a competitive advantage for an enterprise and deliver economic value.  Business-critical applications become more responsive, and the waste incurred through huge storage array costs or unduly long wait times is eliminated.
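To put the 1,000,000-IOPS figure in perspective, the sketch below estimates how many legacy spindles it would take to match it, using the 100 - 250 IOPS-per-disk range from Part I; the disks-per-rack-unit density is an assumption.

```python
import math

FLASH_IOPS = 1_000_000        # "up to 1,000,000 IOPS out of a 3U chassis"
FLASH_RACK_UNITS = 3
DISKS_PER_RACK_UNIT = 6       # assumed shelf density, illustrative only

for disk_iops in (100, 250):  # per-disk range cited in Part I
    disks = math.ceil(FLASH_IOPS / disk_iops)
    rack_units = math.ceil(disks / DISKS_PER_RACK_UNIT)
    print(f"At {disk_iops} IOPS/disk: {disks:,} spindles (~{rack_units:,}U of shelves) "
          f"to match one {FLASH_RACK_UNITS}U flash array")
```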

It is clear that legacy storage systems destroy value when it comes to high-performance applications.  Flash memory arrays deliver the right performance at the right capacity, while saving significant dollars for the enterprise.  In-memory computing is disrupting the old boys' storage club.  It is creating value by giving companies a high-performance competitive edge.

###

Published Monday, April 15, 2013 7:03 AM by David Marshall