Q&A: Tegile Systems Explains the Benefits of Hybrid Storage Arrays

I've had a number of opportunities to speak with Rob Commins over the years, usually at trade shows where he is proudly showing off Tegile Systems' feature-rich storage arrays.  And each time I speak with Rob, I walk away having learned something new in the storage world.

This interview was no exception.  In this Q&A discussion, Rob Commins, vice president of marketing at Tegile Systems, talks about hybrid storage technologies, the difference between hybrid storage and an all-flash memory storage array, caching and tiering benefits, and more.

VMblog: What features are the most important when evaluating a Hybrid Storage vendor?

Rob Commins: There are big differences in how vendors implement what is being called hybrid storage technologies.  Here are a few key questions to consider when stacking up alternatives:
  • Is the vendor using less reliable consumer-grade flash drives?
  • Are data reduction technologies in-line or post-process?
  • Can I access the array with both block and file interfaces?
  • Which data management features, such as snapshots, replication, and thin provisioning, are included?

VMblog: What are the main differences between a new cache-optimized hybrid array and a traditional array with flash added to it?

Commins: Cache-optimized designs offer significant capabilities that traditional arrays simply can't deliver.  The most significant is the ability to run data reduction technologies such as deduplication and compression in-line, before data is cached.  This has a dramatic effect on the efficiency of the array's cache.

Let's say dedupe and compression are delivering a 5:1 reduction ratio.  With in-line reduction, the effective size of 2TB of physical cache is actually 10TB!  That means that five times more data can be cached and cache hit ratios will get a huge boost.  Traditional arrays with post-process data reduction only get capacity efficiencies in the hard disk pools.  This is great, but grossly misses the mark of a truly cache-optimized hybrid array.
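
To make that arithmetic concrete, here is a minimal Python sketch of the effective-cache calculation Commins describes; the 2 TB cache size and 5:1 reduction ratio are his figures, and the script itself is purely illustrative.

```python
# Effective cache capacity with in-line data reduction (illustrative arithmetic).
physical_cache_tb = 2.0   # raw DRAM/SSD cache in the array (Commins' example)
reduction_ratio = 5.0     # combined dedupe + compression ratio, i.e. 5:1

# In-line reduction shrinks data *before* it lands in cache, so the cache
# effectively holds reduction_ratio times more user data.
effective_cache_tb = physical_cache_tb * reduction_ratio
print(f"Effective cache: {effective_cache_tb:.0f} TB")   # -> 10 TB

# Post-process reduction caches data at full size first, so the cache still
# behaves like its raw 2 TB; only the hard disk pools see the savings.
```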

VMblog: What are the acquisition and operational cost differences of hybrid storage and traditional storage?

Commins: The cost advantages of a cache optimized hybrid array are huge and can be realized instantly.  When users are buying up to 80% less capacity, there are immediate acquisition cost implications that are easy to measure.  You don't need an ROI calculator to ascertain the benefits of buying 20TB versus 100TB of capacity. 

Operationally, our customers see similar cost reductions.  One of our customers sent me a note that he was amazed to replace 115U of legacy storage with 12U of Tegile's gear.  He was seeing the same performance and capacity while reducing space/power/cooling by 89%.  Amazing.
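
Both figures are easy to sanity-check; the short Python sketch below simply reproduces the arithmetic behind the 20TB-versus-100TB purchase and the 115U-to-12U consolidation from the anecdote.

```python
# Acquisition savings: buying 20 TB of effective capacity instead of 100 TB.
raw_tb, hybrid_tb = 100, 20
print(f"Capacity reduction: {1 - hybrid_tb / raw_tb:.0%}")        # -> 80%

# Footprint savings from the customer anecdote: 115U of legacy storage
# replaced by 12U of hybrid gear.
legacy_u, hybrid_u = 115, 12
print(f"Rack-space reduction: {1 - hybrid_u / legacy_u:.1%}")     # -> 89.6%
```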

VMblog: What is the benefit of implementing a Hybrid Solution rather than an all-flash memory storage array?

Commins: There are places where an all-flash array fits better than hybrid, but typically, hybrid arrays fit the bill at a much lower cost and with a considerably easier-to-manage footprint.  The key question between all-flash and hybrid is "how often can I withstand a latency hit above 1-2 ms?"  If you are running a massive OLTP database for a credit card company, an all-flash array makes sense.  If you have a medium-sized enterprise running databases, consolidating VMs and looking to deploy VDI, there is no comparison: hybrid is the way to go.

I mentioned manageability.  With an all-flash array, you still need a place to put capacity-centric data - typically unstructured files.  That means buying a separate array that is optimized for $/GB versus $/IOP.  With a hybrid array, you have a single well-balanced system that can resolve the age-old imbalance between capacity and performance.  You don't have to play Tetris with your data.

VMblog: Why are caching and tiering capabilities important?

Commins: Some people think caching and tiering are the same.  They're not.  In a cache-optimized hybrid array, caching is a real-time, in-line process that keeps hot data in DRAM and SSD and lets cool data migrate down to spinning disk.  It is always happening.  Tiering is very different: at a fixed time interval (usually once or twice a day), the array's data management software sweeps for what I like to call IO density - "Where are the hot regions of data to move to fast media, and where is the cold data to move to slower HDD?"  With the agility of business applications and new use cases like VDI, users can't wait 12 or 24 hours for their data to end up on the appropriate media.  Caching is a far more agile and efficient means to deliver performance and efficiency.
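
The behavioral difference shows up clearly in a toy model.  The Python sketch below contrasts a cache that promotes a block the moment it turns hot with a tiering sweep that only re-evaluates placement at a fixed interval; it is a simplified illustration, not a model of Tegile's implementation, and the class names and parameters are invented for the example.

```python
from collections import OrderedDict

class InlineCache:
    """Real-time caching: every access can promote a block to fast media."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.fast = OrderedDict()             # block_id -> True, LRU ordered

    def access(self, block_id):
        if block_id in self.fast:
            self.fast.move_to_end(block_id)   # refresh recency immediately
            return "hit (DRAM/SSD)"
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)     # evict the coldest block to disk
        self.fast[block_id] = True            # promote on this very access
        return "miss (HDD), promoted now"

class PeriodicTiering:
    """Tiering: placement changes only when the scheduled sweep runs."""
    def __init__(self, capacity_blocks, sweep_interval):
        self.capacity = capacity_blocks
        self.sweep_interval = sweep_interval  # stand-in for "once or twice a day"
        self.counts = {}                      # block_id -> accesses since last sweep
        self.fast = set()
        self.ios = 0

    def access(self, block_id):
        self.ios += 1
        self.counts[block_id] = self.counts.get(block_id, 0) + 1
        result = "hit (SSD tier)" if block_id in self.fast else "miss (HDD tier)"
        if self.ios % self.sweep_interval == 0:
            hottest = sorted(self.counts, key=self.counts.get, reverse=True)
            self.fast = set(hottest[:self.capacity])   # hot regions up, cold down
            self.counts.clear()
        return result

# A newly hot block is cached immediately by the cache, but stays on HDD
# until the next sweep under tiering.
cache, tier = InlineCache(2), PeriodicTiering(2, sweep_interval=10)
for _ in range(3):
    print(cache.access("vdi-boot-block"), "|", tier.access("vdi-boot-block"))
```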

VMblog: How does separating the metadata from the primary data path help me?

Commins: Metadata (data about the data) is a really big deal in storage systems.  Metadata is used to manage extremely important functions such as RAID, snapshot pointers, and deduplication tables.  This metadata is typically interleaved with user data to keep some locality between the two.  There is an inherent problem with that: metadata IO needs to be extremely fast in a cache-optimized array, and if metadata is sitting on spinning disk, it will inherently be accessed slowly.  An array that separates metadata and stores it only on fast media such as DRAM and SSD (well protected, of course) can run all of its metadata operations at extremely fast speeds, making every application run faster.
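
As a toy illustration of why that matters, the sketch below keeps a deduplication fingerprint table (metadata) in an in-memory dict while block payloads sit in a separate "disk" store; the table is consulted on every write, so whichever medium holds it sets the pace.  The class and structure are hypothetical, purely for illustration, and not a description of Tegile's design.

```python
import hashlib

class MetadataAcceleratedStore:
    """Toy store: dedupe fingerprints live in fast memory, payloads on 'disk'."""
    def __init__(self):
        self.dedupe_table = {}   # fingerprint -> block address (DRAM/SSD in a real array)
        self.disk = {}           # block address -> payload (slow HDD pool)
        self.next_addr = 0

    def write(self, payload: bytes) -> int:
        # The metadata lookup happens on *every* write; if this table sat on
        # spinning disk, each write would pay an HDD seek just to learn
        # whether the block is a duplicate.
        fp = hashlib.sha256(payload).hexdigest()
        if fp in self.dedupe_table:
            return self.dedupe_table[fp]      # duplicate: no data IO at all
        addr = self.next_addr
        self.next_addr += 1
        self.disk[addr] = payload             # only new data touches slow media
        self.dedupe_table[fp] = addr
        return addr

store = MetadataAcceleratedStore()
a = store.write(b"block A")
b = store.write(b"block A")   # duplicate resolved entirely from fast metadata
print(a == b)                 # -> True
```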

##

Once again, special thanks to Rob Commins for taking time out to speak with VMblog.

About Rob Commins

Rob Commins has been instrumental in the success of some of the storage industry's most interesting companies over the past twenty years. As Vice President of Marketing at Tegile, he leads the company's marketing strategy, go-to-market and demand generation activities, as well as competitive analysis. Rob comes to Tegile from HP/3PAR, where he led the product marketing team through several product launches and 3X customer growth over three quarters. Rob also managed much of the functional marketing and operations integration after Hewlett Packard acquired 3PAR. At Pillar Data Systems, he was at the forefront of converged NAS/SAN storage systems and application-aware QoS in mid-range storage. Rob is also a veteran of StorageWay, one of the first storage services providers that launched cloud services.

Published Monday, December 09, 2013 6:32 AM by David Marshall