The More Storage Changes, The More it Stays the Same

Article Written by Jai Menon, Chief Scientist at Cloudistics

The evolution of technology makes a fascinating study. At times we see solutions evolve into something quite unrecognizable from their original form, while others seem to come almost full circle. In the storage world, and the block storage architecture world in particular, the SAN falls into the latter category.

That's right, the SAN is making a comeback, and to understand why, we need to look at the block storage evolution and within that, what it is that made the SAN great to begin with, why it fell out of favor, and what has changed to make it viable once more.

Back in the Day

The period 1994 to 2012, when Fibre Channel (FC) was introduced and became popular, was the heyday of SANs. What made the SAN superior to Direct Attached Storage (DAS) was better storage utilization, higher reliability, high-performance storage sharing, and simplified storage management via a common set of storage functions available to all the servers in a datacenter. In addition, SANs allowed for the independent scaling of storage and compute resources, making it possible to run a wide variety of workloads with widely different storage-to-compute requirements.

On the downside, SANs were expensive, often proprietary, and their latencies were too high for fast SSD storage. Thus virtual SAN (VSAN) storage software emerged to mitigate these shortcomings. This scale-out software runs on commodity x86 application servers with DAS storage, and provides SAN storage functionality - creating a virtual SAN on top of an existing server-to-server network.

Next came Hyperconverged Infrastructure Systems (HCI): tightly coupled compute and storage nodes that use VSAN software to achieve SAN-equivalent functionality without the SAN. Storage functions, plus optional capabilities like backup, recovery, replication, de-duplication and compression, are delivered via software in the same compute nodes where applications run, while high availability is provided by data replication or erasure coding.

VSANs and HCI systems, however, proved to have disadvantages. These include limited scalability; limited ability to scale storage and compute independently as needed by modern workloads; inefficiency in their approach to achieving high availability (replication and erasure coding) as compared to RAID; and heavy dependence on good data locality for performance.
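
To put the efficiency point in concrete terms, here is a minimal sketch comparing the usable fraction of raw capacity under three common layouts. The specific layouts (3-way replication, a 4+2 erasure code, an 8+2 RAID-6 array) are illustrative assumptions; actual products vary.

    # Illustrative only: usable capacity fraction = data units / total units stored
    def usable_fraction(data_units, redundancy_units):
        return data_units / (data_units + redundancy_units)

    print("3-way replication:", usable_fraction(1, 2))   # ~0.33 of raw capacity usable
    print("4+2 erasure code: ", usable_fraction(4, 2))   # ~0.67
    print("8+2 RAID-6:       ", usable_fraction(8, 2))   # 0.80

The exact numbers depend on the configurations a given product supports, but the general pattern (wide replication consuming two-thirds of raw capacity, while parity-based RAID returns most of it) is what the inefficiency criticism refers to.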

Thus by 2016 new block storage architectures began emerging to address these shortcomings. What surprised many people, though not me, is that they are returning to a SAN-based architecture that disaggregates storage and compute. However, unlike the old SAN controllers that were expensive thanks to proprietary hardware and networks, the new generation uses standard servers and networks and is comparable in cost to VSAN/HCI solutions. In addition, as I'll illustrate next, they use new approaches to overcome the performance issues that plagued the original SAN architecture.

Modern Storage Architectures

Two issues have traditionally limited SAN storage performance:

1) The high overhead of the existing FC and iSCSI SAN networks; and

2) The limited compute and memory inside centralized SAN storage controllers, which cap the iops and bandwidth they can deliver.

To surmount these performance issues, several new approaches are being taken to reduce the high SAN overhead and to overcome centralized storage controller bottlenecks.

Reducing high SAN overhead

Traditional SAN networks used the SCSI protocol over Fibre Channel (FC) transport. This is being superseded by NVMe, a protocol originally designed for local use over a computer's PCIe bus and built for fast media like SSDs, which offers far lower CPU overhead, lower latency, more parallelism, and significantly higher performance.

Next came NVMe over Fabrics (NVMeoF), the new gold standard, which enables the use of alternative transports that extend the distance over which a host and a storage drive or subsystem can connect. This new protocol preserves all the high-performance, low-latency benefits of NVMe, and it works with transports like RDMA-capable Ethernet (iWARP or RoCE), InfiniBand (IB) and FC. The goal of NVMeoF is to bring the latency to a remote storage device within 10 microseconds of the latency to a locally attached NVMe storage device.
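
As a rough back-of-the-envelope sketch of what that target means in practice, consider the comparison below. The only figure taken from the text is the roughly 10 microsecond NVMeoF fabric goal; the device and iSCSI numbers are assumptions for illustration, not measurements.

    # Illustrative latency model (all values in microseconds)
    LOCAL_NVME_READ_US = 80    # assumed latency of a local NVMe SSD read
    NVMEOF_FABRIC_US   = 10    # NVMeoF design goal: within ~10 us of local access
    ISCSI_STACK_US     = 100   # assumed added overhead of a software iSCSI path

    print("Local NVMe read:   ", LOCAL_NVME_READ_US, "us")
    print("NVMeoF remote read:", LOCAL_NVME_READ_US + NVMEOF_FABRIC_US, "us")
    print("iSCSI remote read: ", LOCAL_NVME_READ_US + ISCSI_STACK_US, "us")

Even with these rough assumptions, the point stands: the fabric penalty shrinks from being the dominant cost to a small fraction of the device latency.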

As a result of these developments, new SAN networks can have very low overheads, eliminating the first issue we identified with traditional SANs: the high overhead of the existing FC and iSCSI SAN networks.

Overcoming centralized storage controller bottlenecks

The second performance-limiting SAN issue, namely limited compute and memory, is being addressed by emerging storage architectures using a variety of techniques.

One such technique is to scale out storage controllers - adding controllers when the iops and bandwidth required by applications exceed the capability (with acceptable latency) of a single storage controller. These multiple storage controllers are then federated either at the storage controller level (e.g. EMC XtremIO) or in host software (e.g. Cloudistics), so that they all appear as one large namespace. Federation in host software is superior to federation in the controller, because the storage controllers stay simpler: they don't need to know about each other.
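
A minimal sketch of what host-side federation might look like: each volume is deterministically mapped to one of several independent controllers, so the host sees a single namespace. The class, names and mapping policy below are hypothetical illustrations, not any vendor's actual design.

    # Hypothetical host-side federation layer: routes each volume to one of
    # several independent storage controllers so they appear as one namespace.
    import hashlib

    class FederatedNamespace:
        def __init__(self, controllers):
            self.controllers = controllers   # list of controller endpoints

        def controller_for(self, volume_id):
            # Deterministic placement; the controllers never talk to each other.
            h = int(hashlib.md5(volume_id.encode()).hexdigest(), 16)
            return self.controllers[h % len(self.controllers)]

    ns = FederatedNamespace(["ctrl-a", "ctrl-b", "ctrl-c"])
    print(ns.controller_for("vm42-boot"))    # host directs this volume's I/O here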

A second approach is to perform only a very limited set of functions in the storage controller, such as RAID and basic read/write, and to move all higher-level storage functions, like compression, de-duplication and encryption, either into the host servers (e.g. Datrium) or into the application (e.g. E8).
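
The division of labor might be sketched roughly as follows. This is purely illustrative: the function names, the dedup index and the controller's raid_write call are assumptions, not any vendor's actual code.

    # Hypothetical write path: the host does the data-reduction work,
    # the thin controller handles only RAID placement and the raw write.
    import hashlib, zlib

    def host_write(data: bytes, dedup_index: dict, controller):
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint in dedup_index:                  # de-duplication in the host
            return dedup_index[fingerprint]
        compressed = zlib.compress(data)                # compression in the host
        location = controller.raid_write(compressed)    # controller: RAID + basic write only
        dedup_index[fingerprint] = location
        return location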

A third approach is to do caching in the compute nodes, using a small number of direct-attached flash drives in each host. Read-only caching (e.g. Datrium, Hedvig, Cloudistics) or read/write caching may be used. This reduces the number of requests that need to be handled by the centralized storage controllers and alleviates bottlenecks.
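
A bare-bones sketch of read-only caching on a host's local flash is shown below; it is illustrative only, and real products manage eviction, consistency and persistence far more carefully.

    # Hypothetical read-through cache: local flash absorbs repeat reads,
    # so only misses travel over the SAN to the central controller.
    class LocalFlashReadCache:
        def __init__(self, controller):
            self.controller = controller
            self.cache = {}                        # stand-in for blocks on local flash

        def read(self, block_id):
            if block_id in self.cache:             # cache hit: served locally
                return self.cache[block_id]
            data = self.controller.read(block_id)  # miss: fetch from central controller
            self.cache[block_id] = data
            return data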

A fourth approach is to use more powerful storage controllers. For example, instead of processors with 10 cores, modestly more expensive processors with 20 cores may be used. Cloudistics uses this approach to greatly improve performance without significantly raising the overall cost of the customer solution, since that cost is dominated by the storage media, particularly in all-flash environments.
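
A quick, hypothetical calculation illustrates why this works. The 80/20 cost split and the 50% controller-upgrade premium below are assumptions for illustration, not vendor pricing.

    # Assume media dominates system cost, e.g. 80% media / 20% controller.
    media_cost, controller_cost = 80.0, 20.0
    total = media_cost + controller_cost

    # Assume doubling controller cores adds ~50% to the controller's cost.
    upgraded_total = media_cost + controller_cost * 1.5
    print("Overall cost increase: %.0f%%" % ((upgraded_total / total - 1) * 100))  # ~10%

A roughly 10% increase in system cost can thus buy a large jump in controller compute.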

Finally, some vendors (e.g. Vexata) use an approach in which data-intensive functions such as compression, encryption and de-duplication are performed in specialized hardware. However, this approach risks leading to a proprietary solution that won't fully ride the commodity server price curve.

So we see that modern storage architectures use SANs based on NVMeoF, and employ one or more of the aforementioned approaches to build very high-performance centralized storage controllers.

The SAN of the Future

In hindsight, it's apparent that traditional SANs fell from grace as a result of their higher cost and performance overhead, allowing VSANs and HCI systems using DAS to take their place. These too had limitations, however, including limited scalability, limited ability to independently scale storage and compute, inefficient high availability, and locality-dependent performance.

By 2016, as the deficiencies of VSAN and HCI architectures became apparent, newer block storage architectures began emerging, reflecting a return to a SAN-based architecture that disaggregates storage and compute.

These new SANs use NVMeoF for extremely low latency, they overcome storage controller bottlenecks using one or more of the five techniques I described, and they optimize end-to-end performance all the way from the application in a VM to the storage. Finally, they are built using standard servers, so they are cost-competitive.

This, ladies and gentlemen, is the future of storage architecture - the new SAN. 

##

About the Author

Dr. Jai Menon, Chief Scientist, IBM Fellow Emeritus

Jai is the Chief Scientist at Cloudistics, which he joined after having served as CTO for multi-billion dollar Systems businesses (Servers, Storage, Networking) at both IBM and Dell. Jai was an IBM Fellow, IBM's highest technical honor, and one of the early pioneers who helped create the technology behind what is now a $20B RAID industry. He impacted every significant IBM RAID product between 1990 & 2010, and he co-invented one of the earliest RAID-6 codes in the industry called EVENODD. He was also the leader of the team that created the industry's first, and still the most successful, storage virtualization product. When he left IBM, Jai was Chief Technology Officer for Systems Group, responsible for guiding 15,000 developers. In 2012, he joined Dell as VP and CTO for Dell Enterprise Solutions Group. In 2013, he became Head of Research and Chief Research Officer for Dell.

Jai holds 53 patents, has published 82 papers, and is a contributing author to three books on database and storage systems. He is an IEEE Fellow and an IBM Master Inventor, a Distinguished Alumnus of both Indian Institute of Technology, Madras and Ohio State University, and a recipient of the IEEE Wallace McDowell Award and the IEEE Reynold B. Johnson Information Systems Award. He serves on several university, customer and company advisory boards.
