Virtualization Technology News and Information
Why You Can't Build a Cloud with Fibre Channel


A Contributed Article by Kevin Brown, CEO of Coraid

As enterprises plan a move to private cloud architectures, storage design is a critical consideration for cost, performance, and manageability. Fibre Channel has been the enterprise storage architecture of choice since the mainframe era, and the familiar old vendors are confidently selling it as the ideal platform for cloud projects -- no surprise. However, with storage costs today exceeding 40% of many IT budgets and no slowdown of data growth in sight, many customers are looking critically at this platform choice. Increasingly it's becoming obvious that Fibre Channel and Fibre Channel over Ethernet (FCoE) are a poor fit for the modern data center.

Take a look at cloud computing giants like Google and Amazon in the era of Big Data and you won't find cloud stacks built on legacy storage technology. These players developed their own cloud-scale "operating systems" over the last decade to aggregate massive amounts of commodity hardware into elastic compute and storage farms. The cloud players rejected Fibre Channel and "rolled their own" storage systems based on a number of key assumptions:

1) Cloud is Scale Out

Traditional enterprise storage arrays use "scale up" designs, with proprietary storage controllers driving daisy-chained shelves of drives. As deployments grow, the processors and disk connectivity become performance bottlenecks, forcing forklift upgrades to handle growing capacity. In contrast, cloud architectures utilize massively parallel "scale out" architectures with off-the-shelf hardware and virtualization to deliver maximum scalability and elasticity. No forklift upgrades are required as data volumes grow -- capacity is added just in time and performance scales linearly.
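
The contrast can be illustrated with a toy throughput model. This is purely illustrative -- the function names and numbers below are assumptions for the sketch, not vendor benchmarks:

```python
# Toy model contrasting scale-up vs. scale-out storage throughput.
# All figures are illustrative assumptions, not measurements.

def scale_up_throughput(shelves, per_shelf_mbps, controller_cap_mbps):
    """Scale-up: disk shelves sit behind one controller pair, so
    aggregate throughput is capped by the controller, no matter how
    many shelves are daisy-chained on."""
    return min(shelves * per_shelf_mbps, controller_cap_mbps)

def scale_out_throughput(nodes, per_node_mbps):
    """Scale-out: each node brings its own controller and network
    ports, so throughput grows linearly with capacity."""
    return nodes * per_node_mbps

# With a hypothetical 2,000 MB/s controller cap, the scale-up design
# flatlines after four shelves, while scale-out keeps climbing.
for n in (2, 4, 8, 16):
    up = scale_up_throughput(n, 500, 2000)
    out = scale_out_throughput(n, 500)
    print(f"{n:2d} units: scale-up {up:5d} MB/s, scale-out {out:5d} MB/s")
```

The flat line after the controller saturates is what forces the "forklift upgrade": the only way to get more headroom in a scale-up design is to replace the controller itself.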

2) Cloud is Dynamic

Legacy Fibre Channel storage networks are static, with rigid data connections between every server, switch, and storage device. This level of complexity was acceptable when companies ran an 8-port SAN, but data growth is pushing many companies to an 80-port or 800-port SAN. Whenever storage is added or reconfigured, storage administrators are forced to manage multiple layers of complexity, including multi-pathing, port bonding, switch zoning, controller load balancing, and array management across multiple tiers for different workloads. That's not a cloud, and it's not even remotely elastic. Cloud applications are mobile and fluid, with the relationships between applications, servers, and storage in constant change. Cloud storage needs to be dynamic by default.
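
One way to see why the jump from 8 to 800 ports hurts: under common single-initiator/single-target zoning practice, the number of zones to maintain grows with the product of initiators and targets. The port split below is a hypothetical example, not data from any particular deployment:

```python
# Illustrative count of single-initiator/single-target zones an
# administrator must maintain as a Fibre Channel SAN grows.
# The multiplicative growth is the point; port counts are examples.

def zone_count(initiators, targets):
    """One zone per (initiator, target) pair under single-initiator,
    single-target zoning."""
    return initiators * targets

for ports in (8, 80, 800):
    servers = arrays = ports // 2  # assume an even port split
    print(f"{ports:3d}-port SAN: up to {zone_count(servers, arrays):,} zones")
```

An 8-port fabric tops out at 16 zones; an 800-port fabric can require six figures' worth, which is why every add or reconfigure becomes a project rather than an operation.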

3) Cloud is Efficient

Lastly, cloud business models work because they aggressively lower IT operating expenses. To do this, they demand cost-efficient technologies that are simple to deploy and operate. The acquisition cost of Fibre Channel storage systems is often 10 times that of commodity systems, and the complexity of managing them fundamentally affects operating cost and agility. In contrast, cloud architectures assemble inexpensive off-the-shelf Ethernet switches and arrays in ways that minimize the operating costs of configuration and replacement.

The server industry has already completed the shift to scale-out architectures, but enterprise storage is still stuck in the mainframe networking era. As customers scramble to keep up with data growth of 50-70 percent per year, it's apparent that storage has become the largest single impediment to achieving virtualization and cloud benefits. 

The winning vendors in the cloud era must deliver commercialized versions of what the large cloud players built from scratch over the last decade. New Ethernet SAN designs are one of the most promising areas of innovation. Ethernet SAN architectures use massively parallel Layer 2 Ethernet networking and off-the-shelf array hardware to deliver a scale out, dynamic, and efficient architecture. By eliminating the complexity and cost of mainframe era designs, Ethernet SAN combines the simplicity of direct attached storage with the benefits of shared storage. This approach enables a single elastic tier of storage to support a wide variety of shifting workloads.
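
Coraid's Ethernet SAN products, for example, are built on the open ATA-over-Ethernet (AoE) protocol, which runs directly at Layer 2 with no TCP/IP stack between server and disk. Part of the simplicity argument is how small the protocol is: the sketch below builds a broadcast config-query frame following the published AoE header layout (the MAC address and tag value are arbitrary examples; this is an illustrative sketch, not Coraid's implementation):

```python
import struct

AOE_ETHERTYPE = 0x88A2    # registered EtherType for ATA over Ethernet
AOE_VERSION = 1
CMD_QUERY_CONFIG = 1      # "Query Config Information" command

def aoe_query_config_frame(src_mac: bytes, tag: int = 0) -> bytes:
    """Build a broadcast AoE config-query frame (Ethernet + AoE header).

    AoE addresses targets with a 2-byte shelf ("major") and 1-byte
    slot ("minor") number; 0xFFFF/0xFF broadcasts the query to every
    target on the Layer 2 segment.
    """
    eth = struct.pack("!6s6sH", b"\xff" * 6, src_mac, AOE_ETHERTYPE)
    aoe = struct.pack(
        "!BBHBBI",
        AOE_VERSION << 4,   # protocol version in high nibble, flags in low
        0,                  # error code (unused in requests)
        0xFFFF,             # major address: all shelves
        0xFF,               # minor address: all slots
        CMD_QUERY_CONFIG,
        tag,                # echoed back by targets to match responses
    )
    return eth + aoe

frame = aoe_query_config_frame(b"\x02\x00\x00\x00\x00\x01")
```

The entire discovery request fits in a 24-byte header: no fabric login, no zoning change, no multipath reconfiguration. Contrast that with the FLOGI/PLOGI/zoning sequence a Fibre Channel fabric requires before a new device can even be seen.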

As more companies transition their storage networks to cloud architectures, they need to question their assumptions. If they want all the benefits of cloud, upgrading from Fibre Channel could be the key. 


About the Author 

Kevin Brown - CEO of Coraid. Kevin is an accomplished entrepreneur and executive, with experience in the networking, storage, security, and virtualization sectors. Most recently, he served as President and CEO of Kidaro, a desktop virtualization software vendor, where he led the team to a leadership position in this emerging segment. Kidaro was acquired by Microsoft in May 2008 and incorporated as a key element of Windows virtualization.

Prior to joining Kidaro, Kevin served as a vice president on the original executive team of storage security vendor Decru, where he led worldwide marketing, business development, and product management. Following the acquisition of Decru by Network Appliance, Kevin stayed on as a vice president and helped the company achieve #1 market share and global adoption across the financial services, healthcare, telecommunications, manufacturing, and government sectors. As one of the leading experts on storage security, he served as an advisor to the U.S. Congress, Federal Trade Commission, and U.S.
Department of Defense. Previously, Kevin was a key member of the founding team of Inktomi, a pioneering infrastructure software firm, where he served in a number of executive positions, including Vice President and General Manager of Inktomi's networking business. Kevin is a Fellow at UC Berkeley's Haas School of Business, Lester Center for Entrepreneurship, and earned his Bachelor's and MBA degrees at UC Berkeley, where he served as MBA class president.

Published Tuesday, June 28, 2011 5:00 AM by David Marshall
BReams - June 29, 2011 11:04 AM

Interesting blog post.

However, I think you are not fairly representing the facts regarding Fibre Channel fabrics.  As I work for Brocade, you would expect me to have a perspective on this topic :-)

1. Modern arrays use FC switching internally, not arbitrated loop (daisy chaining), and have for some time.

2. 80- or 800-port SANs are not examples of large-scale deployment in 2011, though they may have been in 2001. You should contact more customers to get a current perspective on scaling with Fibre Channel.

3. Storage requires configuration, and some vendor implementations provide this automatically. This is an array configuration requirement, not a storage network requirement. Fibre Channel (the transport for SCSI and FICON) is quite simple. Security typically requires zoning, but adding a new switch does not require changing zoning, and multi-pathing is automatic via trunks. It's simple and efficient. Servers and storage arrays that support multipath IO require configuration, but the Fibre Channel SAN itself is not modified for multipath IO. Concluding that a Fibre Channel SAN is inflexible because end devices require configuration is a bit disingenuous. The Fibre Channel network isn't what needs reconfiguration.

4. Mainframes use FICON, which leverages Fibre Channel. UNIX, Linux, and x86 servers use SCSI, which leverages Fibre Channel. One transport handling many OS and server architectures would seem "flexible" to me. What is inflexible about that to you?

Thanks for blogging on the topic

dharr - (Author's Link) - July 12, 2011 6:41 PM

Kevin, we selected Coraid to establish a 'private cloud' for Splunk at Equinix.  We use your products to build this elastic model and rapidly scale out the platform.  Good news for the mid market - thank you!
