Virtualization Technology News and Information
Windows Server 2012 – Reshaping the Virtual Storage World

A Contributed Article by Lawrence Garvin, Head Geek, SolarWinds, Virtualization & Storage Management

One of the notable financial burdens of operating a Storage Area Network (SAN) is the expense of the Host Bus Adapters (HBAs) needed to allow compute nodes to communicate with the SAN and access storage resources. A typical cost for an HBA is around $1500, and most systems need two of them for redundancy.

However, with the advent of Windows Server 2012 (WS2012) and its new Server Message Block (SMB) v3 file services, you can now make the storage in your SAN available to compute resources on the network without the expense of installing HBAs in every system: simply place a WS2012 file server cluster in front of the SAN.

In addition, by eliminating HBAs from every compute node, you will also significantly reduce the number of switch ports needed in the fabric, potentially eliminating the need to acquire additional switches altogether.
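To make the savings concrete, here is a minimal Python sketch of the cost comparison. Only the roughly $1500 HBA figure comes from the article; the per-port switch cost and the node counts are illustrative assumptions.

```python
# Illustrative cost comparison: HBAs in every compute node versus a
# WS2012 file server cluster front-ending the SAN. Only the ~$1500
# HBA price comes from the article; the switch-port cost is assumed.

HBA_COST = 1500        # per 16Gbit/sec Fibre Channel HBA (article figure)
HBAS_PER_NODE = 2      # redundant pair per attached node
FC_SWITCH_PORT = 800   # assumed cost per Fibre Channel fabric port

def direct_attach_cost(compute_nodes: int) -> int:
    """Every compute node carries its own redundant HBAs and fabric ports."""
    return compute_nodes * HBAS_PER_NODE * (HBA_COST + FC_SWITCH_PORT)

def front_end_cost(file_server_nodes: int) -> int:
    """Only the file server cluster nodes attach directly to the SAN."""
    return file_server_nodes * HBAS_PER_NODE * (HBA_COST + FC_SWITCH_PORT)

if __name__ == "__main__":
    print(f"50 direct-attached nodes: ${direct_attach_cost(50):,}")
    print(f"Two-node file server front end: ${front_end_cost(2):,}")
```

Even with generous assumptions, the direct-attach cost grows linearly with every compute node, while the front-end cost grows only with the (much smaller) file server cluster.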

Better Data Throughput for Less

At the basic level, the WS2012 file server nodes are each equipped with a pair of 16Gbit/sec Fibre Channel HBAs, just like any other compute node. For more throughput, you can install additional HBAs in the file services cluster nodes, increasing throughput from the 32Gbit/sec available with a single pair of HBAs to 64Gbit/sec with four.

However, because only the WS2012 file services cluster nodes connect directly to the SAN, you also have many additional options for increasing throughput beyond what Fibre Channel offers. You can implement iSCSI interfaces instead of Fibre Channel, or install InfiniBand HBAs.


Historically, iSCSI has not found its place in large-scale implementations, perhaps because Ethernet was nowhere near as fast as Fibre Channel. iSCSI over a Gigabit Ethernet (GbE) fabric is great for a few servers accessing a small SAN, but it just doesn't have the juice for more than that.

However, with the advent of 10GbE, that's no longer the case. You can team multiple 10GbE adapters in a file server node and provide data throughput rivaling that of Fibre Channel for a fraction of the cost. Four 10GbE adapters cost about two-thirds as much as a pair of 16Gbit/sec Fibre Channel adapters.


Alternatively, if you want to go the super-fast route, you can install InfiniBand HBAs in the file servers. A pair of 40Gbit/sec InfiniBand HBAs costs less than a single 16Gbit/sec Fibre Channel HBA, and a 56Gbit/sec InfiniBand adapter is cost-comparable to the Fibre Channel adapter, but it gets you into the hundred-gigabit throughput range on each file services cluster node.
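The bandwidth-per-dollar trade-off among these interconnect options can be sketched as follows. The adapter prices are rough assumptions inferred from the relative comparisons above (and assume FDR InfiniBand at 56Gbit/sec), not vendor quotes.

```python
# Rough bandwidth-per-dollar comparison of the interconnect options
# discussed above. Adapter prices are illustrative assumptions derived
# from the article's relative comparisons, not vendor pricing.

def totals(count: int, gbit_each: int, cost_each: int) -> tuple[int, int]:
    """Aggregate bandwidth (Gbit/sec) and cost for identical adapters."""
    return count * gbit_each, count * cost_each

options = {
    "2x 16Gb Fibre Channel":  (2, 16, 1500),
    "4x 10GbE teamed":        (4, 10, 500),
    "2x 56Gb FDR InfiniBand": (2, 56, 1500),
}

for name, spec in options.items():
    gbit, cost = totals(*spec)
    print(f"{name}: {gbit} Gbit/sec for ${cost:,} "
          f"({gbit / cost * 1000:.1f} Gbit/sec per $1,000)")
```

Under these assumptions, both teamed 10GbE and InfiniBand deliver noticeably more bandwidth per dollar than the Fibre Channel baseline.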

Compute Nodes

On the compute-node-to-file-server link, you can team gigabit adapters to provide multi-gigabit/sec throughput to each compute node, or even install a 10Gbit/sec Ethernet adapter and dedicate some or all of that bandwidth for file services.

For virtualization hosts, you also have 40Gbit/sec Ethernet adapters in the same price range as Fibre Channel, and 100Gbit/sec Ethernet is on the horizon. The advantage here is not only the cost savings over Fibre Channel adapters, but also the freedom to use less expensive Ethernet switches instead of Fibre Channel switches.

All of these technologies will benefit compute node connectivity to the file servers, as well as present additional options for connectivity from the file services cluster nodes to the SAN.


Another advantage of using a file services cluster to front-end the SAN is that the file services cluster can scale horizontally with ease and minimal additional expense. For the cost of one additional connection to the SAN, file services can be provided to dozens, perhaps hundreds, of end-users via SMB v3.0.

In addition, there are no major initial implementation costs. In fact, the methodology could be implemented with a single standalone file server! From a practical perspective, however, a two-node cluster would be the place to start for a production network.

The WS2012 file services cluster can scale to eight nodes, providing virtually unlimited SMB v3-based file access capacity for compute nodes, including virtualization hosts and the dozens of virtual machines that will run on those hosts.
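The scaling behavior can be sketched numerically. The per-node throughput figure below is an assumption (a pair of 16Gbit/sec Fibre Channel HBAs per node); the eight-node cluster limit comes from the article.

```python
# Sketch of horizontal scaling: each added file server node brings its
# own SAN connection and SMB v3 serving capacity. The per-node figure
# (a pair of 16Gbit/sec FC HBAs) is an assumption; the eight-node
# cluster limit is from the article.

PER_NODE_GBIT = 32
MAX_CLUSTER_NODES = 8

def cluster_throughput(nodes: int) -> int:
    """Aggregate SAN-facing bandwidth of the file services cluster."""
    if not 1 <= nodes <= MAX_CLUSTER_NODES:
        raise ValueError("WS2012 file services clusters support 1 to 8 nodes")
    return nodes * PER_NODE_GBIT

for n in (1, 2, 4, 8):
    print(f"{n} node(s): {cluster_throughput(n)} Gbit/sec aggregate")
```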


While the title "Reshaping the Virtual Storage World" may be a bit of hyperbole, there are certainly elements of truth to it. As memory and compute capabilities have gotten cheaper, storage IOPS have increasingly become a limiting factor for virtual environments. This is especially true as companies have just started to consider moving high IOPS applications like databases off of physical servers and onto virtual ones. This ability to significantly increase storage throughput at a relatively low cost with a software asset truly could start to reshape the virtual storage world.


About the Author

Lawrence Garvin is Head Geek and technical product marketing manager at SolarWinds, a Microsoft Certified IT Professional (MCITP), and an eight-time consecutive recipient of the Microsoft MVP award in recognition for his contributions to the Microsoft TechNet WSUS forum. He has been working with Microsoft Windows Server Update Services (WSUS) and Software Update Services (SUS) since the release of SUS SP1 in 2003, and update management, generally, since the creation of Windows Update in 1998. Prior to joining EminentWare (now part of SolarWinds) in 2009, Lawrence offered Windows Server Update Services expertise, including deployment, implementation, and troubleshooting advice to companies worldwide as Principal/CTO of Onsite Technology Solutions.

Published Tuesday, June 11, 2013 6:29 AM by David Marshall