Virtualization Technology News and Information
Data Center Virtualization: The role of 10 Gigabit Ethernet Fabrics

What do virtualization executives think about 2009?  A Series Exclusive.

Contributed by Joseph Ammirato, Vice-President of Marketing for Woven Systems

Server virtualization is a fast-growing data center trend, boosted by the growing compute capacity of x86 server architectures and the proliferation of hardware and software solutions leveraging this power. Server virtualization enables data center operators to maximize physical server resource utilization by packing multiple virtual machines (VM) into a single physical x86 server.  While server virtualization permits multiple operating systems and applications to run simultaneously on the same physical server, it also increases the network-bound traffic demand from the server. Combined with the need to consolidate data centers and the resulting large scale-out designs, this is driving demand for wirespeed, ultra-low-latency, scalable Data Center Ethernet networks.

Server virtualization allows IT administrators to build on-demand, robust server and application platforms where applications and VMs can be moved across data centers, on the fly, to maximize resource utilization and grow application resources as demanded by end-users.  10 Gigabit Ethernet (GE) fabrics are ideally suited for this confluence of server virtualization and scale-out: they support hundreds of wirespeed 10 GE ports to aggregate thousands of servers as one Layer-2 switching fabric (domain), enabling unconstrained VM movement and VM-based network virtualization at wirespeed with ultra-low latency.

10 Gigabit Ethernet Fabric Network Design

A true 10 GE fabric is based on a Clos topology (named after Charles Clos), also known as a “fat-tree”.  This fabric solution should support hundreds of wirespeed 10 GE ports to aggregate thousands of servers as one Layer-2 domain or switching fabric.  Unlike conventional multi-tier Enterprise Ethernet network solutions, which use oversubscribed 1 GE or 10 GE links to interconnect switches, the fat-tree topology relies on a tree structure where link capacity increases as the tree approaches the root or core of the fabric. The fat-tree topology guarantees a nonblocking 10 GE switch fabric. The following diagram illustrates the fat-tree topology in canonical form:

The 10 GE fabric uses Layer-2 multi-path technology where all link capacity is used simultaneously to construct the fat-tree topology.  High-capacity nonblocking 10 GE switching nodes are used to construct the fat-tree.  This architecture permits full cross-sectional bandwidth through the 10 GE fabric while expanding the number of 10 GE ports well beyond that available from conventional switching solutions.
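To make the scale-out math concrete, here is a small sketch (an illustration, not taken from Woven's materials) using the standard k-ary fat-tree construction, in which identical k-port switches arranged in three stages yield k³/4 nonblocking host ports:

```python
def fat_tree_capacity(k):
    """Host and switch counts for a k-ary fat-tree built from k-port switches."""
    assert k % 2 == 0, "fat-tree radix must be even"
    return {
        "pods": k,                                 # groups of edge + aggregation switches
        "edge_switches": k * (k // 2),             # k/2 edge switches per pod
        "aggregation_switches": k * (k // 2),      # k/2 aggregation switches per pod
        "core_switches": (k // 2) ** 2,            # root of the fat-tree
        "hosts": k ** 3 // 4,                      # nonblocking wirespeed host ports
    }

# A fabric of 48-port switches could aggregate 27,648 servers
# in one Layer-2 domain at full cross-sectional bandwidth:
print(fat_tree_capacity(48)["hosts"])  # -> 27648
```

This shows why a fabric of modest fixed-radix switches can aggregate "thousands of servers": capacity grows with the cube of the switch port count.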

Layer-2 multi-path, multi-chassis technology in the 10 GE fabric distributes traffic across multiple paths with a dynamic rebalancing scheme based on real-time measurement of average one-way latency plus jitter on all available active paths.  This assures optimum fabric bandwidth efficiency while guaranteeing ultra-low latency and ultra-low jitter for all traffic flows.  Conventional solutions react to congestion or unbalanced traffic only after it has occurred: their large, costly buffer memories absorb the congestion, causing high latency and jitter.  In contrast, the 10 GE fabric detects rising latency (a proxy for emerging congestion) on a traffic flow’s current path and switches the traffic to an alternative, lower-latency path, thereby avoiding congestion, maintaining balanced traffic, and maximizing network efficiency.
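As a hedged illustration of this idea (the function name and hysteresis margin are hypothetical, not Woven's actual algorithm), a latency-driven rebalancing rule might keep a flow on its current path unless another path is measurably better, to avoid flapping:

```python
def pick_path(paths, current, latency, hysteresis=0.2):
    """Hypothetical rebalancing rule: move a flow off its current path only
    when another active path's measured one-way latency (plus jitter) is
    lower by more than a hysteresis margin, which prevents oscillation."""
    best = min(paths, key=lambda p: latency[p])
    if latency[best] < latency[current] * (1 - hysteresis):
        return best      # proactively move the flow before congestion builds
    return current       # current path is still healthy; stay put

# Measured per-path latency in microseconds (illustrative values):
latency = {"A": 2.0, "B": 9.0, "C": 2.1}
print(pick_path(["A", "B", "C"], "B", latency))  # -> A (flow escapes the congested path)
print(pick_path(["A", "B", "C"], "C", latency))  # -> C (A is not better by enough to move)
```

The key design choice the article describes is acting on a leading indicator (rising latency) rather than a trailing one (buffer occupancy after congestion has set in).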

VM-based network virtualization

VM deployments raise a new set of challenges for data centers.  Each physical server can host multiple VM, and each of these VM may belong to a separate application division, so the network needs to be separated, or virtualized, based on the VM rather than the physical server.  The network ports attached to the physical servers cannot serve as the sole identity for network policies or network virtualization.  Each VM, attached through a single physical Ethernet I/O, needs to be identified to the network by its network policy and network virtualization assignments.  In this way, for example, during network congestion the physical server ports do not have to be paused as a whole; only the specific flows linked to specific VM are paused.  The network can then apply traffic back pressure to specific VM flows instead of all flows through the physical port of the server.
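A minimal sketch of per-VM flow identification, assuming hypothetical names (this is not a real switch API): flows are keyed by the VM's virtual MAC address rather than the physical port, so backpressure can target one VM without pausing the whole server:

```python
class VMFlowTable:
    """Hypothetical per-VM flow table keyed by virtual MAC, not physical port."""

    def __init__(self):
        self.policy = {}       # VM MAC -> forwarding-policy tag
        self.paused = set()    # VM MACs currently back-pressured

    def register(self, vm_mac, policy):
        self.policy[vm_mac] = policy

    def pause(self, vm_mac):
        """Back-pressure only this VM's flows, not the whole physical port."""
        self.paused.add(vm_mac)

    def admit(self, frame_src_mac):
        """Forwarding decision based on the source VM's identity and state."""
        return frame_src_mac in self.policy and frame_src_mac not in self.paused

table = VMFlowTable()
table.register("02:00:00:00:00:01", "web-tier")
table.register("02:00:00:00:00:02", "db-tier")
table.pause("02:00:00:00:00:02")          # congestion: throttle just this VM

print(table.admit("02:00:00:00:00:01"))   # -> True  (unaffected VM keeps flowing)
print(table.admit("02:00:00:00:00:02"))   # -> False (only the congested VM is held)
```

Both VM share one physical Ethernet I/O, yet the pause applies to a single VM's flows, which is the behavior the paragraph above describes.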

Thus it is important for 10 Gigabit Ethernet fabrics to enable data center networks to track traffic flows. The flow tracking information includes the necessary data to identify each VM, and this capability is the linchpin of the fabric’s VM-based network virtualization.  The VM identity and state information is fundamental to coupling a VM cluster (a related set of VM) with the fabric and network virtualization.  Independent of the different server virtualization solutions available, the 10 GE fabric can track VM without the need for additional proprietary tagging.   The 10 GE fabric tracks the state information of the VM, and makes the forwarding decision for each VM based on the state and policies given to each individual VM.   Additionally, with its large Layer-2 domain, the fabric allows VM infrastructure to be expanded to thousands of physical servers with unconstrained VM movement over the 10 GE fabric.

Let’s conclude by exploring how a 10 GE fabric segregates and virtualizes by VM clusters.  The 10 GE fabric’s partitioning combined with VM state and networking policy enables the fabric resources to be virtualized with assigned capacity.  Multiple logical fabrics can be constructed using a single physical fabric.  The fabric allows multiple partitions based on the flow information.  Each VM can be identified using the VM state and policy data, and the fabric forwards or restricts traffic for each VM based on the defined forwarding group and policies for the associated VM.  


Two VM clusters could be built using a single physical fabric where each VM cluster is associated with its own virtual partition of the fabric; each VM cluster’s traffic is isolated, but all VM clusters share the resources of a single physical fabric.  VM network traffic for each cluster will be constrained to its own domain, and inter-cluster traffic will be routed through external Data Center core routers.
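The two-cluster arrangement can be sketched as follows (VM and cluster names are hypothetical): intra-cluster frames stay inside the cluster's fabric partition, while inter-cluster traffic is handed off to the external core routers, exactly as described above.

```python
# Two VM clusters sharing one physical fabric (illustrative names only).
cluster_of = {
    "vm-a1": "cluster-A", "vm-a2": "cluster-A",
    "vm-b1": "cluster-B", "vm-b2": "cluster-B",
}

def forward(src_vm, dst_vm):
    """Keep intra-cluster traffic in the partition; route the rest externally."""
    if cluster_of[src_vm] == cluster_of[dst_vm]:
        return "switch within fabric partition"
    return "hand off to data center core router"

print(forward("vm-a1", "vm-a2"))  # -> switch within fabric partition
print(forward("vm-a1", "vm-b1"))  # -> hand off to data center core router
```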

About the Author

Joseph Ammirato is Vice-President of Marketing for Woven Systems.

Published Tuesday, December 16, 2008 1:06 PM by David Marshall