What do Virtualization and Cloud executives think about 2012? Find out in this VMblog.com series exclusive.
Data Center Networking in 2012 - Top 10 Predictions
Contributed article by Vikram Mehta, Vice President, System Networking, IBM
The data center network is undergoing a major transformation to support server virtualization and cloud computing, the convergence of data and storage traffic, growing application-to-application traffic, and new high-performance applications. To address these needs, the data center network will become smarter and faster in 2012. Here are 10 reasons why:
1. Network Intelligence Will Be Driven by Application Requirements - Some vendors say intelligence must be in the core of the network, while others contend that intelligence is best at the edge. However, both are self-centered views. The reality is that intelligence in the network should serve the needs of applications. Today's big data, cloud and optimized workloads require intelligence everywhere - at the core and edge. An "intelligence" debate that does not acknowledge applications is a big mistake.
2. Virtualization Will Continue to Drive Utilization - With so much capital being invested in IT infrastructures, the question of maximizing utilization is top of mind. In the airline industry they call this "yield management." In the hospitality business, they call it "occupancy rates." Virtualized networks will help to ensure that clients can virtualize their server infrastructure while maximizing virtual machine security, high availability, and mobility.
3. Interoperability Will Rule - Modern data center networking is best accomplished when it is standards-based, when multiple vendors' equipment can coexist and interoperate, and when clients can choose among vendors' wares without paying a price penalty or needing to rip and replace in order to meet growth needs and implement next-generation approaches. Standards-based Ethernet will remain essential for smarter networks.
4. Networks Will Migrate to 10/40G Speeds - To meet the performance needs of big data, cloud computing and workload-optimized systems, data centers will increasingly implement 10 Gigabit Ethernet on the server and in the access and aggregation layers, which will drive deployments of 40 Gigabit Ethernet in aggregation networks, with 100 Gigabit Ethernet on the horizon.
5. The Race to Zero Latency Will Continue - Applications such as high-frequency trading require the lowest possible latency. The race to zero latency will continue, fueled by ultra-low-latency switches that deliver deterministic, fair latency at the same connection speed across every port combination (a quick serialization-delay calculation following this list shows how port speed sets the latency floor).
6. Networks Will Scale and Converge - Linear scale is a requisite of smarter network design, and it can be achieved with non-blocking, non-oversubscribed topologies implemented using top-of-rack switches (a simple oversubscription calculation follows this list). To meet the needs of machine-to-machine applications and converged data and storage networking, Ethernet networks must be lossless. Network equipment that supports the Data Center Bridging (DCB) standards ensures high-performance, lossless operation for IP SANs such as iSCSI and for Fibre Channel over Ethernet (FCoE).
7. Networks Will Get Flat - Clos and fat-tree network designs will become increasingly prevalent for the flow-based, non-blocking, shortest-path network fabrics required in highly virtualized and cloud data centers and for converged data and storage traffic. With standards such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB) still on the horizon, and other flat-network alternatives requiring proprietary implementations, network architects will favor existing approaches to meshed networking, such as Virtual Link Aggregation (vLAG), to maximize network efficiency.
8. Users Will Gain Control - The emerging OpenFlow specification will enable network infrastructure providers to deliver open virtual networking systems that are easy for users to control, that optimize performance dynamically, and that minimize complexity (a toy illustration of OpenFlow's match/action model follows this list).
9. Management Will Become More Unified - Network devices should be managed, configured and provisioned as if they were a single logical device, including the ability to track virtual machines by switch or IP address and to pre-provision network characteristics for VMs.
10. Lifecycle Costs Will Decrease - And finally, switch cost per port will continue to fall so that networks can scale incrementally, the need for expensive chassis switches will be minimized, and lower power draw and cooling efficiencies will reduce energy costs, particularly for massive networks that interconnect thousands of server and storage systems.
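The sketch below, referenced in prediction 5, puts the 10/40/100 Gigabit speed ramp in concrete terms by computing how long a single 1,500-byte Ethernet frame takes to serialize onto the wire at several link speeds. The frame size is an assumption chosen for illustration, and preamble and inter-frame gap are ignored, so real wire times are slightly higher.

```python
# Back-of-the-envelope serialization delay for a single Ethernet frame.
# Assumes a 1,500-byte frame; preamble and inter-frame gap are ignored,
# so real wire times are slightly higher than shown here.

FRAME_BYTES = 1500

def serialization_delay_us(link_gbps: float, frame_bytes: int = FRAME_BYTES) -> float:
    """Time, in microseconds, to clock one frame onto a link of the given speed."""
    bits = frame_bytes * 8
    return bits / (link_gbps * 1e9) * 1e6

for speed in (1, 10, 40, 100):
    print(f"{speed:>3} GbE: {serialization_delay_us(speed):6.2f} us per {FRAME_BYTES}-byte frame")

# Approximate results:
#   1 GbE:  12.00 us    10 GbE: 1.20 us    40 GbE: 0.30 us    100 GbE: 0.12 us
```

Higher port speeds shrink the serialization component of latency by an order of magnitude or more, which is why the 10/40G migration and the race to zero latency go hand in hand.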
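The next sketch, referenced in prediction 6, shows the oversubscription arithmetic behind non-blocking top-of-rack designs. The port counts used here (48 x 10GbE server-facing ports and 4 x 40GbE uplinks) are assumed values chosen only to illustrate the calculation, not a reference to any particular switch.

```python
# Illustrative oversubscription check for a hypothetical top-of-rack switch.
# Port counts are assumptions chosen to show the arithmetic, not a real product spec.

def oversubscription_ratio(downlink_count, downlink_gbps, uplink_count, uplink_gbps):
    """Ratio of server-facing capacity to uplink capacity; 1.0 means non-oversubscribed."""
    downlink_capacity = downlink_count * downlink_gbps
    uplink_capacity = uplink_count * uplink_gbps
    return downlink_capacity / uplink_capacity

# 48 x 10GbE down (480 Gb/s) against 4 x 40GbE up (160 Gb/s) -> 3.0:1 oversubscribed
print(f"Oversubscribed design: {oversubscription_ratio(48, 10, 4, 40):.1f}:1")

# A non-blocking design sizes uplink capacity to match: 48 x 10GbE down
# balanced by 12 x 40GbE (or equivalent) of uplinks -> 1.0:1
print(f"Non-blocking design:   {oversubscription_ratio(48, 10, 12, 40):.1f}:1")
```

Lossless converged traffic such as FCoE is far easier to deliver when the fabric is sized toward that 1:1 figure, since sustained congestion on oversubscribed uplinks is what forces drops or pauses in the first place.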
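Finally, for prediction 8, the toy model below illustrates OpenFlow's core idea: a controller installs match/action flow rules into switch flow tables, and packets that miss every rule are referred back to the controller. The FlowRule and SwitchTable classes and their fields are hypothetical simplifications for this article; a real deployment would use an OpenFlow controller speaking the wire protocol to physical or virtual switches, not in-memory Python objects.

```python
# Toy illustration of the OpenFlow match/action model. Class names and fields
# are hypothetical simplifications; real controllers speak the OpenFlow wire
# protocol to switches rather than manipulating objects like these.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict             # header fields to match, e.g. VLAN, destination MAC
    actions: list           # what the switch does with matching packets
    priority: int = 100

@dataclass
class SwitchTable:
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """A controller pushes a rule into the switch's flow table."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict) -> list:
        """Return the actions of the highest-priority rule matching this packet."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]   # table miss: ask the controller what to do

# Example policy: steer one VM's traffic to a specific port, drop the rest of its VLAN.
tor = SwitchTable()
tor.install(FlowRule(match={"vlan": 100, "dst_mac": "00:1a:64:aa:bb:cc"},
                     actions=["output:port 12"], priority=200))
tor.install(FlowRule(match={"vlan": 100}, actions=["drop"], priority=50))
print(tor.forward({"vlan": 100, "dst_mac": "00:1a:64:aa:bb:cc"}))  # ['output:port 12']
print(tor.forward({"vlan": 100, "dst_mac": "00:1a:64:dd:ee:ff"}))  # ['drop']
```

The value for users is that forwarding policy like this lives in software they control, rather than being locked inside each vendor's box.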
Summary - In 2012, clients that have already invested billions of dollars in their data centers will leverage the data center network to harness the best innovations in the industry and enable the lowest possible cost of ownership for their overall IT infrastructure. Smarter networking in the data center will be ever more critical to smarter, more efficient computing.
###
About the Author
Vikram Mehta is Vice President, System Networking for IBM's Systems and Technology Group (STG). Vikram leads IBM System Networking in providing Smarter Computing solutions by IBM and industry-leading partners to build intelligent data center networks for optimized workloads, cloud, and analytics/Big Data. As President and CEO of BLADE Network Technologies (BNT), Vikram spearheaded the company's acquisition by IBM in 2010.