By Kedar Hiremath, Sr. Solutions Marketing Manager, cPacket Networks
A study by IHS research found that network downtime costs North American organizations
$700 billion per year, ranging from $1 million a year for a typical mid-size
company to $60 million for large enterprises. This includes lost productivity,
lost revenue and sales, and the cost of sending technicians to the impacted
sites to diagnose and fix issues.
These costs are staggering and highlight the fact
that IT teams must prioritize proactive network monitoring and fast resolution
of issues. Yet even the best tools and technicians can't fix what they can't
see, making pervasive network visibility central to reducing downtime and lowering
its impact on the business's bottom line. One of the proven methods for
improving that visibility is implementing an efficient packet brokering
architecture in the data center.
A complete network visibility and monitoring
architecture allows the network operations (NetOps) team to quickly access and
assess any part of the data center architecture, from servers dangling off leaf
nodes to core systems or high-performance compute (HPC) clusters. This allows
for quicker troubleshooting and eliminates the need to send technicians on
expensive trips to investigate problems on-site. Moreover, packet brokers can
efficiently feed network traffic to specialized monitoring and security tools, allowing
these systems to better do their job by eliminating blind spots and missed
traffic.
All of these capabilities ultimately reduce
downtime and save money. Other benefits include improved efficiency of network
operations, reduced mean-time-to-resolution (MTTR), and better visibility for security.
By building such an architecture, organizations will benefit from a reduction
in total cost of ownership (TCO), greater return-on-investment (ROI), and a
stronger competitive advantage.
Where to Tap?
That said, you can't place packet brokers and test
access points (TAPs) just anywhere in the network. Typically, the core/spine of
a data center is a 40Gbps network. However, to keep pace with today's network
challenges, many customers are quickly migrating to 100Gbps speeds. Switches at
the leaf layer are mostly 10Gbps, with some being upgraded to 25Gbps; their speeds
are largely constrained to match the servers' network interfaces.
TAPs are usually strategically positioned in the
network where the most important traffic is passed. This is easier in the
north-south traffic direction at the spine due to fewer links. In the east-west
direction at the leaf switches, the large number of connections usually makes
tapping every link cost-prohibitive and inefficient to manage. If IT needs to
monitor east-west traffic, it can strike a balance between TAPs and SPAN
(switched port analyzer) ports.
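As a rough illustration of why tapping every east-west link quickly becomes impractical, the Python sketch below compares the number of tap points at the spine versus the leaf layer of a generic leaf-spine fabric. All switch counts and the per-TAP cost are hypothetical assumptions for the example, not figures from this article.

```python
# Hypothetical fabric dimensions; real values vary widely by data center.
SPINE_SWITCHES = 4
LEAF_SWITCHES = 20
SERVERS_PER_LEAF = 40
COST_PER_TAP = 1_500  # assumed per-link TAP cost in USD

# North-south: each leaf uplinks to every spine switch.
spine_links = LEAF_SWITCHES * SPINE_SWITCHES

# East-west: every server-facing leaf port is a potential tap point.
leaf_links = LEAF_SWITCHES * SERVERS_PER_LEAF

print(f"Spine links to tap: {spine_links:4d}  (~${spine_links * COST_PER_TAP:,})")
print(f"Leaf links to tap:  {leaf_links:4d}  (~${leaf_links * COST_PER_TAP:,})")
```

Even with these modest assumed numbers, the leaf layer has an order of magnitude more links than the spine, which is why a mix of TAPs and SPAN ports is usually the pragmatic choice there.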
Virtualization presents no roadblocks to modern
packet brokers. Most packet brokers offer virtual devices deployed as virtual
machines (VMs) or Docker containers. These virtual devices can integrate
seamlessly with hardware-based devices to monitor performance inside virtual
servers and are a better strategy for monitoring east-west traffic.
How to Broker?
Once the data center switches are efficiently
tapped and network traffic is fed to the packet broker, the remainder of the
architecture becomes easy, flexible and responsive to any troubleshooting and
monitoring needs. IT can now connect packet broker ports to specialized
monitoring and security tools to perform in-depth analysis.
Organizations that are considering a 100Gbps
upgrade (or those who wish to future-proof a monitoring infrastructure so it
will remain in place for many years) will need to select packet brokers that
can monitor and process packets at 100Gbps speeds while remaining backward
compatible, since many tools still operate at 10/40Gbps. Brokering at 100Gbps
without dropping packets is technically challenging, and the capability is
still not common among packet brokers on the market today, so consider
equipment choices here carefully.
An effective packet brokering architecture uses
a two-tier design in which the outer layer aggregates traffic from all TAPs
and, on the other side, distributes packets to the tool rail after they have
been processed by a central packet broker. The central, or core, packet broker
performs the more intensive operations, such as smart filtering, packet
truncation and de-duplication, for the tools' consumption.
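To make those operations concrete, here is a minimal Python sketch of the kind of processing a core broker performs: filtering frames down to the protocol a tool cares about and truncating each one to a fixed snap length before it is forwarded. The snap length, protocol choice and the assumption of untagged IPv4-over-Ethernet framing are all illustrative, not a description of any vendor's implementation.

```python
SNAP_LEN = 128        # assumed truncation length handed to the tools
MONITORED_PROTO = 6   # IPv4 protocol number for TCP

def ipv4_protocol(frame):
    """Return the IPv4 protocol number, assuming untagged Ethernet II framing."""
    if len(frame) < 34 or frame[12:14] != b"\x08\x00":  # not IPv4 over Ethernet
        return None
    return frame[23]                                     # protocol field in the IP header

def broker(frames):
    """Yield filtered, truncated copies of raw frames for the tool rail."""
    for frame in frames:
        if ipv4_protocol(frame) == MONITORED_PROTO:      # smart filtering
            yield frame[:SNAP_LEN]                       # packet truncation (slicing)
```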
One significant broker feature for reducing
downtime costs is de-duplication. De-duplication is the ability to detect and
eliminate duplicates of a packet to reduce the traffic sent to downstream
tools. This allows tools to operate at their peak performance and is valuable
for troubleshooting as well as identifying failing or misconfigured equipment. Network
infrastructure devices, such as switches and routers, that are operating
normally do not generate many duplicate packets. However, in certain situations,
duplicate packets can appear in some segments of the network due to poor
network design, a flaw in the network topology, or misconfigured or potentially
failing equipment. Packet brokers that can detect these duplicate packets on a
specific port and issue an alert help identify and rectify these situations
more quickly.
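A minimal Python sketch of how such duplicate detection might work, assuming the broker can observe each packet per port: hash every packet, remember recent hashes within a short look-back window, and raise an alert when the duplicate ratio on a port crosses a threshold. The window size, threshold and alert mechanism are assumptions for illustration only.

```python
import hashlib
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 0.5      # assumed look-back window for "the same packet again"
ALERT_THRESHOLD = 0.10    # assumed duplicate ratio that triggers an alert

recent = defaultdict(deque)           # port -> deque of (timestamp, packet hash)
counts = defaultdict(lambda: [0, 0])  # port -> [total packets, duplicates]

def observe(port, packet):
    """Record one packet seen on a broker port and flag duplicates."""
    now = time.monotonic()
    digest = hashlib.sha256(packet).digest()

    window = recent[port]
    # Drop entries that have aged out of the look-back window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()

    counts[port][0] += 1
    if any(d == digest for _, d in window):
        counts[port][1] += 1
    window.append((now, digest))

    total, dups = counts[port]
    if total >= 100 and dups / total > ALERT_THRESHOLD:
        print(f"ALERT: {dups}/{total} duplicate packets seen on port {port}")
```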
Benefits of a Full Network Visibility
Architecture
If the architecture includes a full ecosystem of
packet brokering, capture, storage and analytics solutions, it offers even more
benefits. IT can centrally manage, monitor and collect metrics like latency,
burst rate, session level analysis and more that can be correlated in real time
and visualized on easy-to-use dashboards. Packet capture and storage devices
can take copies of the traffic from the packet broker and store it for forensic
analysis for compliance or incident response. This allows IT and security
operations (SecOps) teams to investigate network issues or security threats in
greater detail.
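As one illustration of the kind of forensic, session-level analysis a stored capture enables, the Python sketch below reads a capture file and estimates per-session TCP handshake latency. It assumes the scapy library and a hypothetical capture file name; it is a sketch of the technique, not a feature of any particular capture appliance.

```python
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP  # requires scapy to be installed

def handshake_latency(pcap_path):
    """Estimate TCP handshake latency per session from a stored capture file."""
    syn_times = {}               # (src, dst, sport, dport) -> SYN timestamp
    latency = defaultdict(list)  # session -> list of handshake RTTs in seconds
    for pkt in rdpcap(pcap_path):
        if IP not in pkt or TCP not in pkt:
            continue
        ip, tcp = pkt[IP], pkt[TCP]
        if tcp.flags.S and not tcp.flags.A:    # client SYN opens a session
            syn_times[(ip.src, ip.dst, tcp.sport, tcp.dport)] = float(pkt.time)
        elif tcp.flags.S and tcp.flags.A:      # server SYN-ACK answers it
            key = (ip.dst, ip.src, tcp.dport, tcp.sport)
            if key in syn_times:
                latency[key].append(float(pkt.time) - syn_times.pop(key))
    return latency

# Example usage with a hypothetical capture file exported from the broker:
# for session, rtts in handshake_latency("stored_capture.pcap").items():
#     print(session, [round(r * 1000, 2) for r in rtts], "ms")
```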
SecOps and NetOps can leverage the increased
network transparency and visibility to make informed and accurate decisions. An
efficient packet broker architecture will provide comprehensive application and
network performance monitoring (NPM) metrics that enable downstream
tools to deliver more intelligent insights. This will simplify decision making,
improve the verification of network configurations, reduce MTTR for more
efficient network operations and eliminate costly in-person troubleshooting
trips. All of these improvements help to reduce downtime and lower the TCO of
the network on which the business depends.
##
ABOUT THE AUTHOR
Kedar Hiremath is a Sr. Solutions/Product Marketing Manager at cPacket Networks. Kedar has been in the technology space for over seven years, leading go-to-market strategies, content and product launches, most recently at IBM. He holds a Master's in Computer Science from Santa Clara University, loves the NBA and once appeared on America's Got Talent as a singer and dancer.