By Nadeem Zahid, VP of Product Management and Marketing at cPacket Networks
For many organizations, customer satisfaction, competitiveness, operational
efficiency, and profitability all rely on secure and responsive applications.
That in turn makes comprehensive network observability critical, because it's
impossible to manage, monitor or secure what you can't see.
These facts are just as true in the cloud. Migrating to and operating in the cloud can be a
daunting prospect, where lack of visibility can cause service disruptions
resulting in revenue loss and customer churn.
Yet cloud environments - particularly public cloud environments - can be
notoriously opaque, making them effectively a "black box" for operations teams.
This is problematic, as public cloud customers are still responsible for
securing all data and applications in their respective virtual private cloud (VPC)
environments. Understanding why visibility is difficult, and what to do about
it, can thus be crucial to maintaining data integrity and application
responsiveness in the cloud.
When it comes to network observability, both packet and flow data are critical. Together they provide the actionable visibility and detail needed to thoroughly understand
cyber-attacks, malware behavior, and the interactions between end-users, IoT
devices, applications and services. But accessing network traffic can be
challenging in public cloud environments. In fact, until fairly
recently it was impossible; traffic could be monitored in private corporate
networks, but what happened once that data went into the public cloud was a
mystery.
To compensate for this lack of visibility, companies resort to various workarounds, such as deploying traffic-forwarding agents (or container-based sensors) or relying on log-based monitoring. Both have limitations. Forwarding
agents and sensors must be deployed for every instance and every tool - a
costly IT management headache - or there is a risk of blind spots and
inconsistent insight. Event logging only provides snapshots in time, and even then,
must be well-planned and instrumented in advance. Neither provides the
high-quality, continuous or deep data needed to troubleshoot complex
application, security or user experience issues. And as mentioned, there is
significant cost involved.
Amazon, Google and Microsoft recognize the problems caused by this lack of visibility and have taken different paths to solving the challenge on their respective public cloud platforms.
AWS and Google Cloud take a similar approach: VPC Traffic Mirroring (AWS) and Packet Mirroring (GCP), each offered as part of the respective VPC service. Simply stated, this mirroring duplicates network
traffic to and from the client's applications and forwards it to cloud-native
performance and security monitoring tool sets for assessment. This eliminates
the need to deploy ad-hoc forwarding agents or sensors in each VPC instance for
every monitoring tool. Compared to log data, it delivers much richer and deeper
situational awareness.
Traffic or packet mirroring on its own isn't sufficient, however. Just like the agent or sensor approach, it simply provides access to raw packet data, essentially creating the equivalent of a virtual tap. This raw data is not ready to
feed directly into monitoring and security tools and requires a virtual or
cloud packet broker to handle the pre-processing operations to ensure the right
data gets to the right tools.
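The kind of pre-processing a packet broker performs can be sketched in a few lines. The `MiniBroker` class below is a hypothetical toy, not any vendor's product: it filters mirrored packets against a simple port whitelist, deduplicates copies (mirroring at multiple points can deliver the same packet twice), and fans surviving packets out to registered tools.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    """Simplified view of a mirrored packet (illustrative fields only)."""
    src: str
    dst: str
    dst_port: int
    payload: bytes

class MiniBroker:
    """Toy packet broker: filter, deduplicate, then fan out to tools."""

    def __init__(self, allowed_ports):
        self.allowed_ports = set(allowed_ports)
        self.seen = set()   # digests of packets already forwarded
        self.tools = []     # downstream consumers (callables)

    def register_tool(self, tool):
        self.tools.append(tool)

    def ingest(self, pkt: Packet) -> bool:
        """Return True if the packet was forwarded to the tools."""
        # Filter: drop traffic no registered tool cares about.
        if pkt.dst_port not in self.allowed_ports:
            return False
        # Deduplicate: hash the header fields plus payload.
        digest = hashlib.sha256(
            f"{pkt.src}|{pkt.dst}|{pkt.dst_port}".encode() + pkt.payload
        ).digest()
        if digest in self.seen:
            return False
        self.seen.add(digest)
        for tool in self.tools:
            tool(pkt)
        return True
```

A real broker would of course also handle slicing, masking, load-balancing and header stripping, but the principle is the same: shape the raw mirrored stream before any tool sees it.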
Solving this visibility challenge with Azure requires a different
approach: using what's known as "inline mode" on certain virtual packet
brokers. This allows the packet broker itself to monitor subnet
ingress and egress traffic to capture, pre-process, and deliver packet data in
real-time to security, performance management, analytics and other solutions. Importantly, this approach minimizes cloud visibility costs by eliminating unnecessary traffic mirroring.
Whether deployed inline or not, there's much the
virtual packet broker can do with the traffic. It can provide a lossless feed
to a packet-to-flow gateway to generate flow data for those tools that prefer
it, such as AIOps, ITOM or SIEM solutions. It can provide packet feeds for
security tools that need to monitor the cloud environment, or facilitate packet
capture to cloud storage for Network Detection and Response (NDR), or later
forensic analysis.
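The packet-to-flow conversion mentioned above is essentially 5-tuple aggregation. The sketch below illustrates the idea with hypothetical field names; real gateways emit NetFlow/IPFIX records with many more attributes (timestamps, TCP flags, interface IDs).

```python
from collections import defaultdict

def packets_to_flows(packets):
    """Aggregate packets into NetFlow-style flow records keyed by the
    classic 5-tuple (src IP, dst IP, src port, dst port, protocol).
    Each record carries packet and byte counts -- the compact summary
    that AIOps, ITOM and SIEM tools prefer over raw packets."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)
```

Two packets on the same connection collapse into one flow record, which is why flow data is so much cheaper to store and query than full packet capture.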
Just as importantly, these solutions enable rich analytics in the public cloud. Tools that
consume the fine-grain metadata extracted from the above middleware can in turn
produce visualizations and dashboards that enable IT NetOps, SecOps, AppOps and
CloudOps teams to effectively perform their jobs. The high-quality metadata can
also be exported to other tools such as threat detection, behavioral analytics
and service monitoring solutions for monitoring, baselining, dependency-mapping
and optimizing. This intra-cloud visibility also facilitates the application
performance monitoring that's critical when it comes to successfully migrating
existing workloads to the cloud or deploying new cloud-first solutions.
However you choose to get there, observability is critical for
both strong security and improved user satisfaction in the cloud. On the
security front it enables a robust posture by reliably delivering data and
intelligence for rapid Network Detection and Response. Likewise, it bolsters
satisfaction (and efficiency) by ensuring application availability and
responsiveness. All of which lowers operational risk while contributing to
growth and profitability - the ultimate business benefit. Achieving
high-fidelity network observability in Azure requires a slightly more complex
solution than AWS or Google Cloud, but the benefits more than justify the
additional coordination. You can find out more about such an observability
solution here.
## ABOUT THE AUTHOR
Nadeem Zahid serves as Vice President of Product Management & Marketing at cPacket Networks. He has spent more than 23 years in the IT industry in leadership positions spanning strategy, product management, marketing and business development at companies including LiveAction, tFinery, Extreme Networks, Juniper Networks, Brocade/Foundry Networks, Cisco Systems, and Alcatel-Lucent.
Nadeem holds a Master of Science in Technology Management from Boston
University, a Bachelor of Electronics Engineering from N.E.D University of
Engineering & Technology, a Product Management certification from M.I.T., and the Cisco Certified Internetwork Expert (CCIE) certification.
--
Photo by Aleksandar Pasaric from Pexels