Virtualization Technology News and Information
Remote Work and the Cloud are Killing Traditional Network Monitoring
Written by Alex Henthorn-Iwane, VP Product Marketing, Sinefa 

Traditional network monitoring is in trouble. A sea change in where employees and applications are located, and in what even constitutes the enterprise network, means the old ways of collecting and understanding network telemetry are disappearing fast. If you're stuck with those traditional tools, it's time to leave the old behind and breathe new life into network monitoring.

Death by a thousand cuts

Network monitoring was already undergoing a significant evolution due to the move to the cloud. As more organizations go "cloud-first" and "build-last," literally thousands of applications and services are moving to the cloud. According to Netskope's 2019 Cloud Report, the average enterprise uses 1,295 cloud services, and 85% of web traffic is for cloud services. Driven by cloud adoption, more and more enterprises are turning away from backhauling Internet traffic and toward Direct Internet Access (DIA) and SD-WANs rather than MPLS connectivity, even between branches and data centers. According to a July 2019 IDC study commissioned by Masergy, SD-WAN adoption rates increased from 35 percent to 54 percent over the past two years, with 90 percent of respondents researching, piloting, actively using, or upgrading to SD-WAN. And no surprise: most network leaders I've talked to have found (at least in the U.S. and similar markets) that Internet connectivity performs as well as or better than MPLS, and is certainly far more cost-effective.

Every move to the cloud and SD-WAN means that network connectivity is dramatically more dependent on the Internet, a network that IT doesn't control. Today, corporate-controlled branch networks are essentially stub networks hanging off the Internet. The corporate-controlled portion is a fraction of the whole. And this brings us to network monitoring. Traditional tools assume that the corporate-controlled network is the center of the universe, the majority of the story. Traditional network monitoring collects data from IT-controlled network devices. Given the new reality of what the "corporate network" is, that's clearly not enough visibility. While IT may not own or control the Internet, IT still owns the application performance and business outcomes, no matter what. Network teams must have visibility across the Internet to all their applications, which requires moving beyond traditional passive data collection and internal network monitoring solutions. 

Those traditional network monitoring tools? Well, they haven't lost their place. But they've been losing relevance to the modern scope of networking issues, one cloud app and DIA connection at a time. Death by a thousand cuts.

Death by Remote Work

Once upon a time, in a quaint universe where the vast majority of employees worked in corporate branch offices, it was at least plausible to ignore the shift to the cloud and dependence on the Internet. But then COVID-19 hit, and everyone left the corporate branch office. Now network teams are dealing with thousands of non-corporate "branch offices," filled with nefarious competition for bandwidth (Netflix and gaming) that can impede business-critical application performance. IT teams now face the perfect storm: they literally own nothing outside the corporate laptop and possibly some internally hosted applications. Due to the rapid shift, many organizations have also had to move to cloud-based security from Netskope, Zscaler, or Palo Alto Networks, which injects yet another cloud/Internet variable into the picture.

Those traditional network monitoring tools? Zero relevance. Death by remote work. 

Breathe new life into network monitoring

So, if traditional network monitoring is getting killed by these mega-trends, how do you get to a modern network monitoring stack? Here are some key things to consider:

Focus less on infrastructure, more on user experience. 

One of the chief pitfalls of traditional network monitoring tools is that they tend to overly focus on infrastructure rather than users. But networks exist to carry application traffic, and applications exist to aid users in completing work. Make sure your monitoring puts user experience at the center.

Get external/Internet visibility

Practically speaking, this means you need to add web and network path synthetic monitoring to your mix. You can't collect packets or flow data from someone else's network, so synthetic tests are a way to create your own performance metrics. Make sure you can see external network paths hop-by-hop. If you have both synthetic and traditional network traffic views, you're in the best shape to solve both internal and external network problems.
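To make that concrete, here's a minimal sketch of the synthetic idea (hypothetical, not any vendor's actual agent): time an HTTP fetch you initiate yourself, then roll the samples up into percentile metrics. The probe target and the nearest-rank percentile math are illustrative assumptions.

```python
import time
import urllib.request

def probe_once(url: str, timeout: float = 5.0) -> float:
    """Return wall-clock seconds for one synthetic HTTP GET of `url`.
    This is the 'make your own metrics' step: you can't pull packets
    from someone else's network, but you can always time a request."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    idx = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[idx]

def summarize(samples):
    """Roll raw latency samples up into the summary metrics
    you'd actually chart and alert on."""
    return {
        "count": len(samples),
        "p50": percentile(samples, 50),
        "p95": percentile(samples, 95),
        "max": max(samples),
    }

# Usage (requires network access):
#   latencies = [probe_once("https://example.com") for _ in range(5)]
#   print(summarize(latencies))
```

A real synthetic product also probes the network path hop-by-hop (traceroute-style), which this sketch omits.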

Don't short-change your remote workforce

If it's good for a branch office, then it's needed for your remote and home workers. That's why a modern network monitoring portfolio should include Endpoint Agents that deploy on Windows and Mac and measure whether user experience problems are due to devices, Wi-Fi, local network congestion, ISP and Internet issues, or SaaS/internal application performance.
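As a sketch of the triage logic such an agent might run (illustrative only; the probe targets and thresholds are assumptions, not any product's actual values), compare latency measured to the local gateway, to an Internet anchor, and to the SaaS app itself:

```python
def localize(gateway_ms, isp_ms, app_ms, thresholds_ms=(20, 80, 250)):
    """Crude triage of where a remote worker's slowness lives, from
    three probes an endpoint agent might run: to the local gateway
    (device/Wi-Fi/LAN), to a well-known Internet anchor (ISP), and
    to the SaaS application itself. Thresholds are illustrative."""
    lan_t, isp_t, app_t = thresholds_ms
    if gateway_ms > lan_t:
        return "local network / Wi-Fi"
    if isp_ms > isp_t:
        return "ISP / Internet"
    if app_ms > app_t:
        return "application / SaaS"
    return "healthy"
```

A production agent would of course use many more signals (loss, jitter, CPU, signal strength), but the point stands: the agent sits where the user sits, so it can tell these layers apart.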

Go live

If you still plan to run branch offices, rethink what you expect from the "traditional" side of network monitoring. DPI is a commodity now, and compute power is effectively unlimited in the cloud. So don't settle for summary, historical views of traffic or crude traffic classifications (HTTP vs. other) that only get you to the general vicinity of a root cause. Look for automated, DPI-classified application views that let you see traffic in real time, as in second-by-second, so you can figure out what's clogging up the Cleveland branch network right now, while users are complaining.
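The second-by-second idea can be sketched in a few lines, assuming you already have DPI-classified flow records of the form (timestamp, app, bytes); the app labels here are made up for illustration:

```python
from collections import defaultdict

def top_talkers_per_second(flows):
    """Bucket DPI-classified flow bytes into 1-second bins and return
    the top application per bin -- the 'what's clogging the branch
    right now' view. `flows` is an iterable of (ts_seconds, app, bytes)."""
    bins = defaultdict(lambda: defaultdict(int))
    for ts, app, nbytes in flows:
        bins[int(ts)][app] += nbytes
    return {
        second: max(apps.items(), key=lambda kv: kv[1])
        for second, apps in bins.items()
    }
```

The hard part in practice is the DPI classification feeding this, not the aggregation; that's exactly what's become a commodity.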

Don't be a hoarder

Traditional network monitoring tools often tout how they can store vast amounts of network data for long periods. In most enterprise use cases, this is a red herring. For the vast majority of user-to-application network troubleshooting, you only need metadata derived from the raw monitoring data. Storing metadata history is far more efficient and performant than storing raw data, and for most operational needs, 4-12 weeks of history is enough. Don't pay for raw storage unless you absolutely need it.
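A toy example of what "metadata" means here, again assuming raw records of (timestamp, app, bytes): collapse them into per-minute, per-app summaries, which is the shape of data actually worth retaining for 4-12 weeks.

```python
from collections import defaultdict

def rollup(flows, window_s=60):
    """Collapse raw flow records (ts_seconds, app, bytes) into
    per-window, per-app summaries. Keyed by (window_start, app);
    this metadata is tiny compared to the raw records behind it."""
    summary = defaultdict(lambda: {"flows": 0, "bytes": 0})
    for ts, app, nbytes in flows:
        key = (int(ts) // window_s * window_s, app)
        summary[key]["flows"] += 1
        summary[key]["bytes"] += nbytes
    return dict(summary)
```

Millions of raw records per minute become a handful of rows per app, yet still answer "who used the bandwidth, when" for troubleshooting.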

Move to the cloud

Many traditional network monitoring tools are shockingly outmoded in their architectures, stuck in an on-premises deployment model that forces you to act as the systems integrator and the software and MySQL database maintenance crew. If you're convinced that SaaS is far better for your enterprise applications, why stick with on-premises applications for network monitoring? They just can't compete anymore, and you're tying a boat anchor to your team if you keep investing in them.

New categories, new thinking

Network monitoring isn't going away; it's just evolving. In fact, network, app, endpoint, and user experience monitoring are converging into what's being called Digital Experience Monitoring (DEM). It's worth getting to know new categories like DEM so you can plan your monitoring approach as strategically as possible. Here's a tutorial comparison of DEM vs. NPMD that may also be useful. I'd love to hear what you think of this article, so hit me up at @heniwa on Twitter or https://www.linkedin.com/in/alexhenthorniwane/.

##

About the Author


Alex Henthorn-Iwane is VP Product Marketing for Sinefa, and has brought innovative networking, software, and security technologies to market since the early days of the commercial Internet. Alex speaks and writes mostly on cloud, network, monitoring, Internet performance and digital experience.

Published Tuesday, June 23, 2020 7:44 AM by David Marshall