Nokia 2022 Predictions: Utterly consumed - Why 2022 is the year of consumable data center networks


Industry executives and experts share their predictions for 2022.  Read them in this 14th annual VMblog.com series exclusive.


By Bruce Wallis, Senior Director, Product Management, Nokia

It's an interesting time in the data center industry. While public cloud providers contend for the workloads that would typically run inside on-premises data centers, a throng of upstart webscalers is vying for part of the action. What's certain is that the latter group cannot truly succeed without making their networks as consumable as those provided by hyperscalers. There are some key predictions going into 2022 that will help make that change happen.

Prediction #1: The level of automation across data center fabric operations will increase in 2022 with automation of Day 2+ operations a key focus area.  

We may never be able to automate racking a switch, powering it on, and plugging in cables. But when it comes to fabric design, bootstrap, and deployment of workloads, automation options exist for those that want them. And I expect to see adoption rates increase.

Where we can expect to see more movement in 2022 is at the Day 2+ stage: operating the data center. Rather than relying on out-of-band service configuration, the fabric will be more event-driven than ever, with most changes occurring in reaction to workloads being deployed on the surrounding compute stacks. More than ever, the fabric will be consumed through the same interface used to deploy those workloads.

In a massively competitive landscape, reducing the cost of the few remaining operational tasks through automation is inevitable. Upgrades are costly, with expensive projects spun up every year. Operators often try to stay on a specific software release as long as they can to avoid that expense and maintain stability. Going forward, we will see some of those maintenance activities more closely aligned with how the application world has evolved, with the foundation for this evolution being laid in 2022. Upgrades will be smaller, more frequent, and more automated. We'll see GitOps become a trend, and the concept of Continuous Integration and Continuous Deployment (CI/CD) truly be embraced.
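To make the GitOps idea concrete, here is a minimal sketch of a pre-merge validation step such a pipeline might run against a declarative fabric definition kept in Git. The file layout, field names, and checks are illustrative assumptions, not tied to any particular NOS or controller.

```python
# Minimal sketch of a pre-merge validation step a GitOps pipeline might run
# against a declarative fabric definition. File name, schema, and checks are
# illustrative, not tied to any specific NOS or controller.
import ipaddress
import json
import sys

def validate_fabric(path: str) -> list[str]:
    """Return a list of human-readable problems found in the fabric intent."""
    with open(path) as f:
        fabric = json.load(f)

    problems = []
    seen_loopbacks = set()
    for leaf in fabric.get("leaves", []):
        name = leaf.get("name", "<unnamed>")
        loopback = leaf.get("loopback")
        # Every leaf needs a valid, unique loopback address.
        try:
            addr = ipaddress.ip_address(loopback)
        except (TypeError, ValueError):
            problems.append(f"{name}: invalid loopback {loopback!r}")
            continue
        if addr in seen_loopbacks:
            problems.append(f"{name}: duplicate loopback {addr}")
        seen_loopbacks.add(addr)
        # Uplink count must match the intended spine plane.
        if len(leaf.get("uplinks", [])) != fabric.get("spines_per_plane", 2):
            problems.append(f"{name}: unexpected uplink count")
    return problems

if __name__ == "__main__":
    issues = validate_fabric(sys.argv[1] if len(sys.argv) > 1 else "fabric.json")
    for issue in issues:
        print(f"FAIL: {issue}")
    sys.exit(1 if issues else 0)  # a non-zero exit blocks the merge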

Expect to see more automated testing inside the data center. Upgrades may come every month, instead of every year, making the delta in functionality easier to test, and much easier to automate. Blast radiuses will become smaller as the scope of each change is reduced.

Troubleshooting the network and outage mitigation will also see changes. We will apply the same concept of pipelines to introduce change, automatically remediate outages, and handle the low-hanging fruit of event management. Machine learning and AI have a role to play here, being used to train the platform to automatically mitigate some outages.
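As a rough illustration of that "low-hanging fruit", the sketch below maps a couple of well-understood fabric events to pre-approved remediation actions and escalates everything else. The event names and playbooks are invented for illustration; a policy trained by machine learning could later replace the static table.

```python
# Toy sketch of automated event handling: map well-understood fabric events
# to pre-approved remediation playbooks, escalate everything else.
# Event types, node names, and playbooks are invented for illustration.
from typing import Callable

def drain_link(event: dict) -> str:
    return f"costing out {event['interface']} on {event['node']}"

def restart_bgp_session(event: dict) -> str:
    return f"clearing BGP session to {event['peer']} on {event['node']}"

# Only events we are confident about are auto-remediated; everything else
# goes to a human (or, eventually, to an ML-trained policy).
PLAYBOOKS: dict[str, Callable[[dict], str]] = {
    "interface-flapping": drain_link,
    "bgp-hold-timer-expired": restart_bgp_session,
}

def handle_event(event: dict) -> None:
    action = PLAYBOOKS.get(event["type"])
    if action is None:
        print(f"escalate: no playbook for {event['type']}")
    else:
        print(f"auto-remediate: {action(event)}")

if __name__ == "__main__":
    handle_event({"type": "interface-flapping",
                  "node": "leaf1", "interface": "ethernet-1/3"})
    handle_event({"type": "optics-degraded", "node": "spine2"})
```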

Prediction #2: Open network operating systems will play a key role in making data center networks consumable in 2022.  

This year will bring a focus on the extensibility of both the network operating system (NOS) and the automation stack itself, allowing vendors and operators alike to build their own functionality and integrations. We already see this in small quality-of-life functions or workflow automations deployed into the network.

At both layers, this iteration must occur in a way that does not leave these extensions unmanaged, with YANG model augmentations and Custom Resource Definitions in Kubernetes providing the needed schema and API extensibility. Finally, and specific to the NOS, we will see schema normalization finally get the traction it needs to become viable for a wider market. The most obvious example is OpenConfig, with support no longer a bolt-on and now rich enough, including streaming telemetry, to provide both a configuration and a state normalization layer. All of this will lower the barrier to entry for new vendors and help promote competition within the ecosystem.
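As a sketch of what consuming the fabric through Kubernetes-style APIs could look like, the snippet below watches a hypothetical Custom Resource (the group, version, and plural are placeholders) using the official Kubernetes Python client and reacts as instances are created or deleted.

```python
# Sketch of consuming the fabric through the Kubernetes API: watch a
# hypothetical Custom Resource (group/version/plural invented here) and react
# as instances appear or disappear. Requires the official `kubernetes`
# Python client and a kubeconfig with access to the cluster.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

# "vnetattachments.fabric.example.com" is a placeholder CRD representing a
# workload's request to be attached to an overlay network.
GROUP, VERSION, PLURAL = "fabric.example.com", "v1alpha1", "vnetattachments"

w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
    kind = event["type"]                     # ADDED, MODIFIED, DELETED
    spec = event["object"].get("spec", {})
    name = event["object"]["metadata"]["name"]
    if kind == "ADDED":
        print(f"attach workload {name} to VNI {spec.get('vni')}")
    elif kind == "DELETED":
        print(f"detach workload {name}")
```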

Prediction #3: Digital twins will gain mindshare with operations teams in 2022.  

Many operators have had to deal with outages because the test lab looked slightly different from production. And with the network in a constant state of flux, there has always been a risk even with perfect alignment. Digital twinning provides the sandbox in which to test topology, configuration, and the general state of the network without impacting production. Expect digital twins to gain mindshare in 2022 as we introduce CI/CD into the data center.

The twinning concept has been around for some time in other industries, and we will see it gain a foothold in networking in 2022, helped by the availability of multiple containerized NOSs and the much lower cost, in CPU, memory, and other resources, of spinning them up. Kubernetes is very good at orchestrating containers and stitching them together into relevant topologies, providing the needed orchestration layer. Because network state is more dynamic than that of typical applications, the actual state of the network needs to be injected into these simulations so they run as close as possible to what the real network looks like. Taken in the context of CI/CD, operators will be able to run Continuous Integration in these twins before Continuous Deployment pushes the change into production, perhaps upgrading a pair of switches first as a canary and letting them soak for a week before upgrading the rest.
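A hypothetical sketch of that canary step is below: upgrade one pair of switches, let them soak, and only continue if their health stays clean. The node names and helper functions are placeholders for whatever NOS and telemetry APIs are actually in use.

```python
# Hypothetical canary rollout: upgrade one pair of switches, soak, check
# health, then upgrade the rest. Helpers are placeholders for real APIs.
import time

CANARY_PAIR = ["leaf1", "leaf2"]
REMAINING = ["leaf3", "leaf4", "spine1", "spine2"]
SOAK_SECONDS = 7 * 24 * 3600  # "let them soak for a week"

def upgrade(node: str, version: str) -> None:
    print(f"upgrading {node} to {version}")  # placeholder for the real upgrade call

def healthy(node: str) -> bool:
    # Placeholder: in practice, query streaming telemetry collected during the
    # soak window for crashes, protocol flaps, drops, and so on.
    return True

def rollout(version: str) -> None:
    for node in CANARY_PAIR:
        upgrade(node, version)
    time.sleep(SOAK_SECONDS)  # a real pipeline would use an asynchronous gate
    if not all(healthy(node) for node in CANARY_PAIR):
        raise RuntimeError("canary failed health checks; aborting rollout")
    for node in REMAINING:
        upgrade(node, version)

if __name__ == "__main__":
    rollout("22.3.1")  # illustrative version string
```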

Prediction #4: Consumable data center fabrics will enable automated edge clouds in 2022.  

The trend of reducing latency between end users and their services is undeniable. With diminishing returns from reducing processing delay, applications need to move as close to the end user as possible. That means stitching the service from a far edge cloud to the data center where the operator might have some of the other services that the end user is requesting. With applications now containerized, at one moment there might be two containers supporting the service, and the next there might be four - a degree of dynamicity never seen before in networks.

It's a challenging problem to solve, especially given the number of different orchestrators involved: at the far edge, in the central data center, and across the data center interconnect (DCI). Getting all that multi-domain orchestration to occur seamlessly, in seconds, is challenging. Vendors and operators have room to innovate to simplify that workflow and enable it to be consumed with a single click of a button. Or, even better, consumed in such a way that spinning up additional capacity at the far edge automatically creates all of the corresponding pieces in the fabric - a truly self-driving network.
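One way to picture that single-click workflow is a thin layer that fans one service request out to each domain's controller. The sketch below does exactly that; every URL, payload field, and controller name is invented for illustration.

```python
# Sketch of the multi-domain workflow: one request from the application layer
# fans out to per-domain controllers (far edge, DCI, central DC).
# All URLs and payloads are invented for illustration.
import requests

CONTROLLERS = {
    "far-edge":   "https://edge-ctrl.example.net/api/v1/attachments",
    "dci":        "https://dci-ctrl.example.net/api/v1/services",
    "central-dc": "https://dc-ctrl.example.net/api/v1/attachments",
}

def stitch_service(service_id: str, vni: int) -> None:
    """Create the corresponding piece of the service in every domain."""
    for domain, url in CONTROLLERS.items():
        resp = requests.post(url, json={"service": service_id, "vni": vni},
                             timeout=10)
        resp.raise_for_status()
        print(f"{domain}: created segment for {service_id}")

if __name__ == "__main__":
    stitch_service("video-cache-eu-west", vni=20010)
```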

We will go from changing the configuration maybe once a week to thousands of times per day, or even per hour, in the form of new services being deployed and new attachments to those services. The only way to do this is to make the network easily consumable and therefore invisible, constantly moving in lockstep with applications.

Moving latency-sensitive services closer to users will also increase the number of edge locations from a handful to hundreds or even thousands. This introduces a very heavy dependency on automating operations when building and deploying fabrics. These workloads need to be able to expand in capacity, to have the network automatically connect them to where they need to be, and to do so in a way that is invisible to the end user. These consumer services will mean more infrastructure, not just more connectivity, and automating that infrastructure at scale will present unique problems for both vendors and operators to solve.

##

ABOUT THE AUTHOR

Bruce Wallis 

Originally from New Zealand and now based in the San Francisco Bay Area, Bruce cut his networking teeth supporting Internet Service Providers in New Zealand and the wider APAC region. Before taking the leap into product management, he worked on the IP/Optical (ION) Consulting and Solutions Engineering team, supporting PoCs and demos of yet-unreleased ION products, which allowed him to build an extensive skill set in virtualization, all things Linux, and general system and solution design. Starting in the industry in 2008 at the age of 17, he now has 12 years' experience across a wide array of roles.

Published Tuesday, January 04, 2022 7:35 AM by David Marshall