Industry executives and experts share their predictions for 2022. Read them in this 14th annual VMblog.com series exclusive.
Five Kubernetes Predictions for 2022
By Anton Smith, Product Manager at Canonical | Metal-As-A-Service (MAAS)
Ninety-two percent of the roughly 1,300 IT pros surveyed by the Cloud Native Computing Foundation a year ago reported that their organizations use containers in production, a 300 percent jump since 2016. And 83 percent said they use Kubernetes -- aka K8s -- to manage their container lifecycles.
Such numbers show how far Kubernetes has come in becoming the default API for infrastructure. As Kubernetes approaches the eighth anniversary of its initial release in 2014, the technology has grown from a rather rudimentary container orchestration tool, born of Google's experience running containers internally, into the preferred platform for deploying, monitoring, and managing apps and services across clouds, with a vibrant open-source community supporting it.
So, what will 2022 bring for the K8s
juggernaut? Here are five predictions.
1. Bare metal provisioning will become a standard building block for multi-node clusters at the edge.
This plays to Kubernetes' strength because K8s
provides a nice, consolidated API for deployment of applications and
containers.
That's desirable not only for public cloud, but also for edge deployments, which usually mean many more sites. Operations teams still want to use the same interface they use with public cloud. One major difference from public cloud is that the physical servers themselves need to be managed, and gone are the days when manually deploying and configuring servers was an acceptable way to do that.
Therefore, tools like MAAS will become crucial components for edge Kubernetes
deployments. Equally important are standardized integrations between Kubernetes
and bare metal, such as the Spectro Cloud Cluster API MAAS provider and Juju-MAAS integration.
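To make that concrete, here is a minimal sketch of programmatic bare metal provisioning using the python-libmaas client; the MAAS URL, API key, and "edge" tag are placeholders rather than part of the integrations named above, and joining the deployed machine to a Kubernetes cluster would still be handled by whichever layer (Cluster API provider, Juju, or site tooling) sits on top.

    # Rough sketch (placeholders throughout): allocate and deploy a physical
    # machine through the MAAS API with the python-libmaas client.
    from maas.client import connect

    client = connect("http://maas.example.com:5240/MAAS/", apikey="<maas-api-key>")

    # Pick a ready machine carrying an "edge" tag and deploy an OS onto it.
    machine = client.machines.allocate(tags=["edge"])
    machine.deploy()

    # A higher layer (Cluster API MAAS provider, Juju, or other tooling) would
    # then join the machine to a Kubernetes cluster.
    print(machine.hostname, machine.status)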
2. Single-node K8s clusters for edge will be a growing trend.
Edge sites with a single machine will become increasingly important. There are sites where the application does not warrant hardware redundancy, or where redundancy is not economically viable. Even so, it is desirable to retain the Kubernetes API for rolling out applications and managing their life cycles.
Small, nimble, production-grade Kubernetes offerings such as MicroK8s are perfectly suited for these small deployments, which makes offering K8s on single-node clusters practical. MicroK8s provides the consistent packaging and deployment experience that is needed to support hybrid cloud.
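As a minimal sketch of what that looks like (assuming the official Kubernetes Python client and a kubeconfig exported from the cluster; the deployment name and image are placeholders), rolling an application out to a single-node MicroK8s cluster uses exactly the same API call as rolling it out to a managed cloud cluster:

    # Minimal sketch: deploy an application to a single-node MicroK8s cluster
    # through the standard Kubernetes API (official "kubernetes" Python client).
    from kubernetes import client, config

    config.load_kube_config()  # e.g. a kubeconfig exported with `microk8s config`

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="edge-app"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "edge-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-app"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="edge-app", image="nginx:1.25")]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)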
3. Bare metal Kubernetes will be the default for all new 5G base stations.
The telco industry is always striving to find more efficient ways to roll out applications. The 5G architecture has disaggregated many functions, allowing them to be containerized and rolled out on COTS (commercial off-the-shelf) hardware. These containerized functions are often referred to as CNFs (cloud-native network functions). Thanks to the maturation of Kubernetes, deploying and managing CNFs has become more effective, and operators can standardize on a common infrastructure abstraction. As a result, the preferred method for deploying 5G base stations and their associated functions will be via Kubernetes.
4. AI/ML or VR/AR workloads will be delivered to the edge on bare metal Kubernetes.
Expect to see these exciting applications drive more
deployments to the edge. Due to the way these applications use hardware such as
GPUs, they are easier to deploy and offer superior performance on bare metal.
AI/ML (Artificial Intelligence/Machine Learning) creates efficiencies by running closer to the data it is processing. VR/AR (Virtual Reality/Augmented Reality) also benefits from being closer to users, because that reduces latency to the end user.
For AI/ML, processing data into meaningful information closer to where it is generated reduces central storage requirements and cuts the network bandwidth needed to haul raw data back to a core site. Game streaming, where the game engine runs at the edge, will also benefit from the lower latency. As with other applications, deploying these workloads at the edge is easier and operationally more efficient with K8s, so expect them to be deployed as containers on bare metal as well.
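As a hedged illustration (assuming the NVIDIA device plugin is already running on the nodes and exposing the nvidia.com/gpu resource; the pod name and image are placeholders), a GPU workload on bare metal Kubernetes simply requests the GPU as an extended resource and lets the scheduler place it:

    # Sketch of a GPU-consuming pod on bare metal Kubernetes. Assumes the
    # NVIDIA device plugin advertises "nvidia.com/gpu" on the nodes.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-inference"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image="registry.example.com/edge-inference:latest",  # placeholder
                    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)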
5. Multi-tenancy at the edge will see more open-source Virtual Machine (VM)-based solutions.
VM-based isolation is important when the strictest security is needed, and hosting multiple tenants on shared hardware is the key driver for it. Open-source solutions such as LXD will see increased adoption at the edge as a result. LXD offers a rich set of functionality and the ability to lower capital expenditures for VM management.
Containers, and orchestration via Kubernetes, will still be important. Containers will co-exist with LXD by being deployed inside the VMs.
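A rough sketch of that pattern, assuming the pylxd client library and an LXD host with VM support (the instance name and image source below are placeholders): launch a per-tenant VM with LXD, then run MicroK8s or another container runtime inside it to host that tenant's containers.

    # Rough sketch, assuming the pylxd client and an LXD host that can run VMs.
    # The instance name and image details are placeholders.
    import pylxd

    client = pylxd.Client()  # connects to the local LXD unix socket by default

    vm_config = {
        "name": "tenant-a-vm",
        "type": "virtual-machine",  # VM-based isolation rather than a system container
        "source": {
            "type": "image",
            "alias": "ubuntu/22.04",
            "protocol": "simplestreams",
            "server": "https://images.linuxcontainers.org",
        },
    }

    vm = client.instances.create(vm_config, wait=True)
    vm.start(wait=True)
    # Inside the VM, MicroK8s (or another runtime) would then host the tenant's containers.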
As
these five predictions show, Kubernetes continues to mature as a transformational
technology in how software is deployed. It will be exciting to see all the
places K8s goes in the new year.
##
ABOUT THE AUTHOR
Anton Smith is a Product Manager at Canonical - publisher of Ubuntu. He is an experienced product and technology leader skilled
at leading teams in production.