By Emile Vauge - Founder and CEO, Traefik Labs
Kubernetes has become foundational to modern application architectures and continues to expand its market presence. A recent survey of 500 full-time IT department employees conducted by Portworx by Pure Storage finds that 87% expect Kubernetes to play a larger role in their organizations over the next two years, and 79% note they have already increased their usage of Kubernetes clusters over the last two years. The primary reasons respondents cite for this increased reliance on Kubernetes are the need for more automation (56%), followed by reduced IT costs (53%), the need to deploy applications faster (49%), and digital transformation initiatives spurred by the COVID-19 pandemic (48%).
After initial adoption, many enterprise IT organizations quickly realize that Kubernetes is at once the most powerful and the most complex platform they have ever deployed and managed. Those same enterprises are now attempting to manage fleets of Kubernetes clusters that present even more networking and security challenges, at unprecedented scale.
While it may feel intuitive to run many workloads in a single Kubernetes cluster for easier management and better resource utilization, we observe the opposite: the number of Kubernetes cluster deployments keeps growing, whether because development teams choose to run their own clusters, because workloads at the edge must run closer to users for performance, or because workloads must be isolated for organizational or legal reasons.
Kubernetes was built by some of the world's most talented software engineers for large-scale architectures. The issue is that its complexity requires skilled software engineers, who are a scarce resource in today's highly competitive labor market. Not only is Kubernetes expertise hard to find and retain, but the engineers who have these skills also command some of the highest salaries in the IT industry.
This rapid growth in deployed Kubernetes clusters, coupled with the challenge of attracting and retaining in-house Kubernetes expertise, leaves small-to-medium-sized IT organizations struggling to keep up with cluster sprawl. The market needs simpler ways to industrialize Kubernetes at scale, whether through a central control plane, automation, or both.
The proliferation of Kubernetes clusters demands a central control plane
As fleets of Kubernetes clusters continue to expand, a central control plane becomes necessary to ensure that the system's different components work together effectively and efficiently. Without one, it is difficult to manage and coordinate the different Kubernetes clusters and to ensure that applications run smoothly end-to-end. A central control plane also gives DevOps teams and administrators centralized management and control over the clusters. It needs to take microservices networking into consideration by:
- Managing global and local traffic from one place while
providing a dashboard overview of distributed environments
- Applying settings such as traffic management rules and security policies globally across all clusters in a consistent manner (see the sketch after this list)
- Providing a centralized Global Server Load-Balancing
(GSLB) capability to increase reliability and reduce latency for applications
spanning multiple regions in public and private clouds.
A centralized control plane with a simple-to-use web GUI is a convenient way to enable teams to quickly bootstrap projects. Organizations that are just getting started with Kubernetes will find this invaluable while they are still handcrafting individual cluster deployments.
But as organizations accelerate their adoption and use of Kubernetes in production, manual management of multiple clusters becomes untenable.
Automation is no longer a nice-to-have
The only way to effectively navigate Kubernetes deployments
at scale is to adopt the right automation and management tools.
Organizations deploying Kubernetes must make it accessible to the small army of administrators who populate most IT teams. Most of those teams seek full automation and auditability through GitOps - an evolution of DevOps automation that treats the Git repository as the single source of truth - to deploy and manage infrastructure and applications across multiple Kubernetes clusters.
At its core, GitOps promotes the use of declarative infrastructure and application definitions, which describe the desired state of the environment rather than the steps required to achieve it. Non-GitOps approaches to provisioning clusters and deploying manifests are often fragmented and involve manual intervention, which costs engineers time and slows the process of scaling. GitOps solves the problem of managing and deploying infrastructure and applications in a consistent and repeatable way, with easier collaboration (backed by full audit trails), version control, and the ability to roll back changes. By leveraging GitOps-compliant tools, application teams can automate the self-healing, autoscaling, and observability of Kubernetes clusters, and establish a consistent method for incorporating security and observability standards.
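To illustrate the declarative idea, here is a minimal, tool-agnostic sketch of the reconcile loop that GitOps tools such as Argo CD or Flux run continuously. The helper functions and cluster names are hypothetical placeholders, not any real tool's API.

```python
# Sketch of the GitOps reconcile loop: Git holds the desired state, and the
# loop converges every cluster toward it. All names below are illustrative.

def desired_state(repo_path: str) -> dict:
    """Read the declarative definitions committed to Git (the source of truth)."""
    # A real tool would parse the manifests in the repository; this is a stand-in.
    return {"deployment/web": {"replicas": 3, "image": "web:1.4.2"}}

def actual_state(cluster: str) -> dict:
    """Query a cluster for what is currently running."""
    return {"deployment/web": {"replicas": 2, "image": "web:1.4.1"}}

def reconcile(repo_path: str, cluster: str) -> None:
    desired = desired_state(repo_path)
    actual = actual_state(cluster)
    for name, spec in desired.items():
        if actual.get(name) != spec:
            # Apply only what drifted; the change itself came from a Git commit,
            # so the audit trail and the rollback path are the Git history.
            print(f"{cluster}: {name} drifted, converging to {spec}")

if __name__ == "__main__":
    # The same loop runs against every cluster in the fleet, which is why a
    # rollback is just a `git revert`: the next sync restores the old state everywhere.
    for cluster in ("edge-eu", "edge-us", "cloud-core"):
        reconcile("./gitops-repo", cluster)
```

The important property is that the loop is idempotent and driven entirely by what is committed, which is what makes fleet-wide changes consistent and auditable.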
Regardless of the motivation behind the initial rise in adoption, it's clear Kubernetes is now a permanent fixture in the IT landscape, with clusters deployed everywhere from the network edge to the cloud and everything in between. Investing in a GitOps-ready central control plane will point organizations toward the next frontier of Kubernetes management.
To learn more about the transformative nature of cloud native applications and open source software, join us at KubeCon + CloudNativeCon Europe 2023, hosted by the Cloud Native Computing Foundation, which takes place from April 18-21.
ABOUT THE AUTHOR
Emile Vauge, Founder and CEO, Traefik Labs
Emile Vauge is the founder and CEO of Traefik Labs, the leading cloud-native open source company used by the world's largest online enterprises, including eBay, Condé Nast, and NASA. Emile is also the creator of Traefik, one of Docker Hub's top-ten projects with more than 3 billion downloads. Prior to Traefik Labs, Emile spent more than 10 years building applications for web-scale and large enterprise organizations, where his first-hand experience with microservices inspired him to create Traefik.