By Graham Siener, VP, Product, VMware
Managing Kubernetes isn't easy. Three years ago I walked the booths of KubeCon in Seattle, asking vendors that offered managed services built on Kubernetes what their policy was for upgrading clusters. A surprising majority said they generally avoided upgrades since it took so much work to harden them. When teams can't rely on automation, they resist making changes.
Kubernetes has emerged as the API for infrastructure, in part because its primitives lend themselves so well to extensibility. This composability comes with a cost, though. Kelsey Hightower famously maintains a teaching tool called "Kubernetes The Hard Way" in recognition of the complexity and challenges involved in bootstrapping a production-grade k8s environment. And once you get to the starting line of a functional cluster, the next challenge emerges: ensuring that all your workloads can be upgraded.
We're still in Kubernetes' early days, which means that management challenges are just beginning too. The 2020 CNCF Survey reports that 92% of respondents use containers in production, a 300% increase from the first survey in 2016. That means more clusters to manage. At the same time, 36% of enterprises report that they are managing Kubernetes in multiple infrastructure environments - on-premises, in several public clouds, and at the edge - which complicates things still further. And let's not forget that there are more than 100 different certified Kubernetes distributions and installers in use today, each with its own special configuration and management needs. Complicated indeed.
The good news is that we know how to solve management problems like these: Kubernetes itself has shown us the power of automating software lifecycles using a declarative model. Inspired by Kubernetes, two open source projects have been formed to simplify and automate cluster management by applying declarative automation principles to the task. These projects are Cluster API and Carvel. Each project tackles important aspects of Kubernetes platform management, and together they open new doors to more consistent, efficient, and scalable management solutions.
Cluster API
Platform teams adopted Kubernetes, in part, because they could benefit from treating their cloud native apps as cattle, not pets. This unfortunately didn't extend to the clusters themselves, which were often curated as bespoke compositions of YAML, CRDs, and policies. And when teams reach for an off-the-shelf solution instead, they have to choose from more than a hundred distributions and installers, each with different default configurations for clusters and supported infrastructure providers.
Many attempts were made to manage the upgrade cycle from within Kubernetes itself, but the community eventually converged on performing this work outside the cluster. SIG Cluster Lifecycle began the Cluster API project as a way to address these gaps by building declarative, Kubernetes-style APIs that automate cluster creation, configuration, and management. In fact, a specific non-goal is to add these APIs to Kubernetes core.
The first step was aligning on kubeadm as the tool to support the best practices of bootstrapping a cluster. The supporting infrastructure - virtual machines, networks, load balancers, and VPCs - as well as the Kubernetes cluster configuration are all defined and managed in the same way that teams deploy and manage their workloads. This enables consistent and repeatable cluster deployments across a wide variety of infrastructure environments. Using this model, Cluster API enables the creation of clusters across multiple infrastructure providers with minimal changes to existing manifests.
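To make that concrete, here is a minimal sketch of what such a declarative cluster definition looks like. The names, namespace, and the choice of the AWS provider are illustrative assumptions, and exact fields vary by Cluster API release and infrastructure provider.

  # Sketch of a Cluster API cluster definition (names and provider are illustrative)
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: demo-cluster
    namespace: default
  spec:
    clusterNetwork:
      pods:
        cidrBlocks: ["192.168.0.0/16"]
    controlPlaneRef:                 # the control plane is managed declaratively too
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlane
      name: demo-control-plane
    infrastructureRef:               # point this at a different provider's cluster object to change infrastructure
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSCluster
      name: demo-cluster

The manifest is applied with kubectl like any other Kubernetes resource, and the Cluster API controllers reconcile the underlying infrastructure to match it.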
Extending beyond bootstrapping, Cluster API now provides declarative APIs for other parts of the k8s lifecycle, such as the kubeadm-based control plane provider, which allows for deploying and scaling the Kubernetes control plane, including etcd. The community is growing, and it's great to see the number of Kubernetes providers that support Cluster API continue to rise.
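As a rough illustration of that control plane API, the sketch below shows a KubeadmControlPlane resource. The names, replica count, and version are assumptions; the point is that scaling or upgrading the control plane becomes a matter of editing a field and letting the controllers roll out the change.

  # Sketch of a KubeadmControlPlane (illustrative names and values)
  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: KubeadmControlPlane
  metadata:
    name: demo-control-plane
    namespace: default
  spec:
    replicas: 3                 # scale the control plane by changing this number
    version: v1.22.2            # bump this to roll out a control plane upgrade
    machineTemplate:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: demo-control-plane
    kubeadmConfigSpec:
      clusterConfiguration: {}  # kubeadm settings for the cluster go here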
Carvel
Reflecting on the experience of authoring Kubernetes-based workloads, we see large gaps in packaging and lifecycle management.
First, developers and operators have to install, manage, and update packaged software manually, through a set of imperative commands, without being able to use standard Kubernetes APIs. This approach gets even more cumbersome, complex to learn, and error-prone if the packaged software being installed has dependencies. An imperative approach is easy to get started with, but it poses challenges for Day 2 operations, such as updates.
Next, developers and operators have a hard time knowing what is running on a cluster. It is difficult to inspect the various Kubernetes objects, and this gets more complex when software has dependencies. Creating and managing clones of a cluster for dev, staging, and prod is hard. Auditing software to ensure that what is running on the cluster is up to date, patched, and matches the desired configuration is an equally manual and error-prone process.
Last, the user experience for developers writing and running their own software is the same as the experience of consuming software written by someone else. This often leads to developers needing to learn a lot more about the packaged software than they want to.
Inspired by the "small, sharp tool" philosophy of Unix, Carvel provides a set of reliable, single-purpose, composable tools that aid in application building, configuration, and deployment to Kubernetes. It offers declarative APIs that make software easy to update: you change configuration files and let Kubernetes do what it does best, reconciling state. Building on that, it provides immutable bundles of software distributed through OCI registries, so that you know exactly what is running on your cluster and can reproduce the state of a cluster at will. Lastly, it uses a layered approach with appropriate abstractions to provide a UX suited to what you are doing, since operators can use these tools separately or in concert.
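As a rough sketch of how those pieces fit together, the kapp-controller App resource below fetches an imgpkg bundle from an OCI registry, templates it with ytt and kbld, and deploys it with kapp. The names, namespace, service account, and bundle image are illustrative assumptions.

  # Sketch of a kapp-controller App (names and image are illustrative)
  apiVersion: kappctrl.k14s.io/v1alpha1
  kind: App
  metadata:
    name: simple-app
    namespace: default
  spec:
    serviceAccountName: simple-app-sa    # assumed service account with permission to deploy
    fetch:
    - imgpkgBundle:
        image: registry.example.com/apps/simple-app:1.0.0   # immutable bundle in an OCI registry
    template:
    - ytt: {}     # render configuration and overlays
    - kbld: {}    # resolve image references to immutable digests
    deploy:
    - kapp: {}    # apply and track the resulting resources

Updating the software then amounts to changing the bundle reference in this file and letting the controller reconcile the cluster toward it.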
Summary
Using Cluster API-managed Kubernetes resources and Carvel-managed Kubernetes workloads gives you everything you need to declaratively provision and operate packaged software on your cluster. It enables a configuration-as-code approach to managing all your cluster operations, including upgrades. If you're interested in learning more, you can check out the getting started guides for Cluster API and Carvel, and watch Shatarupa Nandi's keynote talk on the future of package management.
When the time comes to walk the booths at the next KubeCon, I'm optimistic that more teams will share that they're leveraging these tools to take a more declarative approach to managing the many lifecycles of Kubernetes. As we move into the early majority phase of adoption, everyone should have the choice of running Kubernetes "The Easy Way."
##
To hear more about cloud native topics, join the Cloud Native Computing Foundation and the cloud native community at KubeCon+CloudNativeCon North America 2021 - October 11-15, 2021.
ABOUT THE AUTHOR
Graham Siener, VP, Product, VMware
Graham is the VP of Product for Tanzu's app platform products, focused on helping developers build great software with a modern supply chain and getting it to production faster and more often.