By Theo Despoudis, Content Consultant, Palo Alto Networks
Even though there are plenty of security
controls available in Kubernetes, it is not secure by default; there is plenty of room for
errors and misconfigurations. In Kubernetes, secrets include passwords, API
tokens, and SSH keys. Containers in a pod use them to access, or be
accessed by, the Kubernetes control plane and external services. However, leaving secrets
unprotected is about as good as having no security on those communications at all.
We could adhere to the best practices for securing a Kubernetes
installation; however, they are not a complete solution. Security is not an
inspection or a check that you perform once and are done. Every security professional understands the
continuous need to deploy new and updated security controls as architectures
evolve. When it comes to protecting secret keys and sensitive configuration
data across a cluster, the current best strategy is to manage them with a Zero Trust security model.
This article provides an overview of secrets
management and explains how to secure secrets at the container level.
How Kubernetes Handles Secrets
It's important to understand how Kubernetes stores any type of secret
(either encrypted or plaintext). By default, it uses the existing etcd server, which is the distributed key-value store also used for other
operations. The kube-apiserver
component of Kubernetes is responsible for communicating with the etcd store.
Alternatively, the kube-apiserver can
be configured to communicate over gRPC to a key management system (KMS).
If left unconfigured, any secrets stored using
the standard mechanisms (the Opaque secret type) will only be base64 encoded, or
stored in plaintext if the stringData field is used.
This means that anyone with access to the etcd cluster can see those secrets
in plain sight (for example, by querying the cluster with a curl request such as GET
http://<CLUSTER_IP>:2379/v2/keys/?recursive=true).
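As a quick illustration of how weak base64 encoding is as a protection (the secret name and value below are hypothetical), anyone who can read the Secret object or etcd can recover the plaintext in one command:

```shell
# "cGFzc3dvcmQ=" is how the value "password" appears in a Secret's
# data field -- encoded, not encrypted.
echo "cGFzc3dvcmQ=" | base64 --decode
# Against a live cluster, the encoded value would come from, e.g.:
# kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 --decode
```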
A first step to secure these secrets is to
provide an encryption configuration and to deny all public access to the etcd
cluster before setting up a production environment. You could also encrypt the
values before sending them to Kubernetes. For example:
$ cat secrets.txt
postgresPassword=password
$ openssl enc -aes-256-cbc -salt -in secrets.txt -out secrets.txt.aes -k password
Then we store/retrieve them as:
$ kubectl create secret generic db-credentials --from-file=./secrets.txt.aes
secret/db-credentials created
$ kubectl get secret db-credentials -n default -o yaml
apiVersion: v1
data:
  secrets.txt.aes: U2FsdGVkX1/1w56G2eGwzs7ZfLiLrUN/gPDl1yHkdNKYFNWIsspRHxIFz2ytJdrM
kind: Secret
...
However, this method increases the maintenance
effort. We have to propagate the passphrase used to encrypt secrets.txt to every application
config that needs to access the secrets. Handling secrets like this should only be
a last resort or a way to conduct quick experiments.
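To illustrate that burden, every consumer has to run the matching openssl decryption step. A minimal sketch, reusing the same passphrase as the example above:

```shell
# Recreate the encrypted file from the example above.
echo "postgresPassword=password" > secrets.txt
openssl enc -aes-256-cbc -salt -in secrets.txt -out secrets.txt.aes -k password

# Every consumer of the secret must know the passphrase and repeat
# this decryption step, so the passphrase itself becomes one more
# secret that has to be distributed out of band.
openssl enc -aes-256-cbc -d -in secrets.txt.aes -k password
```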
Let's see the other ways to configure
encryption at rest when handling secrets.
Configuring and Using Local Encryption At Rest
We can supply our own encryption-at-rest
configuration by applying the relevant config. We can choose from four
providers for secrets held in the local etcd server: identity, aescbc, secretbox and aesgcm.
The identity provider performs no encryption at all (data is stored in plaintext) and is mostly
used in testing.
The other providers use well-vetted
cryptographic algorithms (AES-CBC, XSalsa20, AES-GCM) with standard library
implementations (AES, Secretbox). Beyond supplying keys, there is little to no
configuration other than the default parameters.
When we set up the kube-apiserver, we have the option to pass a flag, --encryption-provider-config, that accepts a configuration
file of the EncryptionConfiguration kind. Note that
some cloud providers with managed Kubernetes engines do not allow you to pass
any parameters to the kube-apiserver,
forcing you to use either a KMS or a provider-managed solution, which
we'll discuss next.
Here is an example configuration. Note that the first provider in the list is the one used to encrypt new secrets, so identity should come last (it only serves as a fallback for reading data written before encryption was enabled), and each aesgcm key must be a base64-encoded 16, 24 or 32-byte value:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
            - name: key2
              secret: <base64-encoded 32-byte key>
      - identity: {}
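A valid key can be generated with standard tooling. For example, a random 32-byte key, base64 encoded for use in a provider's secret field:

```shell
# Generate a random 32-byte key and base64-encode it, suitable for the
# aesgcm (or aescbc/secretbox) provider in an EncryptionConfiguration.
head -c 32 /dev/urandom | base64
```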
When we store the encryption configuration
this way, we have an extra responsibility to protect the configuration file
itself. If attackers somehow manage to access that file, they can extract the
encryption keys and compromise the secrets. Solutions such as SOPS
can be used as an extra layer of security when we want to encrypt configuration
files with an existing KMS.
If you use this configuration, it's really
important to rotate the keys regularly to limit the damage if one is compromised. In
both this and the previous example, etcd remains a weak point, as it stores both important
configuration information and the secrets. In the next section we'll explore a
third way to store secrets that doesn't rely on etcd.
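Rotation itself is a two-step process: add the new key as the first entry in the provider's key list and restart the kube-apiserver, then force every secret to be rewritten so it is re-encrypted with the new key. The rewrite step is typically a one-liner like this (it requires access to a running cluster, so treat it as a sketch):

```shell
# Read every Secret and write it back unchanged; the write path
# re-encrypts each object with the first key in the provider list.
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```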
Configuring and Using External Encryption Managers
As mentioned before, if we configure
Kubernetes to use an external KMS provider, then the kube-apiserver will establish a gRPC connection with the defined KMS.
The kube-apiserver and the KMS provider run in the same pod, and each KMS provider
establishes its own connections with the external secrets engine. For example,
we show the vault-kms-plugin in the diagram below.
All communication between the kube-apiserver and the vault-kms-plugin takes place
over a shared UNIX domain socket. The vault-kms-plugin establishes its own connection to the Vault server, which may reside
in a different pod inside the cluster, or even in a different network. This
connection is encrypted and uses authentication credentials for added security.
Some cloud providers may offer their own KMS
plugins with extra protections or enhanced security features. For example,
when using Azure Kubernetes Service, you could certainly use Vault as your
preferred KMS provider. But Azure Key Vault for Kubernetes offers better
integration (in terms of access control and identity policies) and less
operational overhead on the Azure platform. The downsides are
increased lock-in with Azure services and reliance on existing features, which
may not cover all business requirements; for example, as of this writing, the KMS
Plugin for Key Vault does not support key rotation.
Alternatively, you can also develop a new
provider that implements custom logic. A quick search on GitHub turns up a few
interesting examples, such as this
one. The logic of the gRPC client is encapsulated inside the cmd folder. However, inside the pkg
folder, we find the encryption service, which leverages AWS KMS to store and
retrieve secrets. If your organization has the resources and business use
cases, it can invest in developing a KMS provider that satisfies them better
than anything else available.
Handling Secrets with Prisma Cloud
With more businesses and organizations
adopting Kubernetes as part of their core infrastructure, it's important to
have well-integrated and up-to-date security systems in place for handling
Kubernetes secrets. Prisma Cloud by Palo Alto Networks offers a
single pane of glass for all cloud security controls, and can natively
integrate with reputable secrets management stores such as Azure Key Vault, HashiCorp Vault or AWS
Secrets Manager. It allows you to control when and what secrets to inject into
the containers. Additionally, you can leverage Bridgecrew by Prisma Cloud to check if configurations have encrypted
secrets.
Check out the video demo to see how they can
help protect your cloud platform today.
##
To learn more about
cloud native technology innovation, join us at KubeCon + CloudNativeCon Europe 2021 - Virtual, which will
take place from May 4-7.
ABOUT THE AUTHOR
Theo
Despoudis is a Senior Software Engineer, a
consultant and an experienced mentor. He has a keen interest in Open Source
Architectures, Cloud Computing, best practices and functional programming. He
occasionally blogs on several publishing platforms and enjoys creating projects
from inspiration. Follow him on Twitter @nerdokto. He can be contacted via http://www.techway.io/.