Get Started with Securing Your Containers Supply Chain

By Toddy Mladenov, Principal Product Manager, Microsoft

In today's digital world, news about software exploits is as common as the weather report. Among them, exploits of the software supply chain have become a prominent topic that we cannot ignore. Whether the threat comes from a rogue maintainer or a state-sponsored actor, pulling directly from the Internet puts your container workloads at high risk. Securing the containers supply chain is a continuous process, but starting early and making a deliberate plan is essential for your success.

What can you do, though? There are so many tools, technologies, and open-source projects to monitor, and only so many engineers on staff with limited hours in their days! Don't worry, you don't need to do everything at once. Every supply chain has clearly defined phases, and the logical approach is to start securing the pipeline from left to right.

In this article, I'll walk you through the tools and technologies you can employ to increase the security of each phase of the containers supply chain. I'll start with how you can better track the acquisition of your base images and host them internally, go over improvements in the build process, and end with deployment and runtime policies.

[Figure: The containers supply chain]

Acquiring Base Images

Let's look at a typical supply chain for containers. It starts with the acquisition of the base image. Depending on whether it is an open-source or proprietary image, you have two options: build the image from source or pull it from a public repository. While building from source is often the recommended option, it is not always feasible - access to third-party source code may not be available due to licensing restrictions. Also, building from source doesn't guarantee security: there are numerous examples where malicious code was introduced in the source code and propagated to the production environment.

There are a few important things you need to do when acquiring images from third parties. Annotating the image with additional information - the source repo or public registry, the Git commit that produced the image, the release date, etc. - gives you the traceability you need to track the image back to its origins. At Azure, we leverage OCI annotations to add this metadata to every image, which allows us to track back its provenance.
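
For example, recent versions of the ORAS CLI let you attach a small provenance artifact to an image and carry annotations along with it. A minimal sketch (the registry, artifact type, and file names below are hypothetical):

    # Attach a provenance artifact to an internally mirrored base image.
    # The annotation values record where the image came from.
    oras attach registry.internal.example.com/base/alpine:3.15 \
      --artifact-type application/vnd.example.provenance \
      --annotation "org.opencontainers.image.source=https://github.com/alpinelinux/docker-alpine" \
      --annotation "org.opencontainers.image.revision=<git-commit-sha>" \
      --annotation "org.opencontainers.image.created=2022-05-01T00:00:00Z" \
      provenance.json:application/json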

What software do the base images come with?

The next step after documenting the origin of the image is to generate a Software Bill of Materials (SBOM) that lists all the packages and binaries included in the image. Later, this SBOM can be used to introduce rules at deployment and run time that prevent images with known severe vulnerabilities from being deployed to test or production services. Using a standard SBOM format like SPDX will speed up your process thanks to the tools available and will also enable interoperability for your images if they are used by your customers.
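
As a sketch of how this can look in practice, here is how you might generate an SPDX SBOM with an open-source tool like Syft and attach it to the image in the registry (the tool choice, registry name, and artifact type are assumptions for illustration):

    # Generate an SPDX JSON SBOM for the image using Syft
    syft registry.internal.example.com/base/alpine:3.15 -o spdx-json > sbom.spdx.json

    # Attach the SBOM to the image (recent ORAS CLI)
    oras attach registry.internal.example.com/base/alpine:3.15 \
      --artifact-type application/spdx+json \
      sbom.spdx.json:application/spdx+json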

Do base images have vulnerabilities?

Now that we know where the image originated and what is included in it, we need to make sure that it doesn't contain any vulnerabilities. Using an open-source vulnerability scanner like Trivy can give you an early indication of whether the image contains well-known vulnerabilities that can be exploited. Trivy can produce a vulnerability report with details about each vulnerability discovered during the scan.
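
A minimal Trivy invocation might look like this (the image reference is a placeholder); the JSON report can later be attached to the image like any other artifact:

    # Scan the image, write a JSON report, and fail the step (exit code 1)
    # if any HIGH or CRITICAL vulnerabilities are found
    trivy image \
      --severity HIGH,CRITICAL \
      --exit-code 1 \
      --format json --output vuln-report.json \
      registry.internal.example.com/base/alpine:3.15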

Can you verify the publisher?

After all those steps, if we decide to use the image, we can attest that it is approved for use by our internal teams and push it to our golden images repository. The golden images repository hosts images that are not only approved for internal use but also have the latest bits and are free from known vulnerabilities. Before that, though, we need to make sure that the above information cannot be tampered with. We sign each of the generated artifacts using Notary V2 and push the image, any additional artifacts like annotations, SBOM, and vulnerability report, as well as their corresponding signatures to the golden registry.
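
With the Notary v2 "notation" CLI, signing and verification look roughly like this (assuming a signing key has already been added with "notation key add" and consumers have configured a trust policy; the registry name is a placeholder):

    # Sign the approved image before it enters the golden registry
    notation sign registry.golden.example.com/base/alpine:3.15

    # Consumers verify the signature before using the image
    notation verify registry.golden.example.com/base/alpine:3.15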

Hosting Base Images for Internal Use

Hosting the image internally is done in an OCI-compliant registry that also supports ORAS Artifacts. The ORAS Artifacts implementation allows us not only to store the above artifacts alongside the image but also to continuously scan the image for new vulnerabilities, push the new reports to the registry, and attach them to the image. This way we have up-to-date information about the vulnerability status of the image and can make decisions about its continued use. ORAS Artifacts also enables additional annotations, such as when the image has reached its end of life (EOL), or whether it meets a set of compliance requirements.
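
Once artifacts are attached this way, a single command can list everything linked to an image. For example (hypothetical reference):

    # Show the tree of artifacts (SBOMs, scan reports, signatures)
    # attached to an image in an ORAS Artifacts-enabled registry
    oras discover -o tree registry.golden.example.com/base/alpine:3.15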

Building Container Images

Our CI/CD pipelines are configured to only pull images from the golden-registry or the Microsoft Container Registry (MCR), which is the distribution vehicle for Microsoft-built containers. We are implementing the same capabilities in MCR to enable external consumers of Microsoft images to leverage the same secure supply chain process we use internally.

If you run your container workloads on AWS, you can leverage the AWS pull-through cache feature to avoid uncontrolled pulls from the Internet. Other cloud vendors are also coming up with similar solutions allowing customers to control the sources of their base images.
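
For example, creating an ECR pull-through cache rule for the Amazon ECR Public registry is a single AWS CLI call:

    # Cache images from ECR Public under the "ecr-public" prefix of your
    # private registry instead of pulling them straight from the Internet
    aws ecr create-pull-through-cache-rule \
      --ecr-repository-prefix ecr-public \
      --upstream-registry-url public.ecr.aws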

Establishing Build Policies

The CI/CD pipeline has a policy to allow only images from the above registries that are signed, have an SBOM, and have a recent vulnerability report. In the future, we can also update the policy to make decisions based on the information in the SBOMs and vulnerability reports. For example, we may deny the usage of images that have severe vulnerabilities like the ones present in Log4J 2.17.0. If the SBOM or the vulnerability report has Log4J 2.17.0 in it, we can block the use and recommend an updated version of the image. Using OCI annotations and ORAS Artifacts, we can continuously add metadata to the images to deprecate them and point image consumers to updated versions before we retire the images. The CI/CD pipeline is also instrumented to verify the signatures of those artifacts to ensure that no information has been changed after the image has been approved for use.
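
A simplified shell sketch of such a pipeline gate might look like the following (the registry names are placeholders, and a real implementation would also evaluate SBOM contents and report freshness):

    #!/bin/bash
    # Hypothetical CI gate: approved registry + valid signature + clean scan
    IMAGE="$1"

    # Allow only images from the golden registry or MCR
    case "$IMAGE" in
      registry.golden.example.com/*|mcr.microsoft.com/*) ;;
      *) echo "image is not from an approved registry"; exit 1 ;;
    esac

    # Verify the Notary v2 signature on the image
    notation verify "$IMAGE" || exit 1

    # Block images with known critical vulnerabilities
    trivy image --severity CRITICAL --exit-code 1 "$IMAGE"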

Keep Track of Component Dependencies

The CI/CD pipeline is used by teams to build their own application or service images. For any runtimes and packages added on top of the base image, we implemented a service that registers every component, keeps track of which allowed components are added to the image, and alerts if disallowed components are used. If a team tries to use a component that is not allowed, their build breaks, and they need to request the introduction of the component with the necessary justification.

Different enterprises may have their own implementations, but if you are just starting, the OWASP Dependency-Check tool and GitHub's Dependabot are excellent tools to start with.
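
For instance, the OWASP Dependency-Check CLI can scan a source tree and emit a report you can act on in CI (the project name and paths are illustrative):

    # Scan a service's source tree for known-vulnerable dependencies
    dependency-check.sh \
      --project myservice \
      --scan ./src \
      --format JSON \
      --out ./reports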

Add Metadata to Internal Images

As with the acquisition process for base images, once the application image is built, annotations are added and an SBOM is generated with information about the new components. We also scan the image for malware and vulnerabilities and sign all the respective artifacts. Only then is the image pushed to the service team's registry that is used for the production deployment.

Continuously Scan Images in the Registries

While in the service team's registry, the image is continuously scanned for vulnerabilities and malware to ensure that no new vulnerabilities are introduced to the test and production environments. Also, the image is annotated with additional metadata like deprecation flags or end-of-life dates. This way we can easily retire images that are end-of-life and monitor deprecated images that are still in use in production.

Deploying Container Images

At deployment time, we have policies that are evaluated using information stored in the registry. Gatekeeper and Ratify enforce that services are deployed only from internal registries. Because access to a registry is already required to deploy container-based services, storing the additional information needed for policy evaluation in the registry reduces the need for integration with other services. It also reduces the number of openings required in the runtime firewall, which further increases the security of those services.
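
If you want to experiment with this kind of verification locally before wiring it into Gatekeeper, the Ratify CLI exposes a verify command (the subject below is a placeholder, and the exact flags may vary between Ratify releases):

    # Verify signatures and attached artifacts for an image subject
    ratify verify -s registry.golden.example.com/app/myservice:1.0.0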

Running Container Images

Once the containers are deployed and running, we continue to scan them for vulnerabilities to make sure any vulnerability exposure of running containers is well understood. Using Eraser, we can remove images that match certain criteria from the runtime nodes.
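
At the time of writing, Eraser is driven by an ImageList custom resource. A minimal sketch of removing a deprecated image from all nodes (the image name is a placeholder, and the API version follows the project's early documentation):

    # Ask Eraser to remove a deprecated image from every node
    kubectl apply -f - <<EOF
    apiVersion: eraser.sh/v1alpha1
    kind: ImageList
    metadata:
      name: imagelist
    spec:
      images:
        - registry.golden.example.com/base/alpine:3.14
    EOF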

Patch Your Images

Patching images is, of course, one of the most important tasks in securing your containers supply chain. Automating the patching process is essential to keep your container workloads secure. Using tools like GitHub's Dependabot can help with tracking vulnerabilities. Be aware that multiple teams may be involved in updating the image and deploying the updated image to the production workloads. Adding rich metadata through OCI annotations can help you identify the actual owners and achieve a higher level of automation.

Keep an Inventory and Report

Inventory and reporting capabilities throughout this process are crucial for the security of the containers' supply chain. Having an up-to-date inventory of our assets allows us to quickly identify images that are vulnerable and track back to their origin. It also allows us to identify the owners responsible for different steps of the process. For example, some images are produced by central teams, and they are responsible for providing fixes for discovered vulnerabilities. Once the image owner fixes the vulnerability, various teams are responsible for re-deploying their service with the updated image. Developing solid SLAs for vulnerability patching and tracking those SLAs using KPIs is critical for understanding your container security posture.

Don't Wait! Start Now!

Your plan to improve the security of your containers supply chain should follow an incremental approach. Do not wait for all tools to be available before you start! The basic steps you can start with today are:

  • Using ORAS, add OCI annotations to your images so you can track them throughout your supply chain
  • Use a vulnerability scanner to continuously scan the images while they are in your registry
  • Start defining admission policies for your container workloads

From there on, you can add new capabilities like SBOM generation and signatures once the tools mature and industry standards become available. Finally, get involved in the community and help shape the future of the secure supply chain for containers.


To learn more about containerized infrastructure and cloud native technologies, consider joining us at KubeCon + CloudNativeCon Europe 2022, May 16-20.

ABOUT THE AUTHOR

Toddy Mladenov, Principal Product Manager, Microsoft


Toddy has more than 25 years of experience in technology and is currently a Principal Product Manager on the Container Compute team at Azure. These days, he is working on securing the supply chain for containers used by Azure service teams and Microsoft customers.

Published Wednesday, May 11, 2022 7:32 AM by David Marshall