Virtualization Technology News and Information
Tips and Considerations for Kubernetes Deployment in Large Enterprises


Containerized applications have become commonplace in modern software development and deployment. For smaller deployments, containers can potentially be managed manually, but larger ones, such as those required by large enterprises, need an orchestration solution. Kubernetes, created by engineers at Google, is one such solution.

Why Use Kubernetes?

There are numerous tools for container orchestration currently available, but Kubernetes (K8s) is by far the most widely used. Its open-source nature means you don't need to worry about vendor lock-in, and its portability lets you run it on just about any setup: on-premises, hybrid cloud, or public cloud, regardless of provider. Since K8s is cloud-native, it readily supports the agile development, high availability, and scalability that high-volume, production-grade deployments require.

Kubernetes has built-in features for container versioning, inter-container communication, and storage management. It can automatically manage container availability, integrates well with Continuous Integration/Continuous Delivery (CI/CD) pipelines, and provides a platform for building customized workflows and higher-level automation.

Tips for Deploying Kubernetes in Large Enterprises

Deploying Kubernetes on an enterprise scale is a complex process that requires expertise. Here are some tips that can help ease the process for you.

Take Advantage of Community Resources

K8s has the backing of a huge open-source community with webcast office hours, community meetings, and member talks, in addition to extensive documentation. If you run into issues with your deployment, it is likely that someone else has as well and may have already figured out a solution that they would be happy to share. Depending on where you're located, there are relatively frequent conferences and meet-ups where you can connect to other Kubernetes users and exchange advice or lessons learned.

The significant number of large enterprises making use of K8s, including Intel, Red Hat, and Microsoft, means that many community members have experience with large-scale deployments and will gladly give you pointers on managing considerable workloads across expansive networks.

Consider a Managed Service

Kubernetes enterprise-level deployments are a significant undertaking, with many dynamic parts and opportunities for things to go awry. Fortunately, a wide variety of tools have already been developed to help smooth the process, including managed services. These services are available along a spectrum including self-service deployments, management of self-hosted operations, and fully-managed Platform-as-a-Service (PaaS) solutions.

Unless you are looking to further develop K8s or build a platform on top of it yourself, there is little reason not to take advantage of these shortcuts. Doing so will let you enjoy Kubernetes' benefits without the operational burden, and may provide additional benefits you would not otherwise have access to, such as increased automation or Service Level Agreements (SLAs).

Take Care of Monitoring and Logging

Although K8s allows you to automate a significant amount of the orchestration process, you still need to consistently monitor and log your operations. This is especially true in large enterprise deployments, where even small drops in availability or brief downtimes can have large revenue consequences. Monitoring helps ensure that you are getting the best possible performance from your configuration and allows you to respond more quickly to any issues that might arise. Logging will help you track down the root cause of any issues and provide insight into how your setup can be optimized.

For monitoring, you can use either the resource metrics pipeline or a full metrics pipeline, such as Prometheus, Sysdig, or Google Cloud Monitoring. The former collects a limited set of metrics about cluster components, exposed through the Metrics API and the kubectl top utility. The latter provides a richer set of metrics that can be used to drive automated responses to changes in performance. Metrics gathered through consistent monitoring are also useful for confirming ROI and informing KPIs for future growth.
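As a concrete illustration of putting metrics to work, the sketch below checks pod CPU usage against a threshold. It assumes JSON in the shape returned by the Kubernetes Metrics API (metrics.k8s.io/v1beta1); fetching it from a live cluster, authentication, and error handling are omitted, and the pod names are hypothetical.

```python
def parse_cpu_millicores(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity ('250m', '1', '500000000n') to millicores."""
    if quantity.endswith("n"):          # nanocores, as reported by metrics-server
        return float(quantity[:-1]) / 1_000_000
    if quantity.endswith("m"):          # millicores
        return float(quantity[:-1])
    return float(quantity) * 1000       # whole cores

def pods_over_cpu(pod_metrics: dict, threshold_m: float) -> list:
    """Return names of pods whose summed container CPU usage exceeds threshold_m."""
    hot = []
    for pod in pod_metrics["items"]:
        total = sum(
            parse_cpu_millicores(c["usage"]["cpu"])
            for c in pod["containers"]
        )
        if total > threshold_m:
            hot.append(pod["metadata"]["name"])
    return hot

# Sample payload mirroring a PodMetricsList response:
sample = {
    "items": [
        {"metadata": {"name": "web-1"},
         "containers": [{"usage": {"cpu": "750m"}}]},
        {"metadata": {"name": "web-2"},
         "containers": [{"usage": {"cpu": "120m"}}, {"usage": {"cpu": "80m"}}]},
    ]
}
print(pods_over_cpu(sample, 500))  # → ['web-1']
```

In a full metrics pipeline, a check like this would typically live in an alerting rule rather than application code, but the logic is the same.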

Logging can be done directly through kubectl logs or by integrating tools like Fluentd or Elastic, which add log aggregation and search functionality. Whichever method you choose, your applications should write their logs to stdout/stderr, in line with twelve-factor app guidelines. Log data makes it easier to diagnose system issues and simplifies compliance and auditing procedures.
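A minimal sketch of twelve-factor-style logging follows: the application emits structured lines to stdout and leaves collection to the cluster's log agent (such as Fluentd). The logger name and JSON field names are illustrative, not a required schema.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line, easy for log collectors to parse."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)   # stdout, never a local file
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")   # emits one JSON line on stdout
```

Because the container writes only to stdout/stderr, the same image works unchanged whether logs end up in Elastic, Cloud Logging, or a developer's terminal.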

Ensure High Availability (HA) of Clusters

Highly available clusters are vital to large enterprises wanting to ensure reliable service for their teams and customers. To ensure that your operations aren't negatively impacted by lack of availability, you should implement a multi-cluster deployment model in which your nodes are mirrored across multiple regional data centers.

Used in combination with a load balancing solution capable of external distribution, this configuration can help you eliminate single points of failure and reduce downtimes. When doing this, you should ensure that any storage solutions you have mapped are also HA so your clusters aren't negatively impacted.
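The failover behavior such a load balancer provides can be sketched in a few lines: requests rotate across regional endpoints and skip any endpoint whose health check fails. The region hostnames below are hypothetical.

```python
def pick_endpoint(endpoints, healthy, start=0):
    """Return the first healthy endpoint at or after `start`, round-robin."""
    n = len(endpoints)
    for i in range(n):
        candidate = endpoints[(start + i) % n]
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy endpoints: total outage")

regions = ["us-east.example.com", "eu-west.example.com", "ap-south.example.com"]
down = {"us-east.example.com"}           # simulate a regional failure
choice = pick_endpoint(regions, lambda ep: ep not in down)
print(choice)  # → eu-west.example.com
```

Real deployments delegate this to DNS-based or global load balancers rather than application code, but the principle is the same: no single region's failure should make the service unreachable.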

Use Stateless Applications

Stateless applications, which keep no session data on the server (state is instead carried client-side, for example in cookies, rather than held in a connected database), are generally better suited to how Kubernetes works. They can serve simultaneous requests from any replica and can easily be restarted in case of error, providing an overall smoother experience for end users.

Using stateless applications lets you avoid the complication of balancing consistency, availability, and partition tolerance, per the CAP theorem, that stateful applications require, and eliminates single-point-of-failure concerns. Stateless applications can be additionally beneficial in production environments by reducing the liability associated with storing large amounts of user-specific data in enterprise databases.

Wrap Up

Deploying Kubernetes in large enterprises can be challenging, particularly if you don't have experts in-house to perform the configuration and ongoing operations tasks required. However, the benefits that K8s can provide are immense and should not be sacrificed or ignored due to lack of expertise.

A carefully constructed plan will allow you to successfully roll out even the largest deployments. Implementing the tips covered here should help you build and execute such a plan. Working methodically from the start will help your team keep a firm grasp on your operations and simplify the process as you continue to scale up.

If you already have in-house professionals devoted to mastering Kubernetes, your next step should be to assess which applications you want to deploy first. If you want to go into production as soon as possible but don't have the expertise needed, you should explore some of the managed services available.


About the Author

Gilad Maayan 

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Imperva, Samsung NEXT, NetApp and Ixia, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. Today he heads Agile SEO, the leading marketing agency in the technology industry.

Published Wednesday, August 07, 2019 7:37 AM by David Marshall