Demystifying Kubernetes Clusters at the Edge

The once-emerging edge computing market is now at the center of data processing and analysis decisions across industries, from retail, hospitality, and consumer electronics to manufacturing, renewables, and oil & gas. With the widespread introduction of smart, connected devices, equipment, sensors, and applications, data traditionally created, housed, and managed inside cloud environments and data centers is increasingly moving to the edge. This rapid growth is also creating demand for a comprehensive edge computing solution, one that not only moves data processing and analysis closer to the endpoints where data is generated, for greater value and faster decision-making, but also removes the cost of transferring large amounts of data to the cloud.

Is Kubernetes a good fit for edge computing?

As the de facto open-source standard for container orchestration and management across multiple hosts, including both cloud and centralized data center environments, Kubernetes provides a powerful platform for deploying, maintaining, and scaling applications. With much new software now delivered in containers, it stands to reason that Kubernetes would make an ideal solution for the distributed edge. However, compared to cloud-native applications and centralized data center environments, deploying Kubernetes clusters on edge infrastructure presents several complexities. You need to consider the inherently heterogeneous nature of edge hardware, software, and skill sets; the limited computing footprint; geographic distribution; and expanded security needs. Then there is the challenge of extending cloud-native development principles to the edge while integrating legacy systems and software, plus the unique needs of operational technology systems. Being aware that these challenges exist at the edge before attempting to deploy Kubernetes is the first step toward a successful implementation.

So, let's take a closer look at the most critical considerations for Kubernetes at the edge, along with a few real-world strategies to manage them.

Overcoming edge deployment, orchestration, and management challenges

  • The standard Kubernetes distribution, K8s, is simply too big for the edge.

When you consider the inherently constrained nature of devices and resources at the edge, Kubernetes clusters built for cloud deployment just don't translate one-to-one; Kubernetes at the edge must be deployed differently. The first step is selecting the right Kubernetes distribution for your edge hardware and deployment strategy. Many developers look to compact, open-source distributions with lower memory and CPU footprints, like K3s, KubeEdge, and MicroK8s, as a singular solution, but these small, focused distributions may not adequately address data sharing and communication, system interoperability, and elastic scaling needs.

A more fitting solution would be working with an orchestration vendor or platform that provides the flexibility necessary to support any Kubernetes distribution, including K3s, K8s, KubeEdge, and MicroK8s to name a few.
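
Before committing to a distribution, it helps to ground the decision in what resources your edge nodes actually report. Below is a minimal Python sketch, assuming the official Kubernetes client library (the kubernetes package on PyPI) and a reachable kubeconfig; the function name and output format are illustrative, not part of any vendor tool.

    # Survey per-node allocatable CPU and memory before choosing a distribution.
    # Assumes the official Kubernetes Python client and a reachable kubeconfig.
    from kubernetes import client, config

    def survey_node_capacity():
        config.load_kube_config()  # or load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        for node in v1.list_node().items:
            alloc = node.status.allocatable  # e.g. {'cpu': '2', 'memory': '1939Mi', ...}
            print(f"{node.metadata.name}: cpu={alloc.get('cpu')}, memory={alloc.get('memory')}")

    if __name__ == "__main__":
        survey_node_capacity()

If nodes report only a core or two of CPU and a gigabyte or so of memory, a compact distribution such as K3s or MicroK8s is a more realistic starting point than full K8s.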

  • Scaling clusters, sites, and locations at the edge requires a distinctly different approach than the cloud. K8s was designed for scale and flexibility. In the cloud, this design enables running thousands of containers in a cluster, and an operator can easily manage three to five one-thousand-node clusters. But the edge presents a very different scenario: you might have thousands of three- to five-node clusters, a situation that current management tools aren't designed to handle.

So, how can you scale with Kubernetes at the edge? There are a couple of viable options: 1) Maintain a manageable number of clusters and, if required, run multiple instances of a container orchestration platform; or 2) Implement Kubernetes workflows in a non-Kubernetes environment, like EVE-OS, an open-source operating system developed as part of the Linux Foundation's LF Edge consortium, which supports running virtual machines and containers in the field.

In option 1, manageability is a key factor. Managing up to 500 clusters, for example, might be a suitable approach, but thousands of small clusters would likely result in an unmanageable and problematic scenario that could stress all of your resources. This approach is ideal for users who intend to leverage core Kubernetes capabilities or are looking to manage a large number of containers at a site.
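
To illustrate where option 1 starts to strain, consider a fleet managed as one kubeconfig context per cluster. The Python sketch below, written against the official Kubernetes client, simply polls each context for Ready nodes. It is a toy pattern: at hundreds of contexts, the serial polling, credential distribution, and error handling become exactly the work a management plane has to absorb.

    # Toy fleet check: one kubeconfig context per edge cluster.
    # Assumes all contexts are already merged into the local kubeconfig.
    from kubernetes import client, config

    def check_fleet_health():
        contexts, _ = config.list_kube_config_contexts()
        for ctx in contexts:
            name = ctx["name"]
            api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
            ready = sum(
                1
                for node in api.list_node().items
                for cond in (node.status.conditions or [])
                if cond.type == "Ready" and cond.status == "True"
            )
            print(f"{name}: {ready} node(s) Ready")

    check_fleet_health()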

Option 2 may require a significant investment, but several companies are having real-world success with this approach. It takes advantage of existing Kubernetes-based workflows without implementing the Kubernetes runtime on the edge node. For example, a user can define a Helm chart application within the Kubernetes ecosystem and deploy it into containers running on a platform like EVE. This works well if you have a manageable number of containers, but if you're looking to manage more than ten or so, you may begin to push the limits. At that point, it would be advisable to go back to Kubernetes and rethink the approach.
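
To make option 2 concrete, here is a hedged Python sketch of the workflow: render an existing Helm chart to plain manifests with the standard helm template command, then pull out the container images for whatever runtime (EVE-OS, in this example) actually runs them at the edge. The release name and chart path are placeholders, and a real pipeline would translate the full pod specs, not just the image names.

    # Reuse a Kubernetes-ecosystem artifact (a Helm chart) without running a
    # Kubernetes runtime on the edge node: render it, then hand off the specs.
    import subprocess
    import yaml  # PyYAML

    def render_chart(release, chart_path):
        rendered = subprocess.run(
            ["helm", "template", release, chart_path],
            check=True, capture_output=True, text=True,
        ).stdout
        # helm template emits a multi-document YAML stream of manifests.
        return [doc for doc in yaml.safe_load_all(rendered) if doc]

    manifests = render_chart("sensor-app", "./charts/sensor-app")  # placeholders
    images = {
        container["image"]
        for manifest in manifests
        if manifest.get("kind") in ("Deployment", "DaemonSet", "StatefulSet")
        for container in manifest["spec"]["template"]["spec"]["containers"]
    }
    print("Container images to deploy on the edge runtime:", images)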

  • Network connectivity is inherently unreliable at the edge. When you build applications in the cloud or in a data center, it's safe to assume always-on connectivity outside of planned maintenance. Devices in many edge environments, like solar arrays and wind turbines, however, sit in remote locations with harsh conditions, making network connectivity intermittent and unreliable.

When unexpected outages occur, the logical choice in a data center or cloud environment might be dispatching an IT technician, but sending an expert to a distributed edge location is not only impractical but costly. If keeping up with field diagnostics, maintenance predictions, and unexpected outages is critical, consider a centrally managed orchestration and management solution that continues to work even when networks and devices go offline for extended periods, delivers reliable updates, and supports diverse communication hardware such as satellite, LTE, and 5G.
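
One building block for that kind of resilience, sketched below in Python under generic assumptions, is a store-and-forward loop: the edge agent buffers telemetry locally and retries its uplink with capped, jittered exponential backoff, so a long outage simply drains the queue once connectivity returns. The send_batch function is a stand-in for whatever transport (satellite, LTE, 5G) the site uses, not a real API.

    # Generic store-and-forward pattern for intermittently connected edge nodes.
    import random
    import time
    from collections import deque

    queue = deque(maxlen=10_000)  # bounded local buffer; oldest entries drop first

    def send_batch(batch):
        raise ConnectionError("uplink down")  # placeholder for the real transport

    def flush(max_backoff=300.0):
        backoff = 1.0
        while queue:
            batch = [queue.popleft() for _ in range(min(len(queue), 100))]
            try:
                send_batch(batch)
                backoff = 1.0  # reset after a successful send
            except ConnectionError:
                queue.extendleft(reversed(batch))  # requeue in original order
                time.sleep(backoff + random.uniform(0, 1))  # jittered wait
                backoff = min(backoff * 2, max_backoff)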

  • Securing your Kubernetes cluster alone isn't enough at the edge. When devices are no longer managed in a centralized data center or the cloud, a variety of new security threats opens up, including physical access to both the device and the data it contains. As a consequence, you'll want to extend your security measures beyond Kubernetes containers and develop infrastructure security measures that protect the devices themselves, as well as any software running on them.

An infrastructure solution like EVE-OS was purpose-built for the distributed edge. It addresses common edge concerns such as preventing software and firmware attacks in the field, ensuring security and environmental consistency over unsecured or flaky network connections, and deploying and updating applications at scale with limited or inconsistent bandwidth.
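
As one small, illustrative device-level measure in that spirit (a generic pattern, not a description of EVE-OS's actual mechanisms), an edge agent can refuse to apply any update whose digest doesn't match a value obtained through a trusted channel. In the Python sketch below, the file path and expected digest are hypothetical.

    # Reject an update bundle whose SHA-256 digest doesn't match the value
    # obtained out of band (e.g., from a signed manifest). Illustrative only.
    import hashlib

    def verify_update(image_path, expected_sha256):
        digest = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    if not verify_update("/var/updates/app-bundle.img", "<expected digest>"):
        raise SystemExit("update rejected: digest mismatch")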

  • Interoperability and performance requirements vary at the edge and must be thoroughly investigated. The diversity of workloads, along with the sheer number of systems, hardware vendors, and software providers across the edge ecosystem, puts increasing pressure on teams to ensure technology and resource compatibility and to achieve the desired performance standards.

To face this issue head on, consider hyperconverged or hypervisor-based platforms that enable running diverse workloads side by side. An open-source solution such as EVE-OS avoids vendor lock-in and facilitates interoperability across an open edge ecosystem.

An open mind is critical to the future of edge computing

It's clear from these examples that Kubernetes alone may not be a one-size-fits-all solution to distributed edge deployment for every manufacturer, but it absolutely does work and can be well suited to many edge computing projects. Indeed, Kubernetes clusters are already up and running in many successful edge applications from notable manufacturers across industries.

The key to success with Kubernetes at the edge is building in the time to plan for and solve potential issues and demonstrating a willingness to make trade-offs to right-size a solution for your specific concerns. This approach may include leveraging vendor orchestration and management platforms to build the edge infrastructure that works best for your specific edge applications.

Looking to orchestration providers also holds additional promise because they are on the ground floor working directly with customers and capturing current challenges, as well as needs and wants for future projects. Many are also working to create new edge management tools that will help to solve today's issues and safeguard against issues anticipated in the near future.

As for the future role of Kubernetes at the distributed edge, the question of whether Kubernetes will one day be compatible with every edge computing project or provide as powerful a solution as it does in the cloud has yet to be answered. In the interim, however, developers, operators, and manufacturers will continue to rely on Kubernetes as one of the key solution drivers at the edge, along with many of the creative techniques covered in this blog, to deploy and manage edge applications.


Join us at KubeCon + CloudNativeCon North America this November 6 - 9 in Chicago for more on Kubernetes and the cloud native ecosystem.  


ABOUT THE AUTHOR

Michael Maxey, VP of Business Development at ZEDEDA


Michael Maxey is the VP of Business Development at ZEDEDA, where he focuses on building and executing go-to-market (GTM) strategies with customers and partners. Maxey is also an LF Edge Governing Board Member, helping drive efforts around standardization, developer recommendations, and solution building. Prior to ZEDEDA, Maxey held executive product management and corporate development roles at various infrastructure companies like Dell, Greenplum, Pivotal Software, Smallstep Labs, and EMC.

Published Thursday, October 12, 2023 7:30 AM by David Marshall