By Keith Basil, VP of Product, Cloud Native Infrastructure, SUSE
Imagine managing a global deployment of 15,000 locations with 500 industrial Internet of Things (IIoT) devices at each site. Your team has embraced containers and is 100 percent on board with a cloud native approach to meeting this challenge. So, where do you begin? This scenario of 7.5 million assets under management (15,000 sites × 500 devices) illustrates the core challenge of implementing and orchestrating cloud native approaches in the new, exciting, and yet potentially complex world of the edge.
According to the Linux Foundation, "edge computing will be 4x larger than cloud and will generate 75 percent of data worldwide by 2025." Given the future size of the market and where we are today, we have quite a bit of runway in front of us. Serious attention to the edge brings the realization that the law of large numbers is at play here, and we need to be ready to scale.
To add to the complexity, we see deep diversity in edge scenarios. From underwater deployments to satellites in space and everything in between, Kubernetes is being used to manage cloud native applications everywhere. Within the Kubernetes ecosystem, we have the facilities to tackle this. But before we dive into that, let's establish a framework for defining the edge.
Everyone has a different definition of the "edge." Collectively, we've found it useful to establish a baseline definition of the segments within the edge space so that meaningful discussions can ensue.
## Defining the Edge
First, we must understand that we consider hyperscalers, large data centers, and core telecommunications infrastructure to be centralized infrastructure. We begin our definition of the edge from this perspective, as one of the fundamental first questions is "the edge of what?"
If we place the centralized infrastructure on the left side of our mental model, we move away from centralized services toward the right and into the edge segments. The first edge segment we encounter is the realm of the communications service providers (CSPs). Referring back to our mental model, we call this the near edge, as it is nearest to the centralized services. In this edge segment, we find a diversity of use cases, such as deploying compute and storage resources to meet the needs of the 5G core and multi-access edge computing (MEC) services. One useful determinant for our definition is to ask who owns and operates both the IP space and the infrastructure hardware in a given segment. We find that CSPs own and operate both within the near edge. Multi-service operators (MSOs), such as a traditional cable company providing voice, video and data, fall into this category as well.
A logical and physical line of demarcation also sharpens our mental map; it crisply supports edge segmentation because the IP space and infrastructure on the near side of that line are still, in most cases, managed by the CSP. This line is important because there are near edge use cases that take the form of communications provider appliances supporting next-generation services like software-defined wide area networks (SD-WAN) and Secure Access Service Edge (SASE). We see strong interest in Kubernetes at the core of these appliances. The implied notion of network and infrastructure ownership places these applications on the line of demarcation while keeping them in the near edge segment, since the communications providers own and manage those appliances.
Referring back to our mental model, the next segment we encounter is the far edge: the segment to the right of that line of demarcation. Here, the IP space and infrastructure are typically owned, operated and managed by the end-user organization. We find a diverse set of use cases in this segment. Broadly, these fall into commercial, industrial or public sector deployments and can reach into the tens of thousands of locations.
Remaining true to our cloud native roots, the use cases in these industries are moving toward Kubernetes with varying cluster sizes. The main driver here is information and operational transformation: organizations want to move cloud native applications to where they are needed to realize business value. We think most edge use cases will fall under the far edge segment, with data aggregation and analysis being the core function of the cloud native applications deployed there.
This leads us to the final edge segment: the tiny edge. This segment represents the world of fixed-function devices. One of the main drivers here is the requirement to bring the IIoT under cloud native management. These devices include sensors, actuators and IP cameras, which are typically found within the same layer 2 network as the Kubernetes cluster running in the far edge segment. So, in essence, the tiny edge is a sub-segment within the far edge defined above.
The tiny edge is where the law of large numbers kicks in. Capabilities in this segment are still maturing because of the diversity and quantity of IIoT protocols. Management solutions here are sometimes proprietary and classically implemented with protocols that we must embrace if we are to bring these devices into our cloud native way of thinking. We are encouraged by several upstream communities dedicated to enabling these gateway models and thus standardizing our approach to managing the devices in this space.
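To make the gateway model concrete, consider Akri, a CNCF sandbox project that discovers leaf devices such as IP cameras and exposes them to a Kubernetes cluster as schedulable resources. Akri is our illustration, not a project named above, and the manifest below is a minimal sketch: the broker image is hypothetical, and field names follow recent Akri releases.

```yaml
# Minimal Akri Configuration sketch: discover ONVIF IP cameras on the local
# layer 2 network and run a broker pod for each discovered camera.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: onvif-cameras
spec:
  discoveryHandler:
    name: onvif               # built-in handler that scans for ONVIF cameras
    discoveryDetails: ""
  brokerSpec:
    brokerPodSpec:
      containers:
        - name: camera-broker
          image: ghcr.io/example/camera-broker:latest  # hypothetical image
  capacity: 1                 # at most one broker pod per discovered camera
```

A pattern like this leaves the fixed-function devices themselves untouched while the far edge cluster, sitting on the same layer 2 network, manages them in a cloud native way.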
## Our Main Challenge
Given the law of large numbers and the diversity brought to the table, three pillars are required to address the management-at-scale challenge at the edge.
First, we need a lightweight CNCF-certified Kubernetes offering. Many of the far edge use cases require a single-node cluster, which in deployment could be a system-on-chip (SoC) computer running one application container or a small handful of them. These are typically resource-constrained machines, so a lightweight, multi-architecture Kubernetes offering provides the best flexibility in meeting the needs of these deployments. CNCF certification is vital, as it standardizes our interfaces at the edge and allows us to leverage existing tooling and learnings.
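As one concrete option, and our example rather than a requirement stated above, K3s is a lightweight, CNCF-certified, multi-architecture Kubernetes distribution that fits this profile. The sketch below shows a minimal single-node configuration file; the disabled component and label are hypothetical choices.

```yaml
# /etc/rancher/k3s/config.yaml: minimal single-node K3s configuration sketch.
# Keys mirror documented K3s flags; the values here are illustrative.
write-kubeconfig-mode: "0644"   # make the kubeconfig readable by local tooling
disable:
  - traefik                     # drop the bundled ingress to save resources
node-label:
  - "segment=far-edge"          # hypothetical organizational label
```

On a resource-constrained SoC board, trimming optional components this way leaves more headroom for the one or two application containers the site actually runs.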
Second, we need a lightweight, cloud native operating system that offers enhanced security, thanks to a low attack surface, along with ease of lifecycle management.
The third piece of the puzzle addresses the management-at-scale challenge head on. We believe a GitOps approach to managing a large number of downstream clusters gives us the leverage we need to scale our team's skill set. The GitOps approach is well tested within the cloud native world, and having a declarative source of truth addresses the complexity inherent in edge deployments quite nicely.
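To make the GitOps pillar concrete, here is a minimal sketch using Fleet, the GitOps engine in the SUSE Rancher portfolio; choosing Fleet is our assumption, and the repository URL, path and labels are hypothetical. A single resource like this lets one Git commit fan out to every downstream cluster whose labels match.

```yaml
# Minimal Fleet GitRepo sketch: the Git repository is the declarative source
# of truth, applied to all downstream clusters selected by label.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-apps
  namespace: fleet-default             # Fleet's default workspace for clusters
spec:
  repo: https://github.com/example/edge-config   # hypothetical repository
  branch: main
  paths:
    - manifests                        # directory of manifests to deploy
  targets:
    - name: far-edge-sites
      clusterSelector:
        matchLabels:
          segment: far-edge            # hypothetical label on registered clusters
```

Because the desired state lives in Git, operators review a pull request instead of touching thousands of clusters individually, which is exactly the leverage the law of large numbers demands.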
Collectively, we should work toward meeting these challenges. SUSE will be doing this, and we ask that you join us in the areas we've outlined. We'd love to see this definition framework adopted, as we believe it enables meaningful discussions that guide us efficiently to solutions that work. Overall, we should strive to remove complexity at the edge and focus on the business value that increases efficiency.
## ABOUT THE AUTHOR
Keith Basil, VP of Product, Cloud Native Infrastructure, SUSE
Basil brings over 21 years of hands-on experience in cloud and related industries. As Vice President of Cloud Native Infrastructure, Basil drives strategy and management of SUSE Rancher cloud native products. Working with the SUSE global customer base, he is also driving development of cloud native edge solutions that encompass cluster management, heterogeneous architectures, and zero-trust security approaches at scale.
Basil is also passionate about the next generation of decentralized cloud computing models. As an advocate in this area, he is working with communication service providers and public sector organizations to establish decentralized cloud infrastructure, applications and new revenue models.
Before Rancher, Basil led product management, positioning, and business strategy for security within Red Hat's Cloud Platforms business unit. Prior to Red Hat, he was instrumental in the design of a secure, high-performance cloud architecture that provided compute, storage and application hosting services for US public sector civilian agencies and contractors.
Basil holds a Bachelor of Science in Interdisciplinary Studies from Norfolk State University.