Virtualization Technology News and Information
Towards a scalable, reliable, and secure edge computing framework

By Dr. Peng, Software architect, Futurewei

The prosperity of cloud technologies, 5G, and AI brings vast opportunities for edge computing, where computation migrates from centralized data centers to on-premises, at-home, and remote locations. This post discusses a few key distinguishing aspects of edge computing: hierarchy, autonomy, and secure, efficient networking.

Compared with cloud computing, where rich compute, storage, and technical support are plentifully available in well-maintained data centers, edge computing offers advantages in low latency, high security, and scalability by placing compute, network, and storage closer to end users. Unlike the uniform "fleet" formation of servers found in cloud data centers, edge computing resources come in more heterogeneous flavors and configurations. Not only can the devices range from a single Raspberry Pi to well-equipped clusters, or a mixture of anything in between, but the application architecture also has the luxury of a hierarchical layout to strike the best balance between cost, flexibility, and performance. Figure 1 exemplifies some hierarchical edge system layouts.


Figure 1. Hierarchical Edge Layouts

Depending on its function, each component of a distributed application can be placed at a different distance from the end users. For example, lightweight compute nodes with sensors could be placed right above highways to collect traffic data. Basic analysis such as license plate retrieval could be executed on-site without sending the much larger images away, saving network bandwidth and, in some scenarios, maintaining data security by keeping data local. Further up, at a regional edge data center sitting logically above all the bottom-level edge sites, the license plate data and other metadata from the highway sites could then be correlated across a larger geographical area to gain deeper analytical insights. This type of layering can continue upward until it reaches the cloud. Geographical hierarchy is a key feature separating edge computing from data-center-based architecture. At each vertex of the hierarchical edge graph could sit either a single compute node or a standalone cluster with its own control plane. Figure 2 shows examples of different edge configurations.
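The highway example above can be sketched in a few lines. This is a purely illustrative toy, not any real framework's API: a roadside node extracts only small metadata (a license plate) locally while the heavy image stays on-site, and the regional edge layer correlates metadata arriving from many sites. All class and site names are assumptions made up for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class RegionalEdge:
    """Upper-level site that correlates metadata from many roadside nodes."""
    sightings: dict = field(default_factory=dict)  # plate -> list of site ids

    def ingest(self, site_id: str, plate: str) -> None:
        self.sightings.setdefault(plate, []).append(site_id)

    def seen_across_sites(self, plate: str) -> int:
        return len(set(self.sightings.get(plate, [])))

@dataclass
class RoadsideNode:
    site_id: str
    parent: RegionalEdge

    def process_frame(self, frame: bytes) -> None:
        plate = self.extract_plate(frame)        # heavy analysis stays local
        self.parent.ingest(self.site_id, plate)  # only small metadata goes up

    @staticmethod
    def extract_plate(frame: bytes) -> str:
        # Stand-in for a real vision model running on the edge node.
        return frame.decode()

region = RegionalEdge()
for site in ("hwy-101-n", "hwy-101-s"):
    RoadsideNode(site, region).process_frame(b"ABC-1234")
print(region.seen_across_sites("ABC-1234"))  # -> 2
```

The point of the shape is the data volume at each hop: raw frames never leave the roadside node, and the regional layer works purely on the compact metadata it receives.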


Figure 2. Edge Layout Varieties

An edge cluster often comes as some flavor of Kubernetes, which provides a degree of autonomy out of the box: node failures can be tolerated by automatically rescheduling containers onto a surviving node if resource allocation allows. In addition, edge computing environments can easily wind up in desolate or remote locations where networking is unreliable, such as a desert, the middle of an ocean, or simply an automobile moving out of 5G tower range. During a network disconnect, applications already running in the edge environment should be shielded from the outage and continue functioning, and new workload assignments and status reporting to and from the upper level should resume once the connection recovers. Altogether, autonomy in the face of both network and node failures is another key feature and requirement for edge computing frameworks.
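The "resume on reconnect" behavior can be sketched as a small buffering layer. This is a minimal illustration, assuming an imaginary `UplinkReporter` component; no real edge framework API is being shown. Status reports are queued locally while the uplink is down and flushed once connectivity returns, so nothing is lost during the outage.

```python
from collections import deque

class UplinkReporter:
    """Buffers status reports while the uplink is down; flushes on reconnect."""
    def __init__(self):
        self.connected = False
        self._pending = deque()
        self.delivered = []  # stand-in for the upper-level endpoint

    def report(self, status: dict) -> None:
        self._pending.append(status)  # never drop a report while offline
        self._flush()

    def on_reconnect(self) -> None:
        self.connected = True
        self._flush()

    def on_disconnect(self) -> None:
        self.connected = False

    def _flush(self) -> None:
        while self.connected and self._pending:
            self.delivered.append(self._pending.popleft())

r = UplinkReporter()
r.report({"pod": "camera-agent", "phase": "Running"})  # offline: buffered
r.on_reconnect()                                       # uplink restored
r.report({"pod": "camera-agent", "phase": "Running"})
print(len(r.delivered))  # -> 2: both reports arrive, none lost in the outage
```

Note that the running workload itself is untouched by the disconnect; only the reporting path queues up, which mirrors the shielding requirement described above.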

Aside from hierarchy and autonomy, the third key feature of edge computing is secure and efficient networking. One might argue that distributed applications running in different edge clusters could simply "chat" using safe protocols such as HTTPS. While valid for some scenarios, this runs into roadblocks when edge clusters end up behind a private network boundary without a publicly addressable connection. Additionally, traffic between edge clusters needs to be properly protected for data security. In vanilla Kubernetes, applications in the form of containers in a pod typically use some type of overlay networking, which lets pods communicate with "virtual" IP addresses that are only meaningful in the application context. For edge computing, such an overlay mechanism must be extended across clusters, which means crossing the public internet and being exposed to more threats. Furthermore, in a multi-tenant system, the network can be virtualized into concepts such as a VPC (Virtual Private Cloud), letting each tenant of the cluster freely choose the IP CIDR for its VPC. This creates a demand for secure traffic segregation between VPCs that belong to different tenants. These security requirements need to be addressed before edge computing environments can be trusted in less-protected physical placements compared to cloud data centers.
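A small sketch shows why tenant-chosen CIDRs force per-VPC routing: two tenants may legally pick overlapping ranges, so a virtual IP is only meaningful together with its VPC identity. The route-table class and endpoint strings below are illustrative assumptions, not part of Kubernetes or any real CNI.

```python
import ipaddress

class VpcRouteTable:
    """Maps (vpc_id, virtual IP) -> physical endpoint. Keying by VPC keeps
    tenants segregated even when their CIDRs overlap."""
    def __init__(self):
        self._routes = {}

    def register(self, vpc_id: str, vip: str, phys: str) -> None:
        self._routes[(vpc_id, ipaddress.ip_address(vip))] = phys

    def resolve(self, vpc_id: str, vip: str) -> str:
        return self._routes[(vpc_id, ipaddress.ip_address(vip))]

rt = VpcRouteTable()
# Both tenants chose 10.0.0.0/16, so the very same virtual IP appears twice:
rt.register("tenant-a-vpc", "10.0.1.5", "edge-site-1:203.0.113.10")
rt.register("tenant-b-vpc", "10.0.1.5", "edge-site-2:198.51.100.7")
print(rt.resolve("tenant-a-vpc", "10.0.1.5"))  # tenant A's endpoint
print(rt.resolve("tenant-b-vpc", "10.0.1.5"))  # tenant B's, never mixed up
```

In a real system the per-VPC lookup would be enforced in the data plane (e.g., by tagging encapsulated traffic with a tenant identifier), which is exactly the segregation requirement described above.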

Finally, for clusters and nodes running on the edge, network communication places further demands on efficiency. For scalability, communication between applications running in different edge clusters should take the most direct routing path and avoid funneling through any single networking hub for route discovery, be it the cloud or a component on the edge. A distributed gateway system connected in a peer-to-peer fashion could implement an efficient solution.
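The hub-free gateway idea can be sketched as follows. This is a toy model under stated assumptions: each gateway learns its peers' CIDRs directly through pairwise peering, so cross-cluster traffic goes in one hop to the owning gateway rather than through a central relay. The class, names, and addresses are all invented for illustration.

```python
import ipaddress

class Gateway:
    """Edge-cluster gateway holding routes learned directly from its peers."""
    def __init__(self, name: str, cidr: str):
        self.name = name
        self.cidr = ipaddress.ip_network(cidr)
        self.routes = {}  # peer's CIDR -> peer gateway name

    def peer_with(self, other: "Gateway") -> None:
        # Symmetric route exchange; no central route-discovery hub involved.
        self.routes[other.cidr] = other.name
        other.routes[self.cidr] = self.name

    def next_hop(self, dst_ip: str) -> str:
        ip = ipaddress.ip_address(dst_ip)
        if ip in self.cidr:
            return self.name       # local delivery within this cluster
        for net, peer in self.routes.items():
            if ip in net:
                return peer        # direct path to the owning gateway
        raise LookupError(f"no route to {dst_ip}")

gw_a = Gateway("edge-a", "10.1.0.0/16")
gw_b = Gateway("edge-b", "10.2.0.0/16")
gw_cloud = Gateway("cloud", "10.3.0.0/16")
for left, right in [(gw_a, gw_b), (gw_a, gw_cloud), (gw_b, gw_cloud)]:
    left.peer_with(right)
print(gw_a.next_hop("10.2.5.9"))  # -> "edge-b", without traversing "cloud"
```

Because every gateway pair exchanges routes directly, the cloud gateway here is just another peer, not a mandatory relay, which is the scalability property the paragraph above calls for.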

In summary, with the proliferation of edge computing and the push of applications closer to end users for benefits such as latency, scalability, and security, a fundamental change of architecture is imminently required. Edge is by nature highly distributed and hierarchical, with possibly unreliable networking and compute resources. These are unique challenges distinct from cloud computing, and as a result, they should dictate the development of edge computing frameworks.

##

To learn more about cloud native technology innovation, join KubeCon + CloudNativeCon North America 2021, which takes place October 11-15.

ABOUT THE AUTHOR

Dr. Peng, Software architect, Futurewei


Dr. Peng Du works as a software architect at Futurewei, where he is responsible for developing next-generation cloud technologies. His current focus is edge computing frameworks. Before joining Futurewei, Dr. Du worked at Amazon Web Services and Microsoft Azure. He holds a doctorate in high-performance computing and has presented his research as a speaker at various IEEE and ACM conferences. Dr. Du is also a part-time lecturer at the University of Washington, Bothell, teaching algorithms and data structures.

Published Monday, September 13, 2021 7:59 AM by David Marshall