Service Mesh with Linkerd on Arm-based platforms at the Edge

By Pranay Bakre, Principal Solutions Engineer, Arm

The concept of a service mesh was introduced as an abstraction layer for service-to-service communication. Monolithic applications have long been prevalent in the multi-tiered model, which includes the web, app, and database tiers. Hosting an application as a single block of code simplified deployment, but it also introduced new problems, such as traffic management and authentication. These were mainly addressed by increasing the bandwidth, CPU, and RAM of the servers hosting the application, which was an inefficient way to deal with the massive load produced by incoming traffic. With the introduction of microservices, the various components of an application were broken down into several small services, each handling a distinct purpose. Although this improved the code's portability, traffic management remained a concern.

The service mesh evolved over time to address the complexities of service-to-service communication, observability, monitoring, and authentication. It is designed as a language-agnostic, lightweight network abstraction layer for cloud native applications. Its purpose is to blend into the background and seamlessly handle all communication between microservices.

Now, the requirements for running a service mesh and cloud native applications in the data center and the cloud are different from running them at edge locations. With the exponential growth in IoT-driven data and the need to process data much closer to the source, developers need to consider factors such as latency, network bandwidth, and hardware resource footprint, including CPU, RAM, and power consumption. Arm-based platforms at the edge are very diverse in nature and cover deployments across industries such as retail, manufacturing and industrial IoT, transportation, utilities, and beyond. To drive cloud native deployments for edge computing use cases, Arm Project Cassini is an open, collaborative, standards-based initiative to deliver a cloud-native software experience across a secure Arm edge ecosystem.

Linkerd announced support for Arm-based platforms with its 2.9 release. This enabled a wide array of Arm-based platforms at the edge to utilize the standard Linkerd features previously designed for cloud-based deployments. Linkerd is a lightweight service mesh that adds observability, reliability, and security to Kubernetes-based applications without code changes. Linkerd uses its own Rust-based proxy, Linkerd2-proxy, which is specifically designed to be a sidecar proxy. This proxy has very low CPU and memory requirements, adds minimal latency overhead for efficient network traffic flow, and is built with a focus on security. With this release, Linkerd also added an automatic mTLS feature for TCP connections, further strengthening its security footprint. This minimal footprint and zero-trust security model make Linkerd an ideal candidate for executing service mesh use cases at the edge.

Major features of Linkerd include mutual authentication (mTLS), proxy and fault injection, traffic split, and distributed tracing. Authentication between services is a key component of any cloud native deployment: a secure communication channel is established between service mesh proxies through an exchange of certificates that verify each side's identity. Fault injection helps identify gaps in the application and remediate them effectively. Traffic split is used to deploy a new version of an application on a subset of devices without impacting major services. All of these features are demonstrated with examples in a real-world use case below.

While Linkerd offers many features, the following are the most relevant for an edge-based application:

  • Automatic mTLS - automatically enables mutual Transport Layer Security (TLS) for all communication between meshed applications
  • Automatic Proxy Injection - automatically injects the data plane proxy into pods based on annotations (a sketch of the annotation follows this list)
  • Distributed Tracing - enable distributed tracing support
  • Fault Injection - mechanisms to programmatically inject failures into services
  • High Availability - runs multiple replicas of the Linkerd control plane components in HA mode
  • Traffic Split (canaries, blue/green deployment) - Linkerd can dynamically send a portion of traffic to different services
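
To illustrate the proxy injection annotation, here is a minimal sketch of a deployment (with a hypothetical name and placeholder image) whose pod template opts into the mesh; Linkerd's admission webhook then adds the sidecar automatically:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled # asks Linkerd's webhook to inject the proxy sidecar
      labels:
        app: example-service
    spec:
      containers:
      - name: app
        image: nginx:alpine        # placeholder container image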

In this blog, we will deploy Linkerd on a K3s cluster hosted on Raspberry Pis. K3s is a CNCF project developed by Rancher Labs; it is a lightweight, production-grade Kubernetes distribution with a small footprint, optimized to run on low-memory, low-compute edge devices. Additionally, we will deploy a sample application called Fleetman, an AngularJS and Java based app that simulates a fleet of trucks and their respective locations. This is a lightweight application with multiple services communicating with each other to handle different tasks. For more details on the architecture of the application and the steps to deploy all of these components, see the configuration section below.

After deploying the Fleetman application with the Linkerd service mesh on a three-node K3s cluster, we observed the CPU and RAM footprint of each node. For reference, we used the 8GB model of the Raspberry Pi 4 in this use case. Figure 1 shows the CPU and RAM usage of each K3s node. As we can see, even after installing all of the components, the maximum RAM consumption of Linkerd is 415 MB, while K3s consumes ~650 MB, leaving plenty of RAM available for the application to function. Similarly, total CPU usage stays around the 20-25% mark, with Linkerd consuming a minimal portion.

 

Figure 1 CPU and RAM footprint of Linkerd, Fleetman and K3s on each node
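
Per-node figures like those in Figure 1 can be collected with kubectl top, since K3s bundles metrics-server by default; this is a hedged sketch rather than necessarily how the numbers above were captured:

kubectl top nodes        # CPU and RAM usage per node
kubectl top pods -A      # per-pod usage across all namespaces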

 

Figure 2 shows a consolidated view of Average CPU and RAM consumption of all three components - Linkerd, K3s and Fleetman - across the Raspberry Pi nodes.

 

Figure 2 Consolidated view of average CPU and RAM consumption for Linkerd, Fleetman and K3s

Configurations:

Following is the service architecture of the Fleetman application:

 

Figure 3 Service-to-service architecture of Fleetman application

A brief description of its components:

  • webapp - Front end for the Fleetman application
  • api-gateway - API Gateway serves as backend facade for the Angular front end to connect to
  • staff-service - A service that shows the truck driver's name and photo
  • position-tracker - Consumes vehicle position reports from a queue and stores them in memory
  • vehicle-telemetry - Calculates speed based on the latest and oldest reports for each vehicle
  • position-simulator - Simulates vehicle positions across a wide area

Prerequisites for installing Linkerd and Fleetman:

  • Three Raspberry Pi 4 boards (8GB model)
  • Ethernet for networking connectivity
  • K3s cluster with 2 worker nodes configured on the RPi4s (a setup sketch follows this list)
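
As a minimal sketch of that K3s setup, assuming Ubuntu on the Pis and a hypothetical server address of 192.168.1.10, the official installer can be used as follows:

# On the server (control plane) node:
curl -sfL https://get.k3s.io | sh -

# Read the join token on the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each of the two worker nodes (substitute the real server IP and token):
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<token-from-server> sh -
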
To install Linkerd on the K3s cluster, use the following commands:

curl -sL https://run.linkerd.io/install | sh

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml 
export PATH=$PATH:/home/ubuntu/.linkerd2/bin 
linkerd version

 

Run a pre-check to validate the K3s cluster before Linkerd installation -

linkerd check --pre

Now, install Linkerd on the cluster with the following command -

linkerd install | kubectl apply -f -

Enable the local Linkerd dashboard to check the status of services and keep track of deployments.
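
In the 2.9 release the dashboard ships with the core CLI; assuming the CLI installed above, it can be launched locally like this (a hedged sketch):

linkerd dashboard &   # proxies the dashboard to localhost and opens it in a browser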

Use the yaml file from this github repo to create a deployment named Fleetman and deploy it on the cluster, then check the status of its pods -
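
Assuming the manifest is saved locally as fleetman.yaml (a hypothetical filename), the deployment and status check look like this:

kubectl apply -f fleetman.yaml   # create the Fleetman deployments and services
kubectl get pods                 # verify that all pods reach Running state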

 

Figure 4 Fleetman application pods

By default, Linkerd establishes and authenticates stable, private TLS connections between Linkerd proxies, enabling mutual Transport Layer Security (mTLS) for most TCP traffic between application pods.
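
One hedged way to confirm that connections are secured, assuming the stable-2.9 CLI where edges is a core command, is:

linkerd edges deployment
# the SECURED column should show a checkmark for meshed, mTLS-secured connections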

Proxy Injection:

Use the following command to inject the Linkerd proxy into each of the application's deployments -

kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -
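
To verify the injection, each pod should now report an additional linkerd-proxy container, and the data plane can be validated with the CLI (a hedged sketch):

kubectl get pods        # meshed pods now show an extra container, e.g. READY 2/2
linkerd check --proxy   # validates that the data plane proxies are healthy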

 

Figure 5 Fleetman application pods with Linkerd proxy injection

Fault Injection:

Injecting faults into Fleetman requires a service that is configured to return errors. To do this, start NGINX and configure it to return HTTP error code 500 by applying this yaml file:

kubectl apply -f fault-injector.yaml
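
A minimal sketch of what fault-injector.yaml could contain - an NGINX instance configured to always return HTTP 500, exposed through an error-injector service (the 8080 port is an assumption and should match the staff-service's port):

apiVersion: v1
kind: ConfigMap
metadata:
  name: error-injector-config
data:
  nginx.conf: |-
    events {}
    http {
      server {
        listen 8080;
        location / {
          return 500;   # every request fails
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: error-injector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: error-injector
  template:
    metadata:
      labels:
        app: error-injector
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-config
        configMap:
          name: error-injector-config
---
apiVersion: v1
kind: Service
metadata:
  name: error-injector
spec:
  selector:
    app: error-injector
  ports:
  - port: 8080          # assumed to match the staff-service port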

With staff-service and NGINX running, we can partially split the traffic between the existing backend, staff-service, and the newly created error-injector. This is done by adding a TrafficSplit configuration to the K3s cluster:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: fleetman-staff-service
  namespace: default
spec:
  # The root service that clients use to connect to the destination application.
  service: fleetman-staff-service
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: fleetman-staff-service-stable
    weight: 90
  - service: error-injector
    weight: 10

 

Apply the manifest with the following command:

kubectl apply -f fault-split-traffic.yaml

When Linkerd sees traffic going to the staff-service, it will send 9 out of 10 requests to the original service and 1 out of 10 to the error injector.
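
To observe the split in action, one hedged option with the core CLI is to watch per-deployment stats; the error-injector deployment should receive roughly 10% of the requests, with a success rate near 0%:

watch linkerd stat deploy   # live success rate and request volume per deployment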

 

High Availability:

Linkerd can run in an HA mode to support production deployments. We can enable the HA mode by using the following command:

linkerd install --ha | kubectl apply -f -

We can also override the number of replicas with the following command:

linkerd install --ha --controller-replicas=2 | kubectl apply -f -

After installation, check the pods in the linkerd namespace:
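
For example, with plain kubectl:

kubectl get pods -n linkerd   # each control plane component should show multiple replicas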

 

Figure 6 Linkerd pods with High Availability mode enabled

Conclusion:

With the Linkerd service mesh enabled for Arm, server-grade features are now accessible to smaller-footprint edge devices. The rich set of features it provides outweighs its minimal overhead and opens up new service mesh use cases outside the data center. Please send us your thoughts and queries at sw-ecosystem@arm.com, and join us at the Arm virtual booth at KubeCon EMEA by registering for the conference here.

##

To learn more about cloud native technology innovation, join us at KubeCon + CloudNativeCon Europe 2021 - Virtual, which will take place from May 4-7.  

ABOUT THE AUTHOR

Pranay Bakre 

Pranay is a Principal Solutions Engineer at Arm focusing on developing cloud native solutions, spanning cloud to edge deployments, with strategic partners. Pranay has over 10 years of experience designing and implementing a wide range of virtualization and cloud solutions, and he has authored multiple blogs and demos and presented at industry events, providing technical thought leadership.

Published Friday, April 16, 2021 7:33 AM by David Marshall