Virtualization Technology News and Information
Secure and Efficient Management of Service-to-Service Communication with a Service Mesh

By Krunal Chaudhari, IAURO Systems

Microservices have redefined software engineering and the way businesses operate, compete, and grow in today's ever-changing world. Microservice architecture takes full advantage of the cloud and gives businesses the flexibility to scale, adapt to change, and meet customer demands faster than ever before.

This architecture has shaped businesses and given rise to services and products that would otherwise have been difficult, if not impossible, to create. Giants like Google, Amazon, eBay, Netflix, Uber, SoundCloud and many others owe a fair share of their success to this engineering marvel.

DevOps, CI/CD, and containerized infrastructure have tremendously changed the way applications are built, deployed, and maintained today.

Older businesses, too, are gradually migrating their applications from Monoliths to Microservices to stay abreast of the competition.

Nevertheless, Microservices, like any other technology or business solution, have their fair share of challenges. Over the years, however, new solutions have been built to address these problems and streamline development processes even further.

Today we're going to focus on one such problem and talk about a suitable solution to deal with it.

Let's get started.

First, let's discuss Microservices in brief and talk a bit about the problem we're trying to eliminate.

What is a Microservice?

Within a Microservice architecture, the entire application is broken down into smaller, manageable Services. Each Service encapsulates its own business logic and is developed and deployed independently within a cluster.

Breaking down the application development into multiple logical units gives developers and the operations team several advantages.

Each Service can be developed autonomously, using a language and database of the team's choice. The operations team can deploy each Microservice independently on hardware configured to run that Service alone.

Individual Services can be scaled separately based on their workload. A fully automated continuous integration and continuous delivery pipeline, which forms the overarching theme of this development practice, results in fewer errors and faster rollouts.

In this manner, new features can be introduced rapidly within the application. This reduced time-to-market gives businesses a competitive advantage.

Now that we have seen the brighter side of Microservice Architecture, let's talk a bit about its shortcomings, the chink in the armor if you will.

Challenges of Microservices Applications

Besides business logic, each Service must handle networking tasks that are essential for Microservices to function properly. These tasks can make the Services very complex and difficult to maintain. Let's discuss each of them briefly.

Communication: Each Service needs to communicate with other Microservices, and with the Webserver, to send and retrieve data. Sometimes these communications fail due to transient faults in the system.

Timeouts, retries, and circuit-breaking logic have to be implemented to handle network, hardware and software failures appropriately. Doing this in every Service is expensive and can become nearly impossible at scale.

Security: Within a Microservice Architecture, Services can freely talk to other Services within the cluster unless there's an additional layer of security between them.

If left unaddressed, this can pose a serious security threat and prove disastrous if your application holds sensitive data.

Additional configuration is required within the cluster to address these security issues and to enforce a stricter security policy. Again, implementing this inside every Service adds to the complexity of the Service and raises the cost of developing the application.

Monitoring Performance: Insight into the performance of each Microservice can be vital to your development and operations teams, helping them make further improvements and enhancements to the application.

Thorough monitoring of errors, of the number of requests each Microservice sends and receives, and of speed and performance bottlenecks can help teams identify critical problems beforehand. These issues can then be prioritized and resolved in subsequent development iterations.

Now that we've outlined some of our concerns within a Microservice application, let's see how these can be tackled with a Service Mesh.

What is a Service Mesh?

A Service Mesh is essentially an infrastructure or network layer that employs proxies to handle all essential non-business communication between Microservices in a Cloud Native application.

This is done with the help of two important components of a Service Mesh - the Data Plane and the Control Plane.

Data Plane: A proxy is deployed as a sidecar alongside each Service. This proxy mediates and controls all network communication to and from the Microservice.

Control Plane: This is the abstraction layer through which you implement, configure, and manage the proxies for every Microservice pod.


The diagram above shows the Service Mesh in a Microservice Architecture.

Eliminating Problems with a Service Mesh

Let's dive deeper and take a look into how a Service Mesh addresses each of the Microservices' problems mentioned earlier.

Communication: The sidecar proxies take care of all the network logic for the application. The proxies can be configured for service discovery, routing, and load balancing. Furthermore, you can easily configure policies for timeouts, retries, and circuit breaking.
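As a sketch of what this looks like in practice, assuming Istio as the mesh implementation (the "orders" service name and the timeout and retry values are illustrative), timeouts and retries can be declared in a VirtualService, and circuit breaking via outlier detection in a DestinationRule:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders             # hypothetical service
spec:
  hosts:
  - orders
  http:
  - route:
    - destination:
        host: orders
    timeout: 5s            # fail fast instead of hanging
    retries:
      attempts: 3          # retry transient faults
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  trafficPolicy:
    outlierDetection:            # circuit breaking: eject misbehaving hosts
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Note that none of this logic lives inside the Service's own code; the control plane pushes these rules to the sidecar proxies.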

Security: You can easily configure security policies for the Services via mutual Transport Layer Security (mTLS), and policies such as Access Control Lists (ACLs) can be enforced.
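Again taking Istio as an example (the namespace, labels, and service account names are hypothetical), strict mTLS can be required with a PeerAuthentication resource, and an AuthorizationPolicy can restrict which workloads may call a Service:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod              # hypothetical namespace
spec:
  mtls:
    mode: STRICT               # require mutual TLS for all workloads
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: orders              # applies to the orders workload
  action: ALLOW
  rules:
  - from:
    - source:
        # only the frontend's service account may call orders
        principals: ["cluster.local/ns/prod/sa/frontend"]
```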

Monitoring Performance: To observe and analyze performance metrics and traces, a third-party monitoring application such as Prometheus can be integrated with the Service Mesh. This gives the development and operations teams all the insight they need to further refine their build.
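For instance, assuming Istio's standard istio_requests_total metric is being scraped by Prometheus, a simple alerting rule (the threshold and durations are illustrative) could flag a rising error rate across the mesh:

```yaml
groups:
- name: mesh-alerts
  rules:
  - alert: HighErrorRate           # hypothetical alert name
    expr: |
      sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
        / sum(rate(istio_requests_total[5m])) > 0.05
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "More than 5% of mesh requests are failing"
```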

So the Service Mesh addresses these problems for you. Moreover, the complexity of the Services themselves is reduced by separating the business logic from the security and communication tasks.

Since developers are freed from managing non-business logic, they can focus on developing features and be more business- and market-oriented.

Given below are additional features that make a Service Mesh all the more indispensable.

Control: A Service Mesh gives you granular control over the rules for communication between Services. You do not need to configure each proxy separately: all rules are configured via the Control Plane, and the networking and security rules are propagated to the proxies. The proxies then enforce the new rules and handle service-to-service communication on behalf of the Services.

Automatic Service Discovery and Registration: When a new Microservice is deployed, it is registered in the Service Registry. The Service Mesh automatically detects the Service and its endpoints in the cluster, and the proxies can query the registry to find the endpoints of the Services they need to communicate with.

Canary Deployment: This is another strong feature of a Service Mesh that can be a lifesaver. When a new version of a Service is released, you can configure the mesh to split incoming traffic and send a small percentage of it to the newly rolled-out version for a defined period.

This way you can identify bugs and performance issues, and roll back the Service for further correction if required. Traffic splitting can also be used for A/B testing, to check whether new application features are beneficial or damaging to your business or users.
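Such a canary split can be sketched, again assuming Istio and a hypothetical "orders" service with two versions, as a weighted VirtualService route backed by DestinationRule subsets:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
  - orders
  http:
  - route:
    - destination:
        host: orders
        subset: v1
      weight: 90             # 90% of traffic stays on the stable version
    - destination:
        host: orders
        subset: v2
      weight: 10             # 10% goes to the canary
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  subsets:
  - name: v1
    labels:
      version: v1            # pods labeled version=v1
  - name: v2
    labels:
      version: v2            # pods labeled version=v2
```

Shifting more traffic to v2 is then just a matter of adjusting the weights; the Services themselves are never touched.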

Gateway: A gateway serves as the entry and exit point for inbound and outbound traffic to your application cluster. A Service Mesh lets you configure load-balancing properties, such as ports and TLS settings, for this proxy that runs at the edge of the mesh.
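An edge configuration of this kind might look as follows, assuming Istio's ingress gateway (the hostname and certificate secret name are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway            # hypothetical gateway name
spec:
  selector:
    istio: ingressgateway      # bind to Istio's edge proxy
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "app.example.com"        # illustrative hostname
    tls:
      mode: SIMPLE             # terminate TLS at the edge
      credentialName: app-cert # Kubernetes secret holding the certificate
```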

Popular Service Mesh Implementations

A Service Mesh is essentially a paradigm or pattern. There are several popular third-party implementations, such as Istio, HashiCorp Consul and Linkerd, some more widely adopted than others.

Linkerd is a lightweight and simple alternative which is being developed under the patronage of the Cloud Native Computing Foundation (CNCF).


Microservices may well be an answer to today's demanding business needs, but they come with their own challenges around communication and security.

Coupled with a Service Mesh, however, you can build applications that are easy to manage, deploy, and configure, with the needed security built in.

I hope you've found this information helpful. Let me know your views in the comments below.


To learn more about cloud native technology innovation, join us at KubeCon + CloudNativeCon Europe 2021 - Virtual, which will take place from May 4-7.    


Krunal Chaudhari Principal Consultant - Microservices and DevOps 


For over a decade, Krunal has been architecting the golden gate bridge between technology and business. Coding, building, scaling, delivering consistently, and tackling the challenges of the software ecosystem is what takes up most of his time.
Published Thursday, April 22, 2021 7:31 AM by David Marshall