What's All the Fuss About Cloud-Native Computing?

By Puja Abbassi, Developer Advocate at Giant Swarm

If you keep abreast of the popular trends in IT, then you cannot have failed to notice the buzz around 'cloud-native' in recent years. Everyone talks about it and whole conferences are organized to discuss it. There's even a prominent, vendor-neutral foundation whose purpose is to promote 'cloud-native' computing.

But what is it, and what benefits does it provide us? Let's start by defining what we mean by 'cloud-native'.

What is Cloud-Native?

During the last couple of years, the term 'cloud-native' has permeated our vocabulary without ever really acquiring a concrete definition. It appears to mean slightly different things to different people, and even the definition provided by the Cloud Native Computing Foundation (CNCF) has evolved over a relatively short period of time.

What we can say for certain is that it is a new approach to architecting software applications as small, distributed, loosely-coupled services. These can be deployed with a high degree of automation to different target environments (including the cloud), without the need to refactor each service to account for the environment. The makeup of these services allows them to be scaled easily to cope with fluctuating demand and to provide resilience through redundancy.

It's always tempting to express the purpose of cloud-native in technological or process terms, but fundamentally its purpose is to aid businesses: to help them become more agile as they respond to stimuli in their spheres of operation. It helps innovators lead their chosen markets, established businesses outwit their competition, and service providers respond quickly to customer demands.

Adopting a cloud-native approach for your business is not a walk in the park. In addition to the significant strategic and cultural changes required within your business, it's also necessary to become familiar with the patterns, techniques and technologies that make up the cloud-native stack - the environment where software applications are deployed and operated.

Microservices

One of the fundamental components of the cloud-native approach is the microservice: a small, independent unit of code that either performs a task on behalf of a peer service, consumes another peer service, or both. It's characterized by a well-defined interface and is usually invoked through an API call. A set of loosely-coupled microservices forms a complete, logical software application.
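
To make this concrete, here is a minimal sketch of what such a microservice might look like in Go: a single process exposing one well-defined HTTP endpoint. The service, endpoint path and port are purely illustrative.

```go
// A minimal microservice: one process, one well-defined responsibility,
// exposed behind a single HTTP endpoint. Names, path and port are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type priceResponse struct {
	SKU   string  `json:"sku"`
	Price float64 `json:"price"`
}

func main() {
	http.HandleFunc("/v1/price", func(w http.ResponseWriter, r *http.Request) {
		sku := r.URL.Query().Get("sku")
		if sku == "" {
			http.Error(w, "missing sku", http.StatusBadRequest)
			return
		}
		// A real service would consult a data store or call a peer service here.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(priceResponse{SKU: sku, Price: 9.99})
	})

	log.Println("price service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```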

As an architecture pattern, microservices have seen explosive interest since the advent of Docker and the commoditization of containers; the terms 'microservice' and 'container' are often used interchangeably, and it's frequently assumed that they are synonymous. Regardless of the hype that positions the pattern as the solution to all problems, there is a lot of merit in adopting it. The trick is to go in with your eyes wide open and to learn from those who have gone before.

Intended Benefits

There is an abundance of documented benefits associated with adopting the microservices pattern; here are just a few examples:

Autonomy for Teams of Developers

Giving teams clear, delineated boundaries around the code they are responsible for grants them greater autonomy. They can develop their code independently of the teams responsible for other services, which frees the innate potential in each team.

Faster Iteration

This independence allows teams to move at their own speed, giving them the means to iterate over problems far more quickly than if they had to accommodate the concerns of every other team. They get to release features at their own pace.

Experimentation

Once again, the independence that a team gains from the microservices approach allows it to experiment more freely. The team can evaluate and act on the results of experimentation without having to worry about the impact on another team's work. This fosters innovation and promotes flexibility and adaptability.

Resilience and Scale

And, of course, the ability to scale services to accommodate fluctuations in demand, and to ensure service resilience, is of paramount importance. Loosely-coupled services can be independently scaled and replicated, which helps in the bid to maintain effective service levels.
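
Platforms typically decide when to route traffic to a replica, or when to replace a failed one, by probing the service itself. The sketch below shows the kind of liveness and readiness endpoints a Go microservice might expose for that purpose; the /healthz and /readyz paths follow common convention but are not mandated by any particular platform.

```go
// Sketch: health endpoints an orchestrator can probe before sending traffic
// to a replica or replacing a failed one. The /healthz and /readyz paths are
// a common convention, not a requirement. Uses atomic.Bool (Go 1.19+).
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

var ready atomic.Bool // flipped to true once dependencies are reachable

func main() {
	// Liveness: the process is up and able to answer at all.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: this replica is warmed up and safe to receive traffic.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	// Stand-in for real start-up work such as warming caches or connecting to a database.
	go func() { ready.Store(true) }()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```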

The Challenge

Architecting microservices to realize these benefits is a whole subject in itself, and there are copious articles, books, and workshops devoted to the topic. And whilst we're clear on the potential benefits to be gained, we should be equally clear that we don't get something for nothing. If the microservices pattern relieves us of the complexity associated with traditional monolithic applications, it has only done so by moving that complexity into a different layer of the stack.

So, instead of manually deploying our carefully crafted monolith, which we can love and nurture with relative ease, we get a distributed set of ephemeral microservice deployments. These require reliable and competent automation, observation, communication, and (hopefully only) occasional remediation. How do we scale these services on demand? How do we facilitate reliable, secure inter-service communication? How do we deal with service degradation or failure?
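
Much of this is best delegated to the platform or to dedicated tooling, but a simple illustration of the last question is a client that calls a peer service with a timeout and a bounded retry with backoff. The peer URL and limits below are hypothetical; a service mesh or resilience library would normally provide this behaviour more robustly.

```go
// Sketch: calling a peer service with a timeout and a bounded retry with
// exponential backoff, one simple way to tolerate transient degradation.
// The peer URL and limits are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	backoff := 100 * time.Millisecond

	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or at least not a server-side failure
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	resp, err := fetchWithRetry("http://inventory.internal/v1/stock", 3)
	if err != nil {
		fmt.Println("service degraded:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```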

The need to deal with this complexity has seen the evolution of orchestration platforms dedicated to running microservices, such as Apache Mesos or Kubernetes. Kubernetes in particular has gained enormous interest in the cloud-native community. By hiding some of the complexity associated with microservices in the platform itself, developers can get on with the job of creating value through the software applications they build.

Whilst a platform takes care of providing an automated runtime environment for microservices, it doesn't (and perhaps shouldn't) provide everything we need to operate those services robustly. We need the ability to observe the deployed services through monitoring and logging, the means to deliver traffic reliably, the ability to debug errant transactions through distributed tracing, and much more. Luckily, we don't need to write all of these capabilities from scratch. Instead, we can achieve most or even all of them through the careful selection and implementation of appropriate tooling.
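
As one example of what that tooling looks like from the application's side, a service can expose metrics for a monitoring system such as Prometheus to scrape. The sketch below uses the Prometheus Go client; the metric name, label and endpoints are illustrative.

```go
// Sketch: exposing request metrics for a monitoring system to scrape, using
// the Prometheus Go client. The metric name, label and endpoints are illustrative.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "orders_http_requests_total",
		Help: "Number of HTTP requests handled, by path.",
	},
	[]string{"path"},
)

func main() {
	prometheus.MustRegister(requests)

	http.HandleFunc("/v1/orders", func(w http.ResponseWriter, r *http.Request) {
		requests.WithLabelValues(r.URL.Path).Inc()
		w.Write([]byte(`{"orders": []}`))
	})

	// The monitoring system scrapes this endpoint on its own schedule.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```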

Standardization

One of the remarkable consequences of the advent of cloud-native is the abundance of choice when it comes to the tools that make up a cloud-native stack.

It might be tempting to leave the choice of tools for these critical functions to the different teams responsible for each microservice, but this approach is likely to prove counter-productive. Team independence is one of the great benefits of the cloud-native approach, but when it comes to operating a microservices environment, commonality or standardization in operating tools makes the task more efficient and less error-prone, and enhances team and organizational velocity.

Automation

A cloud-native approach relies on facilitating change at speed, reliably and effectively, with little or no impact on the consumers of applications. With constant change to software and to the infrastructure that underpins it, unless we swap manual patterns for robust automation, we'll eventually sink under the complexity we've introduced. Automation, then, is a key facet of the cloud-native approach.

Infrastructure

With cloud computing now an accepted norm for consuming compute infrastructure, it makes sense to benefit from all it has to offer. It suits the on-demand nature of cloud-native applications and helps us maximize the value of the infrastructure they run on. Even if you choose to enter the stack at a higher level of abstraction, perhaps at the Kubernetes platform level, you can still benefit from the dynamic, on-demand, elastic nature of cloud-based services.

It's imperative, however, to ensure that infrastructure is defined declaratively and backed by suitable automation tools. Being able to create, re-create, and scale infrastructure up and down in an idempotent manner is important to maintaining the integrity of the platform that hosts the application services. For example, you could use cloud-specific automation such as AWS CloudFormation or Google Cloud Deployment Manager, or opt for a more cloud-agnostic approach with Terraform, Pulumi or similar.
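
As a flavour of what "defined declaratively" means in practice, here is a minimal sketch using Pulumi's Go SDK, assuming its AWS provider; the resource, stack output and import versions are illustrative and depend on the SDK releases in use. The point is that re-running the program converges the infrastructure on the declared state rather than re-executing imperative steps.

```go
// Sketch: declaring infrastructure in code with Pulumi's Go SDK. Re-running
// the program reconciles real infrastructure with this declaration rather
// than repeating imperative steps. Resource and import versions are
// illustrative and will depend on the SDK releases in use.
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Declare (rather than script) an object storage bucket.
		bucket, err := s3.NewBucket(ctx, "app-assets", nil)
		if err != nil {
			return err
		}
		// Expose the resulting bucket ID as a stack output.
		ctx.Export("bucketName", bucket.ID())
		return nil
	})
}
```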

Continuous Integration and Delivery (CI/CD)

Another cornerstone of the cloud-native approach is the speed and frequency of software application updates for new features and fixes. When code is being developed by a team of developers, and finished code needs merging, building, testing and packaging before it's ready for deployment, a high degree of automation is essential. If the pipeline to deployment consists of manual steps, it's error-prone and can only progress as fast as humans are able to execute those steps.

That's why there has been a proliferation of tools that support the continuous integration and delivery of software. Once again, in the cloud-native era we're spoilt for choice. Some tools pre-date cloud-native, such as Jenkins, and have evolved to provide CI/CD capabilities specifically aligned to this approach; others have grown up with it, such as GitLab and Drone.

GitOps

One interesting direction is the move to consolidate automation techniques for both infrastructure and cloud-native applications. This approach has been dubbed 'GitOps', because of its reliance on holding declarative configuration in a source code version control system like Git, and because of the operational nature of deploying infrastructure and applications. The declarative configuration is the 'single source of truth' for the environment in which the application runs, as well as for the code that defines the application. For a Kubernetes cluster, we can continuously discern the delta between the desired state and the actual state, at all levels of the stack. The GitOps approach is exemplified by Weaveworks' Weave Flux.
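
At its core this is a reconciliation loop: read the desired state from version-controlled configuration, observe the actual state of the environment, and converge the latter towards the former. The sketch below is purely conceptual; the types and functions are illustrative stand-ins rather than any real controller API.

```go
// Conceptual sketch of a GitOps-style reconciliation loop: desired state
// comes from declarative configuration held in version control, actual state
// is observed from the running environment, and the loop converges one
// towards the other. All types and functions here are illustrative stand-ins.
package main

import (
	"fmt"
	"time"
)

type State struct {
	Replicas int
	Image    string
}

// desiredState stands in for reading manifests committed to Git.
func desiredState() State { return State{Replicas: 3, Image: "shop/orders:1.4.2"} }

// actualState stands in for observing the cluster or cloud provider.
func actualState() State { return State{Replicas: 2, Image: "shop/orders:1.4.1"} }

func reconcile(desired, actual State) {
	if desired != actual {
		fmt.Printf("drift detected: have %+v, want %+v - applying changes\n", actual, desired)
		// apply(desired) would update the environment here.
	}
}

func main() {
	// A real controller loops indefinitely; a few iterations suffice for the sketch.
	for i := 0; i < 3; i++ {
		reconcile(desiredState(), actualState())
		time.Sleep(time.Second)
	}
}
```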

DevOps

Naturally, there is a human element to all this, because you need software developers to write code and you need engineers to deploy and manage workloads in production. Before the cloud-native era, these two roles were effectively separate and distinct. The DevOps movement seeks to break down these barriers through collaboration, as well as shared responsibility and ownership for the software applications and the environments in which they run. The term DevOps is often mistakenly defined by the tools an organization uses, or the processes and practices it adopts. Whilst these things might be a by-product of DevOps adoption, ultimately DevOps is a matter of culture within an organization, where the roles of developer and engineer are either blurred to the point of being indistinct or characterized by a high degree of collaboration.

The cloud-native paradigm cannot claim to have 'invented' DevOps. Rather, just like the microservices pattern and advances in automation techniques, it has accelerated DevOps adoption amongst companies of all sizes, shapes and hues. Changing culture is a big challenge for those organizations, but without it, the journey to realize the benefits of a cloud-native approach will be severely hindered.

Conclusion

Over the last couple of years, cloud-native has progressed from a hyped buzzword to a tangible approach that delivers genuine business value to its adopters. It allows a business to adapt, change and innovate in ways it simply couldn't before. But mastering the cloud-native approach requires significant change in terms of strategy, culture, process and employee skills, so your journey should be carefully planned and executed. To help you navigate the wide range of technology and confusing choice, download Giant Swarm's recent guide to the cloud-native stack.

##

About the Author

Puja Abbassi 

Puja Abbassi is a Developer Advocate at Giant Swarm. In addition to representing the voices of customers and the community in Giant Swarm's product, he loves helping people and enjoys writing blog posts and documentation.
