Virtualization Technology News and Information
A Field Guide: Microservices vs. Containers

Written by Joe Brockmeier, Senior Strategist, Linux Containers, Red Hat

Microservices and containers are often discussed together, sometimes as interchangeable terms -- even though they're distinct concepts that may or may not actually go together. To clear up any confusion, we'll take a look at microservices and containers, as well as a few other terms that you're probably seeing everywhere -- or will very soon.

What's a microservice?

Traditional or "legacy" applications are monolithic -- you could even say that they're macroservices. Take a very simple application like WordPress. While it's simple compared to most enterprise software, it still has a lot going on.

WordPress handles requests for pages, selects information from a database, and builds the page. It's also a full-featured Content Management System (CMS) that provides an editing interface, authentication, commenting, and much more. As an application or system grows, though, it can become difficult to maintain and to add features to (or remove features from).

A microservices architecture splits an application into separate services so that they're easier to maintain and can be re-used. Additionally, microservices allow you to scale out instead of scaling up -- that is, instead of a monolithic app that requires more CPU, RAM, and so on over time, you can add more instances of a service when demand increases.
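As a sketch of what "one small service, one responsibility" looks like, here's a minimal, hypothetical Python service that does nothing but render a page fragment. The names, port, and payload shape are invented for illustration -- the point is that a service this narrow can be maintained, re-used, and scaled out (by adding more instances) independently of the rest of the application.

```python
# A minimal, hypothetical "page rendering" microservice using only the
# Python standard library. In a real deployment each such service would be
# packaged, deployed, and scaled out independently.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class RenderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One narrow responsibility: turn a page slug into a JSON payload.
        slug = self.path.strip("/") or "home"
        body = json.dumps({"slug": slug, "html": f"<h1>{slug}</h1>"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet


def serve(port=8080):
    """Run one instance; scaling out means running more of these."""
    HTTPServer(("127.0.0.1", port), RenderHandler).serve_forever()


if __name__ == "__main__":
    serve()
```

Because the service exposes everything over HTTP, "adding more instances when demand increases" is just starting more copies behind a load balancer -- no change to the code.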

Most importantly to this conversation, a microservice can be packaged in a number of ways -- one way would be to containerize it. Alternatively, you might just deploy Java web application archives (WAR files) for microservices running under Red Hat JBoss EAP.

Martin Fowler and James Lewis provide an excellent in-depth backgrounder on microservices if you really want to dive into the topic, and there's a fun video with Red Hat's Grant Shipley and Steven Pousty, from DevNation 2015, that gives a practical introduction to microservices.

What's a container, anyway?

Containers, specifically Linux containers, are a way of isolating a process or set of processes using kernel features like cgroups, namespaces, SECCOMP, SELinux, and so forth. There have been a number of container implementations over the years, but the concept really took off in 2013, when Docker gave developers a simpler way to use the technology.

What you get with that style of containerization is not just isolation of processes, but also a standard format for the container image -- one that can be shipped from a developer's laptop to test, to staging, and to production, and work unchanged in each environment.

Containers are decidedly not microservices by default. If you like, you can run multiple services inside a single container and treat it like a lightweight virtual machine. That's not to say you should do this, but it's possible and some people do use containers this way. To go back to the WordPress example, some folks just run WordPress in a single container.

But using containers, you could also break an application up into multiple services and ship each one separately. To use the WordPress example, you might break out the database service (MySQL or MariaDB) and run it in a separate container from the Apache/PHP services. Note that this wouldn't be a "true" microservices architecture, according to Hoyle. (Or Martin Fowler.)
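For illustration, a two-container split like this could be sketched with Docker Compose. This is a hypothetical, minimal configuration assuming the official wordpress and mariadb images; the service names and credentials are placeholders, not a production setup:

```yaml
# Hypothetical sketch: WordPress and its database as two separate containers.
version: "3"
services:
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example      # placeholder credential
      MYSQL_DATABASE: wordpress
  web:
    image: wordpress:latest             # Apache + PHP + WordPress
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db             # reach the database by service name
      WORDPRESS_DB_PASSWORD: example
    ports:
      - "8080:80"
```

Each container can now be updated, replaced, or scaled on its own schedule -- which is the practical payoff of the split, even if it falls short of "true" microservices.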

If you want to get a little closer to the heart of a true microservices application, check out Alessandro Arrichiello's blog about a "containerized polyglot microservices application" that uses JBoss Middleware, NodeJS, and Spring, running on OpenShift.

By using containers, you can easily re-use services across multiple applications, and you can split up responsibility for each service much more cleanly. You get a uniform packaging format no matter how the application is architected; it's up to you to determine how best to use that format for your environment.

And what about serverless?

To throw in another wrinkle, you might be hearing a bit (or a lot) about "serverless" now. This continues the trend of "NoSQL" and "No Ops," where we describe services and technology by the absence of something rather than its presence...

When cloud computing became the hot buzzword du jour, the standard refrain from many IT practitioners was "there is no cloud, there's just other people's computers." Likewise, there's no "serverless" computing -- there's just letting someone else manage the complexity of the servers (in the "cloud") that run your code in stateless compute containers: "event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party."

The idea with serverless is that someone else provides a service that you can access without having to worry about any of the underlying architecture -- how much RAM, how much disk, how big the network pipe is -- and simply consume the service.

If you really want to get deep into it, you can break serverless down into two smaller buckets -- Backend-as-a-Service (BaaS) and Functions-as-a-Service (FaaS). BaaS would be something like a database service, where someone else worries about managing the database and you just consume it as a metered offering. FaaS would be the ability to write code and have someone else run it, again without having to concern yourself with how it's scaled or managed.
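To make the FaaS half concrete, here's a hypothetical Python function in the style of an AWS Lambda handler: you write the function, and the provider invokes it per event and worries about running and scaling it. The event payload here is invented for illustration.

```python
# A hypothetical FaaS-style function in the handler(event, context) shape
# AWS Lambda uses for Python: the provider invokes it once per event and
# may throw the execution environment away afterward -- you never manage
# a server.
def handler(event, context):
    # 'event' carries the trigger payload; this invented example just
    # greets whoever the event names.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Everything outside the function body -- provisioning, concurrency, retries, teardown -- is the provider's problem, which is exactly the appeal (and, as the next paragraph notes, where the premium comes from).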

Scaling is an exercise left to the provider. Estimating the monthly bill is an exercise left to the consumer of the service, and (at least for now) hosted serverless options like Amazon's Lambda come at a premium. That premium may be worth it if you consume very little, since it can offset the cost of keeping systems running full time just to handle sporadic requests.

If you like the concept of serverless but want to implement your own, there are projects like Apache OpenWhisk (incubating) that seek to provide an open source platform for building it. And if you guessed that OpenWhisk uses containers as part of its workflow, you'd be right: OpenWhisk spins up a container for each action it processes and destroys the container as soon as its results are obtained. Containers aren't required to implement serverless, but they're a popular option.

That's microservices, containers, and serverless in a very high-level nutshell. They're often complementary, but not identical or interchangeable.


About the Author


Joe Brockmeier, Senior Strategist, Linux Containers, Red Hat

Joe Brockmeier is a long-time participant in open source projects and former technology journalist. Brockmeier has worked as the openSUSE Community Manager, is an Apache Software Foundation (ASF) member, and participates in the Fedora Atomic Working Group. Brockmeier works for Red Hat to help educate IT professionals, customers, and partners on all aspects of Linux containers, and works to advance Red Hat's go-to-market strategy around containers and related technologies.
Published Tuesday, March 13, 2018 7:32 AM by David Marshall