A Contributed Article by James Bottomley, CTO, Server Virtualization,
Parallels, Inc.
In the world
of virtualized computing, containers have become the hot topic of conversation.
Hosting service providers have been using the technology to lower operational
costs and increase efficiency for years. But in the enterprise, containers
remain a bit of a mystery.
Data
centers today are beginning to outgrow the traditional hypervisor method of
virtualization. Just as Linux was the upstart operating system that took over
the web and found mainstream acceptance, containers are the next wave of
web-scale technology to move into the collective awareness of CIOs, CTOs and IT
professionals. This is where confusion comes in.
This rogue technology is new to the enterprise, and as a result quite a few myths have sprung up around it recently. There are five in particular that we now debunk.
1. Containers Are Not Reliable Enough to Support Mission-Critical Workloads
It's hard to understand where this myth comes from. Hosting service providers have been using containers to deliver virtual private servers (VPSs) for over a decade, lowering operational costs and increasing efficiency along the way. VPSs are used to provide companies with web hosting and other services, such as processing credit card transactions, all of which are incredibly mission-critical for a business.
2. Containers Are Not Secure
While still a myth, this one has a few likely origins. First and foremost, VPS hosting environments were initially developed largely outside the Linux mainstream. Security was not of critical importance in the first Linux containers, so there's a lingering perception problem with modern containers. Over the past three years, Parallels, Google and a host of other companies have been working to push all the necessary security technologies upstream. As a result, today's upstream kernel has enough security technology to make containers highly secure and isolated.
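To make that concrete, here is a minimal, purely illustrative C sketch (an assumption for illustration, not code from the article or any particular container product) of the kind of upstream kernel primitive containers are built on: the clone() system call placing a child process into its own PID, mount and hostname namespaces.

```c
/*
 * Illustrative sketch only -- not code from the article.  It shows one of
 * the upstream kernel primitives containers are built on: cloning a child
 * process into its own PID, mount and UTS (hostname) namespaces.
 * Compile with "gcc -o ns_demo ns_demo.c" and run as root.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];       /* stack for the cloned child */

static int child_main(void *arg)
{
    /* In its new PID namespace the child sees itself as PID 1, much like
     * the init process of a freshly started container. */
    printf("child pid inside the new namespaces: %d\n", (int)getpid());
    return 0;
}

int main(void)
{
    /* Ask for new PID, mount and hostname namespaces for the child. */
    int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | SIGCHLD;
    pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                      flags, NULL);

    if (pid == -1) {
        perror("clone");
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```

A production container runtime layers user namespaces, cgroups, seccomp filters and capability dropping on top of primitives like this; those are precisely the pieces that have been pushed upstream in recent years.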
The other origin of the container security myth lies in the granularity of the technology. You can set up a fully secure, fully isolated operating-system container, but you can also set up a very porous one. There can be good reasons for doing the latter, but it's not always done on purpose. As with most computer systems, security relies on following best practices, and these aren't always obvious to, or followed by, people new to containers.
3. Running Containers Inside Virtual Machines Adds Efficiency
The belief here is generally that you can address the first and second myths by running containers inside virtual machines. While you can do this, you won't actually gain efficiency, because you lose the density and elasticity of the container system, arguably the two biggest benefits of the technology.
When you run containers in a virtual machine, the resulting system takes on the properties of the hypervisor, which supports lower density and is inelastic. You also add a second layer of virtualization technology, which creates more physical and management overhead and leaves three separate technology layers to manage.
4. Anything a Container Can Do, a Hypervisor Can Do
In the abstract, this is true, because both are computing environments. But practically speaking, to give a hypervisor the density and elasticity of containers you have to strip down its guest and host operating systems to the point where they become mere shells of themselves. And even after doing this, you still don't have the granular, just-enough virtualization properties of containers. It's a bit like hammering a square peg into a round hole: with a big enough hammer you can do it, but it may not be the best way of achieving the desired outcome.
5. A Container Is a Container
With all of the hype lately about containers, it's not surprising that there's a lot of misinformation circulating about the technology. Perhaps the best example is how often we hear the phrase "Docker containers". The truth is, Docker itself is not a container. It is making strides in helping the technology reach a broader audience, but Docker is actually an application packaging and transport system that relies on the just-enough virtualization properties of containers to function. This confusion often feeds the four myths above and adds to the perception problem around containers.
There you have it, five common myths debunked. There will always be people willing to compromise on density, elasticity and granularity so that hypervisors can keep handling specific workloads, and a growing number of use cases call for mixed environments. But while both technologies have their place, myths aside, containers are making strides to unseat hypervisors as the dominant virtualization technology because they can go places, and do things, that hypervisors never could. That's the truth.
##
About the Author
James Bottomley is CTO of Server Virtualization at Parallels, with a current focus on open source container technologies. He is the Linux kernel maintainer of the SCSI subsystem, PA-RISC Linux and the 53c700 set of drivers, and has made contributions in the areas of x86 architecture and SMP, filesystems, storage, and memory management and coherency. He is currently a Director on the Board of the Linux Foundation and Chair of its Technical Advisory Board. He was born and grew up in the United Kingdom and went to university at Cambridge in 1985 for both his undergraduate and doctoral degrees. He joined AT&T Bell Labs in 1995 to work on Distributed Lock Manager technology for clustering, and in 1997 moved to the LifeKeeper High Availability project. In 2000 he helped found SteelEye Technology, Inc. as Software Architect and later as Vice President and CTO. He joined Novell in 2008 as a Distinguished Engineer at Novell's SUSE Labs, and joined Parallels in 2011.