Written by Gunnar Hellekson, Director of Product Management for Linux and Virtualization, Red Hat
To some, containers and virtualization are essentially the same thing: a rip-and-replace alternative to one another. To others, they are completely different technologies with different use cases. The truth is that containers and virtualization do have a lot in common, but not as much as some people think. To get the most out of each of these important technologies, we must understand the ins and outs of both, and how they do and don't work together.
Organizations struggling to meet rising demand for more and better applications, delivered faster than ever before, are warming up, if not outright flocking, to container technology, especially when paired with OpenStack infrastructure. By offering uniform application packaging along with application isolation and improved workload density, Linux container technologies can help enterprises meet these new application challenges as well as end-user expectations. This does, however, sound very similar to the benefits of virtualization, an already-proven technology adopted by a broad cross-section of enterprises.
So why would sophisticated virtualization users, like those within the OpenStack community, be interested in containers?
Containers Are Not Virtual Machines, and Vice Versa
A common misconception is that containers are just an evolution of virtual machines (VMs), but there are some major differences between the technologies.
Like virtual machines, application containers keep all components of an application together, including the libraries and binaries on which they depend. By combining the ability to isolate applications with lightweight and image-based deployment capabilities, we can put more applications on a single machine and start them up much more quickly. How do containers achieve their light weight? Unlike virtual machines, they do not contain an operating system (OS) kernel; rather, they rely on the kernel of their host.
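One quick way to see this kernel sharing in practice is to compare the kernel version reported on the host with the one reported from inside a container. The minimal Python sketch below does just that; it assumes a local container engine such as podman and a small base image such as alpine are available, which are illustrative choices rather than anything prescribed here.

```python
import subprocess

def run_and_capture(cmd):
    """Run a command and return its trimmed standard output."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# Kernel version on the host.
host_kernel = run_and_capture(["uname", "-r"])

# Kernel version seen inside a container. No guest kernel is booted;
# the container reuses the host's kernel while isolating userspace.
# Assumes podman and the alpine image are available locally.
container_kernel = run_and_capture(["podman", "run", "--rm", "alpine", "uname", "-r"])

print("host:     ", host_kernel)
print("container:", container_kernel)
print("same kernel:", host_kernel == container_kernel)  # typically True
```

The two version strings typically match, which is exactly why containers start quickly and carry so little overhead, and also why a kernel-level issue on the host matters to every container running on it.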
This flexibility can, however, introduce potential security and manageability issues. Because containers depend more heavily on the environment that hosts them, they carry more risk, and therefore more opportunities for a breach. If the host is compromised, then all of its containers are compromised as well, just as we would expect with virtual machines and hypervisors. But unlike virtual machines, if a single container is compromised, there is also a chance the intruder can gain access to the host OS. So while containers bring considerable added benefits, IT departments need to ensure that the entire environment is secure when setting them up.
Because of their speed and light weight, containerized applications are much more likely to be distributed and modular, whereas virtual machines are much more likely to be centralized and monolithic. That means containerized applications rely on orchestration and management tools in a way that virtual machines do not. Because there are many orchestration solutions that connect different containerized components into a single coherent application, there is a real chance of compatibility issues, or even flaws that have yet to be discovered; this is a challenge that container platforms, an evolution of Platform-as-a-Service, intend to address.
Virtualization technology, of course, has been used and trusted in the enterprise for many years now. Virtualization enables servers to run multiple operating systems and applications. The main difference from containers is that each VM contains the full application stack, from the server to the database to the OS. Many companies have even successfully used virtualization to consolidate server systems, with hardware abstraction creating an environment that can run multiple operating systems and applications on VMs. And because virtualization runs workloads inside a guest operating system that is isolated from the host OS, it offers more security than container technology currently does. In addition, over time many products have been developed that boost the security and manageability of virtualization systems.
On the downside, VMs can be much slower to start up, and their size makes them much less flexible when it comes to implementing changes in the development process, such as adding new features. Virtualization also takes up a lot of system resources since each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run.
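To make the contrast concrete, the sketch below (a minimal illustration, assuming the libvirt Python bindings and a local QEMU/KVM hypervisor are installed) lists the virtual machines on a host along with the memory and virtual CPUs each has been allocated; every entry is a full guest with its own kernel and virtual hardware, which is where much of the overhead described above comes from.

```python
import libvirt  # libvirt Python bindings; assumed to be installed on a QEMU/KVM host

# Connect to the local hypervisor. Each "domain" is a complete virtual machine
# with its own guest kernel, virtual hardware, and dedicated memory allocation.
conn = libvirt.open("qemu:///system")

for dom in conn.listAllDomains():
    # info() returns [state, max memory (KiB), used memory (KiB), vCPUs, CPU time]
    state, max_mem_kib, _, vcpus, _ = dom.info()
    status = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MiB allocated, {status}")

conn.close()
```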
How to Get Virtualization and Containers to Live in Harmony
In short, virtualization provides flexibility through abstraction from hardware, while containers provide speed and agility through lightweight application isolation. So instead of thinking of containers as replacing VMs, organizations should think of containers as a complement to VMs, with the workload determining which to use when.
For example, containers can be used in development environments for speedier deployment of new applications. Many companies are also using containers within VMs to take advantage of virtualization's security and management features. Some virtualize their container hosts to maintain operational consistency between the "old" and the "new" infrastructures. Indeed, as companies evolve their use of containers, they are increasingly finding the need for a new "stack": one that provides the level of security, management, and standardization required for running any technology in enterprise production environments.
Companies looking to stay ahead of the competitive curve are not making a choice between containers and virtualization; they are looking for ways to use and integrate both technologies to their fullest potential.
About the Author
Gunnar Hellekson is the Director of Product Management for Red Hat's Linux, Virtualization, and Atomic container product lines. Before that, he was Chief Strategist for Red Hat's US Public Sector group. He is a founder of Open Source for America, one of Federal Computer Week's Fed 100 for 2010, and was voted one of the FedScoop 50 for industry leadership. He was a founder of the Military Open Source working group, a member of the SIIA Software Division Board, the Board of Directors for the Public Sector Innovation Group, the Open Technology Fund Advisory Council, New America's California Civic Innovation Project Advisory Council, and the CivicCommons Board of Advisors. He perks up when people talk about commoditization and the industrial mobilization of World War II.