Virtualization Technology News and Information
Going Bare Metal and Multi-Cloud with Kubernetes

By Ev Kontsevoy, CEO at Gravitational

The future is multi-cloud. This means more companies than ever are building services across clouds to help customers avoid vendor lock-in, retain greater control of their data, and prevent costs from spiraling out of control. For software providers, the ability to let clients run your application in different data centers, including on-premises and air-gapped environments, is a huge competitive advantage.

In fact, private clouds are rising in popularity as fast as public clouds. But this move is neither pain-free nor a well-worn path. For teams considering a move to multi-cloud or to bare metal servers, the question is how. That is what this article touches upon.

Kubernetes for a Multi-Cloud World

A multi-cloud strategy challenges DevOps and engineering teams to find a way to deliver services in ways that matter to the business. It sounds easy enough, but giving up the luxury of selecting a single cloud provider means work, especially for technology teams used to deployment via API rather than the knuckle-nicking work of building out bare metal.

This is what makes Kubernetes so transformational. As an open source, vendor-agnostic platform for container and cluster management, Kubernetes automates application deployment and makes it portable. A single action deploys the container: application, runtime, configuration and the rest. It lets organizations:

  • Write code once, run it across diverse servers and environments
  • Maintain high-level control and consistency across diverse public and private clouds
  • Reduce the cost and complexity of management by consolidating workloads onto fewer VMs

Ultimately, Kubernetes (often shortened to K8s) removes the dependencies between application and environment and shortens the cycle between ideation and implementation. This portability is a game-changer for organizations considering a return to on-premises infrastructure.
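The "write once, run anywhere" idea above can be sketched with a minimal Deployment manifest. The image name, labels, and replica count here are illustrative placeholders, but the same file applies unchanged to any conformant cluster, whether it runs in a public cloud or on bare metal:

```yaml
# deployment.yaml -- a sketch of a stateless web service;
# the image name and labels are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0
        ports:
        - containerPort: 8080
```

Running `kubectl apply -f deployment.yaml` behaves identically against a managed cloud cluster or a self-hosted bare metal one; the environment-specific details live below the Kubernetes API, not in the application definition.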

Virtualization vs Kubernetes: Virtualization's Shrinking Edge

Since the first wave of server virtualization, we've managed to abstract everything from user desktops to network appliances. While containerization is designed for similar distributed manageability, it was virtualization that first delivered the richness of functionality and flexibility that defines a cloud-native world.

But is virtualization still needed in modern data centers? With the rise of Kubernetes, its benefits are becoming questionable.

  • Advances in software-defined networking are giving K8s users ways to create floating IPs and balance loads without the use of virtual private clouds
  • Bare metal storage products are getting more sophisticated, giving users more network-ready options for replacing expensive enterprise-storage solutions
  • More advanced container security is giving environments a level of granular security previously only found with true tenant isolation

Kubernetes on top of Virtualization: Do We Need Both?

We've already seen where virtualization excels and where containerization is up to the task.

What about running both? While the decision is complex, you should at least consider removing virtualization: you'll get simpler infrastructure, better performance, and lower cost. If you can deliver the same application experience, why not?

Additionally, if you can answer Yes to the following questions, bare metal Kubernetes might be ideal for you.

  • You have no investments in third-party cloud software
  • You are running no untrusted third-party software on the cluster
  • Your network design and scale are simple and predictable
  • Your applications are not dependent on either virtualized storage or hardware-assisted high-availability features like live migration

The first three questions are pretty clear, but the last may need some explanation.

Solving for Statefulness and Storage

Kubernetes users are challenged to manage state because of the many dependencies between application and environment, and this is where Kubernetes' shortcomings become clear. It was built for stateless computing in an increasingly stateful world, where value is created by manipulating and sharing data.

This is at the core of the K8s problem with highly stateful applications. To optimize infrastructure utilization, Kubernetes continuously moves applications across servers. A database "chained" to local storage makes this impossible. You have three possible workarounds:

Do not run databases under Kubernetes. Most distributed database platforms predate Kubernetes and already have similar clustering capabilities built in, offering a practical, if impure, workaround.

Use network-attached storage whenever possible. Not relying on locally attached storage is nearly always preferable because it enables independent scaling of storage versus compute.

Choose from a new generation of microservice-oriented storage products that are closely aligned with the scaling model of Kubernetes.
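As a sketch of the second workaround, a PersistentVolumeClaim lets a pod request network-backed storage without naming a specific device. The claim below is illustrative; the StorageClass name "nfs-client" and the size are assumptions that depend on what your cluster provides:

```yaml
# pvc.yaml -- requests network-backed storage; "nfs-client"
# is a hypothetical StorageClass your cluster would define.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 20Gi
```

Because the claim is decoupled from any single node's disks, Kubernetes can reschedule the pod onto another server and reattach the same volume, which is exactly what locally attached storage prevents.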

Deployments and Configuration

Managing the configuration of larger server fleets was traditionally a time-consuming and error-prone process. Modern configuration management tools like Ansible, Chef, and Puppet have changed the equation, automatically distributing configuration files across a fleet.

As these tools continue to grow and evolve, you can now find teams using Ansible not just for configuration, but also for provisioning and deployment. How will Kubernetes change how you resource these tasks?

You will still need a configuration management tool.

Kubernetes needs an operating system to run on, and this OS needs to be configured, updated and secured. However you choose to manage your Linux server fleet, you'll probably be happiest continuing to use it with Kubernetes.
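As a small illustration, the same configuration management tool can prepare nodes to join a cluster. This Ansible sketch assumes a Debian-based distribution; the inventory group `k8s_nodes` and the package names are assumptions that vary by environment:

```yaml
# prepare-nodes.yml -- a hypothetical playbook; the group name
# and package list are assumptions for a Debian-based fleet.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Install container runtime and Kubernetes node packages
      apt:
        name:
          - containerd
          - kubelet
          - kubeadm
        state: present
        update_cache: true
    - name: Disable swap (the kubelet requires it off)
      command: swapoff -a
```

The point is that Kubernetes does not replace this layer; it sits on top of it, and your existing OS-level automation keeps doing the job it does today.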

True application portability and predictability

With Kubernetes up and running in your cabinet, it's critical not to overlook another huge benefit of K8s not yet mentioned in much detail: total application portability.

Today's cloud-native applications are too complex and fragile to be distributed via VM images or installation packages. For companies delivering software as a service from a single cloud, this is easy. But when deploying cloud-native code in a multi-cloud world, we need solutions like K8s.

But this portability is about more than continuity and convenience. As Kubernetes gains momentum, it will become best practice to quickly move an application across all major cloud providers and simply pick the best performance or price.

This choice is more than an amazing product benefit. As hosting becomes consolidated by two or three top vendors, the ability to commoditize providers is essential to the success of the cloud as both a philosophy and a platform, keeping the economics balanced in favor of consumers.

In the future, we may be able to simply snapshot an AWS account, share it via email, and watch it deploy effortlessly at a co-location facility. It'll be just another page turning in a cloud story that shows no signs of slowing down anytime soon.

It's not easy, but it's worth it

Imagine throwing off the shackles of a single public cloud provider and gaining the ability to port your applications anywhere. Think of the freedom gained, the costs saved, and the time you can spend doing more valuable things. It's an important idea, but not an easy one: the details make all the difference. How much do you want to spend? How talented is your network team? What are your space options?

These are just a few of the questions you need to ask and answer as you consider whether or not bare metal co-location makes sense. This article is just a starting point.

To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon NA, November 18-21 in San Diego.


About the Author

Ev Kontsevoy is the CEO of Gravitational, a company that builds software for application delivery. Ev's company maintains Gravity, an open source tool for application imaging and delivery. Ev was also the founder of Mailgun, a popular email API service. Mailgun was acquired by Rackspace, and in his time running Mailgun, Ev became deeply aware of the problems faced by developers and operators who manage server infrastructure.

Published Wednesday, November 06, 2019 7:41 AM by David Marshall