Virtualization Technology News and Information
Mirantis 2021 Predictions: Coming in 2021 - Everything old is new again

VMblog 2021 Prediction Series

Industry executives and experts share their predictions for 2021. Read them in this 13th annual series exclusive.

Coming in 2021: Everything old is new again

By Nick Chase, Director Technical Marketing and Developer Relations, Mirantis

The prediction I'll make this year with the highest guarantee of being right is that most people making predictions for 2021 will talk about being grateful for the end of 2020. And why not? It's certainly been a year to remember -- a year when everything changed, like it or not.

While we may all talk about things going back to "normal," the reality is that things are never going to be truly like they were before. Of course, they never are. The pandemic is like any other force for change: pushing some things forward, stopping others, and leaving still others untouched.

In some cases, its effect is turning out to be different from what one might expect. Take private bare metal data centers, for example -- long assumed to be a diminishing paradigm. One might assume that a pandemic-driven exodus from centralized workplaces (which obviously includes physical data centers) would accelerate this trend.

On-Premises Cloud Redux

But instead, something else seems to be happening: the price of servers and storage is gradually falling, and (more importantly) the manageability of servers and storage is improving, so it's getting cheaper and easier to operate physical data centers overall. Meanwhile, the use cases for private bare metal and private clouds -- "cost, control, security policies, data gravity, workforce skill set and business process integration," as one WWT analysis puts it -- aren't going away, and a host of new use cases, such as high-performance computing for analytics, machine learning, and large-volume transaction processing, are emerging to make private clouds increasingly viable and needed. This may continue to hold true even in the face of expanding public cloud bare metal services, on-premises workload hosting options like AWS Outposts and Microsoft Azure Stack, and other alternatives.

What's driving "improved manageability" is also pretty interesting. Perhaps most important: Kubernetes is emerging as a consistent substrate for commodifying compute, network, and storage -- insulating properly designed and appropriately redundant workloads from disruptive impacts of underlying hardware and software failure, trivializing hardware provisioning, and providing a conceptually simple, uniform workflow for application lifecycle management.

Kubernetes as Substrate

Simply put: if you run Kubernetes on bare metal with appropriate redundancy, losing a hard drive or a blade is no longer an emergency -- instead, it's something you can dispatch a person to deal with asynchronously, in a systematic, safe, simple, and socially-distanced way. If you need more capacity, you can rack and stack a server, plug it into your network, and let software PXE-boot and provision it as a new cluster node onto which workloads will be scheduled as their configurations, operators, and other resources demand. And if you run properly designed apps on top, you can use rolling updates to keep them fresh.
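To make the idea concrete, here is a minimal, illustrative Deployment manifest (the names and image are placeholders, not from the article) showing the two ingredients described above: replicas spread across physical nodes via anti-affinity, and a rolling update strategy that replaces pods one at a time:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # enough copies to survive losing a node
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # update one replica at a time
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:    # keep replicas on different physical hosts
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx:1.21   # stands in for any stateless workload
```

With replicas spread across hosts this way, draining a node for hardware maintenance (`kubectl drain <node>`) simply reschedules its pods elsewhere, which is what turns a failed blade from an emergency into a routine, asynchronous repair.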

That includes complex applications like OpenStack, where the Kubernetes substrate vastly simplifies deploying, managing, scaling, and updating one or multiple private clouds. These can, of course, host VM workloads in simple, familiar, scale-friendly, and mostly unconstrained ways. And these VM workloads can include more Kubernetes clusters, which can be scaled for software development, testing, or production use where bare metal performance isn't needed.

Organizations interested in adopting this strategy, of course, will need to acquire solutions that provide for fundamental bare metal provisioning and management, observability up and down the stack, and of course, the ability to deploy and manage consistent Kubernetes substrates and their more complex "service layer" workloads, like OpenStack.

Consistency is Key

These solutions also, arguably, need to provide Kubernetes substrate (and other) clusters that are logically consistent with one another, because the friction of inconsistency is potentially enough to cancel out the cost, time, expertise, and other efficiencies gained by using Kubernetes in the first place. Imagine two containerized OpenStack clouds, each built on a different configuration or version of Kubernetes: all your lifecycle management, integration, and other automation now needs to fork, and each fork must be maintained independently.

Using Kubernetes as an infrastructure substrate, therefore, is likely to become a Big Deal IT Strategy in the near term -- something like the choice of OS, hypervisor, or cloud provider is today. IT organizations will be looking for substrate configurations that are "opinionated, but only in the right ways": products with enterprise-grade fit and finish that also leave room for customization, without forcing the user to pay a heavy operational cost for innovating.

So, what we're predicting is complicated: renewed interest in private cloud frameworks like OpenStack where they offer advantages over public cloud VMs (private cloud cost advantages tend to grow as scale and longevity increase), conditioned on the use of Kubernetes as "the substrate for everything," since improving operational efficiency is the most important part of cost control at scale. The real underlying trend, we figure, is that the most far-sighted organizations will use these technologies to "become their own cloud providers": administering what is in fact a diverse, hybrid cloud, but using Kubernetes to pave over service inconsistencies and running everything from one place.

Growth Towards the Edge

Contributing to this trend, more and more computing power is "sitting around" and can be used in Kubernetes-oriented private clouds at many scales -- potentially all the way out to quite-small edge clusters and IoT-scale devices. (Did you know k0s Kubernetes runs on Raspberry Pi 4B ARM boards?) As edge clouds become more real, more diverse, and squeeze into smaller and cheaper hardware, in many ways we're seeing a repeat of the dispersion of work from mainframes to servers to PCs.

The network edge will see more trends emerging as 5G makes mobile bandwidth more available. Look for the beginning of a price war between mobile and terrestrial carriers, and the beginnings of a move toward "private 5G" that both sets of carriers will need to factor into their plans, as standards such as OpenRAN begin to democratize the space.

New Kinds of Lock-In

The proliferation of services available from major cloud providers is causing several new(-ish) kinds of lock-in. At the infrastructure and platform level, of course, we see some organizations going "all-in" on one or another provider, which encourages them to build systems and processes deeply dependent on that provider's capabilities, toolkits, and methods. And this, in turn, encourages developers to become "full-stack-on-cloud-provider-X" specialists -- particularly as job markets orient to seek these increasingly highly-valued specialties.

At the same time, however, some organizations and developers are pushing back: seeking paradigms that insulate them from provider specifics and effectively commodify underlying services, while still letting them use and benefit from cost-effective solutions. The move towards bare metal can be seen as one aspect of this contrarian movement. So can, much higher in the stack, the move towards simpler but still performant web models like "HTML Over the Wire."

Change Still a Constant ... But

Obviously, constant change will remain a core characteristic and principle of technical work. Innovation, after all, is about making things gradually better by making them anew. But with the emergence of Kubernetes as an abstraction layer and universal substrate for applications, we'll likely see a certain class of problems and risks diminish. Hardware becomes less of a potential point of failure. Clouds (private and public), operating systems, and other stack components become "competing resource abstraction frameworks" rather than deep and potentially exclusive organizational commitments. Apps will be easier to develop, and work better and more securely, as a result. Good luck in 2021!


About the Author

Nick Chase 

Nick Chase, Director Technical Marketing and Developer Relations at Mirantis, is deeply involved with cloud computing using Kubernetes and OpenStack. A former release team member for Kubernetes, he is a frequent speaker on technical topics and author of hundreds of tutorials and over a dozen books, including Machine Learning for Mere Mortals.

Published Monday, January 25, 2021 7:30 AM by David Marshall