Loft Labs CEO on the State of Kubernetes and vCluster - VMblog Q&A

Loft Labs recently conducted its 2024 vCluster Community Survey, which polled platform engineers and developers building with traditional Kubernetes or vCluster to gain insight into how teams are managing containerized applications, typical use cases for Kubernetes deployments, and the evolving virtualization landscape. vCluster, Loft's flagship product, helps organizations reduce the cost and complexity of Kubernetes via virtual clusters.

With KubeCon two months away, we wanted to learn more, so I spoke with Loft co-founder and CEO Lukas Gentele about the results and what they mean for the changing Kubernetes ecosystem and future outlook.

VMblog: Tell us a little about this survey. What inspired you to conduct it, and what are some of the key findings?

Lukas Gentele: We conducted this survey in order to more thoroughly understand how the industry is working with Kubernetes and what organizations' cluster deployments look like. We need a clear sense of user wants and needs to inform our roadmap so that we can best support current users, as well as anyone building on Kubernetes who stands to benefit from shifting to virtual clusters. Beyond that, these results tell us a lot about where the cloud-native industry is going, and where we need more robust solutions. More than 125 respondents gave us crucial feedback on their Kubernetes proficiency; platform components and tools they work with; and use cases. Some key findings are:

  • Kubernetes has become foundational, and most organizations choose it to manage containers in both development and production. Docker was the only tool that came close, with a one-point lead over Kubernetes in development, but it trails far behind in production. To manage Kubernetes, most respondents are split between Azure AKS and AWS EKS, with 57% leveraging these services for development use cases and 68% using them in production.
  • The top Kubernetes deployment use cases are in a microservices architecture (86%), DevOps and CI/CD workflows (83%), and web application hosting (66%). This shows pretty broad dependency on Kubernetes throughout the SDLC, which is part of why we built vCluster, and why it is so effective. It lets teams collaborate more easily while still maintaining isolation and limiting wasted resources. 
  • Our vCluster users typically have a lot of experience and work at large enterprises - 72% of respondents have over 8 years' work experience, most at companies with over 1,000 employees. In fact, almost 60% of respondents have more than 10 years of experience, which reflects how complicated Kubernetes is, given that senior team members are leading the push to virtualize. It makes sense that enterprises, with the largest Kubernetes workloads, are most urgently looking for solutions to cut costs and complexity.
  • The vast majority of users run Kubernetes in public clouds - 75% in our survey. Leveraging public cloud providers and their managed services can help outsource operational responsibility, but deploying exclusively on a public cloud leads to limited flexibility and visibility into costs.
  • Kubernetes proficiency is actually quite high among our respondents: 57% ranked their proficiency highly, either a 4 or 5 on a scale from 1 to 5, while only about 15% ranked themselves a 1 or 2. Users are not as confident managing virtual clusters, given that Kubernetes virtualization is an emerging field - 30% ranked their skills at a 3, 29% a 4, and 7% a 5. However, it is encouraging to see that as organizations reach Kubernetes maturity, confidence in virtual clusters is catching up.

VMblog: You mentioned most respondents rate their Kubernetes proficiency highly, which may be surprising given its complexity. How do you view this result?

Gentele: It was a bit surprising, and encouraging, to see that users in our community have a strong grasp of Kubernetes. This is no small feat, because Kubernetes is inherently complex: users need to configure a heavy platform stack on each cluster. There is a lot of replication, and each cluster typically has 4-5 compute nodes - not to mention all of the day-two requirements like monitoring and observability tools, secret managers, and so on.

These results suggest that working with Kubernetes is really not optional at this stage in the cloud-native ecosystem; again, it is the tool of choice for development and production. When thinking about moving to virtual clusters, a lot of the work we do in evangelizing vCluster is convincing teams that no matter what their current system looks like, virtual clusters are the right architectural choice, for today and for the future. Even the most adept practitioners will face challenges maintaining Kubernetes workloads as they scale; we have seen financial institutions, for example, with thousands of clusters, which is daunting for developers of any skill level.

VMblog: How should teams at all levels of Kubernetes proficiency think about the transition to virtual clusters?

Gentele: A virtual cluster architecture drastically reduces the management burden for any team. Virtual clusters are essentially isolated, virtual Kubernetes environments running inside a single physical cluster. They operate identically to traditional Kubernetes clusters but without the heavyweight platform stack components, which significantly reduces operational overhead. Rather than configuring an entirely new platform stack every time a team needs a new cluster - running things like Istio and Open Policy Agent separately on each one - with vCluster, teams simply configure one underlying "real" cluster, and vCluster can spin up all the necessary virtual clusters in seconds.
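To make that workflow concrete, here is a minimal sketch that creates a few virtual clusters on one host cluster with the vcluster CLI. It assumes the CLI is installed and the current kubeconfig context points at the host cluster; the tenant names and namespaces are illustrative, and exact flags may differ between vCluster versions.

```python
# Illustrative sketch: spin up several virtual clusters on a single host
# cluster via the vcluster CLI (assumed installed; kubeconfig points at the
# host cluster). Tenant names are hypothetical; flags may vary by version.
import subprocess

TEAMS = ["team-a", "team-b", "team-c"]  # hypothetical tenants

for team in TEAMS:
    # Each virtual cluster lives in its own namespace on the single "real"
    # host cluster; the heavy platform stack (ingress, policy, observability)
    # is installed once on the host rather than once per team.
    subprocess.run(
        ["vcluster", "create", team, "--namespace", f"vcluster-{team}", "--connect=false"],
        check=True,
    )

# Developers then connect to an individual virtual cluster and use plain
# kubectl against it, exactly as they would against a dedicated cluster:
#   vcluster connect team-a --namespace vcluster-team-a
#   kubectl get namespaces
```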

While it may seem challenging to adopt a new architectural framework at first, one of vCluster's greatest strengths is ease of use. For one thing, we are a CNCF-certified Kubernetes distribution which makes the lift and shift seamless for teams already running on Kubernetes. Most importantly, the virtual cluster model takes advantage of what Kubernetes was designed to do in the first place. It was built to run as a large-scale distributed system with a multi-tenant architecture, and we are reintroducing this framework with an added layer of virtualization to make it more streamlined, efficient and cost-effective.
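One way to picture the "seamless lift and shift" point: because a virtual cluster exposes a conformant Kubernetes API, existing tooling should run against it unchanged. The sketch below uses the official Kubernetes Python client with a kubeconfig exported for a virtual cluster; the file path and export flag are assumptions and may vary by CLI version.

```python
# Minimal sketch, assuming a kubeconfig has been exported for the virtual
# cluster (e.g. `vcluster connect team-a --print > vcluster-team-a.yaml`;
# the export flag may differ by CLI version). Because the virtual cluster
# speaks the standard Kubernetes API, the official Python client works as-is.
from kubernetes import client, config

config.load_kube_config(config_file="vcluster-team-a.yaml")  # assumed path
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    print(dep.metadata.namespace, dep.metadata.name, dep.spec.replicas)
```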

VMblog: vCluster can be quite powerful for reducing cloud costs, which is important given most of your respondents run Kubernetes in public clouds. Can you say a bit more about vCluster for cloud cost management?

Gentele: Reducing cloud spend is one of vCluster's most powerful benefits; vCluster works with any cloud provider, so organizations can reap the benefits no matter where they deploy. It makes sense that 75% of respondents are running Kubernetes in public clouds, because this shifts operational responsibilities away from internal teams. However, utilizing public cloud providers exclusively means teams have limited visibility into costs. The main reasons for runaway cloud spend tied to Kubernetes are the heavy platform stack components and resource waste due to idle clusters; we once talked to an organization that was paying more than $10 million every year for Kubernetes for a single division.

Virtual clusters tackle this by making the cluster itself lightweight and ephemeral. Spinning up a vCluster takes only about six seconds, so teams can respond dynamically to workload demands. Each virtual cluster is also more isolated than traditional multi-tenant workloads, which reduces the security gaps - and the cost of fixing them - that come with shared tenancy. And we offer more granular billing capabilities than typical cloud providers, which makes it easy to attribute resource usage to specific tenants and ensure fair and accurate billing practices. Further, vCluster's "sleep mode" feature can detect whether anyone is currently working in a virtual cluster and automatically scale down nodes so that only necessary resources are being utilized. One Fortune 500 customer used vCluster sleep mode to save $6 million on cloud costs associated with Kubernetes sprawl, simply by detecting and shutting down idle resources. As genAI workloads continue to increase demand for cloud resources, the next layer of virtualization in Kubernetes will become even more critical.
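The idle-detection idea behind sleep mode can be illustrated with a rough sketch. This is hypothetical logic, not vCluster's actual implementation: it checks a tenant namespace on the host cluster for recently started pods and pauses virtual clusters that look idle. The threshold, tenant mapping, and exact CLI command behavior are all assumptions.

```python
# Hypothetical idle-detection sketch, not vCluster's actual sleep-mode code.
# It looks at a tenant namespace on the host cluster, checks when the most
# recent pod was started, and pauses virtual clusters that have been quiet
# longer than an assumed threshold.
from datetime import datetime, timedelta, timezone
import subprocess

from kubernetes import client, config  # official Kubernetes Python client

IDLE_AFTER = timedelta(hours=2)                 # assumed idle threshold
TENANTS = {"team-a": "vcluster-team-a"}         # hypothetical vcluster -> host namespace

config.load_kube_config()                       # host cluster context
core = client.CoreV1Api()
now = datetime.now(timezone.utc)

for vc_name, namespace in TENANTS.items():
    pods = core.list_namespaced_pod(namespace).items
    newest = max((p.status.start_time for p in pods if p.status.start_time), default=None)
    if newest is None or now - newest > IDLE_AFTER:
        # Scale the idle virtual cluster down until someone comes back
        # (`vcluster pause` / `vcluster resume` exist in the CLI, though exact
        # names and flags vary by version).
        subprocess.run(["vcluster", "pause", vc_name, "--namespace", namespace], check=True)
```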

VMblog: What are some of the key considerations for organizations building on Kubernetes that want to "future-proof" their systems as AI deployments continue to rise?

Gentele: Every engineering team should think critically about how their systems will scale as they deploy resource-intensive AI workloads. If you are an enterprise today and you need a server, nobody is going to plug in a physical server for you - we can see that the same thing will happen with Kubernetes. Five years from now, if you need a cluster, you are almost always going to get a virtual one, because they are so much more cost-effective and easier to operate. This goes back to the fact that virtual clusters can be spun up and down in response to tenant demands with minimal effort, which is great for AI and machine learning workloads based on GPUs.

In our survey, it is telling that 72% of respondents have over 8 years' work experience, and most work at organizations with over 1,000 employees. This suggests that the push to virtualize is driven by senior team members, and highlights the value of virtual clusters for enterprise-scale systems. Given that Kubernetes is known for its complexity, teams should trust these experienced colleagues who understand Kubernetes enough to see the power in transitioning to virtual clusters. It can be daunting to adopt emerging technologies like vCluster, but sticking with an inefficient cloud strategy will end up being much more costly down the line. For organizations eager to implement AI capabilities, the traditional Kubernetes approach will not be feasible moving forward.

VMblog: Is there anything else VMblog readers should know about these results? What should they expect from Loft Labs moving forward?

Gentele: Overall, these results show that Kubernetes is central to much of the important technology being developed today, which means we need innovative solutions to adapt it to the AI era. One of the most interesting data points is that 26.9% of respondents already deploy Kubernetes for machine learning and AI workloads, and that will only increase. Given the demands for dynamic resource configuration and scaling, virtual clusters will be particularly effective to future-proof IT systems and ensure successful AI deployments.

Loft is continuously working to provide new capabilities tailored to the needs of our users, which is why this survey is so valuable. One thing we are excited about is a cost calculator feature for vCluster, which will let users immediately see the potential cloud cost savings from switching to virtual clusters. I would also encourage anyone attending KubeCon North America to stop by our booth and meet the Loft team! We are sponsoring Platform Engineering Day on November 12, where I will be giving a brief keynote, and we would love to talk to you.

##

Published Friday, September 13, 2024 7:30 AM by David Marshall