Virtualization Technology News and Information
Mirantis and Gcore Partner to Tackle AI Infrastructure Challenges
Mirantis and Gcore announced an agreement to facilitate the deployment of artificial intelligence (AI) workloads. The cornerstone of the collaboration is the integration of Gcore Everywhere Inference with the Mirantis k0rdent open-source platform, enabling users to scale AI inference workloads globally.

"Enterprise AI adoption has entered a new phase and open source has a critical role to play - bridging public, private and managed service clouds, so that users can maintain autonomy and control over their global infrastructure," said Alex Freedland, CEO, Mirantis. "Combining our expertise and commitment to open source technologies with Gcore's AI expertise will accelerate our ability to create solutions and critical capabilities to address these issues for MLOps and platform engineers."

Mirantis recently launched k0rdent for large-scale application management across any infrastructure, and intends to integrate Gcore Everywhere Inference to help global organizations deliver AI inference wherever it is needed. The integration aims to optimize compute resource allocation, simplify AI model deployment, enhance performance monitoring and cost management, and streamline compliance with regional data-sovereignty requirements. The technology can be deployed in the cloud, on-premises, or in hybrid and edge environments.

"The collaboration between Mirantis and Gcore addresses today's AI inference challenges by combining scalable infrastructure management with efficient workload deployment," said Seva Vayner, product director, Edge Cloud and AI, Gcore. "Mirantis' recently announced k0rdent project provides platform engineers with a Kubernetes-native, open-source solution for managing infrastructure sprawl and operational complexity across multi-cloud and hybrid environments. With the integration of Gcore Everywhere Inference, an accelerator-agnostic solution for managing AI inference workloads, the project will provide businesses with an easy-to-use platform for deploying and operating distributed AI inference at scale."
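Because k0rdent is Kubernetes-native, inference workloads on such a platform are ultimately described as ordinary Kubernetes objects. As a rough illustration only, the sketch below shows a standard Kubernetes Deployment for a GPU-backed inference service; the names, container image, and resource key are hypothetical assumptions and do not reflect the actual k0rdent or Everywhere Inference APIs:

```yaml
# Hypothetical sketch: a plain Kubernetes Deployment for an inference
# service. Image, names, and labels are placeholders, not real APIs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-demo
  labels:
    app: inference-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-demo
  template:
    metadata:
      labels:
        app: inference-demo
    spec:
      containers:
        - name: model-server
          image: registry.example.com/model-server:latest  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: 1  # request one GPU per replica
```

In a multi-cluster setup of the kind described above, a management layer would be responsible for placing objects like this one onto the appropriate cloud, on-premises, or edge cluster.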

"Deploying AI at scale today can be time and resource consuming: under the wrong setup, it can take businesses a long time to onboard new GPUs, and hours or even days to deploy new models," said Misch Strotz, CEO and co-founder of LetzAI. "Partnerships like the one between Gcore and Mirantis can simplify this: model deployment can be done in a few clicks and new GPUs can be onboarded within hours, enabling infrastructure and ML teams to be much more productive."

Gcore, as a global AI infrastructure provider, has helped enterprises navigate AI adoption. Now, with the increased demand for AI inference, Gcore Everywhere Inference helps businesses efficiently leverage their resources when deploying AI inference, improving time-to-market and ROI from AI projects.

Published Tuesday, March 18, 2025 8:58 AM by David Marshall
Comments
@VMblog - April 2, 2025 8:19 AM

Mirantis announced that Nebul has deployed open source k0rdent to deliver an on-demand service that enables customers to run production AI inference workloads.
