Vultr and Domino Data Lab announced the integration of Domino Nexus with
Vultr Kubernetes Engine. The integration helps businesses gain a
competitive advantage in the era of generative AI by seamlessly bursting
cutting-edge AI workloads to GPU-accelerated compute clusters across
cloud and on-premises environments, accelerating innovation while
balancing compute cost, performance, and availability.
The integration delivers on the companies' recently announced partnership,
which gives enterprise data science teams unparalleled access to
state-of-the-art NVIDIA-powered cloud infrastructure on Vultr, including
NVIDIA A100 and H100 Tensor Core GPUs, to train, deploy, and manage their
own deep learning models with speed, flexibility, and affordability.
Vultr and Domino are both members of the NVIDIA Partner Network program.
"Customers seeking AI-driven competitive advantage must grapple with
staggering GPU demand and cost pressures," said Nick Elprin, CEO and
co-founder at Domino Data Lab. "Our integration with Vultr provides
enterprises on-demand compute to keep developing cutting-edge AI without
budget overspend."
The new joint offering is underpinned by Vultr Kubernetes Engine (VKE) and
Nexus, Domino's hybrid- and multi-cloud architecture, breaking down data
science silos and opening up flexible compute options with cost,
performance, and scale in mind. Built around a commitment to openness,
flexibility, and open standards, it further democratizes AI innovation
for teams of any scale, budget, and location.
- Unified Data Science: Domino Nexus' unified MLOps platform
orchestrates governed, self-service access to common data science
tooling and infrastructure across all environments, including Vultr,
alleviating infrastructure capacity and data sovereignty challenges
during model training.
- Flexible and Interoperable: Domino's Kubernetes-native platform runs
seamlessly on VKE. The CNCF-certified, MACH-compliant VKE provides
automated container orchestration with support for geographically
redundant clusters, so users can confidently scale data science
workloads across Vultr's worldwide locations without fear of vendor
lock-in or outages (see the sketch after this list).
- Cost Effective and Agile: Vultr offers a variety of full and
fractional NVIDIA A100 and NVIDIA H100 Tensor Core GPU configurations,
giving enterprises the agility to optimize infrastructure based on AI
workload demands at significantly lower cost. Data transfer costs are
minimized by Vultr's global bandwidth pricing plan.
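To make the Kubernetes-native point above concrete, the sketch below shows how a GPU-backed training job could be submitted to a Kubernetes cluster such as VKE using the official Kubernetes Python client. This is an illustrative example only, not Domino's or Vultr's actual interface; the job name, container image, namespace, and GPU count are placeholder assumptions.

```python
# Illustrative sketch only: submitting a single-GPU training Job to a
# Kubernetes cluster (e.g., a VKE cluster) with the official Kubernetes
# Python client. Image, job name, and namespace are hypothetical.
from kubernetes import client, config

# Load cluster credentials from a kubeconfig file, such as the one VKE provides.
config.load_kube_config()

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="gpu-training-demo"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="registry.example.com/train:latest",  # hypothetical image
                        command=["python", "train.py"],
                        resources=client.V1ResourceRequirements(
                            # Request one NVIDIA GPU via the standard device-plugin resource name.
                            limits={"nvidia.com/gpu": "1"},
                        ),
                    )
                ],
            )
        ),
    ),
)

# Create the Job in the cluster; the scheduler places it on a GPU-equipped node.
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

In the integrated offering itself, Domino Nexus handles this orchestration on users' behalf; the sketch simply illustrates that any workload expressible as a standard Kubernetes job can run unchanged on VKE's GPU-backed nodes.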
"Rapid access to cost-effective compute is critical to fuel today's
record demand to train and deploy large language models and generative
AI," said Andy Thurai, Vice President and Principal Analyst,
Constellation Research. "By unifying cutting-edge hardware and software
components in one solution, Domino's integration with Vultr offers
companies pursuing rapid AI innovation a single way to handle workloads
without delays or cost overages."
"Vultr is committed to democratizing access to high performance cloud
computing, so that our customers can focus on driving innovation without
worrying about cost, data sovereignty, and security," said J.J.
Kardwell, CEO of Vultr. "With Domino Data Lab, we have created a
best-in-class solution that will empower machine learning and data
science practitioners to solve the world's most pressing problems across
a wide range of industries."
Availability
The data plane functionality of this joint MLOps and Compute (Cloud and
GPU) solution is available today to Domino Nexus customers, allowing
them to add Vultr cloud compute environments to their existing Domino
deployments. Full Domino platform deployments on Vultr will be available
by Summer 2023.