DigitalOcean announced general availability of easy-to-deploy advanced AI infrastructure, offered in a pay-as-you-go model through the new DigitalOcean GPU Droplets. With the new GPU Droplets, AI developers can quickly and simply run AI experiments, train large language models, and scale AI projects - all without complex configurations or large capital investments.
With these latest offerings, DigitalOcean now provides a range of
flexible, high-performance GPU solutions - including on-demand virtual
GPUs, managed Kubernetes, and bare metal machines - that empower
developers and growing businesses to accelerate AI/ML implementations.
Powered by NVIDIA H100 GPUs, purpose-built for next-generation AI applications, DigitalOcean GPU Droplets are available in cost-friendly single-node as well as multi-node configurations. Unlike
other cloud providers that require several steps and technical knowledge
to configure security, storage, and network requirements, DigitalOcean
GPU Droplets can be set up with a few clicks on a single page.
DigitalOcean API users will also benefit from this simple setup and management, as GPU Droplets are fully integrated into the DigitalOcean API suite and can be spun up with a single API call.
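As a rough illustration of that single API call, here is a minimal Python sketch that creates a GPU Droplet through DigitalOcean's standard Droplet-creation endpoint (POST /v2/droplets). The size slug, region, and Droplet name used below are illustrative assumptions rather than values confirmed by this announcement; the GPU slugs actually offered are listed in the DigitalOcean control panel and API documentation.

```python
import os
import requests

# Personal access token with write scope, read from the environment.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]

# NOTE: "gpu-h100x1-80gb" (single-H100 size) and "tor1" (region) are assumed
# example values, not slugs confirmed by the announcement. "ubuntu-22-04-x64"
# is a standard base image; DigitalOcean also provides AI/ML-ready images.
resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "h100-experiment-1",    # any Droplet name
        "region": "tor1",               # assumed GPU-enabled region
        "size": "gpu-h100x1-80gb",      # assumed single-H100 size slug
        "image": "ubuntu-22-04-x64",    # base image for the Droplet
    },
    timeout=30,
)
resp.raise_for_status()
droplet = resp.json()["droplet"]
print(f"Created Droplet {droplet['id']} ({droplet['name']})")
```

Once created, the Droplet's provisioning status can be polled with GET /v2/droplets/{id} until it reports an active state.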
The company is also expanding its managed Kubernetes service to support NVIDIA H100 GPUs, bringing the full power of H100-enabled worker nodes to containerized Kubernetes environments.
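As a sketch of what that looks like in practice, the snippet below adds an H100 node pool to an existing DigitalOcean Kubernetes cluster via the node-pool endpoint (POST /v2/kubernetes/clusters/{cluster_id}/node_pools). The node-pool size slug is an assumption; substitute whichever H100 worker size your account exposes.

```python
import os
import requests

API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]   # personal access token
CLUSTER_ID = os.environ["DOKS_CLUSTER_ID"]     # UUID of an existing DOKS cluster

# NOTE: "gpu-h100x1-80gb" is an assumed single-H100 worker size slug,
# not a value confirmed by the announcement.
resp = requests.post(
    f"https://api.digitalocean.com/v2/kubernetes/clusters/{CLUSTER_ID}/node_pools",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "h100-pool",           # name for the new node pool
        "size": "gpu-h100x1-80gb",     # assumed H100 worker size slug
        "count": 1,                    # number of GPU worker nodes
    },
    timeout=30,
)
resp.raise_for_status()
pool = resp.json()["node_pool"]
print(f"Added node pool {pool['name']} with {pool['count']} node(s)")
```

Once the pool is provisioned, workloads can request GPUs in the usual Kubernetes way, via an nvidia.com/gpu resource limit in the pod spec, assuming the NVIDIA device plugin is present on the GPU worker nodes.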
These advanced AI infrastructure offerings lower the barriers to AI
development by providing fast, easy, and affordable access to
high-performance GPUs without requiring upfront investments in costly
hardware. The new building blocks, available immediately, are:
- DigitalOcean GPU Droplets: NVIDIA H100 GPU virtual servers, available in 1X and 8X configurations. While other cloud providers only offer the more expensive 8X configuration, a full node of eight NVIDIA GPUs, DigitalOcean is offering Droplets with as little as one NVIDIA H100, providing fast-growing companies with exactly the computing power they need, at a price they can afford.
- DigitalOcean Kubernetes GPU Support: The managed Kubernetes service supports NVIDIA H100 GPUs, also available in 1X and 8X configurations (the corresponding size slugs can be discovered through the API, as sketched below).
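For readers who want to map the 1X and 8X configurations above to concrete size slugs, a minimal discovery sketch follows. It lists the account's available Droplet sizes through the standard sizes endpoint (GET /v2/sizes) and keeps the GPU ones; filtering on "gpu" in the slug is an assumption about how the GPU sizes are named.

```python
import os
import requests

API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]  # personal access token

resp = requests.get(
    "https://api.digitalocean.com/v2/sizes",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"per_page": 200},  # fetch up to 200 sizes in one page
    timeout=30,
)
resp.raise_for_status()

# Keep only GPU sizes (assumes GPU slugs contain "gpu", e.g. H100 variants).
for size in resp.json()["sizes"]:
    if "gpu" in size["slug"]:
        print(size["slug"], "-", size["vcpus"], "vCPUs,", size["memory"], "MB RAM")
```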
Customers such as Story.com are already leveraging powerful H100 GPUs
from DigitalOcean to train their models and scale their businesses.
"Story.com's GenAI workflow demands heavy computational power, and
DigitalOcean's GPU nodes have been a game-changer for us," said Deep
Mehta, CTO and Co-Founder of Story.com. "As a startup, we needed a
reliable solution that could handle our intensive workloads, and
DigitalOcean delivered with exceptional stability and performance. From
seamless onboarding to rock-solid infrastructure, every part of the
process has been smooth. The support team is incredibly responsive and
quick to meet our requirements, making it an invaluable part of our
growth."
"We're making it easier and more affordable than ever for developers,
startups, and other innovators to build and deploy GenAI applications
and move them into production," said Bratin Saha, Chief Product and
Technology Officer at DigitalOcean. "To do that, they need access to
advanced AI infrastructure without the added cost and complexity. Our
GPUs as a service open this opportunity to a much broader user base."
Today's announcement is one of many steps that DigitalOcean is taking on
its roadmap to offer AI platforms and applications. The company's
forthcoming innovations include a brand-new generative AI platform
designed to make it easier for customers to configure and deploy the
best AI solutions for their needs, including agents such as chatbots.
With these innovations, DigitalOcean aims to democratize AI application
development by simplifying the otherwise complex AI tech stack. It will
provide pre-built components such as hosted LLMs, offer easy-to-use data ingestion pipelines, and let customers leverage their own knowledge bases, making it easy to create AI-powered applications.