Parallel Works announced the launch of ACTIVATE AI, a powerful evolution of its enterprise-grade infrastructure platform. With AI Resource Integrations, Kubernetes, and Neocloud support, ACTIVATE AI dramatically simplifies the deployment and management of scalable AI and ML workloads.
AI and high-performance computing (HPC) are converging, but most enterprises need flexible, scalable AI solutions that avoid overprovisioning. Operationalizing AI is challenging due to fragmented infrastructure, high GPU costs, underutilized GPU resources, and the lack of a clear path to production-ready AI. ACTIVATE AI addresses these hurdles with a unified, scalable AI solution that works across any cloud infrastructure.
The ACTIVATE AI platform empowers infrastructure teams and end users to integrate Kubernetes clusters into the environments they rely on for HPC, virtualized, and remote desktop workloads, without requiring deep Kubernetes expertise.
"Enterprise
leaders need AI to be a competitive advantage, but most enterprises are stuck
navigating fragmented infrastructure and mounting costs, which can turn AI
initiatives into a strategic liability," said Matthew Shaxted, CEO of Parallel
Works. "ACTIVATE AI is a powerful and flexible control plane that helps
organizations bring all their systems - Kubernetes, legacy or cloud-native -
into a single framework to scale AI faster and smarter. Our goal is to bridge
the final gap between complex infrastructure and practical AI deployment,
making systems accessible without requiring deep infrastructure expertise."
ACTIVATE AI: Delivering Scalable, Production-Ready AI
ACTIVATE AI eliminates barriers between enterprises and the compute power they need, enabling more efficient AI infrastructure management across hybrid and GPU-rich deployments. It also enables early adopters across industries, including aerospace, defense, climate modeling, and water forecasting, to unlock new opportunities.
Purpose-built to accelerate the shift from research to production in AI workflows, ACTIVATE AI allows enterprises to run large-scale model training, inference, and simulation workloads across secure environments, including GPU-as-a-Service (GPUaaS) clouds, legacy systems, and next-generation containerized systems.
With advanced features that help IT and infrastructure teams unify orchestration, chargeback, and GPU utilization, ACTIVATE AI supports both on-premises systems and emerging GPUaaS Neocloud providers. By delivering control and performance adaptability across diverse environments, the platform helps organizations scale smart, not just big, and achieve production-ready AI with efficiency and ease.
"ACTIVATE
AI will streamline our computational workflows and allow our team to deploy AI
models across diverse environments," said Junk Wilson, SVP, Government
Relations & Compliance, Orion Space Solutions, an Arcfield company.
"This flexibility will be crucial in advancing our mission objectives
efficiently."
ACTIVATE AI capabilities include:
- Kubernetes Support. Plug into existing Kubernetes clusters, on-premises or in the cloud, and gain fine-grained control over user access and namespace resource allocation (GPU, CPU, RAM) across projects and teams (see the namespace quota sketch after this list).
- Chargeback and Showback. Assign usage-based pricing and track internal resource consumption across Kubernetes clusters for accurate budgeting, optimization, and accountability (a toy showback calculation also follows the list).
- GPU Fractionalization. Seamlessly manage and allocate GPU resources across multiple users on both MIG-enabled and non-MIG GPUs, with dynamic partitioning powered by Juice Labs integration.
- Multi-Infrastructure Orchestration. Seamlessly run and migrate workloads across Kubernetes, batch schedulers, and virtualized environments, making ACTIVATE AI a true hybrid orchestrator.
- Hardware Flexibility. Run and optimize workloads across NVIDIA, AMD, and Intel AMX architectures, ensuring portability across heterogeneous environments.
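To make the Kubernetes Support item concrete, the sketch below shows the kind of namespace-level resource allocation it describes, written against the standard open-source Kubernetes Python client rather than ACTIVATE AI's own interface (which the announcement does not detail). The namespace name and quota values are hypothetical.

```python
# Minimal sketch: cap a team's namespace at fixed CPU, RAM, and GPU requests.
# Uses the standard Kubernetes Python client; namespace and limits are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota", namespace="team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "64",            # total CPU cores the namespace may request
            "requests.memory": "256Gi",      # total RAM the namespace may request
            "requests.nvidia.com/gpu": "4",  # total GPUs the namespace may request
        }
    ),
)
client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```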
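Likewise, the Chargeback and Showback item amounts to metering per-team resource consumption and pricing it. The toy calculation below only illustrates that idea under assumed hourly rates; it is not taken from the product.

```python
# Toy showback calculation: price metered usage per team under assumed hourly rates.
# The rates and usage records are hypothetical illustrations, not product data.
RATES = {"gpu_hours": 2.50, "cpu_hours": 0.04, "ram_gb_hours": 0.005}  # USD, assumed

usage = {
    "team-a": {"gpu_hours": 120, "cpu_hours": 4000, "ram_gb_hours": 16000},
    "team-b": {"gpu_hours": 30, "cpu_hours": 900, "ram_gb_hours": 3500},
}

for team, metered in usage.items():
    cost = sum(RATES[resource] * amount for resource, amount in metered.items())
    print(f"{team}: ${cost:,.2f}")
```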
"As enterprises move from AI
experimentation to production, managing the complexity of hybrid infrastructure
has become a major bottleneck," said Mike Leone, Practice Director and
Principal Analyst, Data Management, Analytics and AI, Enterprise Strategy
Group. "Solutions like ACTIVATE AI reflect a growing need for orchestration
platforms that can unify Kubernetes, GPU-as-a-Service, and legacy systems to
help organizations manage resources more efficiently across diverse
environments."
Parallel Works collaborates with leading Neocloud providers to validate ACTIVATE AI across a range of real-world scenarios, including secure, encrypted computing environments and large-scale GPU clusters and data centers in regions such as Europe and Iceland that support data residency requirements. These collaborations are part of a growing ecosystem of Neocloud partners working with Parallel Works to enable secure, scalable AI infrastructure globally.
Pricing and Availability
ACTIVATE AI is available immediately and included with existing ACTIVATE user seat licenses.