MEF announced a cutting-edge demonstration of GPU-as-a-Service (GPUaaS) for AI at the Edge using MEF's Lifecycle Service Orchestration (LSO) APIs. In collaboration with Infosys, NVIDIA, and IronYun, MEF is showcasing the initiative this week at Mobile World Congress (MWC) in Barcelona, Spain, highlighting how service providers can monetize network infrastructure by offering enterprises scalable, real-time AI inferencing capabilities at the Edge.
The MWC showcase demonstrates a fully automated process for
enterprises to obtain pricing and place orders for GPU resources at the
Edge, leveraging MEF's standardized LSO APIs.
- Infosys,
a global leader in next-generation digital services and consulting,
presents a seamless ordering process that integrates service provider
capabilities with enterprise systems, enabling AI models to function
effectively at the Edge.
- IronYun, a leader in video
analytics, demonstrates security, safety and operational applications of
the Vaidio AI Vision Platform running on GPUs at the Edge.
This initiative marks a significant milestone in MEF's AI strategy, driving the evolution of AI-powered networks.
Unlocking AI at the Edge: A Game-Changer for Service Providers and Enterprises
The rise of AI-driven applications demands powerful GPU resources close to data sources. Traditional cloud-based AI processing introduces latency, making Edge computing a critical solution. MEF's Edge Compute Infrastructure-as-a-Service (IaaS) standard defines Edge IaaS, enabling Cloud Service Providers and Subscribers to compare offerings using a common framework. The next iteration expands its scope to include GPUaaS, standardizing how service providers deliver AI at the Edge with reduced latency and opening new revenue opportunities.
"This
initiative is a major leap forward in AI at the provider edge," said
Pascal Menezes, CTO, MEF. "By enabling service providers to offer
GPU-as-a-Service, we are empowering enterprises to run AI inferencing at
the Edge with greater scalability and efficiency. With this
announcement, MEF, Infosys, NVIDIA, and IronYun are setting a new
benchmark for AI services, paving the way for a future where AI at the
Edge is seamlessly accessible, scalable, and monetizable."
A Fully Standardized, On-Demand AI Ecosystem
MEF's LSO APIs ensure interoperability and automation across service providers. Key features of GPUaaS include:
- On-Demand GPU Resources - Enterprises can access high-performance GPUs at the Edge for AI inferencing without heavy upfront investments.
- Seamless Ordering & Deployment - MEF's API framework enables automated ordering, quoting, and activation of GPU resources across multiple providers.
- Optimized AI Performance - Low-latency Edge computing enhances AI-driven applications, such as real-time video analytics and intelligent traffic management.
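To make the quote-then-order flow concrete, below is a minimal buyer-side sketch in Python. It assumes a Sonata-style quoting and product-ordering interface exposed by the service provider; the base URL, product-offering identifier, payload fields, and API paths are illustrative placeholders, not values defined in the MEF LSO specifications or by any of the companies named here.

```python
# Minimal sketch of a buyer-side flow: request a quote for Edge GPU capacity,
# then place an order against it over Sonata-style APIs.
# The endpoint paths, offering ID, and payload fields are illustrative assumptions.
import requests

BASE_URL = "https://provider.example.com/mefApi/sonata"  # hypothetical provider endpoint
OFFERING_ID = "edge-gpuaas-inference"                    # hypothetical GPUaaS product offering

def request_quote(session: requests.Session) -> dict:
    """Ask the provider to price Edge GPU capacity for AI inferencing."""
    payload = {
        "instantSyncQuote": True,
        "quoteItem": [{
            "id": "1",
            "action": "add",
            "product": {
                "productOffering": {"id": OFFERING_ID},
                # Illustrative configuration: GPU count and target Edge site.
                "productConfiguration": {"gpuCount": 2, "edgeSite": "BCN-EDGE-01"},
            },
        }],
    }
    resp = session.post(f"{BASE_URL}/quoteManagement/v1/quote", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

def place_order(session: requests.Session, quote: dict) -> dict:
    """Convert an accepted quote into a product order for activation."""
    payload = {
        "productOrderItem": [{
            "id": "1",
            "action": "add",
            "quoteItem": {"quoteId": quote.get("id"), "id": "1"},
        }],
    }
    resp = session.post(f"{BASE_URL}/productOrderingManagement/v1/productOrder",
                        json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with requests.Session() as s:
        s.headers.update({"Authorization": "Bearer <token>"})  # provider-issued credentials
        quote = request_quote(s)
        order = place_order(s, quote)
        print("Quote state:", quote.get("state"), "| Order state:", order.get("state"))
```

In a production integration, these calls would typically be generated from the published LSO API definitions and wired into the enterprise's procurement or orchestration systems rather than hand-written as above.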
Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, "Unlocking AI at the Edge is crucial for enterprises to fully tap into AI's potential. By integrating GPU-as-a-Service, Infosys empowers enterprises to run AI inferencing with lower latency and greater efficiency. Our solutions, built on advanced GPU resources and powered by Infosys Topaz and Infosys Cobalt, deliver scalable, high-performance AI at the Edge. Through our collaboration with MEF to standardize GPU-as-a-Service, we're setting a new industry benchmark, enabling enterprises to harness AI for real-world impact."
"At IronYun, we've redefined what's possible
in video analytics by embedding intelligence into every layer of the
Vaidio platform, delivering unmatched accuracy, scalability, and compute
efficiency," said Marshall Tyler, CEO of IronYun. "We truly appreciate
the opportunity to partner with MEF to showcase our advanced vision AI
through this groundbreaking GPU-as-a-Service initiative. By combining
deployment flexibility with real-time inferencing power at the Edge,
Vaidio empowers providers to monetize their networks, and enables
enterprises in all sectors to unlock new levels of security and
operational efficiency."