Red Hat announced the general availability of
Red Hat Enterprise Linux (RHEL) AI
across the hybrid cloud. RHEL AI is Red Hat's foundation model platform
that enables users to more seamlessly develop, test and run generative
AI (gen AI) models to power enterprise applications. The platform brings
together the open source-licensed
Granite large language model (LLM) family
and InstructLab model alignment tools, based on the Large-scale
Alignment for chatBots (LAB) methodology, packaged as an optimized,
bootable RHEL image for individual server deployments across the hybrid
cloud.
While gen AI's promise is immense, the associated costs of procuring,
training and fine-tuning LLMs can be astronomical, with some leading
models costing nearly $200 million
to train before launch. This does not include the cost of aligning a model to
the specific requirements or data of a given organization, which
typically requires data scientists or highly specialized developers. No
matter which model is selected for a given application, alignment is still
required to bring it in line with company-specific data and processes,
making efficiency and agility key for AI in actual production
environments.
Red Hat believes that over the next decade, smaller, more efficient and
built-to-purpose AI models will make up a substantial share of the enterprise
IT stack, alongside cloud-native applications. But to achieve this, gen
AI needs to be more accessible and available, from its costs to its
contributors to where it can run across the hybrid cloud. For decades,
open source communities have helped solve similar challenges for complex
software problems through contributions from diverse groups of users; a
similar approach can lower the barriers to effectively embracing gen
AI.
An open source approach to gen AI
These are the challenges that RHEL AI intends to address: making gen AI
more accessible, more efficient and more flexible for CIOs and
enterprise IT organizations across the hybrid cloud. RHEL AI helps:
- Empower gen AI innovation with enterprise-grade, open source-licensed Granite models aligned with a wide variety of gen AI use cases.
- Streamline aligning gen AI models to business requirements with InstructLab tooling, making it possible for domain experts and developers within an organization to contribute unique skills and knowledge to their models, even without extensive data science expertise.
- Train and deploy gen AI anywhere across the hybrid cloud by providing all of the tools needed to tune and deploy models on production servers wherever the associated data lives. RHEL AI also provides a ready on-ramp to Red Hat OpenShift AI for training, tuning and serving these models at scale using the same tooling and concepts; a brief serving example follows this list.
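To make the deployment model concrete, here is a minimal sketch of how an application might query a Granite model that has been tuned and served on a RHEL AI host. It assumes the model is exposed through an OpenAI-compatible chat completions endpoint; the endpoint URL, API key handling and model name are illustrative assumptions, not documented RHEL AI defaults.

```python
# Minimal sketch (assumptions noted in comments): query a tuned Granite model
# served behind an OpenAI-compatible chat completions endpoint on a RHEL AI host.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",      # assumed local inference endpoint
    api_key="not-needed-for-local-serving",    # many local servers ignore the key
)

response = client.chat.completions.create(
    model="granite-7b-lab",  # hypothetical identifier for a tuned Granite model
    messages=[
        {"role": "system", "content": "Answer using company policy documents."},
        {"role": "user", "content": "Summarize our expense approval process."},
    ],
    temperature=0.2,
)

# Print the model's reply
print(response.choices[0].message.content)
```

Because the client only depends on a widely supported API shape, the same code can point at a model served on an on-premises server, an edge node or a cloud instance, which is the portability described in the list above.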
RHEL AI is also backed by the benefits of a Red Hat subscription, which
includes trusted enterprise product distribution, 24x7 production
support, extended model lifecycle support and Open Source Assurance legal protections.
RHEL AI extends across the hybrid cloud
Bringing a more consistent foundation model platform closer to where an
organization's data lives is crucial in supporting production AI
strategies. As an extension of Red Hat's hybrid cloud portfolio, RHEL AI
will span nearly every conceivable enterprise environment, from
on-premises datacenters to edge environments to the public cloud. This
means that RHEL AI will be available directly from Red Hat, from Red Hat's original equipment manufacturer (OEM) partners,
and to run on the world's largest cloud providers, including Amazon Web
Services (AWS), Google Cloud, IBM Cloud and Microsoft Azure. This
enables developers and IT organizations to use the power of hyperscaler
compute resources to build innovative AI concepts with RHEL AI.
Availability
RHEL AI is generally available today via the Red Hat Customer Portal to
run on premises or for upload to AWS and IBM Cloud as a "bring your own
subscription" (BYOS) offering. Availability of a BYOS offering on Azure
and Google Cloud is planned for Q4 2024, and RHEL AI is also expected to
be available on IBM Cloud as a service later this year.
Red Hat plans to further expand its roster of RHEL AI cloud and OEM
partners in the coming months, providing even more choice across hybrid cloud environments.