DENVER – RED HAT SUMMIT 2024 – May 7, 2024 – Red Hat, Inc., the world’s leading provider of open source solutions, announced the launch of Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform that enables users to more seamlessly develop, test and deploy generative AI (GenAI) models.
The headliners are Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform for developing and running open source language models, and InstructLab, a community project to empower domain experts to enhance AI models with their knowledge.
“RHEL AI and the InstructLab project, coupled with Red Hat OpenShift AI at scale, are designed to lower many of the barriers facing GenAI across the hybrid cloud, from limited data science skills to the sheer resources required, while fueling innovation both in enterprise deployments and in upstream communities.”
Ashesh Badani
Senior Vice President and Chief Product Officer, Red Hat
(Resource: Red Hat Delivers Accessible, Open Source Generative AI Innovation with Red Hat Enterprise Linux AI)
How Red Hat stands apart from other companies integrating and offering open source AI
According to Red Hat CEO Matt Hicks, RHEL AI distinguishes itself from the competition in a few key ways.
Primarily, Red Hat is focused on open source and a hybrid approach. “We believe that AI is not really different than applications. That you’re going to need to train them in some places, run them in other places. And we’re neutral to that hardware infrastructure. We want to run anywhere,” said Hicks.
Additionally, Red Hat has a proven track record of optimizing performance across different hardware stacks. “We have a long history of showing that we can make the most out of the hardware stacks below us. We don’t produce GPUs. I can make Nvidia run as fast as they can. I can make AMD run as fast as they can. I can do the same with Intel and Gaudi,” explained Hicks.
This ability to extract maximum performance from several hardware stacks, while still giving customers a choice of where and on what hardware they run, is rare in the market.
Finally, Red Hat’s open source approach means customers retain ownership of their IP. “It’s still your IP. We provide that service and subscription business, and you’re not giving up your IP to work with us on that,” said Hicks.
(Resource: Red Hat unveils RHEL AI and InstructLab to democratize enterprise AI)
For a closer look at Red Hat Enterprise Linux, Red Hat Enterprise Linux AI, Red Hat OpenShift AI and InstructLab, let’s get started.
Red Hat launches RHEL AI
The whole solution is packaged as a bootable RHEL image for individual server deployments across the hybrid cloud and is included as part of OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform for running models and InstructLab at scale across distributed cluster environments. RHEL AI provides a supported, enterprise-ready runtime environment for AI models across AMD, Intel and Nvidia hardware platforms, Red Hat said.
To lower the entry barriers for AI innovation, enterprises need to be able to expand the roster of who can work on AI initiatives while simultaneously getting these costs under control. With InstructLab alignment tools, Granite models and RHEL AI, Red Hat aims to apply the benefits of true open source projects – freely accessible and reusable, transparent and open to contributions – to GenAI in an effort to remove these obstacles.
Also read: Kairos: Empowering On-Premises Environments with Cloud-Native Meta-Linux Distribution
Building AI in the open with InstructLab
IBM Research created the Large-scale Alignment for chatBots (LAB) technique, an approach for model alignment that uses taxonomy-guided synthetic data generation and a novel multi-phase tuning framework.
After seeing that the LAB method could considerably improve model performance, IBM and Red Hat decided to launch InstructLab, an open source community built around the LAB method and the open source Granite models from IBM. The InstructLab project intends to put LLM development into the hands of developers by making building and contributing to an LLM as simple as contributing to any other open source project.
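To make the taxonomy-guided idea concrete, here is a minimal Python sketch of what LAB-style synthetic data generation does at a high level: seed question-and-answer examples contributed to a taxonomy are expanded into additional training pairs by a teacher model. The field names and the generate_with_teacher stub are illustrative placeholders under stated assumptions, not the InstructLab implementation.

# Minimal sketch of taxonomy-guided synthetic data generation (illustrative only;
# this is not the InstructLab/LAB implementation).
import yaml  # pip install pyyaml

# A seed contribution in the spirit of an InstructLab taxonomy qna.yaml entry.
# Field names are illustrative of the community format, not authoritative.
SEED_QNA = """
task_description: Answer questions about Red Hat Enterprise Linux AI
seed_examples:
  - question: What is RHEL AI?
    answer: A foundation model platform for developing and running open source LLMs.
  - question: Which models ship with RHEL AI?
    answer: Open source-licensed Granite language and code models.
"""

def generate_with_teacher(prompt: str) -> str:
    """Placeholder for a call to a teacher model (e.g. an LLM completion API)."""
    return f"[synthetic variation of] {prompt}"

def synthesize(qna_yaml: str, variations_per_seed: int = 2) -> list[dict]:
    """Expand seed Q&A pairs into extra synthetic training pairs."""
    seeds = yaml.safe_load(qna_yaml)["seed_examples"]
    synthetic = []
    for seed in seeds:
        for _ in range(variations_per_seed):
            prompt = (f"Write a new question and answer on the same topic as:\n"
                      f"Q: {seed['question']}\nA: {seed['answer']}")
            synthetic.append({"source": seed["question"],
                              "generated": generate_with_teacher(prompt)})
    return synthetic

if __name__ == "__main__":
    for pair in synthesize(SEED_QNA):
        print(pair)

The generated pairs would then feed the multi-phase tuning step described above; in the real project, the teacher model and tuning pipeline are provided by the InstructLab tooling.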
As part of the InstructLab launch, IBM has also released a family of select Granite English language and code models in the open. These models are released under an Apache license with transparency on the datasets used to train them. The Granite 7B English language model has been integrated into the InstructLab community, where end users can contribute skills and knowledge to collectively enhance this model, just as they would when contributing to any other open source project. Similar support for Granite code models within InstructLab will be available soon.
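For readers who want to experiment with the openly released Granite models directly, a minimal Hugging Face Transformers sketch might look like the following; the model identifier is assumed for illustration and should be replaced with the actual repository name of the Granite model being used.

# Minimal sketch: generate text with an openly released Granite model via
# Hugging Face Transformers. The model ID below is illustrative; substitute
# the actual Granite repository you want to use. Requires transformers and
# accelerate to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-7b-base"  # assumed/illustrative model repository

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Red Hat Enterprise Linux AI is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))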
Open source AI innovation on a trusted Linux backbone
RHEL AI builds on an open approach to AI innovation, integrating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to streamline deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open source-licensed GenAI models into the enterprise. RHEL AI includes:
Open source-licensed Granite language and code models that are maintained and indemnified by Red Hat.
A supported, lifecycled distribution of InstructLab that offers a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much broader range of users.
Optimized bootable model runtime instances with Granite models and InstructLab tooling packaged as bootable RHEL images via RHEL image mode, including optimized PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel and NVIDIA GPUs and NeMo frameworks (a short runtime check follows this list).
Red Hat’s complete enterprise support and lifecycle promise, starting with a trusted enterprise product distribution, 24×7 production support and extended lifecycle support.
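Since those runtime images ship an accelerator-enabled PyTorch stack, a quick way to confirm what the environment actually sees is a short device check. The snippet below is a generic sketch rather than Red Hat tooling; backend attribute names differ between PyTorch builds, and the XPU branch assumes a build with Intel GPU support.

# Generic sketch: report which accelerator backend the PyTorch runtime can see.
# Not Red Hat tooling; backend attribute names vary across PyTorch builds.
import torch

print("PyTorch:", torch.__version__)

# NVIDIA CUDA and AMD ROCm builds both surface devices through torch.cuda;
# torch.version.hip is set only on ROCm builds.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    for i in range(torch.cuda.device_count()):
        print(f"{backend} device {i}: {torch.cuda.get_device_name(i)}")
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    # Intel GPUs appear as XPU devices on builds with Intel GPU support.
    for i in range(torch.xpu.device_count()):
        print(f"XPU device {i}: {torch.xpu.get_device_name(i)}")
else:
    print("No GPU accelerator visible; falling back to CPU.")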
As organizations experiment and tune new AI models on RHEL AI, they have a ready on-ramp for scaling these workflows with Red Hat OpenShift AI, which will include RHEL AI, and where they can leverage OpenShift’s Kubernetes engine to train and serve AI models at scale and OpenShift AI’s integrated MLOps capabilities to manage the model lifecycle.
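In practice, applications consume a model served this way over HTTPS. The sketch below assumes an OpenAI-compatible completions endpoint, as exposed by common serving runtimes such as vLLM; the route URL, token and model name are placeholders for illustration, not details from Red Hat’s announcement.

# Minimal sketch: call a model served behind an OpenAI-compatible endpoint.
# URL, token and model name are placeholders; adjust to your deployment.
import requests

ENDPOINT = "https://granite.apps.example.com/v1/completions"  # hypothetical route
TOKEN = "REPLACE_ME"                                           # hypothetical credential

payload = {
    "model": "granite-7b",            # name as registered with the serving runtime
    "prompt": "Summarize what RHEL AI provides in one sentence.",
    "max_tokens": 80,
}
resp = requests.post(ENDPOINT,
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=payload,
                     timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])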
Also read: Introduction to Proxmox VE 8.1 – Part 1
The cloud is hybrid. So is AI.
This momentum behind leading open source technologies continues with Red Hat powering AI/ML strategies across the open hybrid cloud, enabling AI workloads to run where data lives, whether in the datacenter, across multiple public clouds or at the edge. More than just the workloads, Red Hat’s vision for AI carries model training and tuning down this same path to better address limitations around data sovereignty, compliance and operational integrity. The stability delivered by Red Hat’s platforms across these environments, no matter where they run, is crucial in keeping AI innovation flowing.
RHEL AI and the InstructLab community further deliver on this vision, breaking down numerous barriers to experimenting with and building AI models while providing the tools, data and concepts needed to fuel the next wave of intelligent workloads.
Availability
Red Hat Enterprise Linux AI is now available as a developer preview. Building on the GPU infrastructure available on IBM Cloud, which is used to train the Granite models and support InstructLab, IBM Cloud will add support for RHEL AI and OpenShift AI. This integration will allow enterprises to deploy generative AI more easily into their mission-critical applications.
FAQs
What is Red Hat OpenShift AI architecture?
Red Hat OpenShift AI is built on the upstream project Open Data Hub, a blueprint for building an AI-as-a-service platform on Red Hat’s Kubernetes-based OpenShift Container Platform. Open Data Hub is a meta-project that integrates over 20 open source AI/ML projects into a practical solution.
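To make that architecture concrete from the cluster side, the following sketch lists model-serving resources with the Kubernetes Python client. It assumes a KServe-based serving stack (the serving.kserve.io API group), a kubeconfig with access to the cluster, and a placeholder namespace name; these assumptions go beyond the announcement itself.

# Sketch: list KServe InferenceService objects in an OpenShift AI data science
# project, using the Kubernetes Python client. Assumes KServe-based serving and
# a valid kubeconfig; the namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()                 # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

NAMESPACE = "my-data-science-project"     # hypothetical project/namespace

services = api.list_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace=NAMESPACE,
    plural="inferenceservices",
)
for item in services.get("items", []):
    name = item["metadata"]["name"]
    url = item.get("status", {}).get("url", "<no url yet>")
    print(f"{name}: {url}")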
What is the full form of RHEL?
RHEL stands for Red Hat Enterprise Linux, Red Hat’s enterprise operating system.
What is OpenShift AI?
Red Hat® OpenShift® AI is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments.
Are Red Hat and RHEL the same?
No. Red Hat is the company; RHEL (Red Hat Enterprise Linux), earlier known as Red Hat Linux Advanced Server, is its flagship operating system and is certified with thousands of vendors and across hundreds of clouds.
Key takeaways:
RHEL AI builds on an open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to streamline deployment across a hybrid infrastructure environment. This forms a foundation model platform for bringing open source-licensed GenAI models into the enterprise.
Meanwhile, a rapidly growing ecosystem of open model options has spurred further AI innovation and illustrated that there won’t be “one model to rule them all.”