
Red Hat Introduces OpenShift AI 2.15 with Enhanced AI/ML Capabilities for Hybrid Cloud Deployment

Red Hat, a global leader in open-source solutions, has unveiled Red Hat OpenShift AI 2.15, the latest version of its AI and machine learning (ML) platform that enables businesses to build and scale AI-driven applications across hybrid cloud environments.

The updated OpenShift AI 2.15 offers improved flexibility, tuning, and tracking features, designed to accelerate AI/ML innovation while ensuring operational consistency and stronger security across public clouds, data centers, and edge environments. According to IDC, organizations in the Forbes Global 2000 are expected to allocate over 40% of their core IT budgets to AI initiatives, and by 2026, generative AI and automation are projected to drive $1 trillion in productivity gains. Red Hat’s platform is built to support this level of investment by managing the lifecycle of AI models and enabling the development of generative AI applications while also supporting traditional workloads across hybrid cloud infrastructures.

OpenShift AI 2.15 is crafted to meet the growing demand for AI workloads while maintaining support for critical cloud-native applications. Key new features include:

– **Model Registry**: A technology preview feature that centralizes the management of registered models, allowing users to share, version, deploy, and track both predictive and generative AI models. Red Hat has contributed this project to the Kubeflow community as a subproject.
– **Data Drift Detection**: A tool to monitor changes in the input data distribution used for machine learning models. This ensures that models remain accurate by detecting when live data diverges from training data.
– **Bias Detection Tools**: Integrated tools from the TrustyAI open-source community that help AI engineers ensure models are fair and unbiased, both during training and real-world deployment.
– **Efficient Fine-Tuning with LoRA**: Leverages low-rank adapters (LoRA) to fine-tune large language models (LLMs) like Llama 3 more efficiently, reducing cost and resource consumption while scaling AI workloads.
– **Support for NVIDIA NIM**: Provides an interface for accelerating generative AI application delivery through NIM, part of the NVIDIA AI Enterprise platform, which supports a wide range of AI models for scalable inference both on-premises and in the cloud.
– **Support for AMD GPUs**: Adds support for AMD GPUs in AI development, offering access to an AMD ROCm workbench for both model training and serving, providing additional options for high-performance computing tasks.
– **Enhanced Model Serving**: Introduces the vLLM serving runtime for KServe. vLLM is a popular open-source inference engine for large language models, and integrating it with KServe enables high flexibility and performance, with options for custom runtimes.
– **Expanded AI Training and Experimentation**: Enhancements to data science pipelines, including experiment tracking and hyperparameter tuning with Ray Tune, which optimize the accuracy and efficiency of model training for both predictive and generative AI.
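The data drift detection described above can be illustrated with a minimal, generic sketch: a two-sample Kolmogorov–Smirnov test comparing training data against live data. This is a common statistical approach to drift detection, not the specific implementation shipped in OpenShift AI:

```python
import numpy as np
from scipy import stats

def detect_drift(train_data, live_data, alpha=0.05):
    """Flag drift when live data diverges from the training distribution.

    Uses a two-sample Kolmogorov-Smirnov test: a small p-value means the
    two samples are unlikely to come from the same distribution.
    """
    statistic, p_value = stats.ks_2samp(train_data, live_data)
    return p_value < alpha, p_value

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=1000)       # training distribution
live_same = rng.normal(loc=0.0, scale=1.0, size=1000)   # same distribution
live_shifted = rng.normal(loc=1.5, scale=1.0, size=1000)  # drifted distribution

drifted, p_same = detect_drift(train, live_same)
shifted, p_shift = detect_drift(train, live_shifted)
```

In production, the training sample would come from the data the model was fitted on, and the live sample from a sliding window of inference inputs.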
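The efficiency gain from LoRA mentioned above can be sketched in a few lines of NumPy (illustrative layer dimensions only; this shows the general technique, not Red Hat's tooling): instead of updating a full weight matrix, LoRA trains two small low-rank matrices whose product is added to the frozen pretrained weights.

```python
import numpy as np

# LoRA: freeze the pretrained weight matrix W (d_out x d_in) and learn
# only a low-rank update B @ A, with rank r much smaller than d_out, d_in.
d_out, d_in, r = 4096, 4096, 8  # illustrative sizes for one LLM layer

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weights
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, init to zero

# Effective weights used at inference time:
W_adapted = W + B @ A  # equals W exactly until B is trained

full_params = d_out * d_in           # parameters in a full fine-tune
lora_params = r * (d_out + d_in)     # trainable parameters with LoRA
reduction = full_params / lora_params
```

With these dimensions the adapter trains 256x fewer parameters than a full fine-tune, which is where the cost and resource savings come from.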
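The experiment tracking and hyperparameter tuning workflow follows a pattern that can be sketched generically as a toy random search in plain Python. The objective function and search space here are hypothetical stand-ins; OpenShift AI itself provides this via Ray Tune within data science pipelines:

```python
import random

def train_model(learning_rate, batch_size):
    """Stand-in for a real training run; returns a mock validation score.
    (Hypothetical objective purely for illustration.)"""
    return 1.0 / (1.0 + learning_rate) + batch_size / 1000.0

random.seed(0)
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64],
}

trials = []
for _ in range(5):
    # Sample a configuration, run a trial, and record it for tracking.
    config = {name: random.choice(values) for name, values in search_space.items()}
    score = train_model(**config)
    trials.append((score, config))

best_score, best_config = max(trials, key=lambda t: t[0])
```

A tuner like Ray Tune replaces the loop with parallel, scheduled trials, but the core idea is the same: record every configuration-score pair and select the best.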

These updates make Red Hat OpenShift AI 2.15 a powerful tool for enterprises looking to drive AI innovation and maintain operational efficiency at scale across hybrid cloud environments.
