Get Started
Create a Gen AI or AI Retrieval Augmented Generation (RAG) application that makes use of the AMX features of Intel Xeon processors. Start learning about the required tools with the resources below:
- Inspiration
- What is generative AI?
- Introduction to RAG
- Red Hat OpenShift on AWS (ROSA)
- Other Hackathon Resources
- Videos
- Support
Inspiration
What is generative AI?
Generative AI is a kind of artificial intelligence technology that relies on deep learning models trained on large data sets to create new content. Generative AI models, which are used to generate new data, stand in contrast to discriminative AI models, which are used to sort data based on differences. People today are using generative AI applications to produce writing, pictures, code, and more. Common use cases for generative AI include chatbots, image creation and editing, software code assistance, and scientific research. Find out more with the resources below:
- VIDEO - Supercharge your Cloud-native Applications with Generative AI
- Apply generative AI to app modernization with Konveyor AI
- What is retrieval-augmented generation?
- Redefining development: The retrieval-augmented generation (RAG) revolution in software engineering
- Building an ops foundation for the future of generative AI
- What is generative AI?
Introduction to RAG
The RAG technique adds dynamic, query-dependent data into the model's prompt stream. Relevant data is retrieved from a custom-built knowledge base stored in a vector database. The prompt and the retrieved context enrich the model's output, delivering more relevant and accurate results. RAG lets you leverage your data with an LLM while keeping that data private, since it is not sent to a third party managing the model. The key components of the RAG workflow can be captured in four simple steps: user query processing, retrieval, context incorporation, and output generation. The diagram below illustrates this basic flow. Read more here: Intel's E-Book Building Blocks of RAG with Intel
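The four steps above can be sketched in a few lines of Python. This is a toy illustration only: the bag-of-words "embedding" and in-memory document list stand in for a real embedding model and vector database, and the sample documents and function names are invented for the example.

```python
# Minimal illustration of the four RAG steps, with a toy keyword
# retriever standing in for a real embedding model + vector database.
from collections import Counter
import math

# Toy "knowledge base" (a real app would store embeddings in a vector DB)
DOCS = [
    "OpenVINO serves optimized models for inference on Intel Xeon CPUs.",
    "Red Hat OpenShift AI operationalizes model training and serving.",
    "Intel AMX accelerates matrix math for deep learning workloads.",
]

def embed(text):
    # Stand-in for a sentence-embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    q = embed(query)  # step 1: process the user query
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]  # step 2: retrieve the most relevant context

def build_prompt(query, docs):
    context = retrieve(query, docs)  # step 3: incorporate context
    return f"Context: {' '.join(context)}\nQuestion: {query}\nAnswer:"

# Step 4: this enriched prompt would be sent to the LLM for output generation.
prompt = build_prompt("What does AMX accelerate?", DOCS)
print(prompt)
```

In a production RAG application the same flow holds, but `embed` becomes a real embedding model and `retrieve` becomes a similarity search against a vector database.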

OpenShift & OpenShift AI
Red Hat OpenShift on AWS (ROSA)
Please note that both the Sandbox and Environment are limited to a 12-hour window per day. After 12 hours, the pods containing your work will be scaled down to 0. If you need longer than 12 hours to complete your work, scale your pod back up to continue (for example, `oc scale deployment/<your-deployment> --replicas=1` from the OpenShift CLI).
Other Hackathon Resources
- Red Hat & Intel AI and Machine Learning: the perfect combination for data scientists
- Red Hat OpenShift AI Learning
- Red Hat OpenShift AI Demo
- OPEA Gen AI example repository
- Manage deep learning models with OpenVINO model server
- OpenVINO Get Started Guide
- Notebooks for developers to build on (technical, from “scratch”, generic implementations):
  - LLM Chatbot (includes details on the LLM models supported by OpenVINO)
  - Distil Whisper ASR
  - LLM RAG Langchain
- Edge AI Ref Kit released externally (Demonstrates Speech-to-text + LLM):
- Fine-tuning/training:
- Generative AI Development with Podman AI Lab, InstructLab, & OpenShift AI
- Explore Intel's Gaudi Resources
Learn More
- Create, Migrate, and Optimize Your AI Models with Intel® Gaudi® AI Accelerators
- The Intel Gaudi AI Accelerator is a processor designed specifically to accelerate deep learning model training and inference, offering high performance and cost efficiency. It features scalable Ethernet-based interconnects and integrates with AI frameworks like TensorFlow and PyTorch.
- Create and train your models with tutorials on Intel® Gaudi® Technology
- Step-by-step tutorials that walk you through retrieval augmented generation (RAG), visual question and answering, Code generation, and more.
Videos
OpenVINO installation video
Learn how to install OpenVINO with this walkthrough. Additional resources for this video include the documentation on installing OpenVINO with pip, and the example notebook used in the video's demo.
Watch here
Red Hat OpenShift AI using Intel OpenVINO & Xeon AMX
Learn how to validate whether the underlying platform supports Intel Xeon AMX (AI accelerator) features and how to use OpenVINO as a model server for AI inferencing use cases.
Watch here
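As a quick complement to the video, you can check from the command line whether the CPU advertises AMX support. A minimal Python sketch (Linux only; assumes `/proc/cpuinfo` is readable — the `amx_tile`, `amx_int8`, and `amx_bf16` flags appear on 4th-gen Xeon and later):

```python
# Check whether the CPU advertises Intel AMX features (Linux only).
# Scans the flags line(s) of /proc/cpuinfo for entries starting with "amx".
def amx_flags(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return set()  # not Linux, or file unreadable
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(w for w in line.split() if w.startswith("amx"))
    return flags

if __name__ == "__main__":
    found = amx_flags()
    print("AMX flags:", sorted(found) if found else "none detected")
```

If no flags are reported, the node either lacks AMX hardware or the kernel/hypervisor is not exposing it to the guest.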
Generative AI Development with Podman AI Lab, InstructLab, & OpenShift AI
Take a look at how you can get started with generative AI in your application development process using open-source tools: Podman AI Lab (https://podman-desktop.io/extensions/...) to help build and serve applications with LLMs, InstructLab (https://instructlab.ai) to fine-tune models locally on your machine, and OpenShift AI (https://developers.redhat.com/product...) to operationalize building and serving AI on an OpenShift cluster.
Watch here
Harness the power of AI/ML with Red Hat OpenShift
In this webinar, Abhinav Joshi shares how you can harness the power of AI/ML with the Red Hat portfolio, covering the challenges and benefits of using containers and Kubernetes for AI/ML workloads.
Watch here
Support
- Looking for a team or have questions? Join the Devpost Discord

