# Tutorials and Examples
The MLRun tutorials provide a hands-on introduction to using MLRun to implement data science workflows and to automate both gen AI and machine-learning operations (MLOps) tasks.
**In this section**

- Introduction to MLRun: use serverless functions to train and deploy models (sketched below)

Each of the following tutorials is a dedicated Jupyter notebook. You can download each one by clicking the download icon at the top of its page.
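The Introduction to MLRun tutorial walks through that core flow. Below is a minimal sketch of it, assuming a `trainer.py` file with a `train` handler that logs a model; the project name, parameters, and output key are placeholders rather than the tutorial's exact code:

```python
import mlrun

# Minimal sketch of the train-and-deploy flow. "trainer.py", its "train"
# handler, and the "model" output key are placeholders for illustration.
project = mlrun.get_or_create_project("quick-tutorial", context="./")

# Register a Python file as a serverless MLRun job and run it to train a model.
project.set_function(
    "trainer.py", name="trainer", kind="job", image="mlrun/mlrun", handler="train"
)
train_run = project.run_function("trainer", params={"label_column": "label"})

# Deploy the logged model behind a real-time serving endpoint.
serving_fn = mlrun.new_function("serving", kind="serving", image="mlrun/mlrun")
serving_fn.add_model(
    "my-model",
    model_path=train_run.outputs["model"],
    class_name="mlrun.frameworks.sklearn.SklearnModelServer",
)
project.deploy_function(serving_fn)
```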
## Gen AI tutorials
- How to copy a dataset into your cluster, deploy an LLM in the cluster, and run your function (see the sketch after this list).
- Set up an effective model monitoring system that leverages LLMs to maintain high standards for deployed models.
- How to track experiments for document-based models, using the LangChain API to integrate directly with vector databases.
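For the first tutorial in this list, the following is a minimal sketch of the flow using core MLRun APIs. The sample dataset, the `llm_server.py` file, and the `LLMServer` class (an assumed `mlrun.serving.V2ModelServer` subclass that loads the model in `load()` and generates text in `predict()`) are illustrative assumptions, not the tutorial's actual code:

```python
import mlrun
import pandas as pd

# Minimal sketch; dataset, "llm_server.py", and LLMServer are assumptions.
project = mlrun.get_or_create_project("genai-tutorial", context="./")

# 1. Copy a dataset into the cluster by logging it as a project artifact.
df = pd.DataFrame({"prompt": ["Hello", "What is MLRun?"]})
project.log_dataset("prompts", df=df)

# 2. Deploy an LLM in the cluster behind a real-time serving function.
serving_fn = project.set_function(
    "llm_server.py", name="llm-serving", kind="serving", image="mlrun/mlrun"
)
serving_fn.add_model("llm", class_name="LLMServer", model_path=".")
project.deploy_function(serving_fn)

# 3. Run your function: invoke the deployed endpoint with a test prompt.
serving_fn.invoke("/v2/models/llm/infer", body={"inputs": ["Hello"]})
```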
## Machine learning tutorials
- How to deploy real-time serving pipelines with MLRun Serving and different types of pre-trained ML/DL models.
- How to work with projects, source control (git), and CI/CD to easily build and deploy multi-stage ML pipelines.
- Demonstrates MLRun Serving pipelines, MLRun model monitoring, and automated drift detection.
- Turn a Kaggle research notebook into a production ML microservice with minimal code changes, using MLRun.
- Understand the MLRun feature store through a simple example: build, transform, and serve features in batch and in real time (see the sketch after this list).
- Use the MLRun batch inference function (from the MLRun Function Hub), run it as a batch job, and generate drift reports.
- Demonstrates a multi-step online pipeline with data preparation, an ensemble, model serving, and post-processing.
- Use the feature store with data ingestion, model training, model serving, and an automated pipeline.
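For the feature store item above, here is a minimal sketch of the batch and real-time flow. The project name, feature-set names, and sample data are made up, and the exact feature-store calls vary slightly across MLRun versions:

```python
import mlrun
import pandas as pd
import mlrun.feature_store as fstore

# Minimal sketch; names and data are illustrative only.
project = mlrun.get_or_create_project("fs-tutorial", context="./")
df = pd.DataFrame({"ticker": ["AAPL", "MSFT"], "price": [190.0, 410.0]})

# Build: define a feature set keyed by "ticker" and ingest the batch dataframe.
stocks_set = fstore.FeatureSet("stocks", entities=[fstore.Entity("ticker")])
stocks_set.ingest(df)

# Serve in batch: join features into a vector and read them offline.
vector = fstore.FeatureVector("stocks-vec", ["stocks.price"])
vector.save()
offline_df = fstore.get_offline_features(vector).to_dataframe()

# Serve in real time: query the same vector through the online feature service.
svc = fstore.get_online_feature_service(vector)
print(svc.get([{"ticker": "AAPL"}]))
svc.close()
```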
## End-to-end demos
See Demos.
## Cheat sheet
If you already know the basics, use the cheat sheet as a guide to typical use cases and the flows and SDK calls that implement them.