Installation instructions
There are several ways to use AI Runner:
- Compiled - Allows non-technical users to run AI Runner, but distribution is currently on pause.
- Docker - Keeps the entire installation confined to a container, but you will need to know basic Docker commands, enable CPU virtualization in your BIOS, and understand virtual machines / containers.
- PyPi - Install with pip and run airunner from the command line, or create your own Python apps with the airunner library.
- Bare metal - For advanced users who know what they're doing.
AI Runner uses Wayland by default for optimal performance and compatibility with modern Linux desktop environments. This means you will need Wayland support on your host system.
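To confirm your host session is actually running on Wayland, you can check the WAYLAND_DISPLAY environment variable. This is only a quick sketch: the variable is set by Wayland compositors and is empty on plain X11 or headless sessions.

```shell
# Detect whether the current session is Wayland.
# WAYLAND_DISPLAY is exported by Wayland compositors; it is unset on X11.
if [ -n "$WAYLAND_DISPLAY" ]; then
  session_type="wayland"
else
  session_type="other"
fi
echo "session type: $session_type"
```

If this reports "other" on a Linux desktop, check your login screen for a Wayland session option; on WSL2, WSLg provides Wayland support out of the box.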
- Install the NVIDIA Container Toolkit: follow the official guide to enable GPU passthrough for Docker.
- Choose a version:
  - Runtime - For users who just want to use AI Runner as-is
  - Development - For users who want to modify AI Runner
Runtime: run this command in a terminal on Ubuntu (WSL2 on Windows):

docker run --gpus all -it --rm -v ~/.local/share/airunner:/home/appuser/.local/share/airunner --network=host ghcr.io/capsize-games/airunner/airunner:linux_build_runtime

Development: run these commands in a terminal on Ubuntu (WSL2 on Windows):

git clone https://github.com/Capsize-Games/airunner.git
cd airunner
./src/airunner/bin/docker.sh airunner

Docker compose allows you to customize the container environment.
For example, if you want access to a directory on your host machine, you can mount it in the container by creating an airunner/package/dev/docker-compose.local.yml file with the following content:

version: '3.8'
services:
  airunner_dev:
    volumes:
      - /mnt/YourDrive:/mnt/YourDrive:rw,z

Install AI Runner using pip on Ubuntu and Windows WSL 2
- Install system requirements

sudo apt update && sudo apt upgrade -y
sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git nvidia-cuda-toolkit pipewire libportaudio2 libxcb-cursor0 gnupg gpg-agent pinentry-curses espeak xclip cmake qt6-qpa-plugins qt6-wayland qt6-gtk-platformtheme mecab libmecab-dev mecab-ipadic-utf8 libxslt-dev libxslt1.1
sudo apt install espeak
sudo apt install espeak-ng-espeak
- Create the airunner directory

sudo mkdir -p ~/.local/share/airunner
sudo chown $USER:$USER ~/.local/share/airunner
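As a quick sanity check after the commands above, you can verify the data directory now exists and is writable by your user. This is a sketch; the AIRUNNER_DIR variable is only an override hook for the check itself, not something AI Runner reads.

```shell
# Verify the AI Runner data directory exists and is writable by the
# current user (default path from the steps above).
airunner_dir="${AIRUNNER_DIR:-$HOME/.local/share/airunner}"
mkdir -p "$airunner_dir"
if [ -w "$airunner_dir" ]; then
  echo "ok: $airunner_dir is writable"
else
  echo "warning: $airunner_dir is not writable; re-run the chown step"
fi
```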
- Install AI Runner - Python 3.13+ required
  pyenv and venv are recommended (see the wiki for more info)

pip install "typing-extensions==4.13.2"
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install airunner[all_dev]
pip install -U timm
- Run AI Runner
airunner
Non-docker development environment for Ubuntu and Windows WSL 2
Choose this if you want to run AI Runner natively on your machine without Docker.
These instructions will assume the following directory structure. You should only deviate from this structure if you know what you're doing.
~/Projects
├── airunner
├── OpenVoice
└── venv
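The tree above can be sketched up front. The snippet below builds the same shape under a throwaway scratch directory so you can see it safely; in the real flow, airunner comes from git clone and venv from python -m venv, so swap $base for $HOME and let those later steps create the contents.

```shell
# Build the expected directory layout under a throwaway base directory.
base=$(mktemp -d)
mkdir -p "$base/Projects/airunner" "$base/Projects/OpenVoice" "$base/Projects/venv"
ls "$base/Projects"
```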
- Install system requirements

All platforms:

sudo apt update && sudo apt upgrade -y
sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git nvidia-cuda-toolkit pipewire libportaudio2 libxcb-cursor0 gnupg gpg-agent pinentry-curses espeak xclip cmake qt6-qpa-plugins qt6-wayland qt6-gtk-platformtheme mecab libmecab-dev mecab-ipadic-utf8
sudo apt install espeak
sudo apt install espeak-ng-espeak

If you want to use AI Runner with Ollama:

curl -fsSL https://ollama.com/install.sh | sh

- Create airunner directory
sudo mkdir -p ~/.local/share/airunner
sudo chown $USER:$USER ~/.local/share/airunner
- Install pyenv (allows management of multiple Python versions)
curl https://pyenv.run | bash

- Add pyenv to shell configuration
# Check and add pyenv configuration if not already present
if ! grep -q "Pyenv configuration added by AI Runner" ~/.bashrc; then
cat << 'EOF' >> ~/.bashrc
# Pyenv configuration added by AI Runner setup
export PYENV_ROOT="$HOME/.pyenv"
if [ -d "$PYENV_ROOT/bin" ]; then
  export PATH="$PYENV_ROOT/bin:$PATH"
fi
if command -v pyenv &>/dev/null; then
  eval "$(pyenv init - bash)"
fi
EOF
fi
# Check and add WSLg XDG_RUNTIME_DIR fix if not already present
if ! grep -q "WSLg XDG_RUNTIME_DIR Fix added by AI Runner" ~/.bashrc; then
cat << 'EOF' >> ~/.bashrc
# WSLg XDG_RUNTIME_DIR Fix added by AI Runner setup
if [ -n "$WSL_DISTRO_NAME" ]; then
  if [ -d "/wslg/runtime-dir" ]; then
    export XDG_RUNTIME_DIR="/wslg/runtime-dir"
  elif [ -d "/mnt/wslg/runtime-dir" ]; then # Older WSLg path
    export XDG_RUNTIME_DIR="/mnt/wslg/runtime-dir"
  fi
fi
EOF
fi
# Check and add Qt environment variables for WSLg if not already present
if ! grep -q "Qt environment variables for WSLg added by AI Runner" ~/.bashrc; then
cat << 'EOF' >> ~/.bashrc
# Qt environment variables for WSLg added by AI Runner setup
if [ -n "$WSL_DISTRO_NAME" ]; then
  export QT_QPA_PLATFORM=wayland
  export QT_QPA_PLATFORMTHEME=gtk3
fi
EOF
fi

- Install python and set to local version

. ~/.bashrc
pyenv install 3.13.3
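The ~/.bashrc edits above all use the same grep guard so they stay idempotent: each marker comment is appended only if it is not already present, no matter how many times the setup is re-run. A minimal demonstration of the pattern on a temp file (safe to try, does not touch ~/.bashrc):

```shell
# Demonstrate the idempotent append pattern from the snippet above,
# using a throwaway temp file instead of ~/.bashrc.
demo_rc=$(mktemp)
append_once() {
  if ! grep -q "Pyenv configuration added by AI Runner" "$demo_rc"; then
    echo '# Pyenv configuration added by AI Runner setup' >> "$demo_rc"
  fi
}
append_once
append_once   # second run is a no-op thanks to the grep guard
grep -c "Pyenv configuration added by AI Runner" "$demo_rc"   # prints 1
```

This is why re-running the setup steps never duplicates the pyenv, WSLg, or Qt blocks in your shell configuration.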
- Clone repo, set local python version, create virtual env, activate it
mkdir ~/Projects
cd ~/Projects
pyenv local 3.13.3
python -m venv venv
source ./venv/bin/activate
git clone https://github.com/Capsize-Games/airunner.git
- Install AI Runner requirements
pip install "typing-extensions==4.13.2"
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install -e ~/Projects/airunner[all_dev]
pip install -U timm
- Run setup wizard and install the models you want
airunner-setup
- Run app
airunner
Optional
- Ollama - required for Ollama LLM capabilities
- Flash attention 2
- xformers
- FramePack
Note
AI Runner, like all local AI tools, uses code and models from third-party libraries. You should be aware of the licenses and terms of use for these libraries, and ensure you practice responsible AI usage. For an extra layer of security and privacy, consider using a service like OpenSnitch on Linux to monitor outgoing connections from the app.
By default, AI Runner only connects to the internet to download models and to get latitude and longitude data should you decide to enter a zipcode. That data is stored on your local machine and is not sent to any third-party services. The latitude and longitude data is used to get the weather data for the weather-based chatbot prompt.
Services used for this are openstreetmap.org for the latitude and longitude data, and open-meteo.com for the weather data. Both of these services are free to use and do not require an API key. If you do not enter a zipcode or use the weather-based chatbot prompt, these services are not used.
AI Runner supports Ollama. Follow the official quick-start guide, choose "ollama" in the model dropdown in the LLM settings panel, and enter the name of the model you want to use.
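Before selecting "ollama" in the LLM settings panel, you can confirm the CLI is actually on your PATH. A minimal check, assuming the standard install script from the steps above was used:

```shell
# Check that the ollama CLI is installed and reachable on PATH.
if command -v ollama >/dev/null 2>&1; then
  ollama_status="installed"
else
  ollama_status="missing"   # re-run the ollama install script above
fi
echo "ollama: $ollama_status"
```

If it is installed, the model name you enter in the settings panel should match one you have already pulled with the ollama CLI.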