Complete setup guide for ModelForge on Windows, including native installation and options for full feature support.
The Unsloth provider is NOT supported on native Windows. Unsloth requires a Linux environment.
If you want to use Unsloth for 2x faster training, you have two options:
- Windows Subsystem for Linux (WSL) - Recommended
- Docker with NVIDIA Container Toolkit - Alternative
For standard training with the HuggingFace provider, native Windows works perfectly.
Use this if you're okay with the HuggingFace provider only (no Unsloth support).
- Windows 10/11 (64-bit)
- Python 3.11.x - Download from python.org
  ⚠️ Python 3.12 is NOT supported yet
- During installation, check "Add Python to PATH"
- NVIDIA GPU with 4GB+ VRAM (6GB+ recommended)
- NVIDIA Drivers - Latest Game Ready or Studio drivers
- CUDA Toolkit - Download CUDA 11.8 or 12.x
- HuggingFace Account - Create account and generate access token
Open PowerShell or Command Prompt:
```shell
python --version
```

Should show: Python 3.11.x

```shell
nvcc --version
```

Should show your CUDA version (e.g., release 12.6)
```shell
# Create project directory
mkdir ModelForge
cd ModelForge

# Create virtual environment
python -m venv venv

# Activate virtual environment
.\venv\Scripts\Activate
```

Install ModelForge:

```shell
pip install modelforge-finetuning

# Optional extras
pip install modelforge-finetuning[cli]           # CLI wizard
pip install modelforge-finetuning[quantization]  # 4-bit/8-bit quantization
```

Visit the PyTorch Installation Page and select:
- PyTorch Build: Stable
- Your OS: Windows
- Package: Pip
- Language: Python
- Compute Platform: CUDA 11.8 or CUDA 12.6
For CUDA 12.6:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
```

For CUDA 11.8:

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

Verify the installation:

```shell
python -c "import torch; print(f'CUDA Available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"None\"}')"
```

Should show:

```
CUDA Available: True
GPU: NVIDIA GeForce RTX 3060
```
Option A: Environment Variable (PowerShell)

```shell
$env:HUGGINGFACE_TOKEN="your_token_here"
```

Option B: Environment Variable (CMD)

```shell
set HUGGINGFACE_TOKEN=your_token_here
```

Option C: .env File (Persistent)

```shell
echo HUGGINGFACE_TOKEN=your_token_here > .env
```

Launch ModelForge:

```shell
modelforge        # Launch web UI
modelforge cli    # Launch CLI wizard (headless/SSH alternative)
```

Open browser to: http://localhost:8000
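The `.env` option is persistent because the token is read from the file into the process environment at startup. For illustration only, here is a minimal stdlib sketch of that kind of KEY=VALUE parsing (the `load_env_file` helper is hypothetical, not ModelForge's actual loader):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Hypothetical .env loader: copy KEY=VALUE lines into os.environ."""
    if not os.path.exists(path):
        return  # nothing to load; rely on the shell environment instead
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            # setdefault: a token exported in the shell wins over the file
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env_file()
token = os.environ.get("HUGGINGFACE_TOKEN")
```

Because of the `setdefault` call, a token exported in the current shell takes precedence over the file, which matches the usual precedence of Options A/B over Option C.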
- No Unsloth provider - Only HuggingFace provider available
- Standard training speed - Cannot use 2x speedup from Unsloth
- All other features work - SFT, QLoRA, RLHF, DPO strategies are available
Windows Subsystem for Linux provides a full Linux environment on Windows, enabling all ModelForge features including Unsloth.
- Windows 10 (Build 19041+) or Windows 11
- NVIDIA GPU with latest drivers (525.60+)
- At least 16GB RAM recommended
Open PowerShell as Administrator:
```shell
wsl --install -d Ubuntu-22.04
```

This installs WSL 2 with Ubuntu 22.04. Restart your computer when prompted.
After restart, Ubuntu will open automatically:
- Create a username and password
- Update packages:

```shell
sudo apt update && sudo apt upgrade -y
```

IMPORTANT: Do NOT install NVIDIA drivers in WSL - use your Windows drivers!
Install CUDA Toolkit in WSL:

```shell
# Add NVIDIA package repository
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install CUDA Toolkit
sudo apt-get install -y cuda-toolkit-12-6
```

Add CUDA to PATH:

```shell
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```

Verify:

```shell
nvcc --version
nvidia-smi
```

Install Python 3.11:

```shell
sudo apt install -y software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt update
sudo apt install -y python3.11 python3.11-venv python3.11-dev python3-pip
```
Make Python 3.11 default:

```shell
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
```

Verify:

```shell
python3 --version  # Should show Python 3.11.x
```

Set up ModelForge:

```shell
# Create virtual environment
mkdir ~/ModelForge
cd ~/ModelForge
python3 -m venv venv
source venv/bin/activate

# Install ModelForge
pip install modelforge-finetuning

# Install PyTorch with CUDA
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# Install Unsloth
pip install unsloth
```

Set your HuggingFace token:

```shell
export HUGGINGFACE_TOKEN="your_token_here"

# Or add to .env file
echo "HUGGINGFACE_TOKEN=your_token_here" > .env
```

Launch ModelForge:

```shell
modelforge
```

Access from Windows browser: http://localhost:8000
- Full Unsloth support - 2x faster training
- All features available - No limitations
- Better performance - Native Linux environment
- Easy file access - Access WSL files from Windows Explorer at `\\wsl$\Ubuntu-22.04\`
Use Docker for isolated environments and easy deployment.
- Docker Desktop for Windows - Download
- NVIDIA GPU with latest drivers
- WSL 2 backend (enabled by default in Docker Desktop)
- Download and install Docker Desktop
- Enable WSL 2 backend in Settings
- Restart computer
Open PowerShell as Administrator:
```shell
# Install WSL if not already installed
wsl --install

# Switch to WSL Ubuntu
wsl -d Ubuntu-22.04
```

In WSL Ubuntu terminal:
```shell
# Add NVIDIA repository
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-container-toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
```

Restart Docker Desktop.
```shell
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi
```

Should show your GPU information.
Create Dockerfile:

```dockerfile
FROM nvidia/cuda:12.6.0-devel-ubuntu22.04

# Install Python 3.11
RUN apt-get update && apt-get install -y \
    python3.11 \
    python3.11-venv \
    python3.11-dev \
    python3-pip \
    git \
    && rm -rf /var/lib/apt/lists/*

# Set Python 3.11 as default
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1

# Install ModelForge
RUN pip install --no-cache-dir modelforge-finetuning

# Install PyTorch with CUDA
RUN pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# Install Unsloth
RUN pip install --no-cache-dir unsloth

# Set working directory
WORKDIR /workspace

# Expose port
EXPOSE 8000

# Run ModelForge
CMD ["modelforge", "--host", "0.0.0.0"]
```

Build and run:

```shell
# Build image
docker build -t modelforge:latest .

# Run container
docker run --gpus all -p 8000:8000 -e HUGGINGFACE_TOKEN=your_token_here modelforge:latest
```

Access at: http://localhost:8000
To preserve data between container restarts:

```shell
docker run --gpus all -p 8000:8000 \
  -v modelforge-data:/root/.local/share/modelforge \
  -e HUGGINGFACE_TOKEN=your_token_here \
  modelforge:latest
```

When using the Unsloth provider, you MUST specify a fixed `max_seq_length`. Auto-inference (value -1) is not supported.
Example Configuration:

```json
{
  "provider": "unsloth",
  "model_name": "meta-llama/Llama-3.2-3B",
  "max_seq_length": 2048,  // REQUIRED: Must be a positive integer
  "strategy": "sft",
  ...
}
```

Valid values: 512, 1024, 2048, 4096, 8192, etc.

Invalid values:
- -1 (auto-inference - NOT supported)
- 0 or negative numbers

This limitation is specific to Unsloth and does not apply to the HuggingFace provider.
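The rule above is easy to enforce as a pre-flight check before submitting a job. A sketch under the assumption that config values arrive as plain Python values (`validate_max_seq_length` is a hypothetical helper, not part of ModelForge's API):

```python
def validate_max_seq_length(provider: str, max_seq_length: int) -> int:
    """Enforce the Unsloth rule: a fixed positive sequence length is required."""
    if provider == "unsloth":
        if max_seq_length == -1:
            raise ValueError("Unsloth does not support auto-inference (-1); "
                             "set a fixed length such as 2048")
        if max_seq_length <= 0:
            raise ValueError("max_seq_length must be a positive integer")
    # HuggingFace provider: -1 (auto-inference) is allowed
    return max_seq_length

validate_max_seq_length("unsloth", 2048)    # OK
validate_max_seq_length("huggingface", -1)  # OK: auto-inference allowed
```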
Problem: `torch.cuda.is_available()` returns False

Solutions:
- Verify NVIDIA drivers: `nvidia-smi`
- Reinstall PyTorch with the correct CUDA version
- Check CUDA installation: `nvcc --version`
- Restart computer after installing CUDA
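To work through that checklist programmatically, a small script can report which stage fails. This is an illustrative sketch, not a ModelForge command; it imports torch lazily so it still runs on a broken install:

```python
def cuda_report() -> str:
    """Summarize PyTorch/CUDA status for troubleshooting."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed - reinstall with the cu126/cu118 index URL"
    if not torch.cuda.is_available():
        # Common causes: CPU-only wheel installed, or driver/CUDA mismatch
        return (f"PyTorch {torch.__version__} found, but CUDA is unavailable "
                f"(built for CUDA: {torch.version.cuda}) - check drivers with nvidia-smi")
    return f"OK: {torch.cuda.get_device_name(0)} (CUDA {torch.version.cuda})"

print(cuda_report())
```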
Problem: `ImportError: No module named 'unsloth'`
Solution: Unsloth requires Linux. Use WSL or Docker (see above).
Problem: `nvidia-smi` works in Windows but not in WSL

Solutions:
- Update Windows to latest version
- Update NVIDIA drivers (525.60+)
- Ensure WSL 2 is installed: `wsl --status`
- Restart WSL: `wsl --shutdown`, then reopen
Problem: Docker can't access GPU
Solutions:
- Ensure WSL 2 backend is enabled in Docker Desktop
- Install nvidia-container-toolkit in WSL (see above)
- Restart Docker Desktop
- Verify: `docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi`
Problem: Cannot write to directories
Solutions:
- WSL: Ensure you own the directory: `sudo chown -R $USER:$USER ~/ModelForge`
- Docker: Use volume mounts with correct permissions
- Post-Installation Setup - Configure ModelForge
- Quick Start Guide - Run your first training job
- Configuration Guide - Learn all options
- Unsloth Provider - Learn about Unsloth features
Need Help? Check the Windows-Specific Troubleshooting guide.