A Rust implementation of the W3C WebNN specification for neural network graph validation and backend conversion.
This is an early-stage experimental implementation for research and exploration. Many features are incomplete or untested and may change significantly.
rustnn is a Rust library that provides:
- WebNN Graph Validation: Validates WebNN graph structures against the W3C specification
- Backend Conversion: Converts WebNN graphs to ONNX and CoreML formats
- Runtime Backends: Executes graphs on CPU, GPU, or Neural Engine
- Shape Inference: Automatic tensor shape computation
- Operation Support: 85 of ~95 WebNN operations (89% spec coverage)
Python users should use pywebnn, a separate package that provides Python bindings for the full W3C WebNN API, with rustnn as its core library.
Install the Python package:

```bash
pip install pywebnn
```

See the pywebnn repository for Python documentation and examples.
Add rustnn to your Cargo.toml:

```toml
[dependencies]
rustnn = { git = "https://github.com/tarekziade/rustnn" }

# Optional: enable a runtime backend instead
# rustnn = { git = "https://github.com/tarekziade/rustnn", features = ["onnx-runtime"] }
```

Features:

- onnx-runtime - ONNX Runtime execution (CPU/GPU)
- coreml-runtime - CoreML execution (macOS only)
- trtx-runtime - TensorRT execution (Linux/Windows with NVIDIA GPU)
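Because the runtime backends are optional Cargo features, downstream code that calls into them is typically feature-gated. A minimal, hypothetical sketch (the function is illustrative, not part of the rustnn API), assuming your own Cargo.toml declares a feature that forwards to rustnn's:

```rust
// Hypothetical downstream code. Assumes your Cargo.toml declares:
//   [features]
//   onnx-runtime = ["rustnn/onnx-runtime"]
// so this function is only compiled when that feature is enabled.
#[cfg(feature = "onnx-runtime")]
fn run_on_onnx_runtime() {
    // Backend-specific execution would go here; the rustnn CLI
    // exposes the same capability via its --run-onnx flag (see below).
}
```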
```rust
use rustnn::graph::GraphInfo;
use rustnn::converters::{GraphConverter, OnnxConverter};
use rustnn::validator::GraphValidator;

// Load a WebNN graph from JSON
let graph: GraphInfo = serde_json::from_str(&json_string)?;

// Validate the graph
let validator = GraphValidator::new();
let artifacts = validator.validate(&graph)?;

// Convert to ONNX
let converter = OnnxConverter;
let onnx_model = converter.convert(&graph)?;

// Save the ONNX model
std::fs::write("model.onnx", onnx_model.data)?;
```

For Python examples, see the pywebnn repository.
Following the W3C WebNN Device Selection spec, backends are selected via hints:
```python
# CPU-only execution
context = ml.create_context(accelerated=False)

# Request GPU/NPU (platform selects the best available)
context = ml.create_context(accelerated=True)

# Request high performance (prefers GPU)
context = ml.create_context(accelerated=True, power_preference="high-performance")

# Request low power (prefers NPU/Neural Engine)
context = ml.create_context(accelerated=True, power_preference="low-power")
```

Platform-Specific Backends:

- NPU: CoreML Neural Engine (Apple Silicon macOS only)
- GPU: ONNX Runtime GPU (cross-platform) or CoreML GPU (macOS)
- CPU: ONNX Runtime CPU (cross-platform)
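The hint-to-backend mapping above boils down to a simple decision rule. The sketch below only restates it for clarity; the enum and function are hypothetical, not part of the rustnn API:

```rust
// Illustrative only: restates the documented mapping from hints to backends.
enum Backend {
    Cpu,
    Gpu,
    Npu,
}

fn select_backend(accelerated: bool, power_preference: Option<&str>) -> Backend {
    if !accelerated {
        return Backend::Cpu; // accelerated=False means CPU-only execution
    }
    match power_preference {
        Some("low-power") => Backend::Npu,        // prefers NPU / Neural Engine
        Some("high-performance") => Backend::Gpu, // prefers GPU
        _ => Backend::Gpu,                        // platform selects best available
    }
}
```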
```bash
# Download pretrained weights (first time only)
bash scripts/download_mobilenet_weights.sh

# Run on different backends
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend cpu
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend gpu
python examples/mobilenetv2_complete.py examples/images/test.jpg --backend coreml
```

Output:

```
Top 5 Predictions (Real ImageNet Labels):
1. lesser panda  99.60%
2. polecat        0.20%
3. weasel         0.09%

Performance: 74.41ms (CPU) / 77.14ms (GPU) / 51.93ms (CoreML)
```
```bash
# Run generation with attention
make text-gen-demo

# Train on custom text
make text-gen-train

# Generate with trained weights
make text-gen-trained
```

See examples/ for more samples.
- Getting Started - Installation and first steps
- API Reference - Complete Python API documentation
- Examples - Code examples and tutorials
- Architecture - Design principles and structure
- Development Guide - Building and contributing
- 85 of ~95 WebNN operations (89% spec coverage)
- Shape inference: 85/85 (100%)
- Python API: 85/85 (100%)
- ONNX Backend: 85/85 (100%)
- CoreML MLProgram: 85/85 (100%)
- 1350+ WPT conformance tests passing
See docs/development/implementation-status.md for complete details.
```bash
# Validate a graph
cargo run -- examples/sample_graph.json

# Visualize a graph (requires Graphviz)
cargo run -- examples/sample_graph.json --export-dot graph.dot
dot -Tpng graph.dot -o graph.png

# Convert to ONNX
cargo run -- examples/sample_graph.json --convert onnx --convert-output model.onnx

# Execute with ONNX Runtime
cargo run --features onnx-runtime -- examples/sample_graph.json --convert onnx --run-onnx
```

See `make help` for all available targets.
Contributions welcome! Please see:
- AGENTS.md - Project architecture and conventions
- docs/development/contributing.md - How to add features
- TODO.txt - Feature requests and known issues
Quick Contribution Guide:

- Fork and create a feature branch: `git checkout -b feature/my-feature`
- Install hooks (optional): `./scripts/install-git-hooks.sh`
- Make changes and test: `make test && make python-test`
- Format code: `make fmt`
- Commit and push
Licensed under the Apache License, Version 2.0. See LICENSE for details.
- GitHub: https://github.com/tarekziade/rustnn
- PyPI: https://pypi.org/project/pywebnn/
- Documentation: https://tarekziade.github.io/rustnn/
- W3C WebNN Spec: https://www.w3.org/TR/webnn/
- W3C WebNN Community Group for the specification
- Chromium WebNN implementation for reference
- PyO3 and Maturin projects for excellent Python-Rust integration
Made with Rust by Tarek Ziade