Run PyTorch models in Ruby.
ExecuTorch is Meta's lightweight runtime for deploying PyTorch models on edge devices. This gem provides Ruby bindings so you can run exported models (.pte files) directly in your Ruby applications.
```ruby
require "executorch"

# Load a model
model = Executorch::Model.new("model.pte")

# Create input tensor
input = Executorch::Tensor.new([[1.0, 2.0, 3.0]])

# Run inference
output = model.predict([input]).first
puts output.to_a # => [[3.0, 5.0, 7.0]]
```

Requirements: Ruby 3.0+, macOS or Linux, a C++17 compiler.
ExecuTorch must be built from source. Follow the official guide, or use these commands:
```sh
git clone https://github.com/pytorch/executorch.git
cd executorch
./install_requirements.sh

cmake -B cmake-out \
  -DCMAKE_INSTALL_PREFIX=vendor/executorch \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
  -DCMAKE_BUILD_TYPE=Release
cmake --build cmake-out -j4
cmake --install cmake-out
```

Tell Bundler where ExecuTorch is installed (only needed once per project):

```sh
bundle config set --local build.executorch --with-executorch-dir=vendor/executorch
```

Add to your Gemfile:

```ruby
gem "executorch"
```

Then:

```sh
bundle install
```

CI/CD: use the environment variable instead:

```sh
EXECUTORCH_DIR=/path/to/executorch bundle install
```
Create tensors from nested arrays (shape is inferred):
```ruby
# 2D tensor, shape [2, 3]
tensor = Executorch::Tensor.new([[1.0, 2.0, 3.0],
                                 [4.0, 5.0, 6.0]])

# Inspect
tensor.shape # => [2, 3]
tensor.dtype # => :float
tensor.to_a  # => [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]

# With explicit dtype
tensor = Executorch::Tensor.new([[1, 2], [3, 4]], dtype: :long)
```

Or from flat arrays with explicit shape:

```ruby
tensor = Executorch::Tensor.new([1.0, 2.0, 3.0, 4.0], shape: [2, 2])
```

Supported dtypes: `:float` (default), `:double`, `:int`, `:long`.
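To picture how shape inference from nested arrays works, here is a plain-Ruby sketch. The `infer_shape` helper is illustrative only — it is not part of the gem's API — and it assumes well-formed (rectangular) input:

```ruby
# Illustrative helper, not part of the gem's API: reads off a nested
# array's dimensions by descending through the first element at each level.
def infer_shape(data)
  shape = []
  while data.is_a?(Array)
    shape << data.length
    data = data.first
  end
  shape
end

nested = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
infer_shape(nested) # => [2, 3]

# The same data in flat-array form, as accepted with an explicit shape::
nested.flatten      # => [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

So `Tensor.new(nested)` and `Tensor.new(nested.flatten, shape: [2, 3])` describe the same tensor.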
```ruby
model = Executorch::Model.new("model.pte")

# Run inference (all equivalent)
outputs = model.predict([input])
outputs = model.forward([input])
outputs = model.call([input])

# Introspection
model.loaded?      # => true
model.method_names # => ["forward"]
```

Export PyTorch models to .pte format:
```python
import torch
from executorch.exir import to_edge

class MyModel(torch.nn.Module):
    def forward(self, x):
        return x * 2 + 1

model = MyModel()
example_input = torch.randn(1, 3)

exported = torch.export.export(model, (example_input,))
et_program = to_edge(exported).to_executorch()

with open("model.pte", "wb") as f:
    et_program.write_to_file(f)
```

### "ExecuTorch installation not found"
Verify your installation and configure the path:
```sh
ls vendor/executorch/include/executorch # Should exist
bundle config set --local build.executorch --with-executorch-dir=vendor/executorch
```

### "module.h header not found"
ExecuTorch was built without required extensions. Rebuild with:
```sh
cmake -B cmake-out \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
  ...
```

### "undefined symbol" at runtime
Try linking additional libraries:
```sh
EXECUTORCH_EXTRA_LIBS=portable_ops_lib,portable_kernels bundle exec rake compile
```

To work on the gem itself:

```sh
git clone https://github.com/benngarcia/executorch-ruby.git
cd executorch-ruby
bundle install
bundle config set --local build.executorch --with-executorch-dir=vendor/executorch
bundle exec rake compile
bundle exec rake test
```

Bug reports and pull requests are welcome on GitHub.
Apache License 2.0. See LICENSE.txt.
Built with Rice. Inspired by onnxruntime-ruby and torch.rb.