A model zoo for non-Euclidean embedding models
Hyperbolic · Spherical · Product Manifolds
- Standardized access to non-Euclidean embedding models
- One catalog surface: model names map to internal loaders such as ONNX or optional torch-backed runtimes
- Simple API: `load()` and `encode_images()`
```shell
uv pip install hyper-models
```

This base install is the simple path: it stays torch-free and is enough for ONNX-backed catalog entries such as HyCoCLIP and MERU.
For torch-backed checkpoints (for example UNCHA):
```shell
uv pip install "hyper-models[ml]"
```

```python
import hyper_models
from PIL import Image

# List available models
hyper_models.list_models()
# ['hycoclip-vit-s', 'hycoclip-vit-b', 'meru-vit-s', 'meru-vit-b', 'uncha-vit-s', 'uncha-vit-b']

# Inspect supported internal loader kinds
hyper_models.list_loaders()
# ['onnx', 'uncha-image-torch']

# Load a model (auto-downloads from the Hugging Face Hub)
model = hyper_models.load("hycoclip-vit-s")
model.geometry  # 'hyperboloid'
model.dim       # 513

# Encode PIL images
images = [Image.open("image.jpg")]
embeddings = model.encode_images(images)  # (1, 513) ndarray

# Get model info
info = hyper_models.get_model_info("hycoclip-vit-s")
info.hub_id   # 'mnm-matin/hyperbolic-clip'
info.loader   # 'onnx'
info.license  # 'CC-BY-NC'

# Low-level: preprocess images yourself
batch = hyper_models.preprocess_images(images)  # (B, 3, 224, 224)
embeddings = model.encode(batch)
```

hyper-models is intended to be a timm-like catalog for non-Euclidean models.
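The catalog reports `geometry` as `'hyperboloid'` for these entries. As background on what that geometry supports, here is a minimal NumPy sketch of geodesic distance on the unit-curvature Lorentz hyperboloid. This is illustrative math, not package code: whether a given model family's embeddings lie exactly on this manifold (and at what curvature) is model-specific, and the `lift` helper is an assumption for constructing valid points.

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + <x_rest, y_rest>
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lorentz_distance(x, y):
    # Geodesic distance on the unit-curvature hyperboloid; clip guards
    # against floating-point drift outside the arccosh domain.
    inner = np.clip(lorentz_inner(x, y), None, -1.0)
    return np.arccosh(-inner)

def lift(v):
    # Lift a Euclidean vector onto the hyperboloid: x0 = sqrt(1 + |v|^2)
    x0 = np.sqrt(1.0 + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

a = lift(np.array([0.3, 0.4]))
b = lift(np.array([0.0, 0.0]))  # the hyperboloid origin (1, 0, 0)
print(lorentz_distance(a, b))   # ≈ 0.4812
```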
- The public abstraction is the catalog entry name, for example `hycoclip-vit-s`.
- Each entry declares metadata such as geometry, dimensionality, artifact path, and an internal loader kind.
- Internal loaders may differ by model family:
  - `onnx` for exported, torch-free runtimes
  - `uncha-image-torch` for raw checkpoints that need a PyTorch image runtime
This keeps callers on one stable API:

```python
model = hyper_models.load("hycoclip-vit-s")
model = hyper_models.load("uncha-vit-b")
```

Callers do not need to know which internal loader is used, except for optional dependency installation when choosing entries that need `hyper-models[ml]`.
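To make that dispatch concrete, here is a hypothetical sketch of how entry metadata can drive the choice between the base install and `hyper-models[ml]`. The `CATALOG` dict and `needs_ml_extra` helper are illustrative stand-ins, not part of the package; only the field names (`geometry`, `dim`, `loader`) mirror the documented metadata.

```python
# Hypothetical catalog: each entry carries a loader kind, and the
# loader kind determines which optional dependencies are required.
CATALOG = {
    "hycoclip-vit-s": {"geometry": "hyperboloid", "loader": "onnx"},
    "uncha-vit-b": {"geometry": "hyperboloid", "loader": "uncha-image-torch"},
}

def needs_ml_extra(name):
    # Entries with a torch-backed loader require `hyper-models[ml]`;
    # ONNX-backed entries run on the torch-free base install.
    return CATALOG[name]["loader"] != "onnx"

print(needs_ml_extra("hycoclip-vit-s"))  # False: ONNX, base install is enough
print(needs_ml_extra("uncha-vit-b"))     # True: torch-backed checkpoint
```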
HyperView auto-detects hyper-models names and routes them to the hyper-models provider.
```python
import hyperview as hv

dataset = hv.Dataset.from_huggingface(
    name="demo",
    hf_dataset="uoft-cs/cifar10",
    split="train",
    image_key="img",
)

# Uses provider='hyper-models' automatically.
space_key = dataset.compute_embeddings(model="uncha-vit-b")
layout_key = dataset.compute_visualization(space_key=space_key, layout="poincare")
```

HyperView's simple path remains torch-free. If you use the default ONNX-backed
hyper-models entries or the default embed-anything provider, HyperView does
not need PyTorch. PyTorch is only needed when you explicitly select a
torch-backed catalog entry such as uncha-vit-s or uncha-vit-b.
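The `layout="poincare"` option suggests a disk-style visualization. As background, a standard way to map Lorentz-hyperboloid points into the Poincaré ball is the projection below. This is a minimal NumPy sketch of the textbook formula, not HyperView's internal layout code.

```python
import numpy as np

def hyperboloid_to_poincare(x):
    # Standard projection from the Lorentz hyperboloid to the Poincare
    # ball: drop the time coordinate x0 and rescale by 1 / (1 + x0).
    # Resulting points always have norm < 1.
    return x[..., 1:] / (1.0 + x[..., :1])

origin = np.array([1.0, 0.0, 0.0])
print(hyperboloid_to_poincare(origin))  # hyperboloid origin -> disk center [0. 0.]
```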
**Hyperbolic**

| Model | Available | Paper | Code |
|---|---|---|---|
| `hycoclip-vit-s` | ✅ | ICLR 2025 | PalAvik/hycoclip |
| `hycoclip-vit-b` | ✅ | ICLR 2025 | PalAvik/hycoclip |
| `meru-vit-s` | ✅ | ICML 2023 | facebookresearch/meru |
| `meru-vit-b` | ✅ | ICML 2023 | facebookresearch/meru |
| `uncha-vit-s` | ✅ | CVPR 2026 | jeeit17/UNCHA |
| `uncha-vit-b` | ✅ | CVPR 2026 | jeeit17/UNCHA |
| `hyp-vit` | — | CVPR 2022 | htdt/hyp_metric |
| `hie` | — | CVPR 2020 | leymir/hyperbolic-image-embeddings |
| `hcnn` | — | ICLR 2024 | kschwethelm/HyperbolicCV |
**Spherical**

| Model | Available | Paper | Code |
|---|---|---|---|
| `megadescriptor` (via timm) | ✅ | WACV 2024 | WildlifeDatasets/wildlife-datasets |
| `sphereface` | — | CVPR 2017 | wy1iu/sphereface |
| `arcface` | — | CVPR 2019 | deepinsight/insightface |
**Product Manifolds**

| Model | Available | Paper | Code |
|---|---|---|---|
| `hyperbolics` | — | ICLR 2019 | HazyResearch/hyperbolics |
This repo also contains tooling to export PyTorch models to ONNX:
```shell
cd export/hycoclip
uv run python export_onnx.py --checkpoint model.pth --onnx model.onnx
```

See `export/hycoclip/README.md` for details.