- [2026/03/06] 🔥 ComfyUI (community contribution) is now supported!
- [2026/03/03] 📄 Upload paper and initialize the project.
- [2026/03/03] 🧑‍💻 Release the inference code and model weights (🤗 Weights).
- [2026/03/03] 📊 Release 🤗 MMLottieBench for benchmarking vector animation generation capabilities.
- [2026/03/03] 💾 Release the 🤗 MMLottie-2M dataset.
- [2026/03/03] 🚀 Launch the Hugging Face 🤗 Demo, try it out!
- [2026/02/21] OmniLottie is accepted to CVPR 2026 🔥! See you in Denver!
- Project Page & Technical Report
- MMLottie-2M Dataset Release
- Inference Code & Model Weights
- Online Demo (Gradio, deployed on Hugging Face)
- MMLottieBench Benchmark
- Training Code
- OmniLottie ComfyUI Plugin: ComfyUI_OmniLottie by @smthemex.
OmniLottie is the first family of end-to-end multimodal Lottie generators built on pre-trained Vision-Language Models (VLMs), capable of generating complex and detailed Lottie animations from multimodal instructions, including text, images, and videos. We also introduce MMLottie-2M, a multimodal dataset of two million richly annotated Lottie animations, along with a standardized evaluation protocol for multimodal vector animation generation tasks.
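For readers new to the target format: a Lottie animation is a JSON document describing a canvas, a timeline, and a stack of layers. The skeleton below is only an illustration of that top-level structure (field names follow the public Lottie JSON spec; the values are made up and are not OmniLottie output):

```python
import json

# Minimal Lottie document skeleton. Field names follow the public
# Lottie JSON spec; "ip"/"op" are the in/out frames and "fr" the frame
# rate, so this clip runs 60 frames at 30 fps = 2 seconds.
lottie = {
    "v": "5.7.0",        # schema version
    "fr": 30,            # frames per second
    "ip": 0,             # first frame
    "op": 60,            # last frame
    "w": 512, "h": 512,  # canvas size in pixels
    "nm": "demo",        # human-readable name
    "layers": [],        # shape/image layers would go here
}

doc = json.dumps(lottie)
duration_s = (lottie["op"] - lottie["ip"]) / lottie["fr"]
print(duration_s)  # 2.0
```

A generator like OmniLottie must emit a complete document of this shape, with the actual animation content living in `layers`.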
| Model | Download link | Size | Update date |
|---|---|---|---|
| OmniLottie (4B) | Hugging Face | 8.46 GB | 2026-03-02 |
Configuring the dependencies as described below provides an environment ready for inference.
```bash
git clone https://github.com/OpenVGLab/OmniLottie
cd OmniLottie
```

Create and activate a new conda environment with Python 3.10:

```bash
conda create -n omnilottie python=3.10
conda activate omnilottie
```

We have tested our environment with CUDA 12.1. You can install CUDA 12.1 by following the CUDA Toolkit installation guide.

Install PyTorch with CUDA 12.1 support:

```bash
pip install torch==2.3.0+cu121 torchvision==0.18.0+cu121 --index-url https://download.pytorch.org/whl/cu121
```

Install remaining dependencies:

```bash
pip install -r requirements.txt
```

| Model | GPU Memory Usage | Time per 256/512/1024/2048/4096 tokens |
|---|---|---|
| OmniLottie | 15.2G | 8.34/16.68/33.38/66.74/133.49 seconds |
Note: The inference time shown here is measured in OmniLottie Lottie tokens, while the inference time reported in our paper is measured in JSON code tokens for a fair comparison with baseline methods.
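The timings in the table above scale linearly with token count, roughly 0.033 s per token on the measured hardware. A quick back-of-the-envelope helper derived from those numbers (your GPU will differ):

```python
# Seconds per generated Lottie token, derived from the table above
# (8.34 s / 256 tokens on the measured GPU). A rough estimate only.
SEC_PER_TOKEN = 8.34 / 256

def estimate_seconds(num_tokens: int) -> float:
    """Rough wall-clock estimate for generating num_tokens Lottie tokens."""
    return num_tokens * SEC_PER_TOKEN

for n in (256, 512, 1024, 2048, 4096):
    print(n, round(estimate_seconds(n), 2))
```

This reproduces the table's values to within a few hundredths of a second.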
Download Model Weights
First, install the Hugging Face CLI tool:
```bash
pip install huggingface-hub
```

Download the model from Hugging Face:

```bash
# Download OmniLottie model
huggingface-cli download OmniLottie/OmniLottie --local-dir /PATH/TO/OmniLottie
```

Try with Example Data
We provide example prompts, images, and videos in the example/ directory:
```bash
# Test with example text prompts
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --batch_text_file example/demo.txt \
    --output_dir ./output_demo_text

# Test with example images
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_image example/demo_images/00de75e2c031cb3fc3f472e356aba5b6.png \
    --output_dir ./output_demo_image

# Test with example videos
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_video example/demo_video/02b8ce2014690a9e30dc25da846e8afb.mp4 \
    --output_dir ./output_demo_video
```

Generate Lottie animations from text descriptions:
Single prompt:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_text "A red ball appearing, bouncing up and down, then fading out, repeating seamlessly" \
    --output_dir ./output_text
```

Batch generation from file:
```bash
# Create a prompts.txt file with one prompt per line
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --batch_text_file example/demo.txt \
    --output_dir ./output_text
```

Custom generation parameters:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_text "a blue bird appearing, pulsing while sliding downward, lingers briefly, then growing back while sliding upward to reset with clear phase changes, repeating seamlessly" \
    --use_sampling \
    --temperature 0.8 \
    --top_p 0.25 \
    --top_k 5 \
    --repetition_penalty 1.01 \
    --output_dir ./output
```

Generate with Best-of-N selection:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_text "a light blue piggy bank with a darker blue outline, with a single light blue coin with a dark blue yen symbol (¥) appears above the piggy bank, then starts descending towards the piggy bank's opening" \
    --num_candidates 8 \
    --output_dir ./output
```

Generate Lottie animations from an image:
Single image:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_image /path/to/image.png \
    --output_dir ./output_image
```

Convert video to Lottie animation:
Single video:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --single_video /path/to/video.mp4 \
    --output_dir ./output_video
```

Specify tokenizer path:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --tokenizer_name /PATH/TO/Qwen2.5-VL-3B-Instruct \
    --single_text "Your prompt here" \
    --output_dir ./output
```

Adjust token length:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --maxlen 6072 \
    --text_len 512 \
    --single_text "Your prompt here" \
    --output_dir ./output
```

Filter by task type (when using the MMLottieBench dataset):
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split real \
    --task_filter text \
    --output_dir ./output
```

Process limited samples with shuffling:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split real \
    --max_samples 10 \
    --shuffle \
    --output_dir ./output
```

We provide an interactive generation interface using Gradio:
- Local Deployment: run `python app.py`
- Online Demo: try our live demo on Hugging Face Spaces
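After running any of the inference commands above, you can sanity-check the generated files before opening them in a player. The sketch below assumes the model writes standard Lottie `.json` files into `--output_dir` (an assumption about the output layout, not documented behavior):

```python
import json
from pathlib import Path

def summarize_lottie_dir(output_dir: str) -> list[tuple[str, float]]:
    """Parse every .json file in output_dir and report (name, duration in s).

    Assumes each file is a standard Lottie document with "ip", "op", and
    "fr" at the top level; files that fail to parse are skipped with a
    warning rather than aborting the whole scan.
    """
    results = []
    for path in sorted(Path(output_dir).glob("*.json")):
        try:
            doc = json.loads(path.read_text())
            duration = (doc["op"] - doc["ip"]) / doc["fr"]
            results.append((path.name, duration))
        except (json.JSONDecodeError, KeyError, ZeroDivisionError) as exc:
            print(f"skipping {path.name}: {exc}")
    return results
```

For example, `summarize_lottie_dir("./output_text")` after a batch run lists each output with its clip length, which quickly flags truncated or malformed generations.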
We provide MMLottieBench for standardized evaluation of Lottie generation models.
Option 1: Using the download script:

```bash
python download_mmlottie_bench.py --output_dir /PATH/TO/mmlottie_bench
```

Option 2: Using the Hugging Face CLI:

```bash
huggingface-cli download OmniLottie/MMLottieBench --repo-type dataset --local-dir /PATH/TO/mmlottie_bench
```

Option 3: Automatic download (in code):
```python
from datasets import load_dataset

dataset = load_dataset("OmniLottie/MMLottieBench")
```

MMLottieBench contains 900 samples split into:
- Real split: 450 real-world Lottie animations
- Synthetic split: 450 synthetically generated samples
Each split contains 3 task types (150 samples each):
- Text-to-Lottie: Generate from text descriptions
- Text-Image-to-Lottie: Generate from image + text guidance
- Video-to-Lottie: Convert video to Lottie animation
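When iterating the benchmark programmatically, you may want the same task filtering that the CLI flags provide. The sketch below mirrors the 2 × 3 × 150 composition described above using plain dict records; the `task` field name is hypothetical, so check the actual dataset schema before relying on it:

```python
# Hypothetical records mirroring one MMLottieBench split (3 tasks x 150).
# The "task" field name is an assumption, not the documented schema.
samples = (
    [{"task": "text2lottie", "id": i} for i in range(150)]
    + [{"task": "text_image2lottie", "id": i} for i in range(150)]
    + [{"task": "video2lottie", "id": i} for i in range(150)]
)

def filter_by_task(records, task):
    """Keep only the records for one task type, like task filtering on the CLI."""
    return [r for r in records if r["task"] == task]

print(len(samples))                                  # 450 per split
print(len(filter_by_task(samples, "video2lottie")))  # 150
```

Two such splits (real and synthetic) account for the full 900 samples.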
MMLottieBench provides two splits that can be switched using `--split`:

- `--split real` - test on 450 real-world Lottie animations
- `--split synthetic` - test on 450 synthetically generated samples
Test on real split (all tasks):
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split real \
    --output_dir ./benchmark_results_real
```

Test on synthetic split (all tasks):
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split synthetic \
    --output_dir ./benchmark_results_synthetic
```

Test specific task type on real split:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split real \
    --mmlottie_task text2lottie \
    --output_dir ./benchmark_results
```

Available task types:

- `text2lottie` - Text-to-Lottie generation (150 samples per split)
- `text_image2lottie` - Text-Image-to-Lottie generation (150 samples per split)
- `video2lottie` - Video-to-Lottie generation (150 samples per split)
Process limited samples with filtering:
```bash
python inference.py \
    --sketch_weight /PATH/TO/OmniLottie \
    --mmlottie_bench_dir /PATH/TO/mmlottie_bench \
    --split real \
    --max_samples 50 \
    --shuffle \
    --output_dir ./benchmark_results
```

For detailed usage, see:
OmniLottie is licensed under the Apache License 2.0, while the MMLottie-2M dataset is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license. You can find the license files in the respective GitHub and Hugging Face repositories.
The MMLottie-2M Dataset (the "Dataset") is provided exclusively for research and non-commercial purposes. Any commercial use, redistribution for profit, or deployment in commercial products is strictly prohibited without explicit authorization.
- The Dataset is compiled from content that was originally publicly available on third-party websites.
- All copyrights, trademarks, and other intellectual property rights in the original content remain with their respective owners.
- The inclusion of any content in this Dataset does not imply endorsement, authorization, sponsorship, or any affiliation with the original content creators or rights holders.
- The processing, filtering, and reorganization performed by the authors do not alter the ownership or intellectual property status of the underlying content.
The Dataset is provided "AS IS" and "AS AVAILABLE", without warranties of any kind, either express or implied, including but not limited to:
- Accuracy, completeness, or reliability of the data
- Merchantability or fitness for a particular purpose
- Non-infringement of third-party rights
- Freedom from errors, bugs, or harmful components
Under no circumstances shall the authors, contributors, or affiliated organizations be liable for any direct, indirect, incidental, special, consequential, or punitive damages arising from or related to:
- The use or inability to use the Dataset
- Any errors or omissions in the Dataset
- Any claims by third parties regarding intellectual property infringement
- Any actions taken based on the content of the Dataset
By using the Dataset, you agree that:
- You are solely responsible for ensuring compliance with all applicable laws, regulations, and third-party rights in your jurisdiction.
- You will not use the Dataset for any illegal, harmful, or unethical purposes.
- You will properly attribute the Dataset in any resulting publications or works.
If you are a rights holder and believe that any content in this Dataset infringes your intellectual property rights, please contact us immediately. We are committed to addressing legitimate concerns and will promptly remove any content upon verification of valid claims.
For questions, concerns, or content removal requests, please reach out through:
- Email: 25113050158@m.fudan.edu.cn
- GitHub Issues: https://github.com/OpenVGLab/OmniLottie/issues
```bibtex
@article{yang2026omnilottie,
  title={OmniLottie: Generating Vector Animations via Parameterized Lottie Tokens},
  author={Yiying Yang and Wei Cheng and Sijin Chen and Honghao Fu and Xianfang Zeng and Yujun Cai and Gang Yu and Xinjun Ma},
  journal={arXiv preprint arXiv:2603.02138},
  year={2026}
}
```

We thank the following projects and resources for their valuable contributions:
- Data Sources: LottieFiles, IconScout, Flaticon, Iconfont, Icons8
- python-lottie: For providing excellent tools for Lottie manipulation and processing
- MMSVG-Icon, MMSVG-Illustration: For inspiring our multi-modal data curation approach
