
OmniStyle: Filtering High Quality Style Transfer Data at Scale

Ye Wang1, Ruiqi Liu1, Jiang Lin2, Fei Liu3, Zili Yi2, Yilin Wang4,*, Rui Ma1,5,*

1School of Artificial Intelligence, Jilin University
2School of Intelligence Science and Technology, Nanjing University
3ByteDance   4Adobe
5Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, China
*Corresponding authors


📢 News

  • [2025.07.23] OmniStyle-150K dataset is now available!
  • [2025.07.11] Code and model weights for OmniStyle are now available!
  • [2025.07.05] Released the project page.

🛠️ TODO List

  • ✅ Release model weights and inference code for OmniStyle.
  • ✅ Release OmniStyle-150K: the filtered high-quality subset used for training.

🤖 OmniStyle is the first end-to-end style transfer framework based on the Diffusion Transformer (DiT) architecture, achieving high-quality 1K-resolution stylization by leveraging the large-scale, filtered OmniStyle-1M dataset. It supports both instruction- and image-guided stylization, enabling efficient and versatile style transfer across diverse styles.

🗂️ OmniStyle-1M is the first million-scale paired style transfer dataset, comprising over one million triplets of content, style, and stylized images across 1,000 diverse style categories. It provides strong supervision for learning controllable and generalizable style transfer models.

🧪 OmniStyle-150K is a high-quality subset of OmniStyle-1M, specifically filtered to train the OmniStyle model.
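For reference, the sketch below shows one way to iterate over such content/style/stylized triplets once a dataset split has been downloaded. The folder layout (content/, style/, and stylized/ subdirectories with matching file names) is a hypothetical assumption for illustration only; please follow the layout of the actual Hugging Face release.

# Minimal triplet iterator (sketch).
# NOTE: the directory layout below is a hypothetical assumption;
# adapt it to the actual OmniStyle-150K release on Hugging Face.
import os
from PIL import Image

def iter_triplets(root):
    content_dir = os.path.join(root, "content")
    style_dir = os.path.join(root, "style")
    stylized_dir = os.path.join(root, "stylized")
    for name in sorted(os.listdir(stylized_dir)):
        content = Image.open(os.path.join(content_dir, name)).convert("RGB")
        style = Image.open(os.path.join(style_dir, name)).convert("RGB")
        stylized = Image.open(os.path.join(stylized_dir, name)).convert("RGB")
        yield content, style, stylized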


🧩 Installation & Environment Setup

We recommend creating a clean conda environment:

conda create -n omnistyle python=3.10 
conda activate omnistyle
# Install dependencies
pip install -r requirements.txt
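
After installation, a quick sanity check confirms that PyTorch (assumed to be installed via requirements.txt) can see your GPU:

# Environment sanity check (assumes PyTorch is installed by requirements.txt).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))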

📥 Checkpoints Download

You can download the pretrained OmniStyle model from Hugging Face:

👉 https://huggingface.co/StyleXX/OmniStyle

After downloading, please place the .safetensors checkpoint file into the ./ckpts/ directory.

In addition, you need to download the relevant FLUX-Dev model weights:

👉 https://github.com/XLabs-AI/x-flux

After downloading all weights, specify the correct checkpoint paths in test.sh.
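
As a quick check before editing test.sh, the snippet below verifies that the downloaded weight files are where you expect them. The file names shown are hypothetical placeholders; substitute the actual names of the OmniStyle and FLUX-Dev files you downloaded.

# Verify checkpoint files exist before running inference.
# NOTE: the file names are hypothetical placeholders; replace them with
# the actual OmniStyle and FLUX-Dev weight files you downloaded.
import os

expected_paths = [
    "./ckpts/omnistyle.safetensors",  # placeholder name for the OmniStyle checkpoint
    "./ckpts/flux-dev.safetensors",   # placeholder name for the FLUX-Dev weights
]
for path in expected_paths:
    print(("OK      " if os.path.isfile(path) else "MISSING ") + path)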


🖼️ Image-Guided Image Style Transfer

We have provided example style and content images in the test/ folder.

To run image-guided stylization, simply execute:

CUDA_VISIBLE_DEVICES=0 python inference_img_guided.py

The generated results will be saved in the output/ folder.
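
Once the run finishes, you can quickly list and preview the stylized outputs, for example:

# List (and optionally preview) results written by inference_img_guided.py.
import os
from PIL import Image

output_dir = "output"
for name in sorted(os.listdir(output_dir)):
    if name.lower().endswith((".png", ".jpg", ".jpeg")):
        img = Image.open(os.path.join(output_dir, name))
        print(name, img.size)
        # img.show()  # uncomment to open each result in an image viewer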


✏️ Instruction-Guided Image Style Transfer

For instruction-guided stylization, just run:

CUDA_VISIBLE_DEVICES=0 python inference_instruction_guided.py

As with image-guided transfer, the results will be saved in the output/ folder.
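
If you want to run both modes back to back on a single GPU, a small driver script that simply invokes the two inference scripts above is enough, for example:

# Run both OmniStyle inference modes sequentially on GPU 0.
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
for script in ["inference_img_guided.py", "inference_instruction_guided.py"]:
    print(f"Running {script} ...")
    subprocess.run(["python", script], env=env, check=True)
print("Done. Results are in the output/ folder.")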


💻 Inference Memory Requirements

OmniStyle supports high-resolution (1K) image stylization. Typical GPU memory usage during inference is listed below:

Mode                     Resolution    GPU Memory Usage
Image-Guided Transfer    1024×1024     ~46 GB
Instruction-Guided       1024×1024     ~38 GB

📌 Note: For stable inference, please ensure at least 48 GB of available GPU memory.

💡 Recommendation: OmniStyle is optimized for 1024×1024 resolution. We recommend using this resolution during inference to achieve the best stylization quality.
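
Before launching a run, you can check whether the current GPU meets the ~48 GB recommendation using PyTorch's CUDA utilities:

# Check free GPU memory against the ~48 GB recommendation above.
import torch

required_gb = 48
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
free_gb = free_bytes / 1024**3
print(f"GPU 0: {free_gb:.1f} GB free of {total_bytes / 1024**3:.1f} GB total")
if free_gb < required_gb:
    print("Warning: less than 48 GB free; 1024x1024 inference may run out of memory.")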


πŸ™ Acknowledgement

Our code is built with reference to the following excellent projects. We sincerely thank the authors for their open-source contributions:

Their work greatly inspired and supported the development of OmniStyle.
