
PixelSmile: Toward Fine-Grained Facial Expression Editing

Paper · Project Page · Model (coming soon) · Demo

PixelSmile Demo

PixelSmile Teaser

📢 Updates

🚀 Release Plan

  • Project Page
  • Model Weight (Preview)
  • Inference Code
  • Benchmark Data
  • Online Demo
  • Training Code
  • Benchmark Code
  • Model Weight (Stable)

⚡ Quick Start

Quick start for PixelSmile inference.

  1. Set up the environment as described in Installation.
  2. Download the base model and PixelSmile weights as described in Model Download.
  3. Run inference as described in Inference.
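Before running inference, it can help to verify that both downloads are actually in place. The sketch below is a hypothetical pre-flight check, not part of the repository; the path names are assumptions mirroring the inference arguments shown later:

```shell
#!/usr/bin/env bash
# Pre-flight check before inference (illustrative sketch, not from the repo):
# verify the base model directory and the PixelSmile LoRA file both exist.
check_assets() {
  local model_dir="$1" lora_file="$2"
  if [ ! -d "$model_dir" ]; then
    echo "missing base model: $model_dir"
    return 1
  fi
  if [ ! -f "$lora_file" ]; then
    echo "missing LoRA weights: $lora_file"
    return 1
  fi
  echo "assets ok"
}

# Example (adjust paths to wherever you downloaded the weights):
# check_assets /path/to/Qwen-Image-Edit-2511 /path/to/PixelSmile.safetensors
```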

🔧 Installation

For Inference

Clone the repository and enter the project directory:

git clone https://github.com/Ammmob/PixelSmile.git
cd PixelSmile

Create and activate a clean conda environment:

conda create -n pixelsmile python=3.10
conda activate pixelsmile

Install the inference dependencies:

pip install -r requirements.txt

Patch the installed diffusers package to work around a known bug in the Qwen image-edit pipeline:

bash scripts/patch_qwen_diffusers.sh

For Training

If you want to train PixelSmile, install the additional training dependencies on top of the inference environment:

pip install -r requirements-train.txt

🤗 Model Download

For Inference

PixelSmile uses Qwen/Qwen-Image-Edit-2511 as the base model.

| Model              | Stage   | Data Type | Download     |
|--------------------|---------|-----------|--------------|
| PixelSmile-preview | Preview | Human     | Hugging Face |

✨ A more stable version is coming soon, with improved human expression editing performance and support for anime expression editing.

For Training

Training requires additional pretrained weights and auxiliary models. We will provide the full training asset list soon.

🎨 Inference

PixelSmile supports two simple ways to run inference.

Option 1. Edit the default arguments in the script

Edit scripts/run_infer.sh and modify the default values in DEFAULT_ARGS, then run:

bash scripts/run_infer.sh

Option 2. Pass arguments from the command line

bash scripts/run_infer.sh \
  --image-path /path/to/input.jpg \
  --output-dir /path/to/output \
  --model-path /path/to/Qwen-Image-Edit-2511 \
  --lora-path /path/to/PixelSmile.safetensors \
  --expression happy \
  --scales 0 0.5 1.0 1.5 \
  --seed 42

Command-line arguments will override the default values in the script.
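The defaults-plus-override behavior can be sketched as below. This is a minimal illustrative reimplementation of the pattern, not the actual internals of run_infer.sh; the variable names and default values are assumptions, and for simplicity it handles one value per flag (unlike the multi-value --scales):

```shell
#!/usr/bin/env bash
# Illustrative sketch of a defaults-plus-override argument pattern.
# DEFAULT_ARGS holds fallback values; any --key value pair on the
# command line replaces the corresponding default.
declare -A DEFAULT_ARGS=(
  [image-path]="assets/example.jpg"
  [expression]="happy"
  [seed]="42"
)

resolve_args() {
  # Start from the defaults, then let command-line flags win.
  declare -gA ARGS
  local k
  for k in "${!DEFAULT_ARGS[@]}"; do ARGS[$k]="${DEFAULT_ARGS[$k]}"; done
  while [ $# -gt 0 ]; do
    case "$1" in
      --*) ARGS[${1#--}]="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
}

# Example: resolve_args --expression sad
# leaves seed at its default of 42 but sets expression to "sad".
```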

🧠 Training

Training code is coming soon.

📖 Citation

If you find PixelSmile useful in your research or applications, please consider citing our work.

@article{hua2026pixelsmile,
  title={PixelSmile: Toward Fine-Grained Facial Expression Editing},
  author={Jiabin Hua and Hengyuan Xu and Aojie Li and Wei Cheng and Gang Yu and Xingjun Ma and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2603.25728},
  year={2026}
}

About

PixelSmile: Fine-grained facial expression editing with continuous control, reduced semantic entanglement, and strong identity preservation.
