EasonXiao-888/SpatialEdit

SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing

Yicheng Xiao*, Wenhu Zhang*, Lin Song ✉️, Yukang Chen, Wenbo Li, Nan Jiang, Tianhe Ren, Haokun Lin, Wei Huang, Haoyang Huang, Xiu Li, Nan Duan, Xiaojuan Qi ✉️

🧭 Fine-grained spatial editing | 🧪 Benchmarking | 🎥 Camera and object manipulation

⭐ If you find this project helpful or use any part of it in your work, we kindly encourage you to star the repository and cite our paper. Your support is a great motivation for us. 📚

🎬 Demo

The following demo showcases our method on fine-grained spatial editing with spatially controlled endpoints.

SpatialEdit_Demo.mp4

🚀 Application Gallery

🧊 3D Point Control

✨ The first and third examples show point clouds with only a single given viewpoint. The second and fourth examples are augmented by our model, which synthesizes richer spatial observations from the sparse input view.

🎥 Conditional-Frame-Based Video Generation

✨ Given the first frame, our editing model first performs spatial editing to produce the final frame of the video. Subsequently, the video generation model synthesizes a coherent transition sequence, while preserving scene realism and thematic consistency.

📷 Camera Trajectory Transformation

🚶 Object Moving

🔄 Object Rotation

πŸ“ Abstract

Image spatial editing performs geometry-driven transformations, allowing precise control over object layout and camera viewpoints. Current models are insufficient for fine-grained spatial manipulations, motivating a dedicated assessment suite.

Our contributions are three-fold:

  1. We introduce SpatialEdit-Bench, a comprehensive benchmark that evaluates spatial editing by jointly measuring perceptual plausibility and geometric fidelity through viewpoint reconstruction and framing analysis.
  2. To address the data bottleneck for scalable training, we construct SpatialEdit-500K, a synthetic dataset generated with a controllable Blender pipeline that renders objects across diverse backgrounds and systematic camera trajectories, providing precise ground-truth transformations for both object- and camera-centric operations.
  3. Building on this data, we develop SpatialEdit-16B, a baseline model for fine-grained spatial editing. Our method achieves competitive performance on general editing while substantially outperforming prior methods on spatial manipulation tasks.

🔗 Resources

  • 🧪 Training Data: SpatialEdit-500K synthetic training set for scalable fine-grained spatial editing (🤗 Hugging Face)
  • 🧠 Model Weights: SpatialEdit-16B checkpoints for image spatial editing (🤗 Hugging Face)
  • 🖼️ Benchmark Images: SpatialEdit-Bench benchmark images and evaluation assets (🤗 Hugging Face)

🌍 Overview

SpatialEdit focuses on spatially grounded image editing, where the goal is not just to change appearance, but to control object motion, rotation, 3D viewpoint, framing, and camera movement with precision.

Task Definition

πŸ“ SpatialEdit-Bench

SpatialEdit-Bench evaluates both object-centric and camera-centric edits. The benchmark is designed to score whether an edited image is visually plausible while also satisfying the requested spatial transformation.

SpatialEdit-Bench Results

πŸ—οΈ SpatialEdit-500K Data Engine

To support scalable training and controlled evaluation, SpatialEdit-500K is built with a synthetic rendering pipeline that systematically varies object pose, placement, and camera trajectories over diverse scenes.

SpatialEdit-500K Data Engine

🎨 Visual Comparisons

Qualitative comparisons highlight the advantage of SpatialEdit on fine-grained spatial manipulation tasks.

Visual Comparison 1

Visual Comparison 2

βš™οΈ Installation

Create a Python environment and install the dependencies:

pip install -r requirements.txt
pip install accelerate peft gradio pillow

Notes:

  • flash_attn in requirements.txt requires a compatible CUDA and PyTorch environment.
  • Some config files still contain placeholder or internal paths and should be updated before running inference.
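Because flash_attn fails loudly only at import time, a quick stdlib-only probe (a minimal sketch, not part of the repo) can confirm the package is importable before launching anything heavy:

```python
# Sanity check: verify that flash_attn is importable in the current
# environment (it requires a matching CUDA and PyTorch build).
import importlib.util


def has_flash_attn() -> bool:
    """Return True if the flash_attn package can be found, False otherwise."""
    return importlib.util.find_spec("flash_attn") is not None


print("flash_attn available:", has_flash_attn())
```

If this prints False, reinstall flash_attn against the CUDA/PyTorch versions in your environment before running inference.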

📦 Prerequisites

Before running the code, please download the required external checkpoints first:

  • VGGT: required for camera-level benchmark evaluation.
  • YOLO26x: required for framing evaluation. The current benchmark script expects yolo26x.pt.
  • Qwen3-VL-8B-Instruct: used as the vision-language backbone in the current config.
  • Wan2.1-T2V-1.3B: download the Wan2.1_VAE.pth weights used by the VAE configuration.
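For checkpoints hosted on the Hugging Face Hub, a download helper might look like the sketch below. The repo ids are assumptions inferred from the public model names, so verify them on the Hub before downloading; the YOLO weights (yolo26x.pt) are obtained separately and are not included here.

```python
# Sketch: fetch the external checkpoints with huggingface_hub.
# NOTE: the repo ids below are assumptions, not confirmed by the repo.
CHECKPOINTS = {
    "VGGT": "facebook/VGGT-1B",                           # assumed repo id
    "Qwen3-VL-8B-Instruct": "Qwen/Qwen3-VL-8B-Instruct",  # assumed repo id
    "Wan2.1-T2V-1.3B": "Wan-AI/Wan2.1-T2V-1.3B",          # assumed repo id
}


def fetch_all(target_root: str) -> None:
    """Download each checkpoint repo into target_root/<name>."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    for name, repo_id in CHECKPOINTS.items():
        snapshot_download(repo_id=repo_id, local_dir=f"{target_root}/{name}")
```

Calling `fetch_all("/path/to/checkpoints")` pulls each repository into its own subdirectory; point the config files at those directories afterwards.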

🧪 Quick Demo

The repo currently provides a simple local inference entry point:

python spatialedit_demo.py

Before running, update the checkpoint paths in spatialedit_demo.py:

  • ckpt_path_PT
  • ckpt_path_CT
  • device

The example input image is located at validation/JD_Dog.jpeg.
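Once edited, the three variables might look like the fragment below (the paths are illustrative placeholders, not real checkpoint locations; the variable names come from spatialedit_demo.py):

```python
# Placeholder values for the settings in spatialedit_demo.py.
ckpt_path_PT = "/path/to/PT_checkpoint"  # replace with your local checkpoint
ckpt_path_CT = "/path/to/CT_checkpoint"  # replace with your local checkpoint
device = "cuda:0"                        # or "cpu" if no GPU is available
```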

πŸƒ Benchmark Inference

To generate edited outputs for SpatialEdit-Bench, use:

torchrun --nnodes 1 --nproc_per_node 8 SpatialEdit-Bench/eval_inference.py \
  --config configs/spatialedit_base_config.py \
  --ckpt-path /path/to/checkpoint_or_lora \
  --save-path /path/to/save_dir \
  --meta-file /path/to/SpatialEdit_Bench_Meta_File.json \
  --bench-data-dir /path/to/SpatialEdit_Bench_Data \
  --basesize 1024 \
  --num-inference-steps 50 \
  --guidance-scale 5.0 \
  --seed 42

You can also adapt the provided launcher script:

  • SpatialEdit-Bench/scripts/dist_inference.sh

📊 Benchmark Evaluation

📷 Camera-Level Evaluation

Camera-level evaluation measures viewpoint reconstruction and framing fidelity:

bash SpatialEdit-Bench/scripts/dist_camera_eval.sh

Update the placeholder paths in the script before running:

  • VGGT
  • YOLO
  • EVAL_DATA
  • META_DATA_FILE
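Filled in, the placeholders might look like this (the values are illustrative; only the variable names come from dist_camera_eval.sh):

```shell
# Hypothetical local paths for dist_camera_eval.sh; replace before running.
VGGT=/path/to/VGGT_checkpoint
YOLO=/path/to/yolo26x.pt
EVAL_DATA=/path/to/edited_outputs
META_DATA_FILE=/path/to/SpatialEdit_Bench_Meta_File.json
```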

🧩 Object-Level Evaluation

Object-level evaluation computes edit-faithfulness scores and aggregate benchmark statistics:

bash SpatialEdit-Bench/scripts/dist_object_eval.sh

Update the script paths and evaluation backend first:

  • META_FILE
  • SAVE
  • BENCH_DATA_DIR
  • BACKBONE
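A filled-in version might look like the fragment below. The values are illustrative; pointing BACKBONE at Qwen3-VL-8B-Instruct is an assumption based on the Prerequisites section, not confirmed by the script.

```shell
# Hypothetical local paths for dist_object_eval.sh; replace before running.
META_FILE=/path/to/SpatialEdit_Bench_Meta_File.json
SAVE=/path/to/save_dir
BENCH_DATA_DIR=/path/to/SpatialEdit_Bench_Data
BACKBONE=/path/to/Qwen3-VL-8B-Instruct   # assumed VLM backbone path
```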

💡 Notes

  • configs/spatialedit_base_config.py currently contains internal absolute paths and should be replaced with your local model paths.
  • The benchmark scripts assume access to external benchmark metadata, source images, and model checkpoints.
  • The repo already includes example evaluation utilities under SpatialEdit-Bench/camera_level_eval and SpatialEdit-Bench/object_level_eval.

❤️ Acknowledgement

Code in this repository builds upon several excellent open-source projects. We sincerely thank ReCamMaster and TexVerse for their outstanding contributions.

We also extend our gratitude to Yanbing Zhang for his valuable support throughout this project.

Additionally, our resource construction pipeline and experiments have contributed to the development of the image editing model in JoyAI-Image.

Citation

@article{xiao2026spatialedit,
  title   = {SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing},
  author  = {Xiao, Yicheng and Zhang, Wenhu and Song, Lin and Chen, Yukang and Li, Wenbo and Jiang, Nan and Ren, Tianhe and Lin, Haokun and Huang, Wei and Huang, Haoyang and Li, Xiu and Duan, Nan and Qi, Xiaojuan},
  journal = {arXiv preprint arXiv:2604.04911},
  year    = {2026}
}

About

[Official Repo] SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing
