Yicheng Xiao*, Wenhu Zhang*, Lin Song ✉️, Yukang Chen, Wenbo Li, Nan Jiang, Tianhe Ren, Haokun Lin, Wei Huang, Haoyang Huang, Xiu Li, Nan Duan, Xiaojuan Qi ✉️
🧠 Fine-grained spatial editing | 🧪 Benchmarking | 🎥 Camera and object manipulation
⭐ If you find this project helpful or use any part of it in your work, please give it a star and cite our paper. Your support is a great motivation for us. 🙏
The following demo showcases our method on fine-grained spatial editing from spatially controlled endpoints.
SpatialEdit_Demo.mp4
✨ The first and third examples show point clouds with only a single given viewpoint. The second and fourth examples are augmented by our model, which synthesizes richer spatial observations from the sparse input view.
✨ Given the first frame, our editing model first performs spatial editing to produce the final frame of the video. The video generation model then synthesizes a coherent transition sequence while preserving scene realism and thematic consistency.
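The two-stage process described above can be sketched as follows. This is a structural illustration only; the function names and frame counts are placeholders, not the repo's actual API:

```python
# Structural sketch of the edit-then-interpolate pipeline described above.
# Both functions are PLACEHOLDERS standing in for the real models.

def spatial_edit(first_frame, instruction):
    """Stage 1: the editing model produces the edited final frame."""
    return f"edited({first_frame}, {instruction})"

def generate_transition(first_frame, final_frame, num_frames=49):
    """Stage 2: the video model fills in a coherent transition sequence."""
    middle = [f"frame_{i}" for i in range(1, num_frames - 1)]
    return [first_frame] + middle + [final_frame]

final = spatial_edit("frame_0", "rotate camera left")
video = generate_transition("frame_0", final)
```

The key design point is that the endpoints are fixed first, so the video model only needs to solve a constrained interpolation problem rather than free-form generation.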
Image spatial editing applies geometry-driven transformations, enabling precise control over object layout and camera viewpoint. Current models fall short on fine-grained spatial manipulation, motivating a dedicated evaluation suite.
Our contributions are three-fold:
- We introduce SpatialEdit-Bench, a comprehensive benchmark that evaluates spatial editing by jointly measuring perceptual plausibility and geometric fidelity via viewpoint reconstruction and framing analysis.
- To address the data bottleneck for scalable training, we construct SpatialEdit-500K, a synthetic dataset generated with a controllable Blender pipeline that renders objects across diverse backgrounds and systematic camera trajectories, providing precise ground-truth transformations for both object- and camera-centric operations.
- Building on this data, we develop SpatialEdit-16B, a baseline model for fine-grained spatial editing. Our method achieves competitive performance on general editing while substantially outperforming prior methods on spatial manipulation tasks.
| Resource | Description | Link |
|---|---|---|
| 🧪 Training Data | SpatialEdit-500K synthetic training set for scalable fine-grained spatial editing | 🤗 Hugging Face |
| 🧠 Model Weights | SpatialEdit-16B checkpoints for image spatial editing | 🤗 Hugging Face |
| 🖼️ Benchmark Images | SpatialEdit-Bench benchmark images and evaluation assets | 🤗 Hugging Face |
SpatialEdit focuses on spatially grounded image editing, where the goal is not just to change appearance, but to control object motion, rotation, 3D viewpoint, framing, and camera movement with precision.
SpatialEdit-Bench evaluates both object-centric and camera-centric edits. The benchmark is designed to score whether an edited image is visually plausible while also satisfying the requested spatial transformation.
To support scalable training and controlled evaluation, SpatialEdit-500K is built with a synthetic rendering pipeline that systematically varies object pose, placement, and camera trajectories over diverse scenes.
Qualitative comparisons highlight the advantage of SpatialEdit on fine-grained spatial manipulation tasks.
Create a Python environment and install the dependencies:
```bash
pip install -r requirements.txt
pip install accelerate peft gradio pillow
```

Notes:
- `flash_attn` in `requirements.txt` requires a compatible CUDA and PyTorch environment.
- Some config files still contain placeholder or internal paths and should be updated before running inference.
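As a quick sanity check before installing `flash_attn`, you can verify that PyTorch is importable and that it sees a CUDA device. This helper is a minimal sketch for convenience, not part of the repo:

```python
# Minimal environment check before installing flash_attn.
# Illustrative helper only -- not part of the SpatialEdit repo.
import importlib.util

def flash_attn_prereqs():
    """Return which flash_attn prerequisites are visible in this environment."""
    info = {
        "torch_installed": importlib.util.find_spec("torch") is not None,
        "cuda_available": False,
    }
    if info["torch_installed"]:
        import torch  # imported lazily so the check also runs without torch
        info["cuda_available"] = torch.cuda.is_available()
    return info

print(flash_attn_prereqs())
```

If `cuda_available` is `False`, install a CUDA-enabled PyTorch build before attempting to build `flash_attn`.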
Before running the code, please download the required external checkpoints first:
- VGGT: required for camera-level benchmark evaluation.
- YOLO26x: required for framing evaluation. The current benchmark script expects `yolo26x.pt`.
- Qwen3-VL-8B-Instruct: used as the vision-language backbone in the current config.
- Wan2.1-T2V-1.3B: download the `Wan2.1_VAE.pth` weights used by the VAE configuration.
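One way to fetch these checkpoints is with `huggingface_hub`. The sketch below shows the pattern; the repo ids are placeholders, so substitute the official Hugging Face repos for each model:

```python
# Sketch: fetch external checkpoints with huggingface_hub.
# All repo ids below are PLACEHOLDERS -- replace them with the official
# repos for VGGT, YOLO26x, Qwen3-VL-8B-Instruct, and Wan2.1-T2V-1.3B.
CHECKPOINTS = {
    "VGGT": "org/vggt-placeholder",
    "YOLO26x": "org/yolo26x-placeholder",
    "Qwen3-VL-8B-Instruct": "org/qwen3-vl-8b-instruct-placeholder",
    "Wan2.1-T2V-1.3B": "org/wan2.1-t2v-1.3b-placeholder",
}

def download_all(root="checkpoints"):
    # huggingface_hub is imported lazily so this file parses without it.
    from huggingface_hub import snapshot_download
    for name, repo_id in CHECKPOINTS.items():
        snapshot_download(repo_id=repo_id, local_dir=f"{root}/{name}")
```

Keeping one local directory per checkpoint makes it straightforward to point the config files at `checkpoints/<name>` afterwards.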
The repo currently provides a simple local inference entry point:
```bash
python spatialedit_demo.py
```

Before running, update the checkpoint paths in `spatialedit_demo.py`:
- `ckpt_path_PT`
- `ckpt_path_CT`
- `device`

The example input image is located at `validation/JD_Dog.jpeg`.
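For reference, the edits at the top of `spatialedit_demo.py` might look like the following; all values are placeholders for your local checkpoint layout:

```python
# Checkpoint paths at the top of spatialedit_demo.py.
# All values below are PLACEHOLDERS -- point them at your local downloads.
ckpt_path_PT = "/path/to/SpatialEdit-16B/pt_checkpoint"  # role defined in spatialedit_demo.py
ckpt_path_CT = "/path/to/SpatialEdit-16B/ct_checkpoint"  # role defined in spatialedit_demo.py
device = "cuda:0"                                        # GPU to run inference on
```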
To generate edited outputs for SpatialEdit-Bench, use:
```bash
torchrun --nnodes 1 --nproc_per_node 8 SpatialEdit-Bench/eval_inference.py \
    --config configs/spatialedit_base_config.py \
    --ckpt-path /path/to/checkpoint_or_lora \
    --save-path /path/to/save_dir \
    --meta-file /path/to/SpatialEdit_Bench_Meta_File.json \
    --bench-data-dir /path/to/SpatialEdit_Bench_Data \
    --basesize 1024 \
    --num-inference-steps 50 \
    --guidance-scale 5.0 \
    --seed 42
```

You can also adapt the provided launcher script: `SpatialEdit-Bench/scripts/dist_inference.sh`
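If you want to sweep settings (e.g. seeds or guidance scales) rather than edit the shell command by hand, the invocation can be assembled programmatically. This is a convenience sketch with placeholder paths, not part of the repo:

```python
# Sketch: assemble the eval_inference.py command programmatically.
# Paths passed in are placeholders; run with subprocess.run(cmd, check=True).

def build_eval_cmd(ckpt, save, meta, data, seed=42, gpus=8):
    """Build the torchrun argument list for one benchmark-inference run."""
    return [
        "torchrun", "--nnodes", "1", "--nproc_per_node", str(gpus),
        "SpatialEdit-Bench/eval_inference.py",
        "--config", "configs/spatialedit_base_config.py",
        "--ckpt-path", ckpt,
        "--save-path", save,
        "--meta-file", meta,
        "--bench-data-dir", data,
        "--basesize", "1024",
        "--num-inference-steps", "50",
        "--guidance-scale", "5.0",
        "--seed", str(seed),
    ]
```

For a seed sweep, call `build_eval_cmd(...)` in a loop with a distinct `--save-path` per seed so outputs do not overwrite each other.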
Camera-level evaluation measures viewpoint reconstruction and framing fidelity:
```bash
bash SpatialEdit-Bench/scripts/dist_camera_eval.sh
```

Update the placeholder paths in the script before running:
- `VGGT`
- `YOLO`
- `EVAL_DATA`
- `META_DATA_FILE`
Object-level evaluation scores edit faithfulness and benchmark statistics:
```bash
bash SpatialEdit-Bench/scripts/dist_object_eval.sh
```

Update the script paths and evaluation backend first:
- `META_FILE`
- `SAVE`
- `BENCH_DATA_DIR`
- `BACKBONE`
- `configs/spatialedit_base_config.py` currently contains internal absolute paths and should be replaced with your local model paths.
- The benchmark scripts assume access to external benchmark metadata, source images, and model checkpoints.
- The repo already includes example evaluation utilities under `SpatialEdit-Bench/camera_level_eval` and `SpatialEdit-Bench/object_level_eval`.
Code in this repository builds upon several excellent open-source projects. We sincerely thank ReCamMaster and TexVerse for their outstanding contributions.
We also extend our gratitude to Yanbing Zhang for his valuable support throughout this project.
Additionally, our resource construction pipeline and experiments have contributed to the development of the image editing model in JoyAI-Image.
@article{xiao2026spatialedit,
title = {SpatialEdit: Benchmarking Fine-Grained Image Spatial Editing},
author = {Xiao, Yicheng and Zhang, Wenhu and Song, Lin and Chen, Yukang and Li, Wenbo and Jiang, Nan and Ren, Tianhe and Lin, Haokun and Huang, Wei and Huang, Haoyang and Li, Xiu and Duan, Nan and Qi, Xiaojuan},
journal = {arXiv preprint arXiv:2604.04911},
year = {2026}
}