Multimodal large language models (MLLMs) have achieved remarkable progress in vision–language tasks, but they continue to struggle with spatial understanding. Existing spatial MLLMs often rely on explicit 3D inputs or architecture-specific modifications, and remain limited by their dependence on large-scale datasets or sparse supervision. To address these limitations, we introduce SpatialThinker, a 3D-aware MLLM trained with reinforcement learning (RL) to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations and reasoning towards an answer guided by dense spatial rewards. SpatialThinker rests on two key contributions: (1) a data synthesis pipeline that generates STVQA-7K, a high-quality spatial VQA dataset, and (2) online RL with a multi-objective dense spatial reward that enforces spatial grounding. SpatialThinker-7B outperforms supervised fine-tuning and the sparse-RL baseline on spatial understanding and real-world VQA benchmarks, nearly doubling the gain over the base model achieved by sparse RL and surpassing GPT-4o. These results demonstrate that combining spatial supervision with reward-aligned reasoning enables robust 3D spatial understanding from limited data and advances MLLMs towards human-level visual reasoning.
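The multi-objective dense spatial reward can be pictured as a weighted sum of an answer-accuracy term, an output-format term, and a dense grounding term that scores predicted scene-graph boxes against reference boxes. The sketch below is only an illustration of that structure under assumed terms and weights (the `spatial_reward` helper and its weights are hypothetical); the exact reward used to train SpatialThinker is defined in the paper and training scripts.

```python
# Illustrative sketch of a multi-objective dense spatial reward (an assumption,
# not the exact reward used to train SpatialThinker): weighted sum of answer
# accuracy, format adherence, and a dense IoU-based grounding term.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def spatial_reward(pred_answer, gold_answer, pred_boxes, gold_boxes, format_ok,
                   w_acc=1.0, w_format=0.2, w_ground=0.5):
    """Combine accuracy, format, and spatial-grounding terms into one scalar reward."""
    r_acc = float(pred_answer.strip().lower() == gold_answer.strip().lower())
    r_format = float(format_ok)  # e.g. scene-graph and answer tags are well-formed
    if pred_boxes and gold_boxes:
        # Dense grounding: each predicted box is scored by its best-matching reference box.
        r_ground = sum(max(box_iou(p, g) for g in gold_boxes) for p in pred_boxes) / len(pred_boxes)
    else:
        r_ground = 0.0
    return w_acc * r_acc + w_format * r_format + w_ground * r_ground
```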
- [2025/11/11] 🔥 Code base released.
- [2025/11/08] 🔥 Model Checkpoints and Dataset released.
- Python 3.9+
- transformers >= 4.49.0
- flash-attn >= 2.4.3
- vllm >= 0.7.3 (0.8.0 recommended)
Install the repository and its dependencies:

```bash
pip install -e .
```

Train SpatialThinker with GRPO and the dense spatial reward:

```bash
bash scripts/spatialthinker_3b_grpo.sh
bash scripts/spatialthinker_7b_grpo.sh
```

Train the vanilla GRPO (sparse-reward) baselines on STVQA-7K:

```bash
bash scripts/qwen_2_5_3b_stvqa_vanilla_grpo.sh
bash scripts/qwen_2_5_7b_stvqa_vanilla_grpo.sh
```

Merge the last actor checkpoint into a standalone model for inference:

```bash
python3 scripts/model_merger.py --local_dir path_to_your_last_actor_checkpoint
```
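A merged checkpoint (or one of the released checkpoints) can then be loaded with Hugging Face Transformers. The snippet below is a minimal sketch, assuming the Qwen2.5-VL architecture acknowledged at the end of this README and the OX-PIXL/SpatialThinker-3B checkpoint from the evaluation examples; the image path and question are placeholders, and the prompt is not necessarily the template used during training.

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "OX-PIXL/SpatialThinker-3B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Which object is closer to the camera, the chair or the lamp?"},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```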
To evaluate SpatialThinker or baseline models across spatial reasoning benchmarks, use the provided evaluation/evals.py script:

```bash
python3 evaluation/evals.py \
    --dataset <dataset_name> \
    --template <prompt_template> \          # e.g. reasoning, no_reasoning, spatial_thinker
    --model_path <model_or_checkpoint> \
    --cuda <gpu_id> \
    --batch_size <num_samples_per_step> \
    [--provider <inference_backend>] \
    [--processor_name <tokenizer_or_processor>] \
    [--custom_filename <output_name>]
```

Evaluate SpatialThinker-3B on BLINK-Spatial:

```bash
python3 evaluation/evals.py \
    --dataset blink-spatial \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 4
```

Evaluate SpatialThinker-3B on SpatialBench:

```bash
python3 evaluation/evals.py \
    --dataset spatialbench \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 2
```

Evaluate GPT-4o on STVQA via the OpenAI API:

```bash
python3 evaluation/evals.py \
    --dataset stvqa \
    --template reasoning \
    --model_path gpt-4o-2024-05-13 \
    --provider openai \
    --batch_size 1
```

Evaluate Claude 3.5 Sonnet on STVQA via the Anthropic API:

```bash
python3 evaluation/evals.py \
    --dataset stvqa \
    --template reasoning \
    --model_path claude-3-5-sonnet \
    --provider anthropic \
    --batch_size 1
```

Supported values for `--dataset`:

```
cv-bench, cv-bench-2D, cv-bench-3D, blink-spatial, blink-depth, blink-object,
blink-counting, blink-multi-view, blink-jigsaw, realworld_qa, spatialbench, mmvp,
3dsrbench, lego, spatialreasoner, robospatial, robospatial_rgb, stvqa, hallusionbench
```
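To sweep several of these benchmarks in one run, evals.py can be driven from a small script. The loop below is a convenience sketch rather than part of the repository; the dataset subset, template, checkpoint, GPU id, and batch size simply mirror the examples above.

```python
import subprocess

# Subset of the supported benchmarks listed above; adjust as needed.
DATASETS = ["cv-bench", "blink-spatial", "spatialbench", "3dsrbench", "stvqa"]

for dataset in DATASETS:
    # Invoke the provided evaluation script once per benchmark.
    subprocess.run(
        [
            "python3", "evaluation/evals.py",
            "--dataset", dataset,
            "--template", "spatial_thinker",
            "--model_path", "OX-PIXL/SpatialThinker-3B",
            "--cuda", "0",
            "--batch_size", "4",
        ],
        check=True,
    )
```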
- Release Training Code
- Release Evaluation Code
- Release Model Checkpoints
- Release STVQA-7K Training Dataset
- Release STVQA-7K Data Generation Pipeline
If you find this repository useful in your project, please consider giving a ⭐ and citing:
```bibtex
@misc{batra2025spatialthinkerreinforcing3dreasoning,
      title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
      author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
      year={2025},
      eprint={2511.07403},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.07403},
}
```

This project builds upon the following open-source frameworks and works:
- EasyR1 — An efficient, scalable, multi-modality RL training framework based on veRL
- LLaMA-Factory — Unified efficient fine-tuning of 100+ LLMs & VLMs
- Qwen2.5-VL — Multimodal LLM series from the Qwen family
💡 For more details, visit the project page and our paper on arXiv.
