Luzhou Ge · Xiangyu Zhu · Zhuo Yang · Xuesong Li
Beijing Institute of Technology
✅ Release supplementary experimental materials for language-guided object retrieval on the Replica dataset.
✅ Release the LVLM prompts used for multi-layer scene graph construction.
✅ Open-source the deployment code for the RealSense D455.
✅ Update the detailed tutorial and code of DynamicGSG.
DynamicGSG has been tested with Python 3.10 and the following combinations: Torch 1.12.1 & CUDA 11.6, Torch 2.3.0 & CUDA 12.1, and Torch 2.7.0 & CUDA 12.8.
conda create -n dgsg python=3.10
conda activate dgsg
conda install cuda-toolkit==12.1.1 -c conda-forge
conda install opencv -c conda-forge
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
git clone https://github.com/GeLuzhou/Dynamic-GSG.git
cd Dynamic-GSG
git submodule update --init --recursive
export CUDA_HOME=/your/env/path/of/dgsg
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib:$LD_LIBRARY_PATH
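Before compiling the CUDA extensions below, it can help to confirm that PyTorch sees the toolkit you just pointed `CUDA_HOME` at. This is a minimal sanity check, not part of the original setup:

```python
# Quick check that the CUDA toolchain is visible to PyTorch
# before building the rasterizer and GroundingDINO extensions.
import torch
from torch.utils.cpp_extension import CUDA_HOME

print("torch:", torch.__version__)           # e.g. 2.3.1+cu121
print("CUDA available:", torch.cuda.is_available())
print("CUDA_HOME:", CUDA_HOME)               # should point at your dgsg environment
```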
# Install gaussian rasterization
cd submodules/diff-gaussian-rasterization-w-depth
pip install .
# Install GroundingDINO
cd ../GroundingDINO
pip install .
cd ..
pip install -r requirements.txt
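After the installation finishes, a quick import test can confirm that the compiled submodules are usable. The module names below are assumed from the upstream projects and may differ in your checkout:

```python
# Optional post-install check; module names are assumed from the upstream
# submodules (GroundingDINO, diff-gaussian-rasterization-w-depth).
import torch
import groundingdino                  # installed from submodules/GroundingDINO
import diff_gaussian_rasterization    # installed from the rasterizer submodule

print("CUDA available:", torch.cuda.is_available())
print("GroundingDINO:", groundingdino.__file__)
print("Rasterizer:", diff_gaussian_rasterization.__file__)
```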
DATAROOT is ./data by default. Please change the input_folder path in the scene-specific config files if datasets are stored somewhere else on your machine.
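For reference, a scene-specific config entry might look roughly like the following. This is a hypothetical sketch: only `input_folder` is named in this README, and the actual structure of the config files may differ.

```python
# Hypothetical excerpt of a scene-specific config (e.g. configs/replica/dgsg.py);
# only `input_folder` comes from this README, the rest is illustrative.
config = dict(
    input_folder="./data/Replica/room0",  # change this if your data lives elsewhere
)
```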
Download the data as shown below; it will be saved to the ./data/Replica folder. Note that the Replica data was generated by the authors of iMAP (and is hosted by the authors of NICE-SLAM). Please cite iMAP if you use the data.
mkdir -p data
cd data
wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip
unzip Replica.zip
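If you want to verify the download, the following optional check (run from the repository root) looks for the sequence folders; the sequence names are assumed from the standard NICE-SLAM Replica release:

```python
# Optional check that the archive unpacked as expected.
import os

sequences = ["room0", "room1", "room2",
             "office0", "office1", "office2", "office3", "office4"]
missing = [s for s in sequences if not os.path.isdir(os.path.join("data/Replica", s))]
print("Missing sequences:", missing or "none")
```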
Please follow the data downloading and image undistortion procedure on the ScanNet++ website.
Following SplaTAM, we use the following sequences:
8b5caf3398
b20a261fdf
To run DynamicGSG without dynamic updates, set `whether_to_update` to `False` in the configuration file and use the following command:
python scripts/dynamic_gsg_real_ssim.py configs/replica/dgsg.py
To visualize the final interactive reconstruction, please use the following command:
python viz_scripts/classifier_vis.py configs/replica/dgsg.py
Before running, fill in your ChatGPT API key at line 403 of vlm_utils/vlm.py to enable the required AI service.
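If you prefer not to hard-code the key, one option is to read it from an environment variable when editing that line. This is only a sketch; the surrounding code in vlm_utils/vlm.py is not shown here, and the variable name OPENAI_API_KEY is a common convention rather than something defined by DynamicGSG.

```python
# Hypothetical replacement for the hard-coded key around line 403 of
# vlm_utils/vlm.py; reading from an environment variable keeps the key
# out of version control.
import os

api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set OPENAI_API_KEY before running the visualizer.")
```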
- press "r": display the RGB reconstruction results.
- press "c": search objects through CLIP semantic feature only.
- press "l": search objects through LLM-based method.
- press "o": search objects through hierarchical scene graph.
- press "p": search objects through object idx, such as '1', '2', '3'...
- press "a": find all related objects in the scene graph.
- Collect data in the real world according to the data collection tutorial.
To run DynamicGSG with dynamic updates, please use the following command:
python scripts/dynamic_gsg_real_ssim.py configs/realsense/dgsg.py
- Set `whether_to_update` in the configuration file to `True`.
- Modify `frame_begin_update` in the configuration file to set the frame from which the scene is considered largely rebuilt and environment changes start being detected (see the sketch after this list).
- For simplicity, the open-source code here only performs updates at the level of individual objects and does not update the full scene graph. We will release the refactored, more readable code later.
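Putting the two options together, the relevant part of configs/realsense/dgsg.py might look roughly like this. Only the two key names come from this README; the values and surrounding structure are assumptions.

```python
# Hypothetical excerpt of configs/realsense/dgsg.py; values are illustrative.
whether_to_update = True     # enable dynamic (object-level) updates
frame_begin_update = 300     # frame after which the scene is considered rebuilt
                             # and change detection begins
```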
To visualize the final interactive reconstruction, please use the following command:
python viz_scripts/classifier_vis.py configs/realsense/dgsg.py
Before running, fill in your ChatGPT API key at line 403 of vlm_utils/vlm.py to enable the required AI service.
- press "r": display the RGB reconstruction results.
- press "c": search objects through CLIP semantic feature only.
- press "l": search objects through LLM-based method.
- press "o": search objects through hierarchical scene graph.
- press "p": search objects through object idx, such as '1', '2', '3'...
- press "a": find all related objects in the scene graph.
This work builds on many amazing research works and open-source projects. Many thanks to all the authors for sharing!
If you find our paper and code useful, please cite us:
@INPROCEEDINGS{11246569,
author={Ge, Luzhou and Zhu, Xiangyu and Yang, Zhuo and Li, Xuesong},
booktitle={2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={DynamicGSG: Dynamic 3D Gaussian Scene Graphs for Environment Adaptation},
year={2025},
volume={},
number={},
pages={2232-2239},
keywords={Three-dimensional displays;Navigation;Source coding;Semantic segmentation;Semantics;Rendering (computer graphics);Indoor environment;Intelligent agents;Intelligent robots},
doi={10.1109/IROS60139.2025.11246569}
}

