Compensating Spatiotemporally Inconsistent Observations for Online Dynamic 3D Gaussian Splatting (SIGGRAPH 2025)

arXiv | Project Page

Youngsik Yun1, Jeongmin Bae1, Hyunseung Son1,
Seoha Kim2, Hahyun Lee2, Gun Bang2,
Youngjung Uh1†

1Yonsei University   2Electronics and Telecommunications Research Institute (ETRI)
†Corresponding author


Official repository for the paper "Compensating Spatiotemporally Inconsistent Observations for Online Dynamic 3D Gaussian Splatting".

TODO

  • Release 3DGStream+Ours.
  • Code refactoring.
  • Add other baselines.
  • Etc.

Quick Start

We provide a Docker image based on CUDA 11.7, built for Compute Capability 8.6 (e.g., RTX A5000):

docker pull bbangsik/colmap:or2
docker run -it --name or2 \
-v ${YOUR_DATASET_PATH}:/root/OR2/data \
-v ${YOUR_CKPT_PATH}:/root/OR2/outputs \
--gpus all --ipc=host bbangsik/colmap:or2 /bin/bash
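
For example, assuming the dataset and checkpoints live under ~/datasets/n3v and ~/or2_outputs on the host (hypothetical paths; adjust to your setup), the mount variables can be set as:

export YOUR_DATASET_PATH=~/datasets/n3v  # hypothetical host path
export YOUR_CKPT_PATH=~/or2_outputs      # hypothetical host path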

You can then run the script below, which uses the first-frame reconstruction results we generated while preparing the code release:

cd /root/OR2
bash scripts/run_w_pretrained_ckpt.sh

Alternatively, you can run the inference script used in our submission:

bash scripts/inference_sample.sh

This script downloads the preprocessed data and the pretrained checkpoint.

Default Directory Structure

OR2/
|---data/n3v
|   |---coffee_martini_wo_cam13
|   |   |---frame000000
|   |   |---...
|   |---cook_spinach
|   |---...
|---3dgs_init_best
|   |---coffee_martini_wo_cam13
|   |---...
|---outputs
|   |---coffee_martini_wo_cam13
|   |   |---results.json
|   |---...
|---ntc
|   |---coffee_martini_wo_cam13_ntc_params_F_4.pth
|   |---...

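To verify that a prepared dataset matches this layout, a quick check (our own convenience snippet, not part of the release scripts) is:

for scene in data/n3v/*/; do
    echo "$scene: $(ls "$scene" | head -n 1)"  # expect frame000000 as the first entry
done
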
Environment Setup

You can use the same Python environment configured for 3DGStream.

git clone https://github.com/bbangsik13/OR2.git
cd OR2
conda create -n or2 python=3.9
conda activate or2
pip install -r requirements.txt
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
pip install kornia
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install submodules/diff-gaussian-rasterization/
pip install submodules/simple-knn/ 

We use PyTorch 2.0.0+cu118 in our environment.
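
To sanity-check the installation afterwards (a minimal check, assuming a CUDA-capable GPU is visible in the environment):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # expect 2.0.0+cu118 True
python -c "import tinycudann, kornia"  # should import without errors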

Data Preparation

We recommend changing DATA_PATH in scripts/download_n3v.sh because the dataset is large.

bash scripts/download_n3v.sh
bash scripts/preprocess_data.sh
bash scripts/colmap.sh
python scripts/get_static_masks.py -m mask -s data
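
After preprocessing, each scene directory under data/n3v should contain one folder per frame. A quick sanity count (our own check; full N3V sequences are typically 300 frames) looks like:

ls -d data/n3v/cook_spinach/frame* | wc -l  # expect roughly 300 frame folders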

Run

The main experiments run under nohup, and logs are saved in the logs folder.
Modify the scripts as needed.

First Frame Reconstruction

bash scripts/run_init.sh

Then run:

python scripts/find_best_init.py

Sequential Frame Reconstruction

python scripts/cache_warmup.py

Then run:

bash scripts/train_frames.sh
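
Because training runs under nohup, progress can be followed from the log files. The exact filenames are whatever the scripts write under logs/; the pattern below is illustrative:

tail -f logs/*.log  # adjust the glob to match the actual log names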

Hardware Requirements

  • OS: The code was tested on Ubuntu 18.04, 20.04, and 22.04. It should be compatible with most recent Linux distributions as well.
  • GPU: At least 8 GB of VRAM is required. The code has been tested on RTX 2080 Ti and RTX A5000.
  • Training duration: Sequential frame reconstruction takes approximately 13 seconds per frame; this may vary with the environment.

Note

While preparing the code for release, we found that some experiments in the paper were conducted using frames extracted from resized videos.
The results obtained with the data preparation described above are shown below:

| Method | PSNR↑ | SSIM↑ | mTV×100↓ |
|---|---|---|---|
| 3DGStream | 32.07 | 0.947 | 0.205 |
| 3DGStream+Ours | 32.54 | 0.949 | 0.110 |
| Δ | +0.47 | +0.002 | -0.095 |

Our method still outperforms the baseline.
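
Per-scene metrics are written to outputs/<scene>/results.json (see the directory structure above). To inspect one, you can pretty-print it; the exact JSON keys depend on the evaluation script:

python -m json.tool outputs/coffee_martini_wo_cam13/results.json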

Acknowledgments

This code is based on 3DGS, Dynamic3DG, 3DGStream, and HiCoM. We would like to thank the authors of these papers for their hard work.

Bibtex

@inproceedings{Yun_2025,
   series={SIGGRAPH Conference Papers '25},
   title={Compensating Spatiotemporally Inconsistent Observations for Online Dynamic 3D Gaussian Splatting},
   url={http://dx.doi.org/10.1145/3721238.3730678},
   DOI={10.1145/3721238.3730678},
   booktitle={Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
   publisher={ACM},
   author={Yun, Youngsik and Bae, Jeongmin and Son, Hyunseung and Kim, Seoha and Lee, Hahyun and Bang, Gun and Uh, Youngjung},
   year={2025},
   month=jul,
   pages={1--9},
   collection={SIGGRAPH Conference Papers '25}
}
