ECCV 2024
Shihe Shen*, Huachen Gao*, Wangze Xu, Rui Peng, Luyang Tang, Kaiqiang Xiong, Jianbo Jiao, Ronggang Wang†
Peking University, Peng Cheng Laboratory, University of Birmingham
* Equal Contribution, † Corresponding Author
DiGARR is a novel neural rendering framework for robust radiance fields, built on disentangled generation and aggregation.
Note: This code is currently under organization and not ready to run directly. Please wait for code organization to complete or contact the authors for the full version.
- Clone the repository
- Install dependencies:
pip install -r requirements.txt

The project supports the LLFF dataset. Please download the corresponding data and place it in the correct location.
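Since the README does not document the on-disk layout, a small sanity check can confirm the data landed where the training scripts expect it. The sketch below assumes a typical LLFF root of `data/nerf_llff_data` (this path is an assumption; adjust it to your setup):

```shell
# Hedged sketch: check that all eight LLFF scene folders are present.
# DATA_ROOT is an assumed location, not documented by the project.
DATA_ROOT="data/nerf_llff_data"
MISSING=""
for SCENE in fern flower fortress horns leaves orchids room trex; do
  [ -d "$DATA_ROOT/$SCENE" ] || MISSING="$MISSING $SCENE"
done
if [ -n "$MISSING" ]; then
  echo "Missing scenes:$MISSING"
else
  echo "All LLFF scenes found under $DATA_ROOT"
fi
```

If any scene is reported missing, re-check the download and the directory it was extracted into before launching training.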
The project provides a convenient training script, scripts/train/static_train_hybrid.sh, for training on the LLFF dataset:
# Syntax: ./scripts/train/static_train_hybrid.sh <GPU_ID> <CONFIG_NAME>
# Example: train fern scene
./scripts/train/static_train_hybrid.sh 0 fern
# Train flower scene
./scripts/train/static_train_hybrid.sh 0 flower
# Train horns scene
./scripts/train/static_train_hybrid.sh 0 horns

- GPU_ID: CUDA device ID (e.g., 0, 1, 2...)
- CONFIG_NAME: Configuration file name, matching the LLFF scene name
- fern - Fern scene
- flower - Flower scene
- fortress - Fortress scene
- horns - Horns scene
- leaves - Leaves scene
- orchids - Orchids scene
- room - Room scene
- trex - T-Rex scene
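To run every scene back to back, the per-scene commands above can be wrapped in a loop. This is a sketch, not part of the project's scripts; by default it only prints the commands (set DRY_RUN=0 to actually launch training):

```shell
# Sketch: train all eight LLFF scenes sequentially on one GPU.
# Scene names come from the list above; DRY_RUN is a convenience flag
# introduced here, not an option of the project's script.
SCENES="fern flower fortress horns leaves orchids room trex"
GPU_ID=0
for SCENE in $SCENES; do
  CMD="./scripts/train/static_train_hybrid.sh $GPU_ID $SCENE"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"
  else
    $CMD
  fi
done
```

Running the scenes sequentially on a single GPU avoids contention; with multiple GPUs, the loop can instead dispatch different GPU_IDs per scene.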
- Code init
- Code debugging, organization, and refactoring
- NeRF Blender dataset/training
- Dependency file organization (requirements.txt)
- Pre-trained model preparation
@InProceedings{digarr,
author="Shen, Shihe and Gao, Huachen and Xu, Wangze and Peng, Rui and Tang, Luyang and Xiong, Kaiqiang and Jiao, Jianbo and Wang, Ronggang",
title="Disentangled Generation and Aggregation for Robust Radiance Fields",
booktitle="Computer Vision -- ECCV 2024",
year="2025",
}