MVPGS is a few-shot novel view synthesis method based on 3D Gaussian Splatting. Details are described in our paper:
MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views
Wangze Xu, Huachen Gao, Shihe Shen, Rui Peng, Jianbo Jiao, Ronggang Wang
ECCV 2024 (arxiv | project page)
📍 If you find any bugs in our code, please feel free to open an issue.
⭐️ Update:
- [2024/10/21] Results, including optimized models and rendered images, are now available at this link.
```shell
# clone this repository
git clone https://github.com/zezeaaa/MVPGS.git --recursive
# or: git clone git@github.com:zezeaaa/MVPGS.git --recursive

# create environment
conda env create --file environment.yml
conda activate mvpgs
```
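After activating the environment, a quick sanity check can confirm that the key packages resolved. This is a hypothetical helper, not part of the repository; the package names are assumptions and may not match `environment.yml` exactly:

```python
import importlib.util

def check_packages(names):
    """Return {package: importable?} without actually importing anything."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    # Package names are assumptions; adjust to match environment.yml.
    for name, ok in check_packages(["torch", "torchvision", "numpy"]).items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```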
- Download LLFF from the official download link, unzip to `<your LLFF path>`.
- Download DTU dataset Rectified (123 GB) from the official website, unzip to `<your DTU_Rectified path>`.
- Download masks `submission_data.zip` (used for evaluation only) from this link, unzip to `<your DTU_mask path>`, then run

```shell
# Set original_dtu_path as <your DTU_Rectified path>
# Set output_path as <your DTU path>
bash scripts/prepare_dtu_dataset.sh
```

The preprocessed DTU colmap dataset is then generated in `<your DTU path>`, with the following structure:

```
<your DTU path>
├── scan8
│   ├── distorted
│   ├── images
│   ├── images_2
│   ├── images_4
│   ├── images_8
│   ├── sparse
│   ├── stereo
│   └── poses_bounds.npy
├── scan21
├── ...
```
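Since the DTU preprocessing produces a fixed per-scan layout, a small checker can catch incomplete runs before training. This is a hypothetical helper, not part of the repository; adjust `EXPECTED` if your preprocessed output differs:

```python
from pathlib import Path

# Entries expected inside each scanNN directory after preprocessing
# (an assumption based on the documented layout).
EXPECTED = ["distorted", "images", "images_2", "images_4", "images_8",
            "sparse", "stereo", "poses_bounds.npy"]

def missing_entries(scan_dir):
    """Return the expected entries absent from one scanNN directory."""
    scan = Path(scan_dir)
    return [e for e in EXPECTED if not (scan / e).exists()]

def check_dtu(root):
    """Map each incomplete scan folder under <your DTU path> to its missing entries."""
    problems = {}
    for scan in sorted(Path(root).glob("scan*")):
        missing = missing_entries(scan)
        if missing:
            problems[scan.name] = missing
    return problems
```

An empty dict from `check_dtu` means every scan folder contains all expected entries.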
- Download NVS-RGBD from the official website link, unzip to `<your NVS-RGBD path>`.
- To get all cameras in colmap format, run

```shell
# set dataset_path as <your NVS-RGBD path>
bash scripts/get_all_cams_for_nvsrgbd.sh
```
- Download the Tanks and Temples dataset preprocessed by NoPe-NeRF from this link, unzip to `<your T&T path>` (we use the first 50 frames of each scene for our experiments).
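The first-50-frames protocol for Tanks and Temples can be sketched as below. This is a hypothetical helper, not part of the repository; the frame file pattern is an assumption, and the selection relies on zero-padded filenames sorting in frame order:

```python
from pathlib import Path

def first_n_frames(scene_dir, n=50, pattern="*.jpg"):
    """Select the first n frames of a scene, sorted by filename.

    Mirrors the protocol of using the first 50 frames per scene;
    assumes zero-padded names so lexicographic order is frame order.
    """
    return sorted(Path(scene_dir).glob(pattern))[:n]
```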
Download the official pretrained MVSFormer weights (`MVSFormer.zip` and `MVSFormer-Blended.zip`) from the official link, and extract the pretrained models to `./pretrained/`.
Training and evaluation on LLFF:

```shell
# set data_path as <your LLFF path>
bash scripts/exps_llff.sh
```

Training and evaluation on DTU:

```shell
# set data_path as <your DTU path>
# set dtu_mask_path as <your DTU_mask path>
bash scripts/exps_dtu.sh
```

Training and evaluation on NVS-RGBD:

```shell
# set data_path as <your NVS-RGBD path>
bash scripts/exps_nvsrgbd.sh
```

Training and evaluation on T&T:

```shell
# set data_path as <your T&T path>
bash scripts/exps_tanks.sh
```
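All four experiment scripts follow the same pattern: set one or two path variables, then run the script. A small wrapper can make batch runs less error-prone. This sketch assumes the `exps_*.sh` scripts can read their paths from environment variables, which may not match how they are actually written; edit the scripts or this wrapper accordingly:

```python
import os

def build_invocation(script, **env_overrides):
    """Build the command and environment for one experiment script.

    Assumption: the exps_*.sh scripts take their paths (data_path,
    dtu_mask_path, ...) from the environment rather than hard-coding
    them; adjust if your local copies differ.
    """
    env = {**os.environ, **env_overrides}
    return ["bash", script], env

if __name__ == "__main__":
    cmd, env = build_invocation("scripts/exps_llff.sh", data_path="/data/llff")
    print(" ".join(cmd), "| data_path =", env["data_path"])
```

Pass the resulting `cmd` and `env` to `subprocess.run(cmd, env=env, check=True)` to launch a run.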
Results (including optimized models and rendered images) of the current version can be downloaded from this link.
If you find our work useful in your research, please consider citing our paper:

```bibtex
@inproceedings{xu2024mvpgs,
  title={{MVPGS}: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views},
  author={Xu, Wangze and Gao, Huachen and Shen, Shihe and Peng, Rui and Jiao, Jianbo and Wang, Ronggang},
  booktitle={European Conference on Computer Vision},
  pages={203--220},
  year={2024},
  organization={Springer}
}
```
Our code is heavily based on 3D Gaussian Splatting, and we use the rasterizer from DreamGaussian. We refer to Pose-Warping for the forward-warping implementation, and we use MVSFormer to predict MVS depth. We thank the authors for the excellent code they provide.
