This software project accompanies the research paper, PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields and can be used to reproduce the results in the paper.
PBR-NeRF is a differentiable rendering framework for joint geometry, material and lighting estimation from multi-view images. We build upon NeILF++ and introduce two novel physics-based losses: (1) the Conservation of Energy Loss and (2) the NDF-weighted Specular Loss.
Our novel losses represent two intuitive priors that we use to achieve state-of-the-art material estimation without compromising novel view synthesis quality.
Please see code/model/loss.py for the exact loss implementation.
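For intuition, below is a minimal, hedged sketch of what an energy-conservation penalty on estimated BRDF parameters could look like, assuming a standard metallic-workflow BRDF with base color and metallic maps. The actual loss formulations used in the paper (including the NDF-weighted Specular Loss) are defined in code/model/loss.py and may differ in detail.

```python
import torch

def energy_conservation_penalty(base_color: torch.Tensor, metallic: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch only: penalize BRDF estimates whose combined
    diffuse and specular albedo exceeds 1, i.e. materials that would
    reflect more energy than they receive. See code/model/loss.py for the
    exact Conservation of Energy Loss used by PBR-NeRF."""
    # Common metallic-workflow approximation of the specular albedo (F0).
    f0 = 0.04 * (1.0 - metallic) + base_color * metallic
    # Diffuse albedo is the dielectric (non-metallic) part of the base color.
    diffuse = base_color * (1.0 - metallic)
    # Penalize only the energy above 1, per color channel.
    excess = torch.relu(diffuse + f0 - 1.0)
    return excess.mean()
```

In practice such a term would be added to the rendering loss with a small weight, so it acts as a soft physical prior rather than a hard constraint.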
We provide the following scripts to reproduce key results in the paper. The environment setup is described in the following section.
Note that the final evaluation results are stored in:
pbrnerf/code/outputs/joint/<run>/evaluation/report_evaluation.json
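The contents of this report depend on the evaluation configuration, so the snippet below (an illustrative example, not part of the repository) simply loads and pretty-prints whatever metrics it contains:

```python
import json
from pathlib import Path

# Replace <run> with the name of your training run.
report_path = Path("pbrnerf/code/outputs/joint/<run>/evaluation/report_evaluation.json")

with report_path.open() as f:
    metrics = json.load(f)

# The exact keys depend on the evaluation config, so just pretty-print them.
print(json.dumps(metrics, indent=2))
```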
Our full proposed method can be run on the NeILF++ and DTU datasets using the following scripts. We also provide helper scripts that run the full method on all dataset scenes to reproduce our experiments.
sbatch train_pbrnerf_neilfpp.sh
sbatch train_pbrnerf_dtu.sh
Run the following scripts to reproduce the SOTA experiments on the NeILF++ and DTU datasets.
./sota_neilfpp.sh
./sota_dtu.sh
Run the following script to reproduce the ablation study on the NeILF++ City scene.
./ablation_pbr_losses.sh
We provide two options for environment setup: pip and conda.

Option 1: pip

- Update setup_pip.sh with your correct local paths (if needed). Verify that all paths (e.g. OptiX_INSTALL_DIR and LD_LIBRARY_PATH) in setup_pip.sh are set up correctly.
- Install NVIDIA OptiX 7.3 for Linux using your NVIDIA account. You may want to change the install prefix below to your own local machine's scratch directory.
sh NVIDIA-OptiX-SDK-7.3.0-linux64-x86_64.sh --include-subdir --skip-license
- Run setup_pip.sh. This should take around 60 mins to run.
sbatch setup_pip.sh
- Log into wandb and paste your API key when prompted.
mamba activate neilfpp2080
wandb login
Option 2: conda

- Install mamba for a faster conda alternative.
conda install mamba -n base -c conda-forge
- Update setup_conda.sh with your conda and mamba paths (if needed). This is only required if you store your conda and mamba paths differently than expected by the script.
- Install NVIDIA OptiX 7.3 for Linux using your NVIDIA account. You may want to change the install prefix below to your own local machine's scratch directory.
sh NVIDIA-OptiX-SDK-7.3.0-linux64-x86_64.sh --include-subdir --skip-license
- Verify that all other paths (e.g. OptiX_INSTALL_DIR and LD_LIBRARY_PATH) in setup_conda.sh are set up correctly.
- Run setup_conda.sh. This should take around 60 mins to run.
./setup_conda.sh
- Log into wandb and paste your API key when prompted.
mamba activate neilfpp2080
wandb login
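As a quick sanity check after either setup option, you can verify the OptiX-related environment variables with a short Python snippet. This is illustrative only; the setup_*.sh scripts remain the authoritative place where these paths are defined.

```python
import os
from pathlib import Path

# Illustrative check only; adapt the variable names to your setup_*.sh files.
for var in ("OptiX_INSTALL_DIR", "LD_LIBRARY_PATH"):
    value = os.environ.get(var)
    if not value:
        print(f"{var} is not set")
        continue
    # LD_LIBRARY_PATH may contain multiple colon-separated entries.
    for entry in value.split(os.pathsep):
        exists = Path(entry).is_dir()
        print(f"{var}: {entry} ({'ok' if exists else 'missing'})")
```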
Download the synthetic dataset, the preprocessed DTU dataset, and the HDR dataset from the original NeILF++ dataset source. Note that for the synthetic dataset, all scenes share the same geometry files and BRDF ground truths. We only provide the geometry files and BRDF GT in synthetic_city; you can structure the inputs of another scene by replacing the image folder (synthetic_city/inputs/images) with that scene's input images (see the sketch after the directory listing below).
Extract the synthetic dataset to ~/scratch/datasets/neilfpp_synthetic. Your directories should look like this:
$ tree -L 1 ~/scratch/datasets/neilfpp_synthetic/
/home/user/scratch/datasets/neilfpp_synthetic/
├── synthetic_castel
├── synthetic_castel_mix
├── synthetic_city
├── synthetic_city_mix
├── synthetic_studio
└── synthetic_studio_mix
6 directories, 0 files
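Because the geometry files and BRDF ground truth ship only with synthetic_city, preparing another synthetic scene amounts to copying them over while keeping that scene's own images. A rough sketch, where paths and scene names are illustrative assumptions based on the layout above:

```python
import shutil
from pathlib import Path

# Illustrative sketch only: reuse the geometry and BRDF ground truth that ship
# with synthetic_city for another synthetic scene, keeping that scene's own
# image folder. Paths and folder names below are assumptions, not fixed APIs.
root = Path.home() / "scratch/datasets/neilfpp_synthetic"
src = root / "synthetic_city"
dst = root / "synthetic_studio"

# Copy everything except the input images, which stay scene-specific.
shutil.copytree(
    src,
    dst,
    ignore=shutil.ignore_patterns("images"),
    dirs_exist_ok=True,
)
```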
Input data should be structured as follows:
.
├── inputs
│ ├── images
│ ├── position_maps (optional)
│ ├── depth_maps (optional)
│ ├── normal_maps (optional)
│ ├── model (optional)
│ │ ├── components of a textured mesh (synthetic dataset)
│ │ └── oriented_pcd.ply (other datasets)
│ └── sfm_scene.json
│
└── ground_truths (optional)
└── materials
├── kd
├── roughness
└── metallic
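A small, illustrative helper for checking that a scene directory matches this layout (the required/optional split follows the tree above; the function name is our own):

```python
from pathlib import Path

def check_scene_inputs(scene_dir: str) -> None:
    """Illustrative check of the input layout described above.
    Only images and sfm_scene.json are always required; the rest is optional
    or dataset-dependent."""
    scene = Path(scene_dir)
    required = [scene / "inputs/images", scene / "inputs/sfm_scene.json"]
    optional = [
        scene / "inputs/position_maps",
        scene / "inputs/depth_maps",
        scene / "inputs/normal_maps",
        scene / "inputs/model",
        scene / "ground_truths/materials",
    ]
    for path in required:
        print(f"[required] {path}: {'found' if path.exists() else 'MISSING'}")
    for path in optional:
        print(f"[optional] {path}: {'found' if path.exists() else 'not provided'}")
```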
The sfm_scene.json file stores the metadata and the SfM result of a scene. Please refer to load_cams_from_sfmscene in utils/io.py for details; a minimal loading sketch follows the list below.
- Camera intrinsics are stored in sfm_scene['camera_track_map']['images']['INDEX']['intrinsic'].
- Camera extrinsics are stored in sfm_scene['camera_track_map']['images']['INDEX']['extrinsic'].
- The image list is stored in sfm_scene['image_list']['file_paths'] (image index -> image path).
- The bounding box transformation is stored in sfm_scene['bbox']['transform'], which maps the eight bounding box corners to normalized points at {1/-1, 1/-1, 1/-1}. It is used to compute the scale matrix for coordinate normalization.
- Other fields can be ignored.
The image names should be stored in sfm_scene['image_list']['file_paths']. You can use .jpg and .png formats for LDR inputs, or .exr and .tiff formats for HDR inputs.
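Based on the fields listed above, a minimal sketch of inspecting sfm_scene.json might look like the following. The exact nested format of each entry is not specified here, so treat this as illustrative and rely on load_cams_from_sfmscene for the real parsing.

```python
import json

# Minimal sketch of inspecting sfm_scene.json based on the fields listed above.
with open("inputs/sfm_scene.json") as f:
    sfm_scene = json.load(f)

# image index -> image path
image_paths = sfm_scene["image_list"]["file_paths"]
print(f"{len(image_paths)} images")

# Per-image camera parameters (intrinsic and extrinsic).
for index, cam in sfm_scene["camera_track_map"]["images"].items():
    intrinsic = cam["intrinsic"]
    extrinsic = cam["extrinsic"]

# Bounding box transform used to compute the scale matrix for normalization.
bbox_transform = sfm_scene["bbox"]["transform"]
```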
Geometry input is required for the synthetic dataset. It should be provided either as rendered position maps for each view (in inputs/position_maps) or as rendered depth maps for each view (in inputs/depth_maps). For other datasets, an oriented point cloud is preferred (in inputs/model/oriented_pcd.ply).
A position atlas (in inputs/model/pos_tex; see the sample data for details) should be provided for exporting BRDF texture maps.
We sincerely thank the authors of NeILF++ for publicly sharing their code and dataset, which served as the foundation of our work. PBR-NeRF builds upon the NeILF++ framework and their contributions have been invaluable in enabling our research.

