
Differentiable Inverse Rendering with Interpretable Basis BRDFs

This repository contains the implementation of the paper:

Differentiable Inverse Rendering with Interpretable Basis BRDFs

Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek

CVPR, 2025

Installation

We recommend using a Conda environment. Install pytorch3d by following INSTALL.md.

conda create -n IIR python=3.9
conda activate IIR
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install numpy matplotlib tqdm imageio
pip install scikit-image plotly opencv-python open3d lpips kornia icecream plyfile submodules/diff-surfel-rasterization submodules/simple-knn
conda install pytorch3d -c pytorch3d
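
After installation, a quick sanity check can confirm that PyTorch, CUDA, and pytorch3d are all importable (a minimal sketch; the printed versions will depend on your setup):

python -c "import torch, pytorch3d; print(torch.__version__, torch.cuda.is_available(), pytorch3d.__version__)"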

Dataset

We use a multi-view flash image dataset consisting of 4 synthetic scenes and 2 real-world scenes.

You can download the dataset from Google Drive and place it in the corresponding folder.

Train and Evaluation

sh train.sh
sh evaluation.sh

You can control sparsity by adjusting lambda_weight_img_sparse and basis_merge_threshold.
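
For example, a hypothetical invocation might look like the line below. The parameter names come from this README, but the script name and flag syntax are assumptions; check train.sh to see how these values are actually passed, and treat the numbers as illustrative only.

# Hypothetical sketch; see train.sh for the actual script and argument syntax.
python train.py --lambda_weight_img_sparse 0.01 --basis_merge_threshold 0.05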

Citation

If you find this work useful in your research, please consider citing:

@inproceedings{chung2025differentiable,
  title={Differentiable Inverse Rendering with Interpretable Basis BRDFs},
  author={Chung, Hoon-Gyu and Choi, Seokjun and Baek, Seung-Hwan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}

Acknowledgement

Parts of our code are based on previous works: DPIR, 2D Gaussian Splatting, and R3DGS.
