
IFBlend

This repository accompanies the paper "Towards Ambient Lighting Normalization", accepted for publication at ECCV 2024. Further materials covering our work will be made available through this repository; follow this README for updates.

Results

You can find a collection of output images here.

Checkpoints

IFBlend checkpoints trained on AMBIENT6K are available here.

Installing

  • Clone the current repository

    git clone https://github.com/fvasluianu97/IFBlend.git
  • Download checkpoints.zip from here and unzip it in the root directory of the repository.

  • Download weights.zip from here and unzip it in the root directory of the repository.

  • Activate your Python virtual environment.

  • Install the packages mentioned in requirements.txt.

  • Test your IFBlend checkpoint:

   python eval.py --data_src ./data/AMBIENT6K --ckp_dir ./checkpoints --res_dir ./final-results --load_from IFBlend_ambient6k

Data

Below you can find the URLs for the training, testing, and benchmark splits of the AMBIENT6K dataset. Download the available resources into a file tree similar to the following structure:

.
├── checkpoints
│   └── IFBlend_ambient6k
│       └── best
│           └── checkpoint.pt
├── data
│   └── AMBIENT6K
│       ├── Benchmark
│       ├── Test
│       │   ├── 0_gt.png
│       │   ├── 0_in.png
│       │   ├── 1_gt.png
│       │   └── 1_in.png
│       └── Train
│           ├── 0_gt.png
│           ├── 0_in.png
│           ├── 1_gt.png
│           └── 1_in.png
├── dataloader.py
├── dconv_model.py
├── eval.py
├── final-results
├── .gitignore
├── ifblend.py
├── laynorm.py
├── loaded_models
├── loss.py
├── metrics.py
├── model_convnext.py
├── perceptual_loss.py
├── README.md
├── refinement.py
├── requirements.txt
├── train.py
├── unet.py
├── utils_model.py
├── utils.py
└── weights
    └── convnext_xlarge_22k_1k_384_ema.pth
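Before running eval.py, it can help to confirm that a checkout matches this tree. Below is a minimal sketch; `missing_paths` and `EXPECTED_PATHS` are hypothetical names (not part of the repository), and the paths simply mirror the structure above.

```python
import os

# Expected layout, mirroring the file tree in this README.
EXPECTED_PATHS = [
    "checkpoints/IFBlend_ambient6k/best/checkpoint.pt",
    "data/AMBIENT6K/Train",
    "data/AMBIENT6K/Test",
    "weights/convnext_xlarge_22k_1k_384_ema.pth",
]

def missing_paths(root):
    """Return the expected paths that do not exist under root."""
    return [p for p in EXPECTED_PATHS
            if not os.path.exists(os.path.join(root, p))]
```

Running `missing_paths(".")` from the repository root should return an empty list once the checkpoints, weights, and data are in place.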

AMBIENT6K

The AMBIENT6K dataset is designed to drive research in the field of Lighting Normalization. It is a collection of 6000 high-resolution images affected by a non-homogeneous lighting distribution. For each affected image we provide a ground-truth image of the same scene under near-perfect lighting conditions. The ground-truth images represent canonical lighting, characteristic of a professional photography setup.

The images were shot with a Canon R6 mk2 camera at 24 MP. The resolution of the images used in our experiments had to be limited due to computing-resource constraints. The data is available in the Canon CR3 RAW image format, so software such as Adobe Lightroom can export it at the full 24 MP resolution. For reproducibility, we will also upload here the version used in our ablations.

Data Structure

The RAW images are organized per scene. Each scene is noted in the metadata and corresponds to a single ground-truth RAW image. In the RGB data, each input image is paired with its corresponding ground-truth image.
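This pairing can be sketched in code. The snippet below assumes the `{index}_in.png` / `{index}_gt.png` naming shown in the file tree above; `list_pairs` is a hypothetical helper, not a function from the repository.

```python
import os

def list_pairs(split_dir):
    """Enumerate (input, ground-truth) path pairs in a Train/Test split,
    matching each {index}_in.png with its {index}_gt.png counterpart."""
    pairs = []
    for name in sorted(os.listdir(split_dir)):
        if name.endswith("_in.png"):
            gt_path = os.path.join(split_dir, name.replace("_in.png", "_gt.png"))
            if os.path.exists(gt_path):  # skip inputs without a ground truth
                pairs.append((os.path.join(split_dir, name), gt_path))
    return pairs
```

For example, `list_pairs("./data/AMBIENT6K/Train")` would yield pairs such as `("…/0_in.png", "…/0_gt.png")`.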

Acknowledgements

The following repositories were valuable resources in our work:

We thank the authors for sharing their work!

License

Copyright (c) 2025 Computer Vision Lab, University of Würzburg

Licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International) (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

The code is released for academic research use only. For commercial use, please contact Computer Vision Lab, University of Würzburg. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
