This repository covers the "Towards Ambient Lighting Normalization" paper, accepted for publication at ECCV 2024. Further materials covering our work will be made available through this repo; check the current README for updates.
You can find a collection of output images here.
IFBlend checkpoints trained on AMBIENT6K are available here.
- Clone the current repository:
  ```
  git clone https://github.com/fvasluianu97/IFBlend.git
  ```
- Download `checkpoints.zip` from here and unzip it in the repository root directory.
- Download `weights.zip` from here and unzip it in the repository root directory.
- Activate your Python virtual environment.
- Install the packages listed in `requirements.txt`.
- Test your IFBlend checkpoint:
  ```
  python eval.py --data_src ./data/AMBIENT6K --ckp_dir ./checkpoints --res_dir ./final-results --load_from IFBlend_ambient6k
  ```

In the next section you can find the URLs for the training, testing, and benchmark splits of the AMBIENT6K dataset. Download the available resources into a file tree similar to the following structure:
```
.
├── checkpoints
│   └── IFBlend_ambient6k
│       └── best
│           └── checkpoint.pt
├── data
│   └── AMBIENT6K
│       ├── Benchmarck
│       ├── Test
│       │   ├── 0_gt.png
│       │   ├── 0_in.png
│       │   ├── 1_gt.png
│       │   └── 1_in.png
│       └── Train
│           ├── 0_gt.png
│           ├── 0_in.png
│           ├── 1_gt.png
│           └── 1_in.png
├── dataloader.py
├── dconv_model.py
├── eval.py
├── final-results
├── .gitignore
├── ifblend.py
├── laynorm.py
├── loaded_models
├── loss.py
├── metrics.py
├── model_convnext.py
├── perceptual_loss.py
├── README.md
├── refinement.py
├── requirements.txt
├── train.py
├── unet.py
├── utils_model.py
├── utils.py
└── weights
    └── convnext_xlarge_22k_1k_384_ema.pth
```
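Once the archives are unzipped, a quick sanity check can confirm the layout matches the tree above. A small sketch (the path names are taken from the structure shown; this helper is illustrative and not part of the released code):

```python
import os

# Entries the eval command expects, relative to the repository root.
EXPECTED = [
    "checkpoints/IFBlend_ambient6k/best/checkpoint.pt",
    "data/AMBIENT6K/Train",
    "data/AMBIENT6K/Test",
    "requirements.txt",
    "eval.py",
    "weights/convnext_xlarge_22k_1k_384_ema.pth",
]

def missing_entries(root: str) -> list[str]:
    """Return the expected files/directories absent under root."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]
```

Running `missing_entries(".")` from the repository root should return an empty list once everything is in place.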
The AMBIENT6K dataset is designed to drive research in the field of Lighting Normalization. It is a collection of 6000 high-resolution images affected by non-homogeneous lighting distributions. For each affected image we provide a ground-truth image of the same scene under near-perfect lighting conditions. The ground-truth images represent canonical lighting, characteristic of a professional photography setup.
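Since every degraded image is paired with a ground truth, restoration quality can be scored with full-reference metrics. Below is a minimal PSNR sketch in NumPy; it is an illustration only, and not necessarily the exact implementation used in `metrics.py`:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images, in dB."""
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a flat gray image vs. a version brightened by 10 levels.
gt = np.full((64, 64, 3), 128, dtype=np.uint8)
out = np.full((64, 64, 3), 138, dtype=np.uint8)
print(round(psnr(out, gt), 2))  # MSE = 100 -> 10 * log10(255^2 / 100) ≈ 28.13
```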
The images were shot with a Canon R6 mk2 camera, at 24 MP. The resolution of the images used in our experiments had to be limited due to computing resource constraints. The data is available in the Canon CR3 RAW image format, so software such as Adobe Lightroom can export it at the full 24 MP resolution. For reproducibility, we will also upload the downscaled version used in our ablations here.
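When exporting the CR3 files at a lower resolution, the aspect ratio should be preserved. The helper below computes the output size for a given long-side target; the 1600 px target is a placeholder for illustration, not the resolution used in our experiments:

```python
def fit_long_side(width: int, height: int, target_long: int) -> tuple[int, int]:
    """Scale (width, height) so the longer side equals target_long,
    preserving the aspect ratio (rounded to whole pixels)."""
    scale = target_long / max(width, height)
    return round(width * scale), round(height * scale)

# A 24 MP frame (6000 x 4000) downscaled to a 1600 px long side.
print(fit_long_side(6000, 4000, 1600))  # (1600, 1067)
```

The resulting size can then be passed to any exporter or to, e.g., Pillow's `Image.resize`.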
- Training data: RGB inp. img. | RAW inp. img. | RGB gt. img. | RAW gt. img. | scene metadata | object metadata
- Testing data: RGB inp. img. | RAW inp. img. | RGB gt. img. | RAW gt. img. | scene metadata | object metadata
The RAW images are organized per scene. Each scene is recorded in the metadata, and each scene corresponds to a single ground-truth RAW image. In the RGB data, each input image is paired with its corresponding ground-truth image.
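Following the `<id>_in.png` / `<id>_gt.png` naming shown in the file tree above, the RGB pairs can be recovered from the filenames alone. A minimal sketch (the actual pairing logic lives in `dataloader.py`):

```python
import os

def list_pairs(split_dir: str) -> list[tuple[str, str]]:
    """Return (input, ground-truth) path pairs for a Train/Test split,
    matching <id>_in.png with <id>_gt.png."""
    names = set(os.listdir(split_dir))
    pairs = []
    for name in sorted(names):
        if name.endswith("_in.png"):
            gt = name.replace("_in.png", "_gt.png")
            if gt in names:  # skip inputs with no ground truth
                pairs.append((os.path.join(split_dir, name),
                              os.path.join(split_dir, gt)))
    return pairs
```

For example, `list_pairs("./data/AMBIENT6K/Train")` would yield the `(0_in.png, 0_gt.png)`, `(1_in.png, 1_gt.png)`, … pairs.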
The following repositories were valuable resources for our work:
- https://github.com/megvii-research/NAFNet.git
- https://github.com/fvasluianu97/WSRD-DNSR.git
- https://github.com/liuh127/NTIRE-2021-Dehazing-DWGAN.git
We thank the authors for sharing their work!
Copyright (c) 2025 Computer Vision Lab, University of Wurzburg
Licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International) (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
The code is released for academic research use only. For commercial use, please contact Computer Vision Lab, University of Wurzburg. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.