
Perception-Inspired Color Space Design for Photo White Balance Editing

Official PyTorch Implementation of our WACV 2026 paper.

Yang Cheng¹, Ziteng Cui²*, Lin Gu³, Shenghan Su¹, Zenghui Zhang¹

¹ Shanghai Jiao Tong University
² The University of Tokyo
³ Tohoku University

* Corresponding author

🚀 Framework Overview

1. The Learnable HSI (LHSI) Color Space

Our proposed color space introduces a learnable luminance axis and adaptive nonlinear mapping functions.
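To make the idea concrete, below is a minimal PyTorch sketch of a learnable luminance axis combined with a learnable nonlinear mapping. The module name, parameterization, and initial weights are illustrative assumptions for this README, not the actual LHSI definition used in the paper or this repository.

import torch
import torch.nn as nn

class LearnableChannelMapping(nn.Module):
    # Illustrative only: a luminance axis with learnable RGB weights
    # followed by a learnable power-law nonlinearity.
    def __init__(self):
        super().__init__()
        # learnable RGB-to-luminance weights, initialized near Rec. 709
        self.lum_weights = nn.Parameter(torch.tensor([0.2126, 0.7152, 0.0722]))
        # learnable exponent of the nonlinear mapping
        self.gamma = nn.Parameter(torch.tensor(1.0))

    def forward(self, rgb):  # rgb: (B, 3, H, W) in [0, 1]
        w = torch.softmax(self.lum_weights, dim=0)  # positive weights that sum to 1
        lum = (rgb * w.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)
        return lum.clamp(min=1e-6) ** self.gamma.clamp(min=0.1)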

2. DCLAN Architecture

The overall pipeline uses the MambaVision Module (MVM) and the Cross Attention Module (CAM) to process the disentangled features.
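For readers unfamiliar with cross attention, the sketch below shows a generic cross-attention block between two feature streams. The dimensions, normalization, and residual choices are assumptions for illustration and do not necessarily match the CAM used in this repository.

import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    # Illustrative cross attention: queries from one stream, keys/values from the other.
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_a, feat_b):  # each (B, N, dim) token sequence
        q = self.norm_q(feat_a)
        kv = self.norm_kv(feat_b)
        out, _ = self.attn(q, kv, kv)
        return feat_a + out  # residual connection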

⚡ Get Started

  • System: Linux (Recommended for Mamba compilation)
  • Python: 3.10 (Recommended)
  • CUDA: 11.8 (Required for Mamba-SSM)
  • PyTorch: 2.0+
git clone https://github.com/YangCheng58/WB_Color_Space.git
cd WB_Color_Space
pip install -r requirements.txt
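After installing, a quick sanity check (illustrative, not part of the repository) can confirm that the CUDA build of PyTorch and the Mamba-SSM package are importable:

import torch

print(torch.__version__, torch.version.cuda)  # expect a 2.x build against CUDA 11.8
print(torch.cuda.is_available())              # should print True on a CUDA machine

import mamba_ssm  # raises ImportError if Mamba-SSM failed to compile
print("mamba-ssm import OK")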

📂 Dataset Preparation

We evaluate our method on the Rendered WB Dataset introduced by Afifi et al.

Following the data organization and evaluation protocol described in Deep White-Balance Editing (CVPR 2020), please download the dataset and organize the files as follows.

1. Download Dataset

Please download the full dataset (Set1, Set2, and Cube+) from the Official Dataset Repository.

2. Directory Structure

Extract and organize the data into the ./dataset/ directory:

./dataset/
├── Set1_all/                  # Contains Inputs and GTs from Set1
├── Set2_input_images/         # Testing inputs from Set2
├── Set2_ground_truth_images/  # Testing GTs from Set2
├── Cube_input_images/         # Testing inputs from Cube+
└── Cube_ground_truth_images/  # Testing GTs from Cube+

Note: We follow the standard cross-validation protocol. Specifically, we use Fold 3 as the testing set. The detailed image list and split definition can be found in folds/fold3_.mat.
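If you want to inspect the split yourself, the .mat file can be loaded with SciPy. The snippet below only lists the stored variables, since their names are not documented here.

from scipy.io import loadmat

split = loadmat("./folds/fold3_.mat")
print([k for k in split.keys() if not k.startswith("__")])  # variable names stored in the split file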

🚀 Training

To train the DCLAN model from scratch on Set1 using the standard Fold 3 configuration, run the following command:

python train.py \
  --training_dir ./dataset/Set1_all \
  --fold 3 \
  --epochs 120 \
  --num_training_images 12000

📈 Evaluation

We provide a comprehensive evaluation script to test the model on Set1, Set2, and Cube+ datasets. The script calculates MSE, MAE, and $\Delta E_{2000}$, reporting both the Mean and Quartiles (Q1, Median, Q3).
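As a rough reference for how these per-image metrics are commonly computed in the white-balance literature (MAE is assumed here to be the mean angular error in degrees, and MSE is computed on a 0-255 scale; this sketch is not necessarily how eval.py implements them):

import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def image_metrics(pred, gt, eps=1e-9):
    # pred, gt: float arrays in [0, 1], shape (H, W, 3)
    mse = np.mean(((pred - gt) * 255.0) ** 2)

    # mean angular error (degrees) between corresponding RGB vectors
    p, g = pred.reshape(-1, 3), gt.reshape(-1, 3)
    cos = np.sum(p * g, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1) + eps)
    mae = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

    de2000 = deltaE_ciede2000(rgb2lab(gt), rgb2lab(pred)).mean()
    return mse, mae, de2000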

1. Pre-trained Model

The pre-trained model weights are already provided in this repository at:

models/best.pth

2. Run Evaluation

You can evaluate specific datasets using the following commands.

Evaluate on Set1:

python eval.py \
  --dataset Set1 \
  --data_root ./dataset/Set1_all \
  --split_file ./folds/fold3_.mat \
  --model_path models/best.pth

Evaluate on Set2:

python eval.py \
  --dataset Set2 \
  --input_dir ./dataset/Set2_input_images \
  --gt_dir ./dataset/Set2_ground_truth_images \
  --model_path models/best.pth

Evaluate on Cube+:

python eval.py \
  --dataset Cube \
  --input_dir ./dataset/Cube_input_images \
  --gt_dir ./dataset/Cube_ground_truth_images \
  --model_path models/best.pth

3. Output Metrics

The script will output a table containing Mean, Q1 (25%), Median (50%), and Q3 (75%) for all metrics. It also automatically saves any outliers (MSE > 500) to a text file for further analysis.
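For reference, the same summary statistics can be reproduced from per-image scores with NumPy; this mirrors the reported table but is not the evaluation script itself.

import numpy as np

def summarize(scores):
    # Mean, Q1 (25%), Median (50%), Q3 (75%) of per-image metric values
    s = np.asarray(scores, dtype=np.float64)
    return s.mean(), *np.percentile(s, [25, 50, 75])

mse_scores = np.random.default_rng(0).uniform(50, 700, size=100)  # placeholder values
mean, q1, med, q3 = summarize(mse_scores)
print(f"Mean {mean:.2f} | Q1 {q1:.2f} | Median {med:.2f} | Q3 {q3:.2f}")

outliers = np.flatnonzero(mse_scores > 500)  # flagged the same way as the script (MSE > 500)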
