Official PyTorch Implementation of our WACV 2026 paper.
Yang Cheng1, Ziteng Cui2, *, Lin Gu3, Shenghan Su1, Zenghui Zhang1
1 Shanghai Jiao Tong University &nbsp; 2 The University of Tokyo
3 Tohoku University
* Corresponding author
Our proposed color space introduces a learnable luminance axis and adaptive nonlinear mapping functions.
The overall pipeline utilizes the MambaVision Module (MVM) and Cross Attention Module (CAM) to effectively process the disentangled features.
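To illustrate the role of the Cross Attention Module, here is a minimal single-head cross-attention sketch in NumPy. This is not the repository's actual CAM implementation: the projection weights are random stand-ins for learned parameters, and the token counts and dimensions are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k=64, seed=0):
    """Minimal single-head cross attention: tokens from one feature
    stream (query) attend to tokens from another (context)."""
    rng = np.random.default_rng(seed)
    d_q, d_c = query_feats.shape[-1], context_feats.shape[-1]
    # Random projections stand in for learned Q/K/V weight matrices.
    W_q = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    W_k = rng.standard_normal((d_c, d_k)) / np.sqrt(d_c)
    W_v = rng.standard_normal((d_c, d_k)) / np.sqrt(d_c)
    Q = query_feats @ W_q                    # (N_q, d_k)
    K = context_feats @ W_k                  # (N_c, d_k)
    V = context_feats @ W_v                  # (N_c, d_k)
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (N_q, N_c)
    return attn @ V                          # (N_q, d_k)

# Example: 16 query tokens attending to 32 context tokens.
out = cross_attention(np.ones((16, 8)), np.ones((32, 8)))
print(out.shape)  # (16, 64)
```

The same query/key/value pattern underlies any cross-attention block; the full model additionally interleaves it with MambaVision (MVM) blocks.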
- System: Linux (Recommended for Mamba compilation)
- Python: 3.10 (Recommended)
- CUDA: 11.8 (Required for Mamba-SSM)
- PyTorch: 2.0+
```bash
git clone https://github.com/YangCheng58/WB_Color_Space.git
cd WB_Color_Space
pip install -r requirements.txt
```

We evaluate our method on the Rendered WB Dataset introduced by Afifi et al.
Following the data organization and evaluation protocol described in Deep White-Balance Editing (CVPR 2020), please download the dataset and organize the files as follows.
Please download the full dataset (Set1, Set2, and Cube+) from the Official Dataset Repository.
Extract and organize the data into the ./dataset/ directory:
```
./dataset/
├── Set1_all/                    # Contains inputs and GTs from Set1
├── Set2_input_images/           # Testing inputs from Set2
├── Set2_ground_truth_images/    # Testing GTs from Set2
├── Cube_input_images/           # Testing inputs from Cube+
└── Cube_ground_truth_images/    # Testing GTs from Cube+
```
Note: We follow the standard cross-validation protocol. Specifically, we use Fold 3 as the testing set. The detailed image list and split definition can be found in folds/fold3_.mat.
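Before training or evaluation, it can help to sanity-check the layout above. The following standalone snippet (not part of the repository) simply reports which expected sub-directories are missing under `./dataset/`:

```python
from pathlib import Path

EXPECTED_DIRS = [
    "Set1_all",
    "Set2_input_images",
    "Set2_ground_truth_images",
    "Cube_input_images",
    "Cube_ground_truth_images",
]

def missing_dataset_dirs(root="./dataset"):
    """Return the expected sub-directories that are absent under `root`."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

missing = missing_dataset_dirs("./dataset")
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("Dataset layout looks complete.")
```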
To train the DCLAN model from scratch on Set1 using the standard Fold 3 configuration, run the following command:
```bash
python train.py \
    --training_dir ./dataset/Set1_all \
    --fold 3 \
    --epochs 120 \
    --num_training_images 12000
```

We provide a comprehensive evaluation script to test the model on the Set1, Set2, and Cube+ datasets. The script reports MSE, MAE, and other standard error metrics.
The pre-trained model weights are already provided in this repository at:
```
models/best.pth
```
You can evaluate specific datasets using the following commands.
Evaluate on Set1:
```bash
python eval.py \
    --dataset Set1 \
    --data_root ./dataset/Set1_all \
    --split_file ./folds/fold3_.mat \
    --model_path models/best.pth
```

Evaluate on Set2:
```bash
python eval.py \
    --dataset Set2 \
    --input_dir ./dataset/Set2_input_images \
    --gt_dir ./dataset/Set2_ground_truth_images \
    --model_path models/best.pth
```

Evaluate on Cube+:
```bash
python eval.py \
    --dataset Cube \
    --input_dir ./dataset/Cube_input_images \
    --gt_dir ./dataset/Cube_ground_truth_images \
    --model_path models/best.pth
```

The script outputs a table with the Mean, Q1 (25%), Median (50%), and Q3 (75%) of each metric. It also automatically saves any outliers (MSE > 500) to a text file for further analysis.
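For reference, the reported statistics can be reproduced from per-image errors as sketched below. This is a hypothetical illustration with synthetic data, not the internals of `eval.py`; the metric names and the MSE > 500 outlier threshold follow the description above.

```python
import numpy as np

def per_image_errors(pred, gt):
    """MSE and MAE for one prediction/ground-truth pair."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return float(np.mean(diff ** 2)), float(np.mean(np.abs(diff)))

def summarize(values):
    """Mean / Q1 / Median / Q3, as printed in the results table."""
    v = np.asarray(values, dtype=np.float64)
    return {
        "Mean": float(v.mean()),
        "Q1": float(np.percentile(v, 25)),
        "Median": float(np.percentile(v, 50)),
        "Q3": float(np.percentile(v, 75)),
    }

# Toy example with synthetic 8x8 "images" in place of real outputs.
rng = np.random.default_rng(0)
mses = []
for _ in range(10):
    gt = rng.uniform(0, 255, (8, 8, 3))
    pred = gt + rng.normal(0, 5, gt.shape)
    mse, _ = per_image_errors(pred, gt)
    mses.append(mse)

stats = summarize(mses)
outliers = [i for i, m in enumerate(mses) if m > 500]  # flagged as described above
```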

