
PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors

Chia-Ming Lee, Yu-Fan Lin, Yu-Jou Hsiao, Jin-Hui Jiang, Yu-Lun Liu, Chih-Chung Hsu

National Yang Ming Chiao Tung University, National Cheng Kung University

CVPR 2026

Overview

TL;DR: PhaSR combines parameter-free Retinex normalization with geometric-semantic cross-modal attention, achieving state-of-the-art shadow removal and ambient lighting normalization at the lowest parameter and FLOP cost among the compared methods.

  • Background and Motivation

Shadow removal under diverse lighting conditions requires disentangling illumination from intrinsic reflectance. Existing methods struggle with: (1) confusing shadows with intrinsic material properties, (2) limited generalization from single-light to multi-source ambient lighting, and (3) loss of physical priors through encoder-decoder bottlenecks.

  • Main Contribution

PhaSR addresses these challenges through dual-level physically aligned prior integration:

  1. PAN (Physically Aligned Normalization) - Parameter-free preprocessing via Gray-world normalization, log-domain Retinex decomposition, and dynamic range recombination, consistently improving existing architectures by 0.15-0.34 dB.

  2. GSRA (Geometric-Semantic Rectification Attention) - Cross-modal differential attention (A_rect = A_sem - λ·A_geo) harmonizing DepthAnything-v2 geometry with DINO-v2 semantics.
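As a rough illustration of the PAN preprocessing described above, here is a minimal NumPy sketch. The function name `pan_normalize`, the Gaussian low-pass illumination estimate, the `sigma` value, and the min-max recombination are all illustrative assumptions, not the official implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pan_normalize(img, sigma=15.0):
    """Hypothetical sketch of PAN-style preprocessing (not the official code).

    img: float RGB image in [0, 1], shape (H, W, 3).
    """
    eps = 1e-6
    # 1) Gray-world normalization: scale each channel so its mean matches
    #    the global mean, removing a global color cast.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img_gw = img * (channel_means.mean() / (channel_means + eps))
    # 2) Log-domain Retinex decomposition: log I = log L + log R.
    #    Approximate illumination L with a low-pass (Gaussian) estimate
    #    and keep the reflectance-like residual.
    log_img = np.log(img_gw + eps)
    log_illum = np.stack(
        [gaussian_filter(log_img[..., c], sigma) for c in range(3)], axis=-1
    )
    log_refl = log_img - log_illum
    # 3) Dynamic-range recombination: rescale the residual back to [0, 1].
    refl = log_refl - log_refl.min()
    return refl / (refl.max() + eps)
```

Since every step is a fixed closed-form operation, this kind of preprocessing adds no learnable parameters, which is what lets it be bolted onto existing architectures.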

Benchmark results (PSNR, dB) on shadow removal (ISTD+, WSRD+) and ambient lighting normalization (Ambient6K):

| Model   | Params | FLOPs  | ISTD+ | WSRD+ | Ambient6K |
|---------|--------|--------|-------|-------|-----------|
| OmniSR  | 24.55M | 78.32G | 33.34 | 26.07 | 23.01     |
| DenseSR | 24.70M | 81.13G | 33.98 | 26.28 | 22.54     |
| PhaSR   | 18.95M | 55.63G | 34.48 | 28.44 | 23.32     |
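As a toy illustration of the differential attention A_rect = A_sem - λ·A_geo used in GSRA, here is a NumPy sketch. The function names, token shapes, and per-branch softmax placement are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gsra_attention(q_sem, k_sem, q_geo, k_geo, v, lam=0.5):
    """Hypothetical sketch of rectified attention A_rect = A_sem - lam * A_geo.

    q_*, k_*: (N, d) token features from the semantic (DINO-v2-style) and
    geometric (DepthAnything-v2-style) branches; v: (N, d) value tokens.
    """
    d = q_sem.shape[-1]
    a_sem = softmax(q_sem @ k_sem.T / np.sqrt(d))  # semantic affinity map
    a_geo = softmax(q_geo @ k_geo.T / np.sqrt(d))  # geometric affinity map
    a_rect = a_sem - lam * a_geo                   # differential rectification
    return a_rect @ v
```

The subtraction lets geometric affinities down-weight semantic matches that disagree with scene geometry, with λ controlling the strength of that rectification.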

Updates

  • ✅ 2025-11-16: Project page released.
  • ⏳ Pretrained models coming soon.

Installation

```shell
git clone https://github.com/ming053l/phasr.git
conda create --name phasr python=3.9 -y
conda activate phasr
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
cd phasr
pip install -r requirements.txt
```

Dataset Structure

Supported datasets: ISTD, ISTD+, WSRD+, SRD, Ambient6K. Organize each dataset as follows:

```
dataset/
├── train
│   ├── origin       <- shadow-affected images
│   └── shadow_free  <- shadow-free images
├── valid
│   ├── origin       <- shadow-affected images
│   └── shadow_free  <- shadow-free images
└── test
    └── origin       <- shadow-affected images
```
  1. Clone Depth Anything V2:

```shell
git clone https://github.com/DepthAnything/Depth-Anything-V2.git
```

  2. Download the pretrained Depth Anything V2 checkpoint.

  3. Run `calculate_depth_normal.py` to create the depth and normal maps. Set `--root` to your dataset split (as an absolute path) and `--ckpt-path` to your Depth Anything V2 checkpoint path, e.g.:

```shell
python calculate_depth_normal.py --root dataset/train --ckpt-path path_to_depth_anything_v2_ckpt
```
The resulting structure:

```
dataset/
├── train
│   ├── origin       <- shadow-affected images
│   ├── depth        <- .npy depth maps
│   ├── normal       <- .npy normal maps
│   └── shadow_free  <- shadow-free images
├── valid
│   ├── origin       <- shadow-affected images
│   ├── depth        <- .npy depth maps
│   ├── normal       <- .npy normal maps
│   └── shadow_free  <- shadow-free images
└── test
    ├── origin       <- shadow-affected images
    ├── depth        <- .npy depth maps
    └── normal       <- .npy normal maps
```
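For intuition, a normal map can be derived from a depth map via local depth gradients. The following NumPy sketch is an assumption about how such maps are typically computed; the function name `depth_to_normal` is hypothetical and `calculate_depth_normal.py` may use a different method:

```python
import numpy as np

def depth_to_normal(depth):
    """Hypothetical sketch: per-pixel unit normals from a depth map.

    depth: (H, W) float array; returns (H, W, 3) unit normal vectors.
    """
    # Finite-difference depth gradients along image rows and columns.
    dz_dy, dz_dx = np.gradient(depth)
    # A surface z = f(x, y) has (unnormalized) normal (-dz/dx, -dz/dy, 1).
    normal = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    norm = np.linalg.norm(normal, axis=-1, keepdims=True)
    return normal / np.clip(norm, 1e-6, None)
```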
  4. Clone DINOv2:

```shell
git clone https://github.com/facebookresearch/dinov2.git
```

How To Test

⚠️ You MUST change the path settings in `test.py`.

```shell
bash test.sh
```

How To Train

⚠️ You MUST change the path settings in `options.py`.

```shell
bash train.sh
```

Citations

If our work is helpful to your research, please cite:

```bibtex
@misc{lee2024phasr,
      title={PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors},
      author={Lee, Chia-Ming and Lin, Yu-Fan and Hsiao, Yu-Jou and Jiang, Jin-Hui and Liu, Yu-Lun and Hsu, Chih-Chung},
      year={2026},
      eprint={2601.17470},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.17470},
}
```

Acknowledgments

Our work builds upon OmniSR, DenseSR, DepthAnything-v2, and DINO-v2. We are grateful for their outstanding contributions.

Contact

If you have any questions, please email Chia-Ming Lee (zuw408421476@gmail.com) or Yu-Fan Lin (aas12as12as12tw@gmail.com).
