Here you can find the code for our paper "Diffusion Models for Earth Observation Use-cases: from cloud removal to urban change detection", accepted as an oral presentation at Big Data From Space 2023 (BIDS 2023).
Download the following checkpoint into the "results" folder and rename it "clouds_best.pt".
You can find a demo in the notebook EO_Diffusion.ipynb.
Conda environment:
- Conda 23.1.0
- CUDA toolkit: 11.7.1
- PyTorch: 1.13.0
- Torchvision + torchaudio: 0.14.0 + 0.13.0
- Tested on an NVIDIA RTX 4000 (49 GB)
GPU utilities installation: I don't recommend using the exported eo_diffusion.yml file. It's better to install PyTorch directly from the PyTorch website with the required versions, as shown in the commands below.

Create and activate the environment:

```
conda create -n env_name
conda activate env_name
```

Install PyTorch with CUDA support:

```
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```

Install the remaining requirements:

```
conda install pip
pip install -r requirements.txt
```
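A quick sanity check that the environment sees the GPU (the version numbers refer to the setup listed above):

```python
import torch

print(torch.__version__)          # expected: 1.13.0
print(torch.version.cuda)         # expected: 11.7
print(torch.cuda.is_available())  # should be True if the GPU driver is set up correctly
```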
You can train with all hyperparameters at their default values:

```
python train.py
```

or specify training hyperparameters such as the batch size, number of diffusion steps, learning rate, and number of epochs from the command line (default values are used otherwise):

```
python train.py --batch_size 4 --timesteps 1000 --lr 1e-05 --epochs 200
```
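These are ordinary command-line flags; the sketch below shows how such an argparse interface could look. The flag names are taken from the commands in this README, while the default values are placeholders and not necessarily the ones defined in train.py.

```python
import argparse

def parse_args():
    # Flag names follow the commands shown in this README; the defaults below
    # are placeholders and may differ from the actual values in train.py.
    parser = argparse.ArgumentParser(description="EO_Diffusion training")
    parser.add_argument("--batch_size", type=int, default=4)
    parser.add_argument("--timesteps", type=int, default=1000)
    parser.add_argument("--lr", type=float, default=1e-5)
    parser.add_argument("--epochs", type=int, default=200)
    parser.add_argument("--cond_type", type=str, default="concat")  # "sum" enables RePaint-style conditioning (assumed default)
    parser.add_argument("--dir", type=str, default="results")       # output directory (assumed default)
    parser.add_argument("--ckpt", type=str, default="ckpt")         # checkpoint name (assumed default)
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args)
```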
Conditioning is based on the same concept expressed in RePaint (https://arxiv.org/abs/2201.09865):

```
python train.py --cond_type "sum"
```
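The idea taken from RePaint is to merge, at every reverse diffusion step, the known image content (forward-noised to the current timestep) with the generated content through a mask. The sketch below only illustrates that concept; it is not the repository's implementation, and how exactly it maps onto --cond_type is an assumption.

```python
import torch

def repaint_merge(x_known_0, x_t_generated, mask, alphas_cumprod, t):
    """Illustrative RePaint-style merge at reverse step t (arXiv:2201.09865).

    x_known_0:      clean image containing the known pixels (e.g. cloud-free areas)
    x_t_generated:  current sample produced by the reverse diffusion step
    mask:           1 where pixels are known, 0 where they must be generated
    alphas_cumprod: cumulative product of (1 - beta) over the noise schedule
    """
    a_bar = alphas_cumprod[t]
    # Forward-noise the known content to the current timestep, i.e. sample from q(x_t | x_0).
    x_known_t = torch.sqrt(a_bar) * x_known_0 + torch.sqrt(1.0 - a_bar) * torch.randn_like(x_known_0)
    # Keep known pixels from the noised ground truth and generated pixels elsewhere.
    return mask * x_known_t + (1.0 - mask) * x_t_generated
```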
Save your results under a custom directory and checkpoint name:

```
python train.py --dir path/to/your/dir --ckpt ckpt_name
```
For inference, add the --save option if you want to store the generated images:

```
python inference.py --ckpt path/to/your/ckpt --outdir path/to/your/folder_samples --save
```

Below you find the two relevant lines to modify concerning the U-Net architecture and the data loaders, in train.py and inference.py:
```python
base_dim, dim_mults, attention_resolutions, num_res_blocks, num_heads = 128, [1,2,3,4], [4,8], 2, 8
```
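If you want a lighter model for quick experiments, you can change these values in place. The configuration below is purely illustrative (an assumption, not a setting validated in the paper):

```python
# Illustrative smaller U-Net (assumption, not from the paper): fewer base channels,
# one less resolution level, attention at a single resolution, fewer heads.
base_dim, dim_mults, attention_resolutions, num_res_blocks, num_heads = 64, [1,2,4], [8], 1, 4
```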
```python
train_dataloader, test_dataloader = create_cloud_dataloaders(batch_size=args.batch_size, num_workers=4, size=image_size,
                                                              ratio=0.5, length=-1, num_patches=2000, percents=[99,0,70])
```

In data.py you find all the available data loaders, named create_{dataset_name}_dataloader.
In data_load.py you find all the Dataset classes for the available datasets.
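If you want to add your own dataset, the existing loaders follow the standard PyTorch Dataset/DataLoader pattern. The sketch below is hypothetical: the class MyEODataset and the function create_my_dataloaders are invented names for illustration and are not part of the repository.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyEODataset(Dataset):
    """Hypothetical dataset yielding (condition, target) patch pairs."""

    def __init__(self, conditions, targets):
        assert len(conditions) == len(targets)
        self.conditions = conditions
        self.targets = targets

    def __len__(self):
        return len(self.conditions)

    def __getitem__(self, idx):
        return self.conditions[idx], self.targets[idx]

def create_my_dataloaders(conditions, targets, batch_size, num_workers=4, split=0.8):
    # Mirrors the create_{dataset_name}_dataloader naming convention used in data.py.
    n_train = int(split * len(conditions))
    train_ds = MyEODataset(conditions[:n_train], targets[:n_train])
    test_ds = MyEODataset(conditions[n_train:], targets[n_train:])
    train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)
    test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=False, num_workers=num_workers)
    return train_dl, test_dl

# Example with random data: 16 RGB patches of size 64x64.
conditions = torch.randn(16, 3, 64, 64)
targets = torch.randn(16, 3, 64, 64)
train_dl, test_dl = create_my_dataloaders(conditions, targets, batch_size=4)
```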
Concerning the U-Net and the diffusion model, you can modify the parameters at these two lines:
```python
unet = UNetModel(image_size, in_channels=in_channels+cond_channels, model_channels=base_dim, out_channels=out_channels, channel_mult=dim_mults,
                 attention_resolutions=attention_resolutions, num_res_blocks=num_res_blocks, num_heads=num_heads, num_classes=num_classes)
model = EODiffusion(unet,
                    timesteps=args.timesteps,
                    image_size=image_size,
                    in_channels=in_channels
                    ).to(device)
```

Useful links:
- Lilian Weng's blog on diffusion models: https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- The Denoising Diffusion Probabilistic Models (DDPM) paper (its training objective is sketched below): https://arxiv.org/pdf/2006.11239.pdf
- RePaint paper: https://arxiv.org/abs/2201.09865
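For readers new to diffusion models, the DDPM objective referenced above reduces to predicting the noise injected at a random timestep. The sketch below is a generic, self-contained illustration of that loss (standard DDPM with a linear beta schedule), not the code used in this repository; `model` is assumed to be any noise-prediction network taking (x_t, t).

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, timesteps=1000):
    """Standard DDPM noise-prediction loss (Ho et al., 2020), for illustration only."""
    # Linear beta schedule and cumulative product of alphas.
    betas = torch.linspace(1e-4, 0.02, timesteps, device=x0.device)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    # Sample one random timestep per image and the corresponding Gaussian noise.
    t = torch.randint(0, timesteps, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)

    # Forward process: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise.
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise

    # The network is trained to predict the injected noise.
    return F.mse_loss(model(x_t, t), noise)
```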
This work is the result of a collaboration between ESA Φ-Lab and the Alcor Lab at Sapienza University of Rome, carried out for my master's thesis (manuscript pdf).




