Dataloader
- use opencv (`cv2`) to read and process images.
- read from image files OR from `.lmdb` for fast IO speed.
  - How to create `.lmdb` file? Please see `codes/scripts/create_lmdb.py`.
- can downsample images using the matlab `bicubic` function. However, the speed is a bit slow. Implemented in `util.py`. More about the matlab `bicubic` function.
- `LR_dataset`: only reads LR images, for the test phase where there are no GT images.
- `LRHR_dataset`: reads LR and HR pairs from image folders or lmdb files. If only HR images are provided, it downsamples them on-the-fly. Used in the SR and SRGAN training and validation phases.
- `LRHR_seg_bg_dataset`: reads HR images and segmentations, and generates LR images and category labels. Used in the SFTGAN training and validation phases.
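The on-the-fly downsampling relies on the matlab-style bicubic implemented in `util.py`. Its weighting function is the Keys cubic kernel with a = -0.5; when downsampling, MATLAB additionally widens this kernel by the scale factor (antialiasing), which is why plain OpenCV cubic resizing does not reproduce matlab output exactly. A sketch of the kernel itself:

```python
def cubic(x):
    """MATLAB-style bicubic weighting kernel (Keys cubic, a = -0.5).

    This is the weight function behind `imresize`. When downsampling,
    MATLAB also stretches this kernel by the scale factor
    (antialiasing), which is why cv2.resize with INTER_CUBIC does not
    match matlab bicubic results exactly.
    """
    ax = abs(x)
    if ax <= 1:
        return 1.5 * ax ** 3 - 2.5 * ax ** 2 + 1.0
    if ax < 2:
        return -0.5 * ax ** 3 + 2.5 * ax ** 2 - 4.0 * ax + 2.0
    return 0.0
```

The kernel is 1 at the sample itself, 0 at every other integer offset, and its weights sum to 1 at any fractional position, so a full resize is just a weighted sum of the four nearest samples along each axis.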
- Prepare the images. You can download classical SR datasets (including BSD200, T91, General100; Set5, Set14, urban100, BSD100, manga109; historical) from Google Drive or Baidu Drive. The DIV2K dataset can be downloaded from the DIV2K official page, or from Baidu Drive.
- For faster IO speed, you can make lmdb files for the training dataset. Please see `codes/scripts/create_lmdb.py`.
- We use the DIV2K dataset for training the SR and SRGAN models.
- Since DIV2K images are large, we first crop them to sub-images using `codes/scripts/extract_subimgs_single.py`.
- Generate LR images using matlab with `codes/scripts/generate_mod_LR_bic.m`. If you already have LR images, you can skip this step. Please make sure the LR and HR folders have the same number of images.
- Generate the `.lmdb` file if needed, using `codes/scripts/create_lmdb.py`.
- Modify the configurations in `options/train/xxx.json` when training, e.g., `dataroot_HR`, `dataroot_LR`.
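Before the bicubic resize, the mod-LR/HR generation crops each HR image so its height and width are divisible by the scale; otherwise the LR and HR images would not correspond pixel-for-pixel. A rough numpy equivalent of that cropping step (`modcrop` is a hypothetical helper name, the actual step runs in MATLAB):

```python
import numpy as np

def modcrop(img, scale):
    """Crop an HxW(xC) image so height and width are divisible by `scale`.

    Hypothetical helper mirroring the first step of mod-LR/HR pair
    generation: an HR image of 13x9 at scale 4 becomes 12x8, so the
    bicubic LR image (3x2) maps back onto it exactly.
    """
    h, w = img.shape[:2]
    return img[: h - h % scale, : w - w % scale]
```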
SFTGAN is currently used for a part of outdoor scenes.

- Download the OutdoorScene training dataset from Google Drive (this training dataset differs a little from the one on the project page, e.g., in image size and format) and the OutdoorScene testing dataset from Google Drive.
- Generate the segmentation probability maps for the training and testing datasets using `codes/test_seg.py`.
- Put the images in a folder named `img` and put the segmentation `.pth` files in a folder named `bicseg`, as the following figure shows.
- Do the same for the validation folder (you can choose some images from the test folder) and the test folder.
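Given the `img`/`bicseg` layout above, the dataloader must match each image with its segmentation file. A hypothetical stdlib sketch of that lookup by filename stem (the real pairing happens inside `LRHR_seg_bg_dataset`, whose exact matching rule may differ):

```python
from pathlib import Path

def pair_images_with_segs(root):
    """Pair each image in root/img with its segmentation in root/bicseg.

    Hypothetical helper that matches files by stem (a.png <-> a.pth);
    it only illustrates the folder convention described above.
    """
    root = Path(root)
    pairs = []
    for img_path in sorted((root / 'img').iterdir()):
        seg_path = root / 'bicseg' / (img_path.stem + '.pth')
        if not seg_path.exists():
            raise FileNotFoundError('missing segmentation for ' + img_path.name)
        pairs.append((img_path, seg_path))
    return pairs
```

Failing loudly on a missing `.pth` file catches a mismatched folder before training starts rather than mid-epoch.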
We use random crop, random flip/rotation, and (optionally) random scale for data augmentation.
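For paired training data, the crop and flip/rotation must be applied identically to the LR and HR images. A sketch of that idea, where `augment_pair` is a hypothetical helper (not the repo's actual augmentation function) and the crop window is scaled up by the SR factor on the HR side:

```python
import random
import numpy as np

def augment_pair(lr, hr, scale, lr_patch=32, rng=None):
    """Random paired crop plus flip/rotation for an aligned LR/HR pair.

    Hypothetical helper: crops an lr_patch x lr_patch window from LR
    and the corresponding (lr_patch * scale)-sized window from HR,
    then applies the same random horizontal flip, vertical flip and
    transpose to both, so the pair stays spatially registered.
    """
    rng = rng or random.Random()
    h, w = lr.shape[:2]
    x = rng.randint(0, w - lr_patch)
    y = rng.randint(0, h - lr_patch)
    lr = lr[y:y + lr_patch, x:x + lr_patch]
    hr = hr[y * scale:(y + lr_patch) * scale, x * scale:(x + lr_patch) * scale]
    hflip, vflip, rot90 = (rng.random() < 0.5 for _ in range(3))

    def _aug(img):
        if hflip:
            img = img[:, ::-1]
        if vflip:
            img = img[::-1, :]
        if rot90:
            img = img.transpose(1, 0, 2)  # transpose; with the flips this covers 90-degree rotations
        return img

    return _aug(lr), _aug(hr)
```

The three booleans together cover all eight flip/rotation symmetries of the patch, a common x8 augmentation in SR training.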
