
Commit a1ddacc ("merge main")
2 parents: dfa2477 + 09be3a9

161 files changed: +2455 additions, -703 deletions


.github/scripts/bump_version.py

Lines changed: 0 additions & 1 deletion

@@ -1,4 +1,3 @@
-
 import re
 from pathlib import Path
 import tomlkit

.github/workflows/documentation.yml

Lines changed: 9 additions & 1 deletion

@@ -48,9 +48,17 @@ jobs:
         with:
           name: documentation
           path: _build/
-
       # Log artifact URL
       - name: Link to generated docs here
         if: ${{ github.event_name == 'pull_request' && github.event.action != 'closed' }}
         run: |
           echo "Documentation artifact URL: ${{ steps.upload-artifact.outputs.artifact-url }}"
+      - name: Deploy to ghpages
+        uses: peaceiris/actions-gh-pages@v3
+        if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
+        with:
+          publish_branch: gh-pages
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: _build/
+          force_orphan: true
+          enable_jekyll: false

.github/workflows/lint.yml

Lines changed: 2 additions & 2 deletions

@@ -15,9 +15,9 @@ jobs:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:
-          python-version: '3.10' # minimum supported version
+          python-version: '3.10' # minimum supported version
       - uses: psf/black@stable
       - name: Install and run ruff
        run: |
          pip install ruff
-          ruff check --target-version "py310" --select "UP006"
+          ruff check --target-version "py310" --select "UP006" --select "F401" --select "B905"

CHANGELOG.rst

Lines changed: 2 additions & 0 deletions

@@ -39,6 +39,7 @@ Fixed
 - Fix that the max_pixel option in PSNR and SSIM and add analgous min_pixel option (:gh:`535` by `Johannes Hertrich`_)
 - Fix some issues related to denoisers: ICNN grad not working inside torch.no_grad(), batch of image and batch of sigma for some denoisers (DiffUNet, BM3D, TV, Wavemet), EPLL error when batch size > 1 (:gh:`530` by `Minh Hai Nguyen`_)
 - Batching WaveletPrior and fix iwt (:gh:`530` by `Minh Hai Nguyen`_)
+- Fix unreliable/inconsistent automatic selection of the GPU with the most free VRAM (:gh:`570` by `Fedor Goncharov`_)



@@ -127,6 +128,7 @@ Fixed

 Changed
 ^^^^^^^
+- Add bibtex references (:gh:`575` by `Samuel Hurault`_)
 - Set sphinx warnings as errors (:gh:`379` by `Julian Tachella`_)
 - Added single backquotes default to code mode in docs (:gh:`379` by `Julian Tachella`_)
 - Changed the __add__ method for stack method for stacking physics (:gh:`371` by `Julian Tachella`_ and `Andrew Wang`_)
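The changelog entry :gh:`570` concerns automatically choosing the GPU with the most free VRAM. A minimal sketch of the underlying idea, assuming per-device (free, total) memory stats such as those returned by `torch.cuda.mem_get_info`; the helper name is made up for illustration, this is not the library's actual code:

```python
# Hypothetical sketch: pick the device index with the most free memory.

def pick_most_free(free_totals: list[tuple[int, int]]) -> int:
    """Return the index of the device with the most free memory.

    `free_totals` mirrors what torch.cuda.mem_get_info(i) returns per
    device: a (free_bytes, total_bytes) tuple.
    """
    return max(range(len(free_totals)), key=lambda i: free_totals[i][0])


# With real CUDA devices one would gather the tuples like:
#   free_totals = [torch.cuda.mem_get_info(i)
#                  for i in range(torch.cuda.device_count())]
stats = [(2_000_000, 8_000_000), (6_000_000, 8_000_000), (1_000_000, 8_000_000)]
print(pick_most_free(stats))  # 1
```

The reliability issue such a fix addresses is typically that free-memory queries vary between calls, so the selection must be made from a single consistent snapshot of all devices.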

CITATION.cff

Lines changed: 0 additions & 1 deletion

@@ -56,7 +56,6 @@ authors:
     given-names: "Romain"
   - family-names: "Weiss"
     given-names: "Pierre"
-version: "0.3.2"
 doi: "https://doi.org/10.48550/arXiv.2505.20160"
 date-released: 2023-06-30
 repository-code: "https://github.com/deepinv/deepinv"

deepinv/datasets/cbsd68.py

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@
 class CBSD68(torch.utils.data.Dataset):
     """Dataset for `CBSBD68 <https://paperswithcode.com/dataset/cbsd68>`_.

-    Color BSD68 dataset for image restoration benchmarks is part of The Berkeley Segmentation Dataset and Benchmark.
+    Color BSD68 dataset for image restoration benchmarks is part of The Berkeley Segmentation Dataset and Benchmark from :footcite:t:`martin2001database`.
     It is used for measuring image restoration algorithms performance. It contains 68 images.


deepinv/datasets/div2k.py

Lines changed: 4 additions & 1 deletion

@@ -14,8 +14,9 @@
 class DIV2K(torch.utils.data.Dataset):
     """Dataset for `DIV2K Image Super-Resolution Challenge <https://data.vision.ee.ethz.ch/cvl/DIV2K>`_.

-    Images have varying sizes with up to 2040 vertical pixels, and 2040 horizontal pixels.
+    The DIV2K dataset from :footcite:t:`agustsson2017ntire` is a high-quality image dataset originally built for image super-resolution tasks.

+    Images have varying sizes with up to 2040 vertical pixels, and 2040 horizontal pixels.

     **Raw data file structure:** ::

@@ -51,6 +52,8 @@ class DIV2K(torch.utils.data.Dataset):
     >>> print(len(dataset)) # check that we have 100 images
     100
     >>> shutil.rmtree("DIV2K") # remove raw data from disk
+
+
     """

     # https://data.vision.ee.ethz.ch/cvl/DIV2K/

deepinv/datasets/fastmri.py

Lines changed: 4 additions & 4 deletions

@@ -22,12 +22,10 @@
 from typing import Any, Callable, NamedTuple, Optional, Union, Any
 from collections import defaultdict
 import pickle
-import math
 import warnings
 import os
 import h5py
 from tqdm import tqdm
-import numpy as np
 import torch
 from torchvision.transforms import Compose, CenterCrop

@@ -40,7 +38,7 @@
 class SimpleFastMRISliceDataset(torch.utils.data.Dataset):
     """Simple FastMRI image dataset.

-    Loads in-memory a saved and processed subset of 2D slices from the full FastMRI slice dataset for quick loading.
+    Loads in-memory a saved and processed subset of 2D slices from the full FastMRI slice dataset of :footcite:t:`knoll2020advancing`, for quick loading.

     .. important::

@@ -81,6 +79,8 @@ class SimpleFastMRISliceDataset(torch.utils.data.Dataset):
     :param Callable transform: optional transform for images, defaults to None
     :param bool download: If ``True``, downloads the dataset from the internet and puts it in root directory.
         If dataset is already downloaded, it is not downloaded again. Default at False.
+
+
     """

     def __init__(

@@ -151,7 +151,7 @@ def __len__(self):
 class FastMRISliceDataset(torch.utils.data.Dataset, MRIMixin):
     """Dataset for `fastMRI <https://fastmri.med.nyu.edu/>`_ that provides access to raw MR image slices.

-    This dataset randomly selects 2D slices from a dataset of 3D MRI volumes.
+    This dataset (from :footcite:t:`knoll2020advancing`) randomly selects 2D slices from a dataset of 3D MRI volumes.
     This class considers one data sample as one slice of a MRI scan, thus slices of the same MRI scan are considered independently in the dataset.

     To download raw data, please go to the bottom of the page `https://fastmri.med.nyu.edu/` to download the brain/knee and train/validation/test volumes as ``h5`` files.

deepinv/datasets/flickr2k.py

Lines changed: 2 additions & 0 deletions

@@ -14,6 +14,8 @@
 class Flickr2kHR(torch.utils.data.Dataset):
     """Dataset for `Flickr2K <https://github.com/limbee/NTIRE2017>`_.

+    The Flickr2k dataset introduced by :footcite:t:`agustsson2017ntire` contains 2650 2K images.
+
     **Raw data file structure:** ::

         self.root --- Flickr2K --- 000001.png

deepinv/datasets/fmd.py

Lines changed: 2 additions & 1 deletion

@@ -15,6 +15,8 @@
 class FMD(torch.utils.data.Dataset):
     """Dataset for `Fluorescence Microscopy Denoising <https://github.com/yinhaoz/denoising-fluorescence>`_.

+    Introduced by :footcite:t:`zhang2018poisson`.
+
     | 1) The Fluorescence Microscopy Denoising (FMD) dataset is dedicated to
     | Poisson-Gaussian denoising.
     | 2) The dataset consists of 12,000 real fluorescence microscopy images

@@ -76,7 +78,6 @@ class FMD(torch.utils.data.Dataset):
     dataset = FMD(root="fmd", img_types=img_types, download=True) # download raw data at root and load dataset
     print(len(dataset)) # check that we have 5000 images
     shutil.rmtree("fmd") # remove raw data from disk
-
     """

     gdrive_ids = {
