
[Feature] Support two new loss function#600

Closed
MengzhangLI wants to merge 2 commits into open-mmlab:master from MengzhangLI:loss

Conversation

@MengzhangLI
Contributor

@MengzhangLI MengzhangLI commented Jun 12, 2021

Hi, jiarui, junjun, zesen and xincheng:

These are two new loss functions: Tversky loss and DiceTopK loss.

I tested these two new loss functions on several 2D medical image datasets, namely DRIVE, STARE and CHASE_DB1.

Here is the result:

[results image]

Best,
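For readers unfamiliar with the Tversky loss, here is a minimal NumPy sketch of the underlying index. This is illustrative only, not the PR's actual implementation; the function name and the default `alpha`/`beta` values are assumptions:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation (illustrative sketch).

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the Dice loss.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    tp = (pred * target).sum()            # true positives
    fp = (pred * (1.0 - target)).sum()    # false positives
    fn = ((1.0 - pred) * target).sum()    # false negatives
    index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - index
```

Setting `beta > alpha` penalizes false negatives more heavily, which is why the Tversky loss is popular for the heavily imbalanced vessel-segmentation datasets above.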

@CLAassistant

CLAassistant commented Jun 12, 2021

CLA assistant check
All committers have signed the CLA.

@clownrat6 clownrat6 requested review from xiexinch and xvjiarui June 12, 2021 15:27
@Junjun2016 Junjun2016 changed the title two_new_loss [Feature] Support two new loss function Jun 13, 2021
@Junjun2016
Collaborator

Hi @MengzhangLI
Please fix the errors.

@codecov

codecov bot commented Jun 15, 2021

Codecov Report

Merging #600 (2909367) into master (5d46314) will decrease coverage by 4.42%.
The diff coverage is 14.65%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #600      +/-   ##
==========================================
- Coverage   86.26%   81.83%   -4.43%     
==========================================
  Files         101      105       +4     
  Lines        5278     5626     +348     
  Branches      854      896      +42     
==========================================
+ Hits         4553     4604      +51     
- Misses        561      858     +297     
  Partials      164      164              
Flag Coverage Δ
unittests 81.83% <14.65%> (-4.43%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files Coverage Δ
mmseg/models/losses/DiceTopK_loss.py 0.00% <0.00%> (ø)
mmseg/models/losses/diceTopK_loss.py 17.82% <17.82%> (ø)
mmseg/models/losses/diceCE_loss.py 18.68% <18.68%> (ø)
mmseg/models/losses/tversky_loss.py 25.00% <25.00%> (ø)
mmseg/models/losses/__init__.py 100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5d46314...2909367. Read the comment docs.

@Junjun2016
Collaborator

Hi @MengzhangLI
Please add unit tests.

@Junjun2016
Collaborator

Can we implement top-k in another way, e.g. with OHEM?

@MengzhangLI
Contributor Author

Can we implement top-k in another way, e.g. with OHEM?

Of course, but it will take some time.

BTW, the approach I adopted in DiceTopK just follows the nnUNet default settings:

https://github.com/JunMa11/SegLoss/blob/master/test/nnUNetV1/loss_functions/TopK_loss.py
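The linked nnUNet-style TopK loss keeps only the hardest k percent of pixels when averaging the cross-entropy. A minimal NumPy sketch of that idea (illustrative only; the function name, the `k=10` default, and the flat `(N, C)` layout are assumptions, not the linked code):

```python
import numpy as np

def topk_cross_entropy(logits, labels, k=10, eps=1e-12):
    """Cross-entropy averaged over only the hardest k percent of pixels.

    logits: (N, C) per-pixel class scores
    labels: (N,) integer ground-truth classes
    """
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels, dtype=int)
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    losses = -np.log(probs[np.arange(labels.size), labels] + eps)
    num_kept = max(1, int(losses.size * k / 100))
    return np.sort(losses)[-num_kept:].mean()              # hardest pixels only
```

Easy pixels contribute nothing to the gradient, so training focuses on the ambiguous regions, which is the same intuition behind OHEM.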

@MengzhangLI
Contributor Author

I will update these loss functions in several ways:
(1) Using existing methods, e.g. OHEM, rather than brand-new functions.
(2) Making the code clearer and conforming to the default style of MMSegmentation.
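For comparison with the percentage-based top-k above, an OHEM-style selection typically uses a loss threshold plus a minimum-kept count. A simplified sketch of that selection step (hypothetical helper; MMSegmentation's actual OHEM sampler has its own API and defaults):

```python
import numpy as np

def ohem_mean_loss(pixel_losses, thresh=0.7, min_kept=2):
    """OHEM-style selection over per-pixel losses (illustrative sketch).

    Average only pixels whose loss exceeds `thresh`, but always keep
    at least the `min_kept` hardest pixels so the gradient never vanishes.
    """
    pixel_losses = np.asarray(pixel_losses, dtype=float)
    keep = pixel_losses > thresh
    if keep.sum() < min_kept:
        # Fallback: take the min_kept hardest pixels regardless of thresh.
        hardest = np.argsort(pixel_losses)[::-1][:min_kept]
        keep = np.zeros(pixel_losses.size, dtype=bool)
        keep[hardest] = True
    return pixel_losses[keep].mean()
```

Unlike a fixed top-k percentage, the threshold adapts the number of selected pixels to how hard the current batch is, while `min_kept` guards against selecting nothing.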

@Junjun2016 Junjun2016 self-requested a review June 18, 2021 16:40
@MengzhangLI MengzhangLI deleted the loss branch June 23, 2021 10:13
aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this pull request Mar 27, 2023
* Fix typo in docstring.

* Allow dtype to be overridden on model load.

This may be a temporary solution until open-mmlab#567 is addressed.

* Create latents in float32

The denoising loop always computes the next step in float32, so this
would fail when using `bfloat16`.
michaelzhang-ai pushed a commit to michaelzhang-ai/mmsegmentation that referenced this pull request Mar 22, 2024
* Fix 599

* fix unittest & docs & default value

* update docs


3 participants