Add benchmarks for morphology.local_maxima #3255

Merged
mkcor merged 2 commits into scikit-image:main from lagru:extrema-benchmark
Sep 14, 2022

Conversation

@lagru
Member

@lagru lagru commented Jul 7, 2018

Basically, two cases are benchmarked: a natural 2D image with many small extrema, and a synthetic 3D "image" with few large extrema. The memory behavior of the latter is especially interesting because it requires the internal queue to grow quite a bit.
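The two cases could be sketched as an asv-style benchmark class along these lines (illustrative only: the class, method, and parameter names here are assumptions, not necessarily those in the merged benchmark file, and the synthetic 3D "image" is simplified to isolated bright voxels):

```python
import numpy as np
from skimage.morphology import local_maxima


class LocalMaximaSuite:
    """Sketch of an asv timing benchmark for morphology.local_maxima."""

    # asv builds the benchmark's parameter matrix from these.
    params = [[1, 2], [True, False]]
    param_names = ["connectivity", "allow_borders"]

    def setup(self, connectivity, allow_borders):
        # Synthetic 3D "image": flat background with a few isolated
        # bright voxels acting as easily found maxima.
        rng = np.random.default_rng(0)
        self.image_3d = np.zeros((50, 50, 50))
        peaks = rng.integers(5, 45, size=(10, 3))
        self.image_3d[tuple(peaks.T)] = 1.0

    def time_maxima_3d(self, connectivity, allow_borders):
        local_maxima(self.image_3d, connectivity=connectivity,
                     allow_borders=allow_borders)
```

asv discovers `time_*` methods automatically and runs each one for every combination in the parameter matrix; the suite discussed here additionally covers a natural 2D image.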

Checklist

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.
  • Consider backporting the PR with @meeseeksdev backport to v0.14.x

@pep8speaks

pep8speaks commented Jul 7, 2018

Hello @lagru! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 242:80: E501 line too long (87 > 79 characters)
Line 246:24: E231 missing whitespace after ','

Comment last updated at 2022-09-13 16:12:28 UTC
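For readers unfamiliar with these PEP 8 codes, minimal illustrations (hypothetical lines, not the code actually flagged above):

```python
# E231: missing whitespace after ','
point = (1,2)   # flagged
point = (1, 2)  # fixed

# E501: line longer than 79 characters; wrap it, e.g. using implicit
# line continuation inside parentheses:
total = sum(
    value for value in range(100)
)
```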

@jni
Member

jni commented Jul 7, 2018

@lagru do you have a reference for how long these benchmarks take? I feel like the parameter matrix might be a bit too big. And I would argue that running local_minima on Hubble is unrealistic and not very relevant, while running it on the checkerboard is redundant.

You can run these locally quite easily. See the "asv run" help:

http://asv.readthedocs.io/en/latest/commands.html

You should probably limit yourself to a single environment from the benchmark matrix, as the envs take quite a while to make.

@lagru
Member Author

lagru commented Jul 7, 2018

@jni Ha 😄, and here I expected that you'd ask for benchmarks for local_minima if I had omitted them. My thought process was that local_minima using local_maxima under the hood is an implementation detail and not part of the API, although I doubt that will change.

I haven't used local_maxima myself yet, but using it without any kind of filtering on a natural image may not be "realistic" anyway. However, I think these two extreme benchmarks cover a large part of local_maxima's use cases quite well.

I'm happy to do a quick check on my machine although I doubt that my notebook is a good reference.

You should probably limit yourself to a single environment from the benchmark matrix, as the envs take quite a while to make.

It's a little bit faster if one uses conda to create them.

@codecov-io

codecov-io commented Jul 9, 2018

Codecov Report

Merging #3255 into master will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #3255   +/-   ##
=======================================
  Coverage   86.86%   86.86%           
=======================================
  Files         340      340           
  Lines       27484    27484           
=======================================
  Hits        23874    23874           
  Misses       3610     3610

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cf3cf93...25a5e5f. Read the comment docs.

@lagru
Member Author

lagru commented Jul 9, 2018

@jni On my machine (4 x Intel Core i5-4200U CPU @ 1.60GHz), running the 4 remaining benchmarks takes roughly 1 minute and 20 seconds for 1 commit and 1 environment.

If you still want to reduce the parameter matrix, we could remove the parameter "indices", which only really measures the difference induced by np.nonzero.
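As noted above, indices=True only changes the output representation; it essentially applies np.nonzero to the boolean result. A numpy-only illustration of that conversion:

```python
import numpy as np

# A boolean mask, as local_maxima returns with indices=False.
mask = np.array([[False, True, False],
                 [False, False, True]])

# indices=True corresponds to converting the mask with np.nonzero,
# yielding one coordinate array per dimension.
indices = np.nonzero(mask)
# indices == (array([0, 1]), array([1, 2]))
```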

@jni
Member

jni commented Jul 9, 2018

@lagru I would say that's about an order of magnitude too long. The only requirement is that it's long enough that we can get an accurate benchmark. So we should aim for an individual benchmark to run in ~1s, I think.
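For a rough local sanity check of an individual benchmark's runtime, the standard library's timeit can stand in for a full asv run (a sketch; the workload below is a placeholder, not the actual benchmark body):

```python
import timeit

import numpy as np


def stand_in_benchmark():
    # Placeholder workload; substitute the actual local_maxima call here.
    image = np.random.default_rng(0).random((500, 500))
    image.max()


# Average over a few repetitions, similar in spirit to asv's timing runs.
n = 10
runtime = timeit.timeit(stand_in_benchmark, number=n) / n
print(f"~{runtime:.4f} s per call")
```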

@lagru
Member Author

lagru commented Jul 9, 2018

@jni With 33b8572 all 4 benchmarks together take ~20 s on my machine. Is that enough? Otherwise our only options are to remove the parameter "connectivity" or skip the 3D benchmark.

@jni
Member

jni commented Jul 10, 2018

@lagru why not just reduce the size of the 3D image by shrinking the side length of the cube, e.g. from (10, 10, 10) to (5, 5, 5)? That should give an 8x reduction in runtime while testing exactly the same principles.

@lagru
Member Author

lagru commented Jul 10, 2018

@jni That doesn't have much of an effect; the benchmarks take essentially the same time, so the size of the array doesn't seem to be the deciding factor here. Even if I use only really small arrays, the 4 benchmarks take roughly 20 s. Of note: these results only hold for the development version of asv (0.3.dev1274+a88114ce); with the most recent release, 0.2.1, all 4 benchmarks take 45 seconds!

At this stage I really think that testing on my weak CPU may not be a good reference point. And I'd rather not reduce the parameter matrix further, because I think it's quite valuable to see the algorithm's performance for different connectivities and border behaviors.

Oh and I found a small bug: local_maxima([1, 1], allow_borders=False) should return array([0, 0]) but raises a ValueError in _offset_to_raveled_neighbors. Should I open an issue for this or just raise a PR when I find the time?
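A minimal reproduction of that edge case (a sketch; on releases where this was later fixed, the call succeeds and returns the expected all-False result instead of raising):

```python
import numpy as np
from skimage.morphology import local_maxima

# Both samples touch the border, so with allow_borders=False neither
# may be reported as a maximum; the expected result is all zeros.
result = local_maxima(np.array([1, 1]), allow_borders=False)
```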

@sciunto
Member

sciunto commented Jul 11, 2018

@lagru If you think you can open a PR in the short term, then fine. Otherwise, please open an issue so it isn't forgotten. :)

@lagru lagru changed the title Add benchmarks for morphology.local_maxima and .local_minima Add benchmarks for morphology.local_maxima Jul 14, 2018
@lagru lagru force-pushed the extrema-benchmark branch 2 times, most recently from dd802cd to 1603187 on July 15, 2018 14:02
@soupault soupault added this to the 0.15 milestone Jul 28, 2018
@lagru lagru force-pushed the extrema-benchmark branch from 1603187 to 25a5e5f on October 6, 2018 15:22
@soupault soupault modified the milestones: 0.15, 0.16 Apr 20, 2019
@sciunto
Member

sciunto commented Sep 22, 2019

@lagru Can you rebase this PR, please? Otherwise, it looks good to me. @jni, do you agree, given @lagru's last comment?

@lagru
Member Author

lagru commented Sep 22, 2019

Benchmark suite won't pass currently. See #4176 for a fix.

@lagru lagru force-pushed the extrema-benchmark branch 2 times, most recently from 1adc0c3 to e2a36e5 on September 22, 2019 12:07
@sciunto sciunto modified the milestones: 0.16, 0.17 May 2, 2020
@sciunto sciunto requested a review from jni May 2, 2020 14:45
Base automatically changed from master to main February 18, 2021 18:23
@mkcor
Member

mkcor commented Sep 12, 2022

Hey @lagru!! 😉

Is this PR still relevant? The fourth task (out of four) is not relevant anymore; it may be either checked off or removed.

This is a minimal benchmark that only tests the 2D case.
@lagru
Member Author

lagru commented Sep 13, 2022

@mkcor, relevant insofar as more benchmarks don't hurt? I've updated it and would say it's ready to go.

Co-authored-by: Marianne Corvellec <marianne.corvellec@ens-lyon.org>
@mkcor
Member

mkcor commented Sep 13, 2022

more benchmarks don't hurt

@lagru definitely! Since there hadn't been any activity in a while (:zzz: Dormant :wink:), I was wondering if maybe it was outdated. I guess this PR had simply fallen through the cracks! Should I wait for @jni's approval before merging? Or would you like to remove this review request?

@lagru lagru removed the request for review from jni September 14, 2022 13:16
@lagru
Member Author

lagru commented Sep 14, 2022

I guess it had just fallen out of sight. I was wondering how you found this. 😄 As far as I'm aware I addressed all of @jni's feedback so I'm happy to get this merged.

@mkcor mkcor merged commit 5a7efa1 into scikit-image:main Sep 14, 2022

8 participants