
expand cucim.skimage.segmentation benchmarks #323

Closed
grlee77 wants to merge 1 commit into rapidsai:branch-22.08 from grlee77:increase-segmentation-benchmark-coverage

Conversation

Contributor

@grlee77 grlee77 commented Jun 28, 2022

I noticed that benchmark coverage for the segmentation functions was missing several functions. This PR adds them.

Unfortunately, functions in the segmentation module take different mixtures of image and label-image inputs, so multiple benchmarking classes are required to set up the appropriate inputs. The appropriate benchmarking class is chosen based on the function name.
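The dispatch described above can be sketched as follows; the class names and function groupings here are illustrative, not the PR's actual code:

```python
# Hypothetical sketch: pick a benchmark class based on the function name.
# Functions operating on label images need different inputs than functions
# operating on intensity images, so each group gets its own setup class.

LABEL_FUNCS = {"clear_border", "expand_labels", "relabel_sequential"}
BOUNDARY_FUNCS = {"find_boundaries", "mark_boundaries"}


class ImageBench:
    """Sets up an intensity image as input."""


class LabelBench:
    """Sets up a label image as input."""


class BoundaryBench:
    """Sets up both an intensity image and a label image."""


def make_benchmark(function_name: str):
    """Return the benchmark class appropriate for a segmentation function."""
    if function_name in LABEL_FUNCS:
        return LabelBench
    if function_name in BOUNDARY_FUNCS:
        return BoundaryBench
    return ImageBench
```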

@grlee77 grlee77 added non-breaking Introduces a non-breaking change performance Performance improvement labels Jun 28, 2022
@grlee77 grlee77 modified the milestones: v22.06.01, v22.08.00 Jun 28, 2022
@grlee77 grlee77 added the improvement Improves an existing functionality label Jun 28, 2022
@grlee77 grlee77 changed the title add missing benchmarks for the cucim.skimage.segmentation module expand cucim.skimage.segmentation benchmarks Jun 28, 2022
```python
with open(fbase + ".md", "wt") as f:
    f.write(all_results.to_markdown())
try:
    import tabulate
```
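The snippet above writes results with `DataFrame.to_markdown()`, which requires the optional `tabulate` package. A minimal sketch of guarding that optional dependency (the function name and plain-text fallback are hypothetical, not the PR's code):

```python
import os

import pandas as pd


def write_results(all_results: pd.DataFrame, fbase: str) -> str:
    """Write benchmark results to markdown, falling back to plain text
    if the optional tabulate package is not installed."""
    try:
        import tabulate  # noqa: F401  (required by DataFrame.to_markdown)
        out = fbase + ".md"
        with open(out, "wt") as f:
            f.write(all_results.to_markdown())
    except ImportError:
        # tabulate unavailable: write an untabulated plain-text report instead
        out = fbase + ".txt"
        with open(out, "wt") as f:
            f.write(all_results.to_string())
    return out
```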
Member


Should we add this dependency somewhere?

Contributor Author


Not sure where the best place would be. We could add a requirements-bench.txt or something, but we should not add either tabulate or pandas to the main package dependencies as they aren't used outside of the benchmarks. We don't distribute these benchmark files with the conda or PyPI packages.
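A `requirements-bench.txt` along the lines discussed could be as small as this (package list illustrative):

```
# Requirements for running the benchmark scripts only;
# not installed with the conda or PyPI packages.
pandas
tabulate
```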

Member


Yeah we could do that.

Another option would be to add it to this environment file. We can comment sections and specify these are benchmark requirements for clarity.
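A commented benchmark section in an environment file might look like this (a sketch only; the actual env.yml contents differ):

```yaml
dependencies:
  # ... existing runtime and test dependencies ...
  # benchmark-only requirements:
  - pandas
  - tabulate
```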

Contributor Author


We do have a requirements-test.txt within the python/cucim folder which has various test requirements that are not listed in that env.yml. I can just add a requirements-bench.txt to the changes in #324

Member


Maybe we should include these in the env.yml as well.

@grlee77 grlee77 closed this Jul 27, 2022
rapids-bot bot pushed a commit that referenced this pull request Jul 28, 2022
This PR expands the approach of #322 to the remaining `cucim.skimage` benchmarks as well.

This PR is built on top of a few other open benchmarking PRs, so please review and merge those first: (#322, #323, #290)

Authors:
  - Gregory Lee (https://github.com/grlee77)
  - https://github.com/aasthajh

Approvers:
  - https://github.com/jakirkham

URL: #324