
Support Exporting MultiScaleRoiAlign to ONNX #1324

Merged
fmassa merged 8 commits into pytorch:master from lara-hdr:lahaidar/export_poolers
Oct 4, 2019

Conversation

@lara-hdr (Contributor)

Modifications to export MultiScaleRoiAlign to ONNX.

Requires dictionary support added in pytorch/pytorch#25889.
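
For context, here is a minimal sketch (not part of the PR itself) of the kind of export this change enables; the wrapper module, feature-map names, shapes, and opset choice are illustrative assumptions:

import torch
from collections import OrderedDict
from torchvision.ops import MultiScaleRoIAlign

class PoolerWrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # 'feat1'/'feat2' are hypothetical keys into the feature dict
        self.pooler = MultiScaleRoIAlign(featmap_names=['feat1', 'feat2'],
                                         output_size=5, sampling_ratio=2)

    def forward(self, feat1, feat2, boxes):
        # The pooler consumes a dict of feature maps, which is why the
        # dictionary support from pytorch/pytorch#25889 is a prerequisite.
        features = OrderedDict([('feat1', feat1), ('feat2', feat2)])
        return self.pooler(features, [boxes], [(512, 512)])

feat1 = torch.rand(1, 8, 64, 64)  # stride-8 feature map (illustrative)
feat2 = torch.rand(1, 8, 32, 32)  # stride-16 feature map (illustrative)
boxes = torch.rand(4, 4) * 256
boxes[:, 2:] += boxes[:, :2]      # make boxes valid: x2 > x1, y2 > y1
# ONNX gained a RoiAlign op in opset 10, so we target that opset here.
torch.onnx.export(PoolerWrapper(), (feat1, feat2, boxes), 'pooler.onnx',
                  opset_version=10)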

@fmassa (Member) left a comment

This is pretty clean, thanks!

I have a few comments, let me know what you think.

Review comment threads: torchvision/ops/poolers.py ×6 (5 outdated), test/test_onnx.py ×1 (outdated)
@lara-hdr (Contributor, Author)

@fmassa, ok for all the comments.
I will wait for #1325 before updating this PR to avoid conflicts from duplicated code.

@codecov-io commented Sep 18, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@49b01e3).
The diff coverage is 36.84%.


@@            Coverage Diff            @@
##             master    #1324   +/-   ##
=========================================
  Coverage          ?   63.05%           
=========================================
  Files             ?       78           
  Lines             ?     6193           
  Branches          ?      846           
=========================================
  Hits              ?     3905           
  Misses            ?     1993           
  Partials          ?      295
Impacted Files Coverage Δ
torchvision/ops/poolers.py 80.72% <36.84%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 49b01e3...64cdf57.

@fmassa (Member) left a comment

LGTM, just waiting until ONNX supports OrderedDict to merge this PR, so that tests pass.

@lara-hdr (Contributor, Author)

@fmassa the dict PR was merged an hour ago, but the test still fails on the CI with an unsupported-dict error.
Does the CI run the tests with the latest PyTorch master?

@fmassa (Member) left a comment

> the dict PR was merged an hour ago, but the test still fails on the CI with an unsupported-dict error.
> Does the CI run the tests with the latest PyTorch master?

TorchVision CI runs with the latest PyTorch nightly, so it's expected that tests fail when the fix was merged into PyTorch master only an hour earlier.

I have one more comment regarding your latest changes, can you have a look?

Also, let me restart the CI for the failing jobs.

Comment thread torchvision/ops/poolers.py Outdated

  results = []
  for level, (per_level_feature, scale) in enumerate(zip(x, self.scales)):
-     idx_in_level = torch.nonzero(levels == level).squeeze(1)
+     idx_in_level = torch.nonzero(levels.to(torch.int64) == level).squeeze(1)
@fmassa (Member):

Can you explain why this cast is necessary? From

return target_lvls.to(torch.int64) - self.k_min

the levels are already cast to int64, so this seems unnecessary?

@lara-hdr (Contributor, Author):

self.k_min is a float, so the returned value was a torch.float.
I added a cast on (target_lvls.to(torch.int64) - self.k_min) and reverted the one on levels.
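
To make the promotion concrete, here is a small illustration (assumed, not taken from the PR) of why the subtraction came back as a float:

import torch

target_lvls = torch.tensor([2., 3., 4.])
k_min = 2.0  # a Python float, e.g. obtained via torch.log2(...).item()

# Under PyTorch's type-promotion rules, int64 tensor - Python float
# promotes to the default float dtype, so the levels end up float32.
levels = target_lvls.to(torch.int64) - k_min
print(levels.dtype)  # torch.float32

# The fix described above: cast the result back to int64.
levels = (target_lvls.to(torch.int64) - k_min).to(torch.int64)
print(levels.dtype)  # torch.int64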

@fmassa (Member):

Oh I see, this is a regression introduced by type promotion, thanks a lot for the catch!

In this case, can you instead change

lvl_min = -torch.log2(torch.tensor(scales[0], dtype=torch.float32)).item()
lvl_max = -torch.log2(torch.tensor(scales[-1], dtype=torch.float32)).item()

to do instead

lvl_min = int(-torch.log2(torch.tensor(scales[0], dtype=torch.float32)))
lvl_max = int(-torch.log2(torch.tensor(scales[-1], dtype=torch.float32)))

i.e., replace .item() with int()? That way the fix addresses the root cause.
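
A quick illustration (assumed, not from the PR) of why int() fixes the root cause: .item() on a float tensor returns a Python float, while int() returns a Python int, so the later subtraction stays integral:

import torch

lvl = -torch.log2(torch.tensor(0.25, dtype=torch.float32))  # tensor(2.)
print(type(lvl.item()))  # <class 'float'> -> k_min would stay a float
print(type(int(lvl)))    # <class 'int'>   -> k_min becomes an int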

@fmassa fmassa closed this Sep 30, 2019
@fmassa fmassa reopened this Sep 30, 2019
@fmassa (Member) left a comment

Thanks a lot Lara!

@fmassa fmassa merged commit 76702a0 into pytorch:master Oct 4, 2019