
Port roi_align to actually use dispatcher #2366

Merged
fmassa merged 2 commits into master from pr/roi_align_port on Jun 30, 2020

Conversation

@ezyang
Contributor

@ezyang ezyang commented Jun 29, 2020

No description provided.

This is still registering everything as catchalls, so we're really just
moving deck chairs around, but payoff is coming soon.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
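The commit message's distinction between "catchall" registrations and real dispatcher routing can be illustrated with a toy model. This is not the c10 dispatcher or the real registration API (that would be `TORCH_LIBRARY` / `torch::Library`); every type and name below is a hypothetical stand-in, sketching only the lookup behavior: a per-key kernel wins when registered, otherwise the catchall handles everything.

```cpp
// Toy model of catchall vs. per-dispatch-key kernel registration.
// All names are hypothetical; the real mechanism lives in c10.
#include <cassert>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

enum class DispatchKey { CPU, CUDA, Autograd };

struct OperatorEntry {
  // One kernel used for every key (what this PR still does)...
  std::function<std::string()> catchall;
  // ...versus one kernel per key (what the dispatcher port enables).
  std::map<DispatchKey, std::function<std::string()>> kernels;

  std::string call(DispatchKey k) const {
    auto it = kernels.find(k);
    if (it != kernels.end()) return it->second();  // per-key kernel wins
    if (catchall) return catchall();               // fall back to catchall
    throw std::runtime_error("no kernel registered");
  }
};
```

With only a catchall registered, every dispatch key reaches the same function; once per-key kernels exist, the "payoff" is that CPU, CUDA, Autograd, etc. can be handled by separate kernels without the operator's callers changing.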
@ezyang ezyang requested a review from mcarilli June 29, 2020 22:01
@ezyang ezyang force-pushed the pr/roi_align_port branch from 3ce660f to a0e6f91 on June 29, 2020 22:34
@ezyang
Contributor Author

ezyang commented Jun 29, 2020

CUDA is not done yet, needs similar treatment.

@ezyang ezyang force-pushed the pr/roi_align_port branch from a0e6f91 to ad105f1 on June 29, 2020 22:44
@mcarilli

mcarilli commented Jun 30, 2020

CUDA is not done yet, needs similar treatment.

Out of date? The CUDA ops look OK/symmetric with the CPU ops.

The diffs make sense given what we discussed on Slack. I think it will be straightforward to add an autocast layer as well, which will cast inputs and then redispatch to one of the dispatch nexuses (roi_align or _roi_align_backward). It won't be pretty (it will need to copy casting boilerplate from ATen/autocast_mode.cpp that, in a perfect world, would have some easy interface for external libs to use), but it should work.

If this is mergeable, I can wait for merge then pull it to add an autocast layer and flesh out other ops similarly.
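The cast-then-redispatch pattern described above can be sketched with toy types. This is not the ATen API; `Tensor`, `Dtype`, and the function names here are all stand-ins (the real version would wrap `at::Tensor` inputs using the boilerplate from ATen/autocast_mode.cpp). The shape of the idea is: widen low-precision inputs, call the underlying kernel, then cast the result back.

```cpp
// Hypothetical sketch of an autocast wrapper: cast inputs, redispatch
// to the real kernel, cast the result back. Toy types throughout.
#include <cassert>

enum class Dtype { Half, Float };

struct Tensor {
  Dtype dtype;
  float value;  // stand-in for real tensor data
};

Tensor to(const Tensor& t, Dtype d) { return Tensor{d, t.value}; }

// The underlying kernel: only implemented for Float in this toy.
Tensor roi_align_kernel(const Tensor& input) {
  return Tensor{Dtype::Float, input.value * 2.0f};
}

// Autocast layer: widen inputs to Float, redispatch to the underlying
// kernel, then cast the output back to the caller's dtype.
Tensor roi_align_autocast(const Tensor& input) {
  Tensor widened = to(input, Dtype::Float);
  Tensor out = roi_align_kernel(widened);  // redispatch past autocast
  return to(out, input.dtype);
}
```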

ctx->saved_data["input_shape"] = input.sizes();
ctx->save_for_backward({rois});
auto result = ROIAlign_forward(
at::AutoNonVariableTypeMode g;

@mcarilli mcarilli Jun 30, 2020


Why an explicit autograd-disabling guard here? Does torch::autograd::Function not disable autograd automatically around forward?

The pre-PR forward doesn't use an explicit guard, and afaik the Python-side torch.autograd.Function does disable autograd around its forward method. Both of these lead me to expect torch::autograd::Function also disables autograd around forward. (If it doesn't, I think it should; aligning its behavior with the Python version makes sense to me. But that would be a PyTorch-side change.)
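The semantics being debated — autograd switched off for the duration of forward and restored afterwards — is RAII-guard behavior, which a small toy can illustrate. The guard name and the thread-local flag below are illustrative only, not the real at::AutoNonVariableTypeMode implementation.

```cpp
// Toy RAII guard: a thread-local "grad enabled" flag is cleared for the
// guard's lifetime and restored on scope exit. Names are illustrative.
#include <cassert>

thread_local bool grad_enabled = true;

struct NoGradGuard {
  bool prev;
  NoGradGuard() : prev(grad_enabled) { grad_enabled = false; }
  ~NoGradGuard() { grad_enabled = prev; }
};

// Models a forward() body: returns whether grad was enabled inside it.
bool grad_enabled_inside_forward() {
  NoGradGuard g;        // grad is off for the body of forward...
  return grad_enabled;  // ...so ops here would not be recorded
}                       // guard destroyed: previous state restored
```

The question in the thread is whether torch::autograd::Function installs such a guard itself (as the Python-side autograd.Function does), making the explicit guard in the diff redundant.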


Member

@fmassa fmassa left a comment


Thanks a lot for the port @ezyang !

Lint is failing though, can you run clang-format?

I'm ok merging this PR right now, just have a question about having to implement dummy gradgrad kernels, as we will probably be using this PR as a template for all other functions.

};

at::Tensor roi_align(
// TODO: There should be an easier way to do this
Member


Can't we register a fallback kernel that raises an error on double-backward, so that we don't have to implement a dummy double-backward kernel for every op?

Contributor Author


This would indeed be the right thing to do in the core library. There will be some BC consequences, though.
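The "dummy gradgrad kernel" pattern under discussion amounts to registering a backward-of-backward entry that exists only to raise a clear error, rather than silently producing wrong higher-order gradients. A minimal sketch, with a hypothetical function name and error message:

```cpp
// Sketch of a dummy double-backward kernel that only raises an error.
// The function name and message are hypothetical.
#include <stdexcept>

[[noreturn]] void roi_align_double_backward() {
  throw std::runtime_error(
      "double backward on roi_align has not been implemented");
}

// Helper showing the observable behavior: calling the dummy kernel
// always raises, never returns a gradient.
bool double_backward_raises() {
  try {
    roi_align_double_backward();
  } catch (const std::runtime_error&) {
    return true;
  }
  return false;  // unreachable: the kernel always throws
}
```

fmassa's suggestion above is to register one such error-raising kernel as a dispatcher fallback, so each op would not need its own copy.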

@ezyang
Contributor Author

ezyang commented Jun 30, 2020

@mcarilli

It won't be pretty (will need to copy casting boilerplate from ATen/autocast_mode.cpp that, in a perfect world, would have some easy interface for external libs to use) but it should work.

You should probably put the casting utilities in a header file. We'll need to talk about what a good external facing API for it is.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
@ezyang ezyang force-pushed the pr/roi_align_port branch from ad105f1 to 6155d0c on June 30, 2020 15:21
@mcarilli

You should probably put the casting utilities in a header file. We'll need to talk about what a good external facing API for it is.

Aye, right now I don't have a clear idea. Trying it for torchvision will help solidify what such an API needs.

The fact that, right now, autocast only wraps forward ops and relies on autograd to reverse the casts in backward makes life easier.

@codecov

codecov bot commented Jun 30, 2020

Codecov Report

Merging #2366 into master will increase coverage by 1.36%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master    #2366      +/-   ##
==========================================
+ Coverage   68.49%   69.86%   +1.36%     
==========================================
  Files          93       93              
  Lines        7655     8361     +706     
  Branches     1177     1414     +237     
==========================================
+ Hits         5243     5841     +598     
- Misses       2075     2121      +46     
- Partials      337      399      +62     
Impacted Files Coverage Δ
torchvision/transforms/functional_tensor.py 66.28% <0.00%> (+2.95%) ⬆️
torchvision/transforms/functional.py 80.32% <0.00%> (+3.02%) ⬆️
torchvision/transforms/transforms.py 81.99% <0.00%> (+4.76%) ⬆️
torchvision/transforms/functional_pil.py 68.45% <0.00%> (+7.50%) ⬆️

Last update 446eac6...6155d0c.

Member

@fmassa fmassa left a comment


Thanks a lot Ed!

@fmassa fmassa merged commit 4480603 into master Jun 30, 2020
@fmassa fmassa deleted the pr/roi_align_port branch June 30, 2020 15:55
fmassa pushed a commit to fmassa/vision-1 that referenced this pull request Jul 9, 2020
* Switch torchvision registrations to new operator registration API.

This is still registering everything as catchalls, so we're really just
moving deck chairs around, but payoff is coming soon.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Port roi_align to actually use dispatcher

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
fmassa added a commit that referenced this pull request Jul 9, 2020
* Switch torchvision registrations to new operator registration API.

This is still registering everything as catchalls, so we're really just
moving deck chairs around, but payoff is coming soon.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Port roi_align to actually use dispatcher

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Co-authored-by: Edward Z. Yang <ezyang@fb.com>
@fmassa fmassa mentioned this pull request Sep 1, 2020