Per file C++ Operator registration#3135
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master    #3135   +/-  ##
=======================================
  Coverage   72.75%   72.75%
=======================================
  Files          99       99
  Lines        8979     8979
  Branches     1431     1431
=======================================
  Hits         6533     6533
  Misses       1999     1999
  Partials      447      447

Continue to review the full report at Codecov.
#include <torch/torch.h>
#include <torchvision/nms.h>
#include <torchvision/roi_align.h>
Are those individual imports needed or can we just include torchvision/vision.h?
Including nms.h is necessary because vision.h no longer includes all the individual header files. I'm not sure that roi_align.h is necessary, though. I left it because we had it before. Do you want me to try to remove it?
Oh yeah. We should fix this linker issue soon so that we can get rid of those hacks.
}

TORCH_LIBRARY_IMPL(torchvision, Autocast, m) {
  m.impl("deform_conv2d", deform_conv2d_autocast);
nit: I didn't know that TORCH_LIBRARY_IMPL can actually appear before TORCH_LIBRARY_FRAGMENT. This is just a comment, no need to change the code.
Also, I wonder if for the future we should create new folders for autograd and autocast, like we have for cpu and cuda?
Yes indeed. The order looks a bit strange.
The reason I put it like that was to minimize the number of #if blocks and to avoid moving large blocks of code around and losing the git history. If we don't mind about the latter, we can easily refactor it. :)
Let's leave it like this for now. If we decide to create autocast and autograd folders, then things will be nicely present there (and no ifdefs in the code)
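For context, the per-file registration pattern being discussed can be sketched as follows. This is a hypothetical fragment, not the actual torchvision code: `my_op` and its CPU kernel are placeholders, but the macros (`TORCH_LIBRARY_FRAGMENT`, `TORCH_LIBRARY_IMPL`) are the real libtorch registration API, and `TORCH_LIBRARY_FRAGMENT` is what lets several .cpp files each contribute their own op to the same `torchvision` library.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

namespace vision {
namespace ops {

// Placeholder CPU kernel for a hypothetical op.
at::Tensor my_op_cpu(const at::Tensor& input) {
  return input.clone();
}

// Schema registration: FRAGMENT (rather than TORCH_LIBRARY) allows
// multiple translation units to register ops into "torchvision".
TORCH_LIBRARY_FRAGMENT(torchvision, m) {
  m.def("my_op(Tensor input) -> Tensor");
}

// Kernel registration for a dispatch key; as noted in the comment above,
// an IMPL block may appear before or after the FRAGMENT block in the file.
TORCH_LIBRARY_IMPL(torchvision, CPU, m) {
  m.impl("my_op", my_op_cpu);
}

} // namespace ops
} // namespace vision
```

This fragment needs to be linked against libtorch; it is a sketch of the registration shape, not a drop-in file.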
// C++ Backward
VISION_API
std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>
_deform_conv2d_backward(
nit: do we want to expose _deform_conv2d_backward in the headers? I would assume those are private and thus shouldn't be exposed, except if somewhere else in the codebase we need them?
I opted to leave it public because we also expose it to Python via the dispatcher. I believe it was previously exposed to users because it lived in a header file, so I kept it to avoid BC-breaking changes. Happy to remove it if we think it's not needed.
The fact that we have the *_backward functions exposed in Python is an artifact of how some things currently work in the dispatcher IIUC. We don't expose deform_conv2d_backward under torchvision.ops, so I would be ok removing them from those headers.
Summary:
* Moving deform_conv2d op registration.
* Moving nms op registration.
* Moving new_empty_tensor op registration.
* Moving ps_roi_align op registration.
* Moving ps_roi_pool op registration.
* Moving roi_align op registration.
* Moving roi_pool op registration.
* Restoring headers for forward/backward and fixing styles.
* Restoring the test hack on windows.
* Stricter header inclusion.

Reviewed By: fmassa

Differential Revision: D25460675

fbshipit-source-id: 8b7c65320d41c9c0f43815d156957c1e40aef106
Implements the POC discussed at #3051: