
[pytorch] add manual registration for trace type #40903

Closed

ljk53 wants to merge 6 commits into gh/ljk53/154/base from gh/ljk53/154/head

Conversation

ljk53 (Contributor) commented Jul 2, 2020

Stack from ghstack:

This PR continues the work of #38467 - decoupling Autograd and Trace for manually registered ops.

Differential Revision: [D22354804](https://our.internmc.facebook.com/intern/diff/D22354804/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D22354804/)!

[ghstack-poisoned]
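As context for the thread below, manual registration on the Tracer dispatch key follows roughly the pattern sketched here. This is an illustrative sketch, not the PR's literal diff: the tracer helpers (isTracing, getTracingState, addInputs, addOutput, recordSourceLocation) are assumed from torch/csrc/jit/frontend/tracer.h of that era, and the dispatch-key guard from c10.

```cpp
// Sketch of a hand-written Tracer-key kernel for aten::detach.
// Helper names are assumptions, not this PR's exact code.
#include <torch/library.h>
#include <torch/csrc/jit/frontend/tracer.h>
#include <c10/core/impl/LocalDispatchKeySet.h>

namespace {

at::Tensor detach(const at::Tensor& self) {
  torch::jit::Node* node = nullptr;
  if (torch::jit::tracer::isTracing()) {
    // Record an aten::detach node into the traced graph.
    auto state = torch::jit::tracer::getTracingState();
    node = state->graph->create(
        c10::Symbol::fromQualString("aten::detach"), /*num_outputs=*/0);
    torch::jit::tracer::recordSourceLocation(node);
    torch::jit::tracer::addInputs(node, "self", self);
    state->graph->insertNode(node);
  }
  at::Tensor result;
  {
    // Drop the Tracer key so the inner call re-dispatches to Autograd
    // and below instead of recursing back into this kernel.
    c10::impl::ExcludeDispatchKeyGuard no_trace(c10::DispatchKey::Tracer);
    result = self.detach();
  }
  if (node != nullptr) {
    torch::jit::tracer::addOutput(node, result);
  }
  return result;
}

// Route aten::detach to the hand-written kernel whenever the Tracer
// dispatch key is active.
TORCH_LIBRARY_IMPL(aten, Tracer, m) {
  m.impl("detach", TORCH_FN(detach));
}

} // namespace
```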
@ljk53 ljk53 requested review from albanD and apaszke as code owners July 2, 2020 08:45
dr-ci Bot commented Jul 2, 2020

💊 CI failures summary and remediations

As of commit 1790bba (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Jul 06 09:09:26 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Jul 06 09:09:26 processing existing schema:  __str__(__torch__.torch.classes._TorchScriptTesting._StackString _0) -> (str _0) 
Jul 06 09:09:26 processing existing schema:  __init__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0, int[] _1) -> (None _0) 
Jul 06 09:09:26 processing existing schema:  __getstate__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int[] _0) 
Jul 06 09:09:26 processing existing schema:  __setstate__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0, int[] _1) -> (None _0) 
Jul 06 09:09:26 processing existing schema:  top(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int _0) 
Jul 06 09:09:26 processing existing schema:  pop(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int _0) 
Jul 06 09:09:26 processing existing schema:  get(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0, Tensor _1) -> (str _0) 
Jul 06 09:09:26 processing existing schema:  __getstate__(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0) -> (int _0) 
Jul 06 09:09:26 processing existing schema:  __setstate__(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0, int _1) -> (None _0) 
Jul 06 09:09:26 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0) 
Jul 06 09:09:26 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Jul 06 09:09:26  
Jul 06 09:09:26 Broken ops: [ 
Jul 06 09:09:26 	aten::ne(str a, str b) -> (bool) 
Jul 06 09:09:26 	aten::eq(str a, str b) -> (bool) 
Jul 06 09:09:26 	aten::add(str a, str b) -> (str) 
Jul 06 09:09:26 ] 
Jul 06 09:09:26 =================== sccache compilation log =================== 
Jul 06 09:09:26 + cleanup 
Jul 06 09:09:26 + retcode=1 
Jul 06 09:09:26 + set +x 


ljk53 added a commit that referenced this pull request Jul 2, 2020

Pull Request resolved: #40903
ghstack-source-id: 107042483
@ljk53 ljk53 requested review from bhosmer, ezyang and smessmer July 2, 2020 10:32
Comment thread: test/jit/test_tracer.py (Outdated)
Comment thread: torch/csrc/autograd/TraceTypeManual.cpp
ljk53 added a commit that referenced this pull request Jul 2, 2020

Pull Request resolved: #40903
ghstack-source-id: 107087630
ljk53 added a commit that referenced this pull request Jul 3, 2020

Pull Request resolved: #40903
ghstack-source-id: 107106964
ezyang (Contributor) commented Jul 6, 2020

This is good stuff, but I think there's a decent chance the tracing logic here didn't need to be manually written out. Looking at the original manual code, you can see that there are bits that were codegenned and bits that were then edited to add some sort of custom logic. In most cases, the tracing logic sat outside of the manually edited sections. How hard would it be to wire up the codegen to generate tracing for those cases too?

(Maybe this is not worth it, since the plan of record (PoR) is to shortly delete all this code and replace it with a boxed fallback.)
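To make the boxed-fallback direction concrete: the idea is a single boxed kernel registered as the fallback for the entire Tracer key, replacing per-op kernels. A minimal sketch, assuming the boxed-kernel registration API of that era (OperatorHandle, Stack, makeFromBoxedFunction); the body is a placeholder, not the eventual implementation:

```cpp
// Sketch only: one boxed kernel covering every op on the Tracer key.
#include <torch/library.h>
#include <ATen/core/stack.h>
#include <c10/core/impl/LocalDispatchKeySet.h>

void generic_trace_fallback(
    const c10::OperatorHandle& op,
    torch::jit::Stack* stack) {
  // A real implementation would inspect op.schema(), pop the arguments
  // off the stack, record a graph node, then push results back. Elided.
  //
  // Drop the Tracer key before re-dispatching, otherwise the call would
  // recurse straight back into this fallback.
  c10::impl::ExcludeDispatchKeyGuard no_trace(c10::DispatchKey::Tracer);
  op.callBoxed(stack);
}

// "_" registers the fallback for all namespaces on the Tracer key.
TORCH_LIBRARY_IMPL(_, Tracer, m) {
  m.fallback(
      torch::CppFunction::makeFromBoxedFunction<&generic_trace_fallback>());
}
```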

Comment thread: test/jit/test_tracer.py

    x = torch.randn(4, 4, requires_grad=True)

    def f(x):
        out = Variable(torch.zeros(x.size()))

yay!

ezyang (Contributor) left a comment

There is not too much manual code that was carried over, so I think this is fine as is to go in.

ljk53 (Contributor, Author) commented Jul 6, 2020

> There is not too much manual code that was carried over, so I think this is fine as is to go in.

Yeah, most ops here simply fall through. Among the 5 manually registered ops: copy_ has special logic to emit expand_as, resize_ and resize_as_ only emit a warning message (not sure if that's still the right thing to do?), and only detach and detach_ have the regular form that could be replaced by codegen. But we'd need to create a separate boolean flag to differentiate these 2 from the other manually registered ops, so I decided not to...
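To make the copy_ special case concrete, here is a hypothetical sketch of that kernel's shape (same assumed helpers as the sketch above; the real TraceTypeManual.cpp may differ in detail). While tracing, copy_(self, src) is recorded as src.expand_as(self) so the traced graph stays functional, and the actual in-place copy still runs underneath:

```cpp
// Hypothetical sketch: record aten::expand_as instead of aten::copy_,
// then perform the real copy below the Tracer key. Not the PR's code.
static at::Tensor& copy_(
    at::Tensor& self,
    const at::Tensor& src,
    bool non_blocking) {
  torch::jit::Node* node = nullptr;
  if (torch::jit::tracer::isTracing()) {
    auto state = torch::jit::tracer::getTracingState();
    node = state->graph->create(
        c10::Symbol::fromQualString("aten::expand_as"), /*num_outputs=*/0);
    torch::jit::tracer::recordSourceLocation(node);
    // Argument order matters: the traced call is src.expand_as(self).
    torch::jit::tracer::addInputs(node, "src", src);
    torch::jit::tracer::addInputs(node, "self", self);
    state->graph->insertNode(node);
  }
  {
    c10::impl::ExcludeDispatchKeyGuard no_trace(c10::DispatchKey::Tracer);
    self.copy_(src, non_blocking);
  }
  if (node != nullptr) {
    // The output of the traced expand_as stands in for the mutated self.
    torch::jit::tracer::addOutput(node, self);
  }
  return self;
}
```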

facebook-github-bot (Contributor) commented

This pull request has been merged in 7f60642.

@facebook-github-bot facebook-github-bot deleted the gh/ljk53/154/head branch July 10, 2020 14:18
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#40903

This PR continues the work of pytorch#38467 - decoupling Autograd and Trace for manually registered ops.
ghstack-source-id: 107158638

Test Plan: CI

Differential Revision: D22354804

fbshipit-source-id: f5ea45ade2850296c62707a2a4449d7d67a9f5b5