
Fix slow gradcheck when outputs that don't require grad precede those that do#77743

Closed
soulitzer wants to merge 5 commits into gh/soulitzer/81/base from gh/soulitzer/81/head

Conversation

@soulitzer
Contributor

@soulitzer soulitzer commented May 18, 2022

Stack from ghstack:

Future work: fix this for fast gradcheck as well; that would require a bit more effort though

Fixes: #77230
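The class of bug this PR addresses can be illustrated in plain Python (hypothetical names; this is a minimal sketch of the indexing mismatch, not PyTorch's actual gradcheck code): Jacobians are computed only for outputs that require grad, so indexing that list by an output's position in the *full* output tuple picks the wrong entry (or falls off the end) whenever a non-differentiable output comes first.

```python
# Minimal sketch of the indexing mismatch (hypothetical names, not
# PyTorch's actual gradcheck implementation).

def check_outputs(outputs):
    """outputs: list of (name, requires_grad) pairs, in output order."""
    # Jacobians exist only for differentiable outputs.
    jacobians = [f"J({name})" for name, rg in outputs if rg]

    # Buggy lookup: index jacobians by position in the full output tuple.
    buggy = {}
    for i, (name, rg) in enumerate(outputs):
        if rg:
            # Wrong entry / out of range when a non-grad output precedes.
            buggy[name] = jacobians[i] if i < len(jacobians) else None

    # Fixed lookup: keep a separate counter over differentiable outputs.
    fixed = {}
    j = 0
    for name, rg in outputs:
        if rg:
            fixed[name] = jacobians[j]
            j += 1
    return buggy, fixed

# e.g. linalg.lu-style output order: integer pivots first, then LU
buggy, fixed = check_outputs([("pivots", False), ("LU", True)])
```

Here the buggy path fails to find a Jacobian for `LU` at all, while the fixed path maps it correctly.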

@facebook-github-bot
Contributor

facebook-github-bot commented May 18, 2022

🔗 Helpful links

❌ 1 New Failures

As of commit a14ef97 (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/1)

Step: "Test"

2022-05-24T01:37:04.4261823Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-05-24T01:37:04.4263634Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-24T01:37:04.4265210Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-24T01:37:04.4267045Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-05-24T01:37:04.4269607Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-05-24T01:37:04.4270726Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T01:37:04.4272192Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T01:37:04.4273853Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T01:37:04.4276573Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-05-24T01:37:04.4277910Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-05-24T01:37:04.4278588Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-05-24T01:37:04.4279022Z 
2022-05-24T01:37:04.4279138Z Broken ops: [
2022-05-24T01:37:04.4279732Z 	prims::uniform(int[] shape, *, Scalar low, Scalar high, int dtype, Device device) -> (Tensor)
2022-05-24T01:37:04.4280429Z 	prims::empty_strided(int[] shape, int[] strides, *, int dtype, Device device, bool requires_grad) -> (Tensor)
2022-05-24T01:37:04.4281103Z 	prims::var(Tensor inp, int[]? dims, *, int correction, int? output_dtype=None) -> (Tensor)
2022-05-24T01:37:04.4281661Z 	prims::where(Tensor pred, Tensor a, Tensor b) -> (Tensor)
2022-05-24T01:37:04.4282132Z 	prims::cat(Tensor[] tensors, int dim) -> (Tensor)
2022-05-24T01:37:04.4282550Z 	prims::log10(Tensor self) -> (Tensor)
2022-05-24T01:37:04.4282998Z 	prims::fill(Tensor self, Scalar value) -> (Tensor)
2022-05-24T01:37:04.4283429Z 	prims::exp2(Tensor self) -> (Tensor)

This comment was automatically generated by Dr. CI.

@lezcano
Collaborator

lezcano commented May 18, 2022

fwiw, to test that this works, please uncomment the preprocessing formula used in the OpInfos of linalg.lu and linalg.slogdet

@soulitzer soulitzer changed the title Fix gradcheck when outputs that don't require grad precede those that do Fix slow gradcheck when outputs that don't require grad precede those that do May 18, 2022
@soulitzer
Contributor Author

soulitzer commented May 18, 2022

Ok, it's a bit more work to get this working with fast gradcheck, so I'm just going to scope this PR to slow gradcheck. I think that's okay for now because we only use fast gradcheck internally, and we can keep the wrappers that filter out the outputs in OpInfo testing.
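The wrappers mentioned above can be sketched as follows (hypothetical helper and names, assumed for illustration; not PyTorch's actual OpInfo API): the op is wrapped so that only differentiable outputs reach gradcheck, sidestepping the ordering problem entirely.

```python
# Hedged sketch of an output-filtering wrapper (hypothetical names,
# not PyTorch's actual OpInfo machinery).

def filter_differentiable(fn, requires_grad_mask):
    """Wrap fn so only outputs marked differentiable are returned."""
    def wrapped(*args):
        outs = fn(*args)
        return tuple(o for o, keep in zip(outs, requires_grad_mask) if keep)
    return wrapped

def lu_like(x):
    # Stand-in for an op like linalg.lu: (pivots, LU), where pivots is
    # integer-valued and never requires grad.
    return ("pivots", x * 2)

# Drop the non-differentiable pivots before handing outputs to gradcheck.
lu_for_gradcheck = filter_differentiable(lu_like, (False, True))
```

With such a wrapper in the OpInfo, gradcheck only ever sees the differentiable outputs, which is why fast gradcheck can keep working unchanged for now.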

soulitzer added 2 commits May 18, 2022 15:45
…ecede those that do"


Future work: fix this for fast gradcheck as well; that would require a bit more effort though

Fixes: #77230

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request May 26, 2022
… do (#77743)

Summary:
Pull Request resolved: #77743

Approved by: https://github.com/malfet

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/588826b38908cc30861a02b6a067654a8e1cbc52

Reviewed By: mehtanirav

Differential Revision: D36668790

Pulled By: soulitzer

fbshipit-source-id: d52753686cb92df7cf877107cc7dda3a22532219
@facebook-github-bot facebook-github-bot deleted the gh/soulitzer/81/head branch May 28, 2022 14:17

4 participants