
removed compile cache and static argnums#85783

Closed
Chillee wants to merge 8 commits into gh/chillee/132/base from gh/chillee/132/head

Conversation

@pytorch-bot

pytorch-bot bot commented Sep 28, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85783

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 85b98eb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Contributor

@wconstab wconstab left a comment


I'll stamp anyway, but:

  • What's the rationale for removing the cache? Is it known to be buggy, and/or known not to help perf much?
  • (Maybe next time) consider landing a separate diff for adding the test_pythonkey and backwards tests, in case they raise trouble?

@ops(op_db + additional_op_db, allowed_dtypes=(torch.float,))
@patch("functorch.compile.config.use_dynamic_shapes", True)
@patch("functorch.compile.config.use_fake_tensor", True)
@patch("functorch.compile.config.use_functionalize", False)
Contributor


why is functionalize disabled?

Collaborator Author


Because functionalization currently doesn't work with symbolic shapes, enabling it just causes all the tests to fail. Will add a note.
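For context, the `@patch(...)` decorators on the test above temporarily override module-level config flags for the duration of each test via `unittest.mock`. A minimal self-contained sketch of that mechanism, using a stand-in config object in place of the real `functorch.compile.config` module:

```python
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in for a module-level config such as functorch.compile.config.
config = SimpleNamespace(
    use_dynamic_shapes=False,
    use_fake_tensor=False,
    use_functionalize=True,
)

def run_with_test_config():
    # Mirrors the decorator stack on the test: dynamic shapes and fake
    # tensors on, functionalization off (it doesn't yet work with
    # symbolic shapes, so enabling it would fail every test).
    with patch.object(config, "use_dynamic_shapes", True), \
         patch.object(config, "use_fake_tensor", True), \
         patch.object(config, "use_functionalize", False):
        return (config.use_dynamic_shapes,
                config.use_fake_tensor,
                config.use_functionalize)

print(run_with_test_config())     # (True, True, False)
print(config.use_functionalize)   # True -- restored once the patch exits
```

The patches unwind automatically when the test (or `with` block) finishes, so the flags can't leak into other tests.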

*flat_args_for_cache,
)

cached_fn, out_spec = cached_res
Contributor


rename to compiled_fn and delete cached_res above?

Collaborator Author


We still cache the saved function, we just never recompile now.
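The distinction being drawn is compile-once memoization versus a cache that can trigger recompilation keyed on argument properties. A rough sketch of the remaining behavior, with hypothetical names (not the actual AOTAutograd internals):

```python
def aot_function(fn, compiler):
    """Wrap fn so it is compiled exactly once, on first call.

    Hypothetical sketch: the compiled result is saved on the closure,
    so later calls reuse it and never trigger recompilation -- guarding
    on changed inputs is left to the layer above (e.g. Dynamo).
    """
    compiled_fn = None

    def wrapper(*args):
        nonlocal compiled_fn
        if compiled_fn is None:            # compile on first call only
            compiled_fn = compiler(fn, args)
        return compiled_fn(*args)

    return wrapper

# Example: a "compiler" that just counts how often it runs.
calls = {"compile": 0}
def toy_compiler(fn, example_args):
    calls["compile"] += 1
    return fn

double = aot_function(lambda x: 2 * x, toy_compiler)
print(double(3), double(10), calls["compile"])  # 6 20 1
```

However many times the wrapper is invoked, `toy_compiler` runs once; there is no key computation and no recompile path.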

@Chillee
Collaborator Author

Chillee commented Sep 28, 2022

@wconstab Basically, it occasionally causes issues for us, and now that AOTAutograd's intended role is to sit behind Dynamo, it's no longer particularly needed and just adds complexity.

Contributor

@wconstab wconstab left a comment


oh @Chillee hold up, did you accidentally re-add test_pythonkey after renaming it in the previous diff? i glossed over that at first.

…c shape testing to aotautograd"

[ghstack-poisoned]
@Chillee
Collaborator Author

Chillee commented Sep 28, 2022

@wconstab just re-removed it right as you commented lol.

@Chillee
Collaborator Author

Chillee commented Sep 28, 2022

@wconstab Also, splitting into 2 PRs, I'll add the symbolic shape testing in the second one.

@Chillee changed the title from "removed compile cache and static argnums and added symbolic shape testing to aotautograd" to "removed compile cache and static argnums" on Sep 28, 2022
@Chillee Chillee mentioned this pull request Sep 28, 2022
@Chillee
Collaborator Author

Chillee commented Sep 28, 2022

@pytorchbot merge -g

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions
Contributor

Hey @Chillee.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

drisspg pushed a commit to drisspg/pytorch that referenced this pull request Sep 29, 2022
@facebook-github-bot facebook-github-bot deleted the gh/chillee/132/head branch October 1, 2022 14:19
mehtanirav pushed a commit that referenced this pull request Oct 4, 2022
Rick0317 pushed a commit to Rick0317/pytorch that referenced this pull request Oct 19, 2022

4 participants