[GHF][mergebot] record ghstack dependencies in the commit message#105251
izaitsevfb wants to merge 2 commits into main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/105251
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 2bc8476.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
.github/scripts/test_trymerge.py (Outdated)

```python
        pass

    @staticmethod
    def mock_pr(num: int, title: str, body: str, closed: bool) -> GitHubPR:
```
I have a feeling that you don't need to devise your own mock here in test_trymerge unless there is a clear reason to do so. The usual way to mock PR info is to use the generic patches below and use an actual PR number in PyTorch:
@mock.patch("trymerge.get_rockset_results", side_effect=mocked_rockset_results) <-- Cache all Rockset results and use them as the mock value
@mock.patch("trymerge.gh_graphql", side_effect=mocked_gh_graphql) <-- Cache all GraphQL queries and use the results as the mock value
Here is a recent example I have in mind: #105098. Before running the test, export GITHUB_TOKEN and ROCKSET_API_KEY. When you run the test locally, mocked_gh_graphql will make a single query to PyTorch to get the PR info and cache it locally in .github/scripts/gql_mocks.json for all future use.
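The caching pattern described above can be sketched as follows. This is a minimal illustration with hypothetical names (`cached_query`, `do_real_query`), not the actual code in trymerge's test helpers: the first call with a given key performs the real query and persists the result to a JSON file; all later calls are served from disk, so the test suite only needs network access once.

```python
import json
import os
from typing import Any, Callable

def cached_query(key: str, do_real_query: Callable[[], Any],
                 cache_file: str = "gql_mocks.json") -> Any:
    """Return the cached result for `key`, running the real query on a miss."""
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)
    if key not in cache:
        cache[key] = do_real_query()  # hits the network only on a cache miss
        with open(cache_file, "w") as f:
            json.dump(cache, f)
    return cache[key]
```

A `side_effect` passed to `mock.patch` can then simply close over this function, keyed on the query text and variables.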
Force-pushed from c984885 to 2bc8476.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Hi @izaitsevfb, one of the tests added here depended on the referenced ghstack PR never closing. Since that PR is now merged, the GitHub GraphQL queries the test makes now return different results, causing the test to fail. I've disabled this test for now in #107385; can you please fix this test to be more resilient? Either mock parts of the PR, or perhaps better yet create a new dummy GitHub account, have it create the ghstack PR you want, and then discard that account's username/password :P
This is weird, I assumed all the requests were mocked. I'll investigate. cc @huydhn
…#107385) Two fixes:
- Stop querying `pushDate`, which [has been deprecated](https://docs.github.com/en/graphql/reference/objects) and now always returns null
- Disable the test `test_merge_ghstack_into`, which was recently added in #105251. This test used the results of another person's ghstack PR for testing, but as the dev submitted chunks of their PR, this test's assumptions have been broken. cc @izaitsevfb for a long-term fix here

Pull Request resolved: #107385
Approved by: https://github.com/clee2000
I think so too. I could run the test that Zain mentioned fine locally. So I'm just curious if the file
Ah, I get it now, as Zain cleaned up the
One should always use `unittest.assert*` methods rather than plain `assert`, as the latter can be turned into a no-op if the Python runtime is invoked with optimizations enabled. Fixes the use of `assert` introduced by #105251.

Pull Request resolved: #110179
Approved by: https://github.com/huydhn
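To illustrate the difference: bare `assert` statements are compiled away when Python runs with `-O`, silently turning the check into a no-op, while `unittest`'s assert methods always execute. A minimal example (the test case and its checks are illustrative, not from the PR):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_values(self) -> None:
        # These checks run regardless of interpreter optimization flags;
        # a bare `assert 1 + 1 == 2` would be stripped under `python -O`.
        self.assertEqual(1 + 1, 2)
        self.assertIn("merge", "trymerge")
```

Running the suite under `python -O -m unittest` still executes both checks, which is exactly the guarantee plain `assert` loses.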
Currently all information about the dependencies of ghstack PRs (e.g. #105010) is stripped away:
pytorch/.github/scripts/trymerge.py, lines 1077 to 1078 in c984885
This PR adds this information back in a more compact form. All dependencies (PR numbers) of each PR in ghstack are recorded.
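The idea can be sketched with a hypothetical helper (this is not the actual trymerge.py code; the function name and the exact trailer wording are assumptions for illustration): the dependency PR numbers are joined into one compact trailer line appended to the commit message.

```python
from typing import List

def append_ghstack_dependencies(commit_msg: str, deps: List[int]) -> str:
    """Append a compact trailer listing the PRs this ghstack PR depends on.

    Hypothetical sketch; the real logic lives in .github/scripts/trymerge.py.
    """
    if not deps:
        return commit_msg  # nothing to record for a standalone PR
    refs = ", ".join(f"#{n}" for n in deps)
    return f"{commit_msg.rstrip()}\n\nghstack dependencies: {refs}\n"
```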
The resulting commit message will look like this (the last line is new):
Testing
Unit tests.
Note re: `# type: ignore[assignment]` in unit tests. I did my due diligence to find alternatives. Unfortunately mypy doesn't support this way of patching methods, and the alternatives are either extremely verbose or don't work for this case. I decided it's not worth the effort (since the problem is limited to the unit tests).
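For reference, the pattern in question looks roughly like this (the class and method names below are illustrative stand-ins, not the real trymerge code): mypy rejects assigning a plain function over a bound method, so the monkey-patch is silenced with a targeted ignore.

```python
class GitHubPR:
    # Illustrative stand-in for the real class in trymerge.py.
    def get_title(self) -> str:
        return "real title"

def mocked_get_title(self: "GitHubPR") -> str:
    return "mocked title"

# mypy flags this monkey-patch as an incompatible assignment
# (newer mypy versions report it as [method-assign] instead),
# hence the targeted ignore:
GitHubPR.get_title = mocked_get_title  # type: ignore[assignment]
```

The ignore is scoped to the single assignment, so the rest of the test file keeps full type checking.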