[CI] Add third shard to pull/linux-jammy-py3.10-gcc11 distributed CPU tests #177211
huydhn wants to merge 1 commit into pytorch:main
Conversation
… tests Increase distributed test shards from 2 to 3 to reduce per-shard test time and improve CI latency. Signed-off-by: Huy Do <huydhn@gmail.com>
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177211
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 0e08161 with merge base ad67e7a. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge -f "Would not hurt"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -m "Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable" -c autorevert
This PR is attributed as having caused a regression in:
Please investigate and fix the issues.
Claude finished @pytorch-auto-revert[bot]'s task in 1m 50s. Diagnostics: vllm-test failure attribution for PR #177211
Failing job: Failed Tests (2 failures, 44 passed, 8 skipped)
Root cause: Both failures share the same root cause, a vLLM model weight loading error. Is this a legitimate revert? No; this is almost certainly a false attribution. Here's why:
Guidance for the author: @huydhn, no code fix is needed. The revert was a false positive by the autorevert bot. To reland the PR:
@pytorchbot successfully started a revert job. Check the current status here.
@huydhn your PR has been successfully reverted.
…uted CPU tests (#177211)" This reverts commit 1c18262. Reverted #177211 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](#177211 (comment)))
@pytorchbot merge -f "Cause and effect is hard for autorevert bot"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
… tests (pytorch#177211) Increase distributed test shards from 2 to 3 to reduce per-shard test time and improve CI latency. Before today, those jobs already took more than 3 hours to finish, so a recent change might have pushed them over the limit. Pull Request resolved: pytorch#177211 Approved by: https://github.com/malfet
…uted CPU tests (pytorch#177211)" This reverts commit 1c18262. Reverted pytorch#177211 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](pytorch#177211 (comment)))
… tests (pytorch#177211) Increase distributed test shards from 2 to 3 to reduce per-shard test time and improve CI latency. Before today, those jobs already took more than 3 hours to finish, so a recent change might have pushed them over the limit. Pull Request resolved: pytorch#177211 Approved by: https://github.com/malfet
Increase distributed test shards from 2 to 3 to reduce per-shard test time and improve CI latency.
Before today, those jobs already took more than 3 hours to finish, so a recent change might have pushed them over the limit.