
Move torchbench model configuration into a YAML file.#120299

Closed
ysiraichi wants to merge 5 commits into gh/ysiraichi/59/base from gh/ysiraichi/59/head

Conversation

ysiraichi (Collaborator) commented Feb 21, 2024

Stack from ghstack (oldest at bottom):

This PR moves other aspects of torchbench's model configuration (e.g. batch size,
tolerance requirements, etc.) into a new YAML file: `torchbench.yaml`. It also merges the
recently added `torchbench_skip_models.yaml` file under the `skip` key.

The goal is to let external consumers easily replicate the performance and coverage
results from the PyTorch HUD.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @miladm
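For illustration, the consolidated configuration described above might be consumed roughly as in the sketch below. The key names (`skip`, `batch_size`, `tolerance`) and model names are assumptions based on the PR description, not the actual `torchbench.yaml` schema.

```python
# Hypothetical sketch of the data torchbench.yaml might hold after this PR:
# per-model batch sizes, tolerance overrides, and the merged "skip" lists.
# All key and model names here are illustrative assumptions.
config = {
    "skip": {
        "all": ["modelA"],       # skipped on every device
        "cuda": ["modelB"],      # skipped only on CUDA
    },
    "batch_size": {
        "training": {"modelC": 4},
        "inference": {"modelC": 8},
    },
    "tolerance": {
        "higher_fp16": ["modelD"],  # models needing a looser fp16 tolerance
    },
}

def is_skipped(model: str, device: str) -> bool:
    """Return True if the model is skipped globally or for this device."""
    skip = config["skip"]
    return model in skip.get("all", []) or model in skip.get(device, [])

def batch_size(model: str, mode: str, default: int = 1) -> int:
    """Look up the configured batch size, falling back to a default."""
    return config["batch_size"].get(mode, {}).get(model, default)

print(is_skipped("modelB", "cuda"))      # True
print(batch_size("modelC", "training"))  # 4
```

A consumer replicating HUD results would load the real YAML file (e.g. with PyYAML) and apply the same kinds of lookups per model and configuration.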

pytorch-bot bot commented Feb 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/120299

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 59ecec6 with merge base edf1c4e (image):

UNSTABLE - The following jobs failed, but the failures were likely due to flakiness present on trunk, and the jobs have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ysiraichi added a commit that referenced this pull request Feb 21, 2024
ghstack-source-id: d6f7308
Pull Request resolved: #120299
@ezyang ezyang requested a review from jansel February 22, 2024 05:00
ezyang (Contributor) commented Feb 22, 2024

Didn't I see another YAML PR? Is this the same thing?

ysiraichi (Collaborator, Author) commented:
Yes. That was #120117. The difference is that there we moved only the skip lists into the YAML file. Now, we are moving every list/dict of models needed for each specific configuration, since we found we would need those when trying to replicate the PyTorch HUD results.
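The merge described in the PR (folding the old standalone skip file into the consolidated config under a `skip` key) could be sketched as follows; the dict contents are illustrative assumptions, not the real file contents.

```python
# Hypothetical sketch: the previously separate skip configuration
# (old torchbench_skip_models.yaml) is merged into the consolidated
# config under a "skip" key, so consumers read a single file.
skip_models = {"all": ["modelA", "modelB"]}  # illustrative old skip-file content

torchbench_config = {                        # illustrative other per-model settings
    "batch_size": {"training": {"modelC": 4}},
    "tolerance": {"higher_fp16": ["modelD"]},
}

# Merge the skip lists under the new top-level "skip" key.
torchbench_config["skip"] = skip_models

print(sorted(torchbench_config))  # ['batch_size', 'skip', 'tolerance']
```

After this merge, a single load of `torchbench.yaml` exposes skips, batch sizes, and tolerances together, which is what makes replicating a full HUD configuration from one file possible.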

ysiraichi added a commit that referenced this pull request Feb 22, 2024
ghstack-source-id: 2a37939
Pull Request resolved: #120299
ysiraichi (Collaborator, Author) commented:

It doesn't look like the CI failure is related to this PR.

ysiraichi (Collaborator, Author) commented:

@ezyang @jansel This PR should be ready for review. Could you take a look at it when you have some time?

lezcano (Collaborator) commented Feb 23, 2024

@pytorchbot merge

pytorch-bot bot added the `ciflow/trunk` label (Trigger trunk jobs on your pull request) Feb 23, 2024
pytorchmergebot (Collaborator) commented:

Merge failed

Reason: This PR needs a `release notes:` label.
If your changes are user facing and intended to be part of the release notes, please use a label starting with `release notes:`.

If not, please add the `topic: not user facing` label.

To add a label, you can comment to pytorchbot, for example:
`@pytorchbot label "topic: not user facing"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


lezcano (Collaborator) commented Feb 23, 2024

@pytorchbot merge -r

pytorchmergebot (Collaborator) commented:

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator) commented:

Successfully rebased gh/ysiraichi/59/orig onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via ghstack checkout https://github.com/pytorch/pytorch/pull/120299)

pytorchmergebot pushed a commit that referenced this pull request Feb 23, 2024
ghstack-source-id: c1eaece
Pull Request resolved: #120299
pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


facebook-github-bot pushed a commit to pytorch/benchmark that referenced this pull request Feb 24, 2024
X-link: pytorch/pytorch#120299
Approved by: https://github.com/jansel

Reviewed By: jeanschmidt

Differential Revision: D54123721

fbshipit-source-id: c6e69269775fa8a70021fe13313293a527c6b3e1
@github-actions github-actions bot deleted the gh/ysiraichi/59/head branch March 25, 2024 01:53
pytorchmergebot pushed a commit that referenced this pull request Jul 27, 2024
facebook-github-bot pushed a commit to pytorch/benchmark that referenced this pull request Jul 30, 2024
Summary:
Similar to pytorch/pytorch#120299

X-link: pytorch/pytorch#131724
Approved by: https://github.com/shunting314

Reviewed By: ZainRizvi

Differential Revision: D60431445

Pulled By: kit1980

fbshipit-source-id: 0674b8d66459d92edd4cc4f65af984989a2fb31a
pbielak added a commit to intel/torch-xpu-ops that referenced this pull request Mar 11, 2026
- Re-enable `detectron2_maskrcnn` skip in skip.all.
- Re-enable all `timm_*` model skips in skip.all.
- Keep explicit upstream PR context comments for `modded_nanogpt`
  and `pytorch_CycleGAN_and_pix2pix`.
- Remove stale expected-accuracy rows for skipped models.

Relevant PRs:
[1] pytorch/pytorch#120299
[2] pytorch/pytorch#164816
[3] pytorch/pytorch#172125
[4] pytorch/pytorch#175066
[5] #2306

6 participants