Refactoring pipeline parallelism test cases to be device agnostic [1/n]#146472

Closed
AnantGulati wants to merge 5 commits into pytorch:main from AnantGulati:AnantGulati_pipeline_refactoring

Conversation

@AnantGulati
Contributor

@AnantGulati AnantGulati commented Feb 5, 2025

In this series of PRs we intend to refactor the pipeline parallelism test cases to be completely device agnostic.

These changes include the following approaches:

  • Allowing for multiple device types using instantiate_device_type_tests
  • Replacing CUDA stream calls with torch.get_device_module(device) wherever applicable

This should improve usability across all devices.

For this PR we have shown support for the following devices:

  • CPU (wherever applicable)
  • CUDA
  • HPU
  • XPU

To add support for a new device, users can simply append it to the device list.

cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

@pytorch-bot

pytorch-bot bot commented Feb 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146472

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 44719de with merge base 1c87280 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added oncall: distributed Add this issue/PR to distributed oncall triage queue topic: not user facing topic category labels Feb 5, 2025
@AnantGulati
Contributor Author

@kwen2501 Could you please review this PR?

@mikaylagawarecki mikaylagawarecki added the triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module label Feb 7, 2025
@H-Huang H-Huang added the module: pipelining Pipeline Parallelism label Feb 7, 2025
Member

@H-Huang H-Huang left a comment


Overall looks fine to me; let's wait for CI before landing. These tests mostly cover some pipeline parallelism utilities and are unrelated to the actual model splitting / execution used in pipeline parallelism. I think there are still some gaps in supporting torch.distributed.pipelining on CPU and other devices.

@AnantGulati
Contributor Author

Overall looks fine to me; let's wait for CI before landing. These tests mostly cover some pipeline parallelism utilities and are unrelated to the actual model splitting / execution used in pipeline parallelism. I think there are still some gaps in supporting torch.distributed.pipelining on CPU and other devices.

Yes, there is still more work required to add support for torch.distributed.pipelining on multiple devices. I am still analyzing the test cases that cover model splitting and execution in more detail, and I hope to add support for them in future PRs.

@AnantGulati
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 8, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@AnantGulati
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@AnantGulati
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@AnantGulati
Contributor Author

@H-Huang The merge is getting blocked because one ROCm check is not starting.

Could you please advise?

Thanks

@H-Huang
Member

H-Huang commented Feb 11, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here


Labels

  • ciflow/trunk: Trigger trunk jobs on your pull request
  • Merged
  • module: pipelining: Pipeline Parallelism
  • oncall: distributed: Add this issue/PR to distributed oncall triage queue
  • open source
  • topic: not user facing: topic category
  • triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

5 participants