This repository was archived by the owner on Mar 3, 2026. It is now read-only.
Conversation
We replace the `for` loop in both Llama and Mixtral with an equivalent `HomogenousSequential` layer, which can either run a for loop or use `torch_xla`'s scan operator. This is a clean-ish way to turn scan on/off without cluttering the modeling code. I also adjusted Mixtral slightly so that we can even run `scan` in Mixtral with its static MoE implementation. In order to integrate with scan, the Mixtral decoder for loop is refactored into a format where results from the previous iteration feed into the next iteration. Scanning over GMM, on the other hand, won't work until GMM forward/backward is wrapped in a custom op similar to pytorch/xla#8654. Also cleans up the README that got jumbled in #111.

Test: added unit test. Next PR will change the trainer to apply scan.
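Conceptually, a scan operator and a for loop over identical layers compute the same thing: the loop state (the hidden activations) is threaded through each layer as a "carry". A minimal pure-Python sketch of that equivalence, with hypothetical names (`scan`, `run_layers`) that are illustrative only and not the actual `torch_xla` or `HomogenousSequential` API:

```python
def scan(step_fn, carry, xs):
    # Thread the carry through step_fn once per element of xs.
    # A real scan operator traces step_fn a single time instead of
    # unrolling the loop in the compiled graph.
    ys = []
    for x in xs:
        carry, y = step_fn(carry, x)
        ys.append(y)
    return carry, ys


def run_layers(hidden, layer_params, use_scan=True):
    """Apply the same layer N times, via scan or an explicit loop."""
    def step(carry, params):
        # Stand-in for one decoder layer: a simple affine update.
        w, b = params
        return carry * w + b, None

    if use_scan:
        hidden, _ = scan(step, hidden, layer_params)
        return hidden
    # Equivalent explicit for loop over the homogeneous layers.
    for params in layer_params:
        hidden, _ = step(hidden, params)
    return hidden
```

Both paths produce identical results, which is what lets a wrapper layer toggle scan on and off without touching the modeling code.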
bhavya01 reviewed on Mar 17, 2025
zpcore reviewed on Mar 17, 2025 (5 reviews)
bhavya01 approved these changes on Mar 17, 2025
zpcore reviewed on Mar 18, 2025
Contributor (Author):
@zpcore I saw you added a number of comments but didn't press "Request Changes" or "Approve" -- let me know if you would like to request changes or approve.
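The Mixtral refactor mentioned above reshapes the static MoE combine step so that each loop iteration consumes only the carry produced by the previous one, which is the shape scan requires. A hypothetical sketch (not the actual Mixtral code; all names are illustrative):

```python
def moe_combine(x, expert_weights, gate_probs):
    """Combine expert outputs via an explicit carry, scan-style."""
    def step(carry, expert):
        w, p = expert
        # Add this expert's gate-weighted output to the running
        # accumulator; the accumulator is the only cross-iteration state.
        return carry + p * (x * w), None

    acc = 0.0  # The accumulator is the explicit carry.
    for expert in zip(expert_weights, gate_probs):
        acc, _ = step(acc, expert)
    return acc
```

Because `step` depends only on the incoming carry and the current expert's parameters, the loop body can be handed to a scan operator unchanged.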