
[Pallas] Make gmm support bf16 #7133

Merged
alanwaketan merged 3 commits into master from alanwaketan/gmm7 on May 29, 2024

Conversation

@alanwaketan
Collaborator

Summary:
This pull request:

  1. makes gmm support bf16,
  2. disables visit_empty_groups for gmm,
  3. makes the reference gmm torchy (i.e. written with torch ops; see the sketch after the test plan).

Test Plan:
python test/test_gmm.py
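
To make item 3 concrete, here is a minimal sketch of what a torch-native ("torchy") reference gmm, i.e. grouped matrix multiplication, can look like. The function name, signature, and shapes are illustrative assumptions, not the PR's actual code:

    # Hypothetical sketch of a torchy reference gmm: each contiguous block of rows
    # in lhs is multiplied by its own rhs matrix. Names and shapes are assumptions.
    import torch

    def reference_gmm(lhs: torch.Tensor, rhs: torch.Tensor,
                      group_sizes: torch.Tensor) -> torch.Tensor:
      # lhs: [m, k], rhs: [num_groups, k, n], group_sizes: [num_groups], sum == m.
      start = 0
      out = []
      for g, size in enumerate(group_sizes.tolist()):
        # Rows [start, start + size) of lhs use group g's rhs matrix.
        out.append(lhs[start:start + size] @ rhs[g])
        start += size
      return torch.cat(out, dim=0)

Because it is written purely with torch ops, the same reference works for bf16 by simply passing bf16 inputs.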

@alanwaketan alanwaketan self-assigned this May 29, 2024
Review thread on the torch-to-JAX dtype mapping:

    if dtype == torch.float32:
      if _XLA_USE_BF16:
        return jnp.bfloat16
      return jnp.float32
    elif dtype == torch.bfloat16:
      return jnp.bfloat16
    elif dtype == torch.float16:
      return jnp.float16
Collaborator

.. we should really try to just cast everything to bf16 manually after we are done with current deadline...

Collaborator Author

This is just for backward compatibility....
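
For context on the bf16 path, a hedged usage sketch of what the dtype mapping above enables for callers; the import path, wrapper name, and shapes are assumptions for illustration, not taken from this PR's diff:

    # Hypothetical sketch: calling the Pallas gmm wrapper with bf16 inputs on an
    # XLA device. Import location and argument layout are assumed.
    import torch
    import torch_xla.core.xla_model as xm
    from torch_xla.experimental.custom_kernel import gmm  # assumed wrapper location

    device = xm.xla_device()
    lhs = torch.randn(512, 128, dtype=torch.bfloat16, device=device)     # [m, k]
    rhs = torch.randn(4, 128, 256, dtype=torch.bfloat16, device=device)  # [num_groups, k, n]
    group_sizes = torch.tensor([128, 128, 128, 128], dtype=torch.int32, device=device)

    out = gmm(lhs, rhs, group_sizes)  # expected shape [m, n], dtype bf16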

@alanwaketan
Collaborator Author

Thanks Jack for approving the change.

@alanwaketan
Collaborator Author

Skip GPU CI to move fast...

@alanwaketan alanwaketan merged commit fb37312 into master May 29, 2024
@alanwaketan alanwaketan deleted the alanwaketan/gmm7 branch May 29, 2024 03:32
