
[Backport] Make a FlashAttention Wrapper#6827

Merged
lsy323 merged 1 commit into r2.3 from alanwaketan/backport
Mar 27, 2024

Conversation

@alanwaketan
Collaborator

Summary:
This pull request introduces a FlashAttention wrapper that aims to:

  1. Override some default settings to get the best performance out of the box.
  2. Ease the UX so that users don't need to do all the custom_kernel paperwork.

Test Plan:
PJRT_DEVICE=TPU python test/test_pallas.py -v -k test_flash_attention_wrapper
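
For context, a minimal usage sketch of the wrapper follows. The import path and tensor shapes are assumptions based on torch_xla's experimental custom_kernel namespace and the test referenced above, not text taken verbatim from this PR.

```python
import torch
import torch_xla.core.xla_model as xm
# Assumed location of the wrapper; see test/test_pallas.py for the canonical usage.
from torch_xla.experimental.custom_kernel import flash_attention

device = xm.xla_device()
# (batch, num_heads, seq_len, head_dim) -- illustrative shapes only.
q = torch.randn(3, 2, 128, 4, device=device)
k = torch.randn(3, 2, 128, 4, device=device)
v = torch.randn(3, 2, 128, 4, device=device)

# The wrapper hides the custom_kernel plumbing and applies performant defaults.
o = flash_attention(q, k, v)
```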

tmp

introduce flash_attention

Add test case

Fix the test

Fix linters
Collaborator

@JackCaoG JackCaoG left a comment


How many more Pallas-related PRs do you need to backport? Ideally we should not backport features to the 2.3 branch anymore.

@alanwaketan
Collaborator Author

Most of them landed last week, but they all have dependencies, so I have to backport them one by one...

@alanwaketan
Collaborator Author

The flash attention forward-related features are done. I will only backport fixes from now on.

As for the backward pass and distributed support, I won't backport them to 2.3.

@lsy323 lsy323 merged commit db7112a into r2.3 Mar 27, 2024