
Fix ROPE embeddings for LLama #29138

Closed

zucchini-nlp wants to merge 2 commits into huggingface:main from zucchini-nlp:llama_test

Conversation

@zucchini-nlp
Member

What does this PR do?

This test failed on my PR, and I looked into the reason: the changes introduced to make Llama compile-compatible are causing the failure.

The fixes here were tested with full-graph compilation; compilation still works without graph breaks. I also ran the failing test 500 times and found that, aside from the RoPE embeddings, the failures were caused by SDPA attention. I cannot say exactly why, but with the fixes in this PR the test passes 95% of the time with SDPA and 100% of the time with eager attention; before these fixes, both attention implementations passed about 90% of the time.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@gante

@ArthurZucker (Collaborator) left a comment


Thanks for opening the PR 🤗 sorry maybe too early to review 😉

```diff
 def forward(self, x, position_ids, seq_len=None):
     # x: [bs, num_attention_heads, seq_len, head_size]
-    freqs = (self.inv_freq[:, None].float().expand(-1, position_ids.shape[0]) @ (position_ids.float())).t()
+    freqs = torch.einsum("i,bl->bli", self.inv_freq.float(), position_ids.float())
```
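For context, here is a minimal sketch of how a `freqs` tensor like this is typically turned into the cos/sin tables and applied to the queries, assuming the standard rotate-half convention (not necessarily the exact transformers implementation):

```python
import torch

bs, heads, seq_len, half_dim = 2, 3, 5, 4
freqs = torch.randn(bs, seq_len, half_dim)   # [bs, seq_len, dim/2], as above

# Duplicate the frequencies so cos/sin span the full head dimension.
emb = torch.cat((freqs, freqs), dim=-1)      # [bs, seq_len, dim]
cos, sin = emb.cos(), emb.sin()

def rotate_half(x):
    # Swap the two halves of the last dim, negating the second half.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

q = torch.randn(bs, heads, seq_len, 2 * half_dim)
# Broadcast cos/sin over the heads dimension.
q_rot = q * cos[:, None] + rotate_half(q) * sin[:, None]
assert q_rot.shape == q.shape
```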
Collaborator
No, einsum should not be used: first, we prefer explicit computation, and second, it will fail with mixed dtypes (bfloat16, float32, etc.).
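To illustrate the point, the einsum has a straightforward explicit-broadcasting equivalent (a sketch for illustration, not necessarily the exact code that ended up on main):

```python
import torch

inv_freq = torch.randn(4)                              # [dim/2]
position_ids = torch.arange(6).reshape(2, 3).float()   # [bs, seq_len]

# einsum form: out[b, l, i] = inv_freq[i] * position_ids[b, l]
freqs_einsum = torch.einsum("i,bl->bli", inv_freq, position_ids)

# Explicit form: a batched outer product [bs, dim/2, 1] @ [bs, 1, seq_len],
# then a transpose to [bs, seq_len, dim/2].
inv_freq_expanded = inv_freq[None, :, None].expand(position_ids.shape[0], -1, 1)
freqs_explicit = (inv_freq_expanded @ position_ids[:, None, :]).transpose(1, 2)

assert torch.allclose(freqs_einsum, freqs_explicit)
```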

Member Author
I just saw that it is fixed on main. Never mind, I'll close the PR.

@zucchini-nlp zucchini-nlp deleted the llama_test branch February 26, 2024 12:46
