refact : fix convert script + zero out KV cache to avoid nans #3523

Merged
ggerganov merged 4 commits into master from fix-refact on Oct 9, 2023

Conversation

ggerganov (Member) commented Oct 7, 2023

ref: #3329 (comment)

Question: should we first mask the KV tensor and then apply ALiBi?

https://github.com/ggerganov/llama.cpp/blob/bdbe11719d81dfdc955b762b6d99796724e292b7/llama.cpp#L3763-L3771

If that were the case, then the above KV cache initialization wouldn't be needed, since any uninitialized values would be masked with -INF.

ggerganov mentioned this pull request Oct 7, 2023
slaren (Member) commented Oct 7, 2023

> If that were the case, then the above KV cache initialization wouldn't be needed since any uninitialized values will be masked with -INF

But nan - INF is still nan, so I don't think that this would work for removing nans before alibi.

ggerganov added the "need feedback" label (Testing and feedback with results are needed) on Oct 8, 2023
ggerganov merged commit fcca0a7 into master on Oct 9, 2023