Questions about implementation #7
Closed
Description
Hi, first of all, thanks for a useful library. I've been looking into your implementation of prompt weighting and have some questions about it. (I'm only interested in the `get_embeddings_for_weighted_prompt_fragments` function, without blending etc.)
- If you have a separate function for handling weights < 1, why are those weights also used in the first call to `build_weighted_embedding_tensor`?
- The logic for handling the negative case makes much more sense to me; why not adapt the same approach for positive weights?
I've tried changing your implementation by adapting a similar strategy for weights > 1, and it seems to give much more consistent results.
There is another implementation suggestion. Currently you calculate `embedding_without_this` by removing the weighted fragment, which significantly changes the whole final embedding. I've observed that if you instead mask the fragment's tokens by passing an `attention_mask` to the `text_encoder`, the embedding changes less overall, giving a more precise "direction" for the change.
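A sketch of the masking idea, using Hugging Face `transformers`. To keep it runnable without downloading weights, it builds a tiny randomly initialised CLIP text encoder; in practice you would load the real one (e.g. `CLIPTextModel.from_pretrained(...)`), and the fragment token positions (3..5 here) are purely illustrative:

```python
import torch
from transformers import CLIPTextConfig, CLIPTextModel

# Tiny random-weight text encoder so the sketch runs offline.
config = CLIPTextConfig(hidden_size=32, intermediate_size=64,
                        num_hidden_layers=2, num_attention_heads=4,
                        max_position_embeddings=77)
encoder = CLIPTextModel(config).eval()

input_ids = torch.randint(0, config.vocab_size, (1, 77))

# Full embedding: every token attended to.
full_mask = torch.ones(1, 77, dtype=torch.long)

# Masked variant: same token ids, but the hypothetical weighted
# fragment (positions 3..5) is hidden from attention instead of
# being removed from the sequence entirely.
masked = full_mask.clone()
masked[0, 3:6] = 0

with torch.no_grad():
    emb_full = encoder(input_ids, attention_mask=full_mask).last_hidden_state
    emb_masked = encoder(input_ids, attention_mask=masked).last_hidden_state

# The two embeddings share shape and token positions, so their
# difference isolates the fragment's contribution more directly
# than re-encoding a shortened prompt would.
delta = emb_full - emb_masked
```

Since masking keeps the sequence length and the positions of all other tokens fixed, `delta` is a like-for-like difference, whereas deleting the fragment shifts every subsequent token's position and perturbs the whole embedding.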