Generate: move `prepare_inputs_for_generation` in encoder-decoder llms #34048
gante merged 6 commits into huggingface:main
Conversation
@zucchini-nlp this PR may have a conflict with your encoder-decoder+compile PR 👀
zucchini-nlp left a comment
Thanks! I will update my PR when this one gets merged. Left a tiny question about Blip-2, overall LGTM as long as the tests don't complain
is it okay that we're losing this? It seems like BLIP was forcefully passing this kwarg to later set the cache?
I think we don't have tests for BlipText, nor for the VLM part, so we can't rely on tests for BLIP 😭 (I'll work on it soon; rn I'm working on Idefics models and BLIP will be next)
uhmm perhaps -- `is_decoder=True` is the default everywhere (in `forward`, in the config), but the user could force it to `False`. Going to revert
(I suspect this class is never used with is_decoder=True, but too late to fix that :D )
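For context, a rough sketch of the kind of override being discussed (paraphrased for illustration, not the verbatim BLIP code): the model-level `prepare_inputs_for_generation` injects `is_decoder=True` into the returned inputs, so removing the override would silently drop that kwarg for users who had forced it to `False` in the config.

```python
# Paraphrased sketch of a model-specific override that forces a kwarg during
# generation -- illustrative, not the verbatim BLIP implementation.
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
    return {
        "input_ids": input_ids,
        "past_key_values": past_key_values,
        # Forced here regardless of the config value, so the text model always
        # takes its decoder (causal + cache) path while generating.
        "is_decoder": True,
    }
```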
yeah, BLIP is a difficult case, better keep it overridden hehe
ArthurZucker left a comment
🧼 🧼 🧼 🧼 Very nice!
Force-pushed from ca46d3b to 40d6c34
Force-pushed from 40d6c34 to 369b614
Ran the following slow tests before merging:
Don't assume that past_key_values is part of the model_kwargs. This fix is similar to huggingface#2140 but for encoder-decoder models. It became necessary after huggingface/transformers#34048 was merged into transformers.
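A minimal sketch of the pattern this commit message describes, with hypothetical names: look `past_key_values` up via `dict.get` instead of indexing, since after huggingface/transformers#34048 encoder-decoder models are no longer guaranteed to have that key in `model_kwargs`.

```python
# Hypothetical helper illustrating the fix: tolerate a missing
# "past_key_values" key instead of assuming generation always set it.
def get_past_key_values(model_kwargs: dict):
    # Before: model_kwargs["past_key_values"] -> KeyError for some
    # encoder-decoder models after transformers#34048.
    # After: return the cache if present, else None.
    return model_kwargs.get("past_key_values")
```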
SabaPivot left a comment
Could you please reply?
Much appreciated!
```python
def test_prepare_inputs_for_generation_encoder_decoder_llm(self):
    """
    Same as `test_prepare_inputs_for_generation_decoder_llm` but for encoder-decoder models. Main difference: we
    should look for `decoder_input_ids`, instead of `input_ids`.
    """
    model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/tiny-random-t5")
    model = model.to(torch_device)

    # 1. Sanity check: the model's `prepare_inputs_for_generation` comes from `GenerationMixin`
    self.assertTrue("GenerationMixin" in str(model.prepare_inputs_for_generation))

    # 2. If we pass input ids by themselves, we should get back the same input ids -- with the encoder-decoder key
    decoder_input_ids = torch.tensor([[1, 2, 3], [4, 5, 6]]).to(torch_device)
    model_inputs = model.prepare_inputs_for_generation(decoder_input_ids)
    self.assertTrue(torch.all(model_inputs["decoder_input_ids"] == decoder_input_ids))

    # 3. If we pass the attention mask too, we will get back the attention mask. Encoder-decoder models usually
    # don't use `position_ids`
    decoder_attention_mask = torch.tensor([[1, 1, 1], [1, 1, 1]]).to(torch_device)
    model_inputs = model.prepare_inputs_for_generation(
        decoder_input_ids, decoder_attention_mask=decoder_attention_mask
    )
    self.assertTrue(torch.all(model_inputs["decoder_attention_mask"] == decoder_attention_mask))
    self.assertTrue("position_ids" not in model_inputs)

    # 4. `use_cache` (and other kwargs, like the encoder outputs) are forwarded
    self.assertFalse("use_cache" in model_inputs)  # From the previous input, there is no `use_cache`
    model_inputs = model.prepare_inputs_for_generation(decoder_input_ids, use_cache=True, encoder_outputs="foo")
    self.assertTrue(model_inputs["use_cache"] is True)
    self.assertTrue(model_inputs["encoder_outputs"] == "foo")
    # See the decoder-only test for more corner cases. The code is the same, so we don't repeat it here.
```
Should I add this to my `AutoAdapterModel` to generate in adapters using T5?
If you mean the tests, you should not need to add it anywhere, as it is run only to test the correctness of new modifications.
In general, it is advised to post questions in the forum if it is not a bug or feature request.
What does this PR do?
Part of step 6 in #32685
Follow-up to #33870
This PR:
- Updates `GenerationMixin.prepare_inputs_for_generation` to use `decoder_input_ids` in encoder-decoder models
- Removes `prepare_inputs_for_generation` in encoder-decoder llms 🔪 😎
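A condensed sketch of the resulting behavior (heavily simplified -- the real `GenerationMixin.prepare_inputs_for_generation` also handles cache slicing, position ids, and more): the input key is chosen from `config.is_encoder_decoder`, and the remaining kwargs are forwarded untouched, which is exactly what the test above checks.

```python
# Heavily simplified illustration of the shared logic -- not the full
# GenerationMixin.prepare_inputs_for_generation implementation.
def prepare_inputs_for_generation(self, input_ids, **kwargs):
    # Encoder-decoder models feed the generated tokens to the decoder.
    input_ids_key = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
    model_inputs = {input_ids_key: input_ids}
    # Everything else (attention masks, use_cache, encoder_outputs, ...) is
    # forwarded as-is.
    model_inputs.update({k: v for k, v in kwargs.items() if v is not None})
    return model_inputs
```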