System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce GTX 1070 Ti
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
- Download an audio sample https://drive.google.com/file/d/1eVeFUyfHWMpmFSRYxmBWaNe_JLEQqT8G/view?usp=sharing
- Use transformers v4.41 + my fix from "Fix missing `sequences_scores` in the Whisper beam search output" #32970 (it makes it possible to output `sequences_scores`)
- Run the code below to get 5 hypotheses of beam search on audio transcription
```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
import torch
import librosa

# Load the processor and model
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

# Load and preprocess the audio file
audio_path = "audio.mp3"
audio, sr = librosa.load(audio_path, sr=16000)  # ensure the sample rate is 16 kHz

# Preprocess the audio to get the input features
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Generate the transcription using beam search
beam_outputs = model.generate(
    inputs["input_features"],
    num_beams=5,             # number of beams
    num_return_sequences=5,  # number of hypotheses to return
    early_stopping=True,
    output_scores=True,
    return_dict_in_generate=True,
)

# Decode the generated transcriptions
hypotheses = [processor.decode(output_ids, skip_special_tokens=True) for output_ids in beam_outputs.sequences]

# Print out the hypotheses with their sequence scores
for i, hypothesis in enumerate(hypotheses):
    print(f"Hypothesis {i + 1}: {hypothesis}. Score: {beam_outputs.sequences_scores[i]}")
```
Expected behavior
Together with @ylacombe, we identified that since Pull Request #30984, Whisper beam search generation doesn't work as intended.
See the more detailed discussion in Pull Request #32970.
The code above should return 5 unique hypotheses, following the core principle of beam search: keep the `num_beams` best candidate sequences at each decoding step. Instead, we get the single most probable sequence repeated five times. See below for how beam search used to work in version v4.25.1 and how it works now.
transformers v4.25.1
Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.4627407491207123
Hypothesis 2: How is Mozilla going to handle and be with this? Thank you and Q.. Score: -0.4789799749851227
Hypothesis 3: How is Mozilla going to handle and be with this? Thank you, and cute.. Score: -0.48414239287376404
Hypothesis 4: How is Mozilla going to handle and be with this? Thank you and cute.. Score: -0.4972183108329773
Hypothesis 5: How is Mozilla going to handle and be with this? Thank you, and Q.. Score: -0.5054414868354797
transformers v4.44.1 + My Fix from #32970
Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495038032531738
Hypothesis 2: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495040416717529
Hypothesis 3: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036840438843
Hypothesis 4: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036244392395
Hypothesis 5: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495033264160156
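For reference, the expected behavior follows directly from how beam search works: at each step the `num_beams` highest-scoring partial sequences are kept, so the returned beams are distinct sequences by construction. A minimal toy sketch of this principle (standalone illustration, not the transformers implementation):
```python
import math

# Toy next-token log-probabilities, independent of context for simplicity.
log_probs = {"a": math.log(0.6), "b": math.log(0.3), "c": math.log(0.1)}
num_beams, steps = 3, 2

beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
for _ in range(steps):
    candidates = [
        (seq + (tok,), score + lp)
        for seq, score in beams
        for tok, lp in log_probs.items()
    ]
    # Keep the num_beams best-scoring candidates: distinct sequences by design.
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]

for seq, score in beams:
    print(seq, round(score, 3))
# ('a', 'a') -1.022, ('a', 'b') -1.715, ('b', 'a') -1.715
```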
@ylacombe has found the bug in the `_expand_variables_for_generation` function.
The function artificially expands the batch size to `num_return_sequences`, which causes an issue when this expanded batch is passed to `GenerationMixin.generate`. Specifically, with `num_return_sequences > 1` the expanded batch (e.g. `batch_size=5` for a single input) produces `batch_size * num_beams` beams, but only the most probable beam is retained for each batch element. Since the expanded elements are identical copies of the same input, all returned sequences collapse to the same hypothesis.
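Here is a minimal sketch of the problematic expansion (a simplification of the logic described above, not the actual transformers code):
```python
import torch

# A single input is repeated num_return_sequences times along the batch dim,
# mirroring what _expand_variables_for_generation is described to do.
input_features = torch.randn(1, 80, 3000)  # (batch=1, mel bins, frames)
num_return_sequences = 5

expanded = input_features.repeat_interleave(num_return_sequences, dim=0)
print(expanded.shape)  # torch.Size([5, 80, 3000]) -- five identical copies

# When this expanded batch reaches GenerationMixin.generate, beam search keeps
# only the top beam per batch row. Identical rows yield identical top beams,
# so all five "hypotheses" collapse to the same sequence, as shown above.
```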
Impact
This bug makes the `num_return_sequences` parameter ineffective for both short-form and long-form generation: users expecting multiple return sequences receive only copies of the most probable one, which defeats the purpose of the parameter.
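A quick way to check for the regression with the reproduction script above is to assert that the returned hypotheses are not all identical (`hypotheses` as defined in that script):
```python
# With working beam search, num_return_sequences=5 should yield distinct
# hypotheses rather than five copies of the most probable one.
unique = set(hypotheses)
print(f"{len(unique)} unique out of {len(hypotheses)} returned sequences")
assert len(unique) > 1, "Bug reproduced: all returned sequences are identical"
```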
cc @eustlb