
StoppingCriteria tracks elements separately in the batch #29056

Closed
zucchini-nlp wants to merge 0 commits into huggingface:main from zucchini-nlp:stopping_crtiteria

Conversation

@zucchini-nlp
Member

What does this PR do?

As was pointed out in #28932, StoppingCriteria needs to stop generation per batch element and return a boolean tensor of shape (batch_size,). This PR adds the logic to track each row so that, when a StoppingCriteria is triggered, generation stops for that particular row only.
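The per-row contract described above can be sketched as follows. This is an illustrative sketch only, with plain Python lists standing in for torch tensors; the names `criterion`, `step`, `unfinished`, and `stop_token` are hypothetical, not the actual transformers internals:

```python
# Illustrative sketch: a criterion returns one bool per batch row, and the
# generation loop ANDs its negation into a running "unfinished" mask.
# (Plain Python lists stand in for torch tensors; names are hypothetical.)

def criterion(input_ids, stop_token):
    """Return one bool per batch row: True once that row emitted stop_token."""
    return [row[-1] == stop_token for row in input_ids]

def step(unfinished, input_ids, stop_token):
    """Fold the per-row result into the running mask: a row that was already
    finished stays finished, and a newly done row flips to finished."""
    is_done = criterion(input_ids, stop_token)
    return [u and not d for u, d in zip(unfinished, is_done)]

batch = [[5, 7, 2], [5, 7, 9]]   # row 0 just emitted stop token 2, row 1 did not
mask = step([True, True], batch, stop_token=2)   # -> [False, True]
```

Only row 0 is marked finished; row 1 keeps generating on the next step.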

Note that when #28932 gets merged, we will need to add logic to handle beam-related generation. The problem is that beam search has its own internal logic for tracking EOS tokens, and adds candidate tokens to a hypothesis when done. If StoppingCriteria takes over the responsibility of tracking custom EOS tokens, that information has to be passed to the beam scorer. Right now I am not sure whether calling StoppingCriteria twice is a good decision: once to check the candidate beams, and a second time for the chosen beams. What do you think @gante? It could be something like:

cur_len = input_ids.shape[-1] + 1
beam_next_input_ids = torch.cat([input_ids[next_indices, :], next_tokens.unsqueeze(-1)], dim=-1)
beam_next_input_ids = beam_next_input_ids.view(-1, cur_len)
next_is_done = stopping_criteria(beam_next_input_ids, scores)

# if all rows are done, it's probably max_length, not a custom EOS, being triggered
if next_is_done.all():
    next_is_done = torch.full_like(next_indices, False, dtype=torch.bool)
next_is_done = next_is_done.view(next_indices.shape)
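For context on what happens once a row is marked done: greedy-style decoding keeps the batch rectangular by overwriting a finished row's next token with the pad token. A rough sketch of that masking step, with plain Python lists standing in for the tensor arithmetic (not the verbatim transformers code):

```python
def mask_next_tokens(next_tokens, unfinished, pad_token_id):
    # finished rows (unfinished == 0) keep emitting pad_token_id, so the
    # batch stays rectangular while those rows are effectively frozen
    return [t * u + pad_token_id * (1 - u)
            for t, u in zip(next_tokens, unfinished)]

mask_next_tokens([11, 13], [1, 0], pad_token_id=0)   # -> [11, 0]
```

Row 1 is finished, so it receives the pad token instead of its sampled token; row 0 continues normally.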

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante

@zucchini-nlp zucchini-nlp changed the title StoppingCrtiteria tracks elements separately in the batch StoppingCriteria tracks elements separately in the batch Feb 16, 2024
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@gante
Contributor

Looking good 👍

Regarding beam search: the current PyTorch implementation tracks beam termination in a separate class (as opposed to e.g. unfinished_sequences in greedy decoding). Ideally, beam termination should come from the stopping criteria as well but, as you wrote, calling it twice is suboptimal. We should first refactor beam search to be torch.compile-compatible first, then think of a solution for this particular case (leaving it as it is for now).

@gante
Contributor

oops, meant to approve.

btw, you'll need to run make fixup on your end to get rid of the formatting errors in CI.


3 participants