
Bump tokenizers from 0.21 to 0.22.2#2

Closed
dependabot[bot] wants to merge 1 commit into main from
dependabot/pip/tokenizers-0.22.2

Conversation


@dependabot dependabot Bot commented on behalf of github Apr 17, 2026

Bumps tokenizers from 0.21 to 0.22.2.

Release notes

Sourced from tokenizers's releases.

Release v0.22.2

What's Changed

Mostly doing the release for these PRs:

In short: better typing (checked at least with ty), and much faster (4 to 8x) loading of vocabs with many added tokens, now GIL-free.

New Contributors

Full Changelog: huggingface/tokenizers@v0.22.1...v0.22.2

Release v0.22.1

Main change:

  • Bump huggingface_hub upper version (#1866) from @Wauplin
  • chore(trainer): add and improve trainer signature (#1838) from @shenxiangzhuang
  • Some doc updates: c91d76ae558ca2dc1aa725959e65dc21bf1fed7e, 7b0217894c1e2baed7354ab41503841b47af7cf9, 57eb8d7d9564621221784f7949b9efdeb7a49ac1

v0.22.0

What's Changed

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [tokenizers](https://github.com/huggingface/tokenizers) from 0.21 to 0.22.2.
- [Release notes](https://github.com/huggingface/tokenizers/releases)
- [Changelog](https://github.com/huggingface/tokenizers/blob/main/RELEASE.md)
- [Commits](https://github.com/huggingface/tokenizers/compare/v0.21.0...v0.22.2)

---
updated-dependencies:
- dependency-name: tokenizers
  dependency-version: 0.22.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot Bot added dependencies Pull requests that update a dependency file python Pull requests that update python code labels Apr 17, 2026
@dependabot dependabot Bot changed the base branch from verl_0902 to main April 17, 2026 06:02
@FortPercent FortPercent deleted the dependabot/pip/tokenizers-0.22.2 branch April 29, 2026 08:13

dependabot Bot commented on behalf of github Apr 29, 2026

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.
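The ignore condition mentioned above lives in the repository's Dependabot config. A minimal sketch of `.github/dependabot.yml` (the `pip` ecosystem, directory, and weekly schedule here are illustrative assumptions, not taken from this repo):

```yaml
# Hypothetical .github/dependabot.yml fragment: skip patch releases of tokenizers.
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      - dependency-name: "tokenizers"
        update-types: ["version-update:semver-patch"]
```

Valid `update-types` values are `version-update:semver-major`, `version-update:semver-minor`, and `version-update:semver-patch`.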

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

FortPercent pushed a commit that referenced this pull request Apr 29, 2026
Co-authored-by: Bihan Rana <bihan@Bihans-MacBook-Pro.local>
Co-authored-by: peterschmidt85 <andrey.cheptsov@gmail.com>
FortPercent pushed a commit that referenced this pull request May 2, 2026
1. reward_models/hps.py: revert frame indexing to ``video_frames[0]``.
   The earlier ``video_frames[:, 0]`` "fix" was based on a wrong layout
   assumption (assumed (C, T, H, W)). split_video_frames(permute_to_tchw=
   True) actually produces (T, C, H, W), matching aesthetic.py:130 — so
   ``video_frames[0]`` is the correct first-frame slice. The wrong fix
   produced ``KeyError ((H, W, T), '|u1')`` from PIL on a T-channel array.

2. dp_actor.py: train_timesteps was ``int(N * fraction)`` which floored
   to 0 for sampling_steps in {1, 2} (the rollout drops the final
   sigma->0 step, leaving N=sampling_steps-1 trainable timesteps). The
   policy update silently no-op'd. Changed to ``max(1, int(...))`` and
   added a RuntimeError when N <= 0 so the failure is loud.

3. run_dancegrpo_smoke.sh: bump default SAMPLING_STEPS from 1 to 4.
   With sampling_steps=1 the rollout produces 0 trainable timesteps
   (see #2), which made the smoke run but never update the policy. 4
   keeps the smoke fast while still giving the actor real gradient
   signal under the default timestep_fraction=0.6.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
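Item 1 above hinges entirely on the tensor layout. A minimal sketch with a hypothetical stand-in array (shapes assumed from the commit message, not read from the repo's code):

```python
import numpy as np

# Hypothetical stand-in for the output of split_video_frames(permute_to_tchw=True):
# T frames, each with C channels of H x W pixels, laid out (T, C, H, W).
T, C, H, W = 8, 3, 32, 32
video_frames = np.zeros((T, C, H, W), dtype=np.uint8)

first_frame = video_frames[0]     # (C, H, W): one whole frame -- the correct slice
wrong_slice = video_frames[:, 0]  # (T, H, W): channel 0 of every frame

print(first_frame.shape)  # (3, 32, 32)
print(wrong_slice.shape)  # (8, 32, 32)
```

Handing the `(T, H, W)` array to PIL makes it interpret T as an image axis, which is consistent with the `KeyError ((H, W, T), '|u1')` described in the message.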
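The flooring bug in item 2 is easy to reproduce in isolation. A hedged sketch of the fixed logic (function name and signature are illustrative, not the actual `dp_actor.py` API):

```python
def train_timesteps(sampling_steps: int, fraction: float = 0.6) -> int:
    # The rollout drops the final sigma->0 step, so only
    # N = sampling_steps - 1 timesteps are trainable.
    n = sampling_steps - 1
    if n <= 0:
        # Fail loudly instead of letting the policy update silently no-op.
        raise RuntimeError(f"no trainable timesteps (sampling_steps={sampling_steps})")
    # int() truncates toward zero, so int(1 * 0.6) == 0;
    # max(1, ...) guarantees at least one trainable timestep.
    return max(1, int(n * fraction))

print(train_timesteps(2))  # 1 (pre-fix: int(1 * 0.6) == 0, a silent no-op)
print(train_timesteps(4))  # 1
```

With `sampling_steps=1` (the old smoke-test default) the function now raises rather than training on zero timesteps, matching the rationale for bumping `SAMPLING_STEPS` to 4.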
FortPercent pushed a commit that referenced this pull request May 7, 2026
Co-authored-by: Bihan Rana <bihan@Bihans-MacBook-Pro.local>
Co-authored-by: peterschmidt85 <andrey.cheptsov@gmail.com>
FortPercent pushed a commit that referenced this pull request May 7, 2026
1. reward_models/hps.py: revert frame indexing to ``video_frames[0]``.
   The earlier ``video_frames[:, 0]`` "fix" was based on a wrong layout
   assumption (assumed (C, T, H, W)). split_video_frames(permute_to_tchw=
   True) actually produces (T, C, H, W), matching aesthetic.py:130 — so
   ``video_frames[0]`` is the correct first-frame slice. The wrong fix
   produced ``KeyError ((H, W, T), '|u1')`` from PIL on a T-channel array.

2. dp_actor.py: train_timesteps was ``int(N * fraction)`` which floored
   to 0 for sampling_steps in {1, 2} (the rollout drops the final
   sigma->0 step, leaving N=sampling_steps-1 trainable timesteps). The
   policy update silently no-op'd. Changed to ``max(1, int(...))`` and
   added a RuntimeError when N <= 0 so the failure is loud.

3. run_dancegrpo_smoke.sh: bump default SAMPLING_STEPS from 1 to 4.
   With sampling_steps=1 the rollout produces 0 trainable timesteps
   (see #2), which made the smoke run but never update the policy. 4
   keeps the smoke fast while still giving the actor real gradient
   signal under the default timestep_fraction=0.6.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
