
[doc, ci] chore: bump verl pin past router packaging fix (#5209)#51

Closed
SamitHuang wants to merge 1 commit into verl-project:main from SamitHuang:docs/install-router-workaround

Conversation

@SamitHuang
Collaborator

Bump the pinned verl SHA from a512e90 to a4351480 (the merge commit of verl-project/verl#5209) in docs/start/install.md and in the three CI workflows that pin verl explicitly. The previous pin shipped a wheel without verl/experimental/reward_loop/router/ because the upstream directory had no __init__.py and setuptools' default package discovery silently dropped it, breaking the FlowGRPO trainer at runtime with "ModuleNotFoundError: No module named 'verl.experimental.reward_loop.router'".

The merge commit adds the missing __init__.py, so the wheel now includes the directory and the FlowGRPO trainer imports cleanly.
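For illustration, a minimal, self-contained sketch of the discovery behavior described above. The directory names mirror the verl layout, but the script itself is hypothetical; it shows that setuptools' default `find_packages()` skips a directory lacking `__init__.py`, while namespace-aware discovery would keep it:

```python
# Sketch: setuptools' find_packages() silently drops a directory that has
# no __init__.py, while find_namespace_packages() would include it.
# Paths mirror the verl layout; the script is illustrative only.
import os
import tempfile

from setuptools import find_namespace_packages, find_packages

with tempfile.TemporaryDirectory() as root:
    # Build verl/experimental/reward_loop/router/ with __init__.py
    # everywhere except router/ (the state of the old pinned commit).
    os.makedirs(os.path.join(root, "verl", "experimental", "reward_loop", "router"))
    for pkg in ("verl", "verl/experimental", "verl/experimental/reward_loop"):
        open(os.path.join(root, pkg, "__init__.py"), "w").close()

    found = find_packages(where=root)
    # router is absent from default discovery, so it is absent from the wheel.
    assert "verl.experimental.reward_loop.router" not in found
    assert "verl.experimental.reward_loop" in found

    # PEP 420-aware discovery would have picked it up.
    assert "verl.experimental.reward_loop.router" in find_namespace_packages(where=root)
```

Adding the missing `__init__.py` (what the merge commit does) makes the directory visible to default discovery again, which is why bumping the pin fixes the wheel.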

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, vllm_omni, rollout, trainer, ci, training_utils, recipe, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, diffusion, omni, tests, docker
    • If this PR involves multiple modules, separate them with , like [diffusion, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][diffusion, fsdp] feat: new rollout scheduler

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

  • Read the Contribute Guide.
  • Apply pre-commit checks: pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always
  • Add / Update the documentation.
  • Add unit or end-to-end test(s) to the CI workflow to cover all the code. If not feasible, explain why: ...

@SamitHuang SamitHuang requested a review from zhtmike April 30, 2026 17:39
@SamitHuang SamitHuang added the ready-for-ci (ready for running CI) label Apr 30, 2026

@gemini-code-assist (bot) left a comment


Code Review

This pull request updates the installation documentation by refreshing the 'Last updated' date and pinning a new git SHA for the verl library. Feedback indicates that the PR description mentions CI workflow updates that are missing from the current diff. Additionally, it is suggested to align the documentation with the default configuration in requirements.txt rather than pinning a specific git SHA to maintain consistency and avoid confusion.

Comment thread on docs/start/install.md (outdated)

  uv pip install vllm==0.18.0
  uv pip install vllm-omni==0.18
- uv pip install git+https://github.com/verl-project/verl.git@a512e90fddcefb64baaa6384e9cf8571b6bfab0b
+ uv pip install git+https://github.com/verl-project/verl.git@a4351480871347092436d17573ad3ccf75b24122


Severity: medium

The PR title and description mention updating the verl pin in three CI workflows, but these changes are not present in the current diff. Additionally, the documentation is updated to pin a specific git SHA while requirements.txt uses a version range (verl>=0.7.1). To avoid confusion, documentation should describe the default configuration rather than detailing specific variants like a git SHA. Please include the CI updates and ensure consistency with the default configuration in requirements.txt to prevent runtime errors.

References
  1. Documentation for examples should describe the default configuration to avoid confusion, rather than detailing every possible variant.

…ckaging bug

The pinned verl commit (a512e90) ships a wheel that is missing
verl/experimental/reward_loop/router/ because the upstream directory had
no __init__.py at that commit and setuptools' default package discovery
silently drops it. This breaks the FlowGRPO trainer at runtime with
"ModuleNotFoundError: No module named 'verl.experimental.reward_loop.router'".

Switch the verl install in docs/start/install.md from a wheel install
(uv pip install git+…@<commit>) to a clone-and-editable install pinned
at the same commit. An editable install exposes the source tree on
sys.path, so router/ is picked up as a PEP 420 implicit namespace
package and the import works without any per-venv patching.

CI workflows are intentionally not touched because they don't exercise
the broken codepath. The pin will be bumped past
verl-project/verl#5209 once verl-omni is also adapted to the breaking
LLMServerClient refactor in verl-project/verl#6129 (tracked separately).
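The PEP 420 mechanism the workaround relies on can be seen in isolation: with an editable install, the parent of router/ sits on sys.path, so the directory imports as an implicit namespace package even without `__init__.py`. A minimal sketch, with stand-in names (`demo_router`, `mod`, `VALUE` are not real verl identifiers):

```python
# Sketch: a directory with no __init__.py still imports as a PEP 420
# implicit namespace package when its parent is on sys.path -- the reason
# an editable install dodges the wheel-packaging bug. Names are stand-ins.
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    pkg_dir = os.path.join(root, "demo_router")
    os.makedirs(pkg_dir)  # note: no __init__.py is created
    with open(os.path.join(pkg_dir, "mod.py"), "w") as f:
        f.write("VALUE = 42\n")

    sys.path.insert(0, root)
    try:
        mod = importlib.import_module("demo_router.mod")
        print(mod.VALUE)  # -> 42, despite demo_router/ having no __init__.py
    finally:
        sys.path.remove(root)
```

A wheel build, by contrast, only ships what package discovery found at build time, which is why the non-editable install of the old pin failed at import.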
Collaborator

@zhtmike zhtmike left a comment


Shouldn't this trivial bug be fixed on the verl side instead of hacking/patching around it here?

SamitHuang added a commit to SamitHuang/verl-omni that referenced this pull request May 1, 2026
Adapt verl-omni's diffusion agent loop and ray trainer to verl-project/verl#6129,
which removed AsyncLLMServerManager and made AgentLoopManager / AgentLoopWorker
consume an LLMServerClient produced by a separately-owned LLMServerManager.

verl-omni changes:
- DiffusionAgentLoopWorker.__init__ now takes (config, llm_client, teacher_client,
  reward_loop_worker_handles), matching the positional contract that
  AgentLoopManager.create() uses when spawning workers. _get_rollout_and_model_config
  was also dropped upstream, so the config slicing is inlined to keep the diff
  minimal.
- ray_diffusion_trainer now creates an LLMServerManager first, hands its client to
  AgentLoopManager.create(), and uses llm_server_manager.get_replicas() (instead of
  async_rollout_manager.rollout_replicas) to wire the CheckpointEngineManager. This
  mirrors the new pattern in upstream verl/trainer/ppo/ray_trainer.py.
- tests/agent_loop/test_diffusion_agent_loop.py is updated for the new API; in
  standalone test mode LLMServerManager spins up its own replicas via
  rollout.nnodes / n_gpus_per_node.

Pin / docs / CI:
- Bump the pinned verl commit to a4351480 (the merge commit of #5209), which is
  the first commit that ships verl/experimental/reward_loop/router/ in the wheel
  AND contains the #6129 refactor that this change adapts to. With this commit,
  the workaround in PR verl-project#51 (clone + editable install) is no longer required.
- Restore the simple `uv pip install git+...@<commit>` install line in
  docs/start/install.md.
- Bump the same pin in .github/workflows/{cpu_unit_tests,sanity,type-coverage-check}.yml.

This is a BREAKING change because DiffusionAgentLoopWorker.__init__ signature changed.
Any downstream code that subclasses or directly instantiates DiffusionAgentLoopWorker
must switch from (servers, load_balancer_handle, teacher_servers, teacher_load_balancer_handle)
to (llm_client, teacher_client). No public CLI/config surface is affected.

Signed-off-by: samithuang <285365963@qq.com>
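For downstream maintainers, the constructor migration described in the commit message above can be sketched roughly as follows. The class body is a stand-in; only the positional argument contract is taken from the commit message:

```python
# Hedged sketch of the DiffusionAgentLoopWorker signature change; the body
# is a stand-in, only the positional contract comes from the commit message.

class DiffusionAgentLoopWorker:
    # New contract (post verl-project/verl#6129): one client object per role
    # instead of raw server lists plus load-balancer handles.
    def __init__(self, config, llm_client, teacher_client, reward_loop_worker_handles):
        self.config = config
        self.llm_client = llm_client          # LLMServerClient for rollout
        self.teacher_client = teacher_client  # LLMServerClient for the teacher
        self.reward_loop_worker_handles = reward_loop_worker_handles

# Old call sites passed (servers, load_balancer_handle,
# teacher_servers, teacher_load_balancer_handle) and must switch to:
worker = DiffusionAgentLoopWorker(
    config={},                 # placeholder config
    llm_client=object(),       # in practice produced by LLMServerManager
    teacher_client=object(),
    reward_loop_worker_handles=[],
)
```

In the real trainer, the clients come from a separately-owned LLMServerManager created before AgentLoopManager.create(), per the commit message.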
@SamitHuang
Collaborator Author

Closed, as #52 will fix this.

@SamitHuang SamitHuang closed this May 1, 2026
SamitHuang added a commit that referenced this pull request May 1, 2026
* [BREAKING][rollout] feat: adapt to verl LLMServerClient refactor

