
[upgrade] update Megatron-Bridge source and LoRA CI to megatron e2e tests and #1023

Merged
yushengsu-thu merged 1 commit into radixark:main from yushengsu-thu:upgrade-megatron-bridge
Apr 21, 2026

Conversation

@yushengsu-thu
Collaborator

  • Add lora_tests to e2e-test-megatron CI job so LoRA test runs with run-ci-megatron label
  • Update Dockerfile to use radixark/Megatron-Bridge@bridge instead of personal fork


Made-with: Cursor
@yushengsu-thu yushengsu-thu marked this pull request as ready for review April 21, 2026 02:33
Copilot AI review requested due to automatic review settings April 21, 2026 02:33
Contributor

gemini-code-assist (bot) left a comment


Code Review

This pull request updates the Megatron-Bridge dependency in the Dockerfile to a new repository and branch. The review identifies opportunities to improve build reproducibility by pinning a specific commit hash and to reduce image size by using the --no-cache-dir flag. It also notes that the CI configuration changes mentioned in the PR description are missing.

Comment thread: docker/Dockerfile

  # RUN pip install git+https://github.com/fzyzcjy/Megatron-Bridge.git@dev_rl --no-build-isolation
  RUN pip install "nvidia-modelopt[torch]>=0.37.0" --no-build-isolation
- RUN pip install git+https://github.com/yushengsu-thu/Megatron-Bridge.git@merged-megatron-0.16.0rc0-miles --no-deps --no-build-isolation
+ RUN pip install git+https://github.com/radixark/Megatron-Bridge.git@bridge --no-deps --no-build-isolation
Contributor


Severity: medium

There are two improvement opportunities for this line:

  1. Reproducibility: Pinning a git dependency to a mutable branch like @bridge makes the build non-deterministic, as the branch content can change. It is highly recommended to use a specific commit hash to ensure build stability.
  2. Efficiency: Adding the --no-cache-dir flag is a best practice in Dockerfiles to prevent pip from storing cache files, which helps keep the resulting image size smaller.

Additionally, the PR description mentions adding lora_tests to the CI job, but the corresponding configuration changes (e.g., in .github/workflows) are missing from the current patch set.

RUN pip install git+https://github.com/radixark/Megatron-Bridge.git@bridge --no-deps --no-build-isolation --no-cache-dir
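Combining both suggestions, the reproducibility fix can be sketched as a Dockerfile fragment; `<commit-sha>` is a placeholder to be replaced with a real pinned commit, not a value taken from this PR:

```dockerfile
# Pin to an immutable commit instead of the mutable @bridge branch,
# and skip pip's download cache to keep the image layer smaller.
RUN pip install "git+https://github.com/radixark/Megatron-Bridge.git@<commit-sha>" \
    --no-deps --no-build-isolation --no-cache-dir
```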


Copilot AI left a comment


Pull request overview

Updates CI and build dependencies to align Megatron e2e + LoRA testing with the intended upstream bridge source.

Changes:

  • Run lora_tests as part of the e2e-test-megatron workflow job (triggered by run-ci-megatron).
  • Switch the Docker image to install Megatron-Bridge from radixark/Megatron-Bridge@bridge.
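The workflow diff itself is not reproduced on this page; a hedged sketch of what adding `lora_tests` to the Megatron e2e job might look like follows. Everything other than the names `lora_tests`, `e2e-test-megatron`, and `run-ci-megatron` is an illustrative assumption, not the actual contents of `pr-test.yml.j2`:

```yaml
# Illustrative fragment only; the real pr-test.yml.j2 structure is not shown in this PR page.
e2e-test-megatron:
  # Job runs only when the PR carries the run-ci-megatron label (assumed trigger shape).
  if: contains(github.event.pull_request.labels.*.name, 'run-ci-megatron')
  strategy:
    matrix:
      # lora_tests appended to the existing Megatron e2e test matrix.
      test_suite: [megatron_e2e_tests, lora_tests]
```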

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

File Description
docker/Dockerfile Updates Megatron-Bridge install source used in the image build.
.github/workflows/pr-test.yml.j2 Adds LoRA tests to the Megatron e2e job definition in the workflow template.
.github/workflows/pr-test.yml Regenerates the concrete workflow YAML to include the LoRA test in the Megatron job matrix.


Comment thread: docker/Dockerfile
Comment on lines 84 to +86

  # RUN pip install git+https://github.com/fzyzcjy/Megatron-Bridge.git@dev_rl --no-build-isolation
  RUN pip install "nvidia-modelopt[torch]>=0.37.0" --no-build-isolation
- RUN pip install git+https://github.com/yushengsu-thu/Megatron-Bridge.git@merged-megatron-0.16.0rc0-miles --no-deps --no-build-isolation
+ RUN pip install git+https://github.com/radixark/Megatron-Bridge.git@bridge --no-deps --no-build-isolation
Collaborator

guapisolo left a comment


approve to unblock

@yushengsu-thu yushengsu-thu merged commit 99956c0 into radixark:main Apr 21, 2026
19 checks passed
DavidBellamy added a commit to LLM360/miles that referenced this pull request Apr 21, 2026
…region clusters (#10)

* Revert "[BUGFIX] [P2PRDMA] Add rollout post-processing after P2PRDMA weight updates" (radixark#882)

* [Fix] fix ci (radixark#894)

* Avoid threading for ray getting object (radixark#886)

* Add explicit errors for unsupported Megatron profiles (radixark#887)

* Add nvfp4 quantizer files (radixark#907)

* Bump flash-linear-attention version to 0.4.2 (radixark#892)

* [BUGFIX] Invoke "post_process_quantization" by default after weight updating (radixark#890)

Co-authored-by: Yueming Yuan <yym022502@gmail.com>

* Add heartbeat and id to session server (radixark#866)

* fix: adding thin glm5 image to docker build + latest tag sync (radixark#871)

* Add consistent hashing routing policy for rollout (radixark#891)

Co-authored-by: Yueming Yuan <yueming@Mac.attlocal.net>

* [example] add retool v2 example with multi-turn framework interfaces (radixark#654)

Co-authored-by: GuanxingLu <gxlu02@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Expose rollout-batch-size, n-samples-per-prompt, global-batch-size as CLI args in swe-agent-v2 (radixark#954)

Co-authored-by: Shi Dong <shi.dong@radixark.ai>

* chore: remove obsolete swe-agent server.py and run-qwen3.sh (radixark#952)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add weight staleness control for fully async rollout (radixark#958)

* Fix/pause generation mode (radixark#924)

Co-authored-by: Yueming Yuan <yym022502@gmail.com>

* [v0.5.10][1] Bump sglang to v0.5.10 (radixark#898)

* [v0.5.10][2] Fix apply_chat_template behavior for transformers >=5.0 (radixark#926)

Co-authored-by: guapisolo <guapisolo@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [v0.5.10][3] Fix processor return_tensors duplicate kwarg for transformers >=5.0 (radixark#927)

Co-authored-by: guapisolo <guapisolo@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [v0.5.10][4] Fix _no_split_modules set not subscriptable in transformers >=5.0 (radixark#931)

* [v0.5.10][5] Disable piecewise cuda graph to avoid NVLS oom (radixark#935)

* [v0.5.10][6][FSDP] fix outdated weight update logic in FSDP (radixark#948)

Co-authored-by: guapisolo <guapisolo@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: maocheng23 <35615230+maocheng23@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [v0.5.10][7][FSDP] move FSDP to experimental and disable by default (radixark#961)

* Add skiplist and more robust calculation on val (radixark#965)

* [fix] tiny fix debug rollout only in weight version check (radixark#967)

* feat: real cp support with relayout fix for qwen3.5 train/rollout mismatch (radixark#885)

* [AMD] Upgrade to sglv0.5.10 (radixark#973)

* switch model to actor (radixark#756)

* [fix] support general logic to bypass fp32 downcast and fix qwen35 A_log dtype (radixark#975)

Co-authored-by: yueming-yuan <yym022502@gmail.com>

* fix: populate prefix_cache_info in OpenAI/session rollout path (radixark#960)

* Remove prepare_harbor_tasks.py; use harbor-private adapters (radixark#982)

* [fix] Skip flush_cache in in_place mode and add fully async example (radixark#974)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* GLM47 full cmd for async and sync reasoning (radixark#986)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: handle non-tool appended messages in TITO incremental tokenization (radixark#949)

Co-authored-by: Yanbin Jiang <jybsuper@gmail.com>

* [docker] Add sgl-model-gateway install and download .tar.gz assets (radixark#895)

* [ci] fix hf rate limit error by caching tokenizer loading (radixark#1014)

Co-authored-by: maocheng23 <35615230+maocheng23@users.noreply.github.com>

* Use load_generate_function in legacy sglang_rollout path (radixark#1016)

* Update CODEOWNERS to add new reviewers (radixark#1021)

* Support moe lora for gpt-oss (radixark#798)

Co-authored-by: Ethan (Yusheng) Su <yushengsu.thu@gmail.com>

* [fix] restore expert_bias to fp32 before bridge weight export (radixark#811)

* [chore] drop legacy transformers upgrade pin for glm47-flash and qwen35 (radixark#1018)

* [fix] Enforce param dtype before wrap ddp (radixark#992)

Co-authored-by: Zhichen Zeng <zczeng@uw.edu>

* [upgrade] update Megatron-Bridge source and LoRA CI to megatron e2e tests and  (radixark#1023)

* [CI] Drop --use-miles-router from R3 tests and add r3 comparison test between sgl & miles router (radixark#1015)

* wandb: raise init_timeout, add retry wrapper, fix shared-mode init for cross-region clusters

In online + shared mode, both `init_wandb_primary` and `init_wandb_secondary`
make HTTPS round-trips to wandb cloud (login + run create/attach). On
high-latency cross-region clusters (e.g. Abu Dhabi MBZUAI ↔ wandb-cloud
US-West) with concurrent actor bursts, a single round-trip can exceed the
wandb SDK's 90s default `init_timeout` — tearing down the whole run
with a silent handshake abort. Observed on RL360 job 1564420, which
forced `WANDB_MODE=offline` as a global default ever since (see
https://github.com/LLM360/RL360/issues/87).

The issue's original diagnosis assumed a local primary↔secondary socket
handshake race. That's not how shared mode works — per wandb's own
feature PR (wandb/wandb#6882), each writer spawns
an independent wandb-core that talks to the cloud directly; aggregation
is server-side by run_id. No local socket exists. The failure mode is
pure network/latency, not a local readiness race.

Changes
-------

- Bump `init_timeout` to 300s for primary and secondary Settings.
  Configurable via `WANDB_INIT_TIMEOUT_SECS` env var for tuning.
- Wrap both init paths in a bounded exponential-backoff retry
  (`_wandb_init_with_retry`) that re-attempts on wandb.errors.CommError
  and wandb.errors.UsageError. 3 attempts with 5→10→20s backoff by
  default, tunable via `WANDB_INIT_RETRY_ATTEMPTS` /
  `WANDB_INIT_RETRY_BACKOFF_SECS`.
- Add `x_label` tagging per wandb distributed-training docs: primary
  gets `rank_<rank>_primary`, secondaries get `rank_<rank>_secondary`.
  Enables per-rank console-log filtering in the wandb UI.
- Drop `reinit=True` from secondary init_kwargs. Shared mode natively
  supports concurrent writers on a single run; `reinit=True` triggered
  stale-state warnings on secondary actors without functional benefit.

Followups this change enables
-----------------------------

- `WANDB_MODE=offline` can be removed from scale.yaml's extra_env
  default once a pilot run confirms online mode boots cleanly.
- The tmux-based `~/bin/wandb-sync-rl360.sh` workaround on David's M2
  account becomes obsolete (no more offline-only default).
- Near-realtime wandb dashboards replace the ~2-minute-lag offline
  sync; per-rank system metrics via x_label filtering.

---------

Co-authored-by: JD <jaedon.guo@gmail.com>
Co-authored-by: Ethan (Yusheng) Su <yushengsu.thu@gmail.com>
Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Co-authored-by: Ziang Li <ziangli@umich.edu>
Co-authored-by: Zhichen Zeng <zczeng@uw.edu>
Co-authored-by: JensenFire <xinji1@microsoft.com>
Co-authored-by: Yueming Yuan <yym022502@gmail.com>
Co-authored-by: maocheng23 <35615230+maocheng23@users.noreply.github.com>
Co-authored-by: Douglas Yang <douglasyang88@gmail.com>
Co-authored-by: Yueming Yuan <yueming@Mac.attlocal.net>
Co-authored-by: Huapeng Zhou <73010314+PopSoda2002@users.noreply.github.com>
Co-authored-by: GuanxingLu <gxlu02@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Shi-Dong <Shi-Dong@users.noreply.github.com>
Co-authored-by: Shi Dong <shi.dong@radixark.ai>
Co-authored-by: Jiajun Li <48857426+guapisolo@users.noreply.github.com>
Co-authored-by: guapisolo <guapisolo@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Yuzhen Zhou <82826991+zyzshishui@users.noreply.github.com>
Co-authored-by: Yanbin Jiang <jybsuper@gmail.com>
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
Co-authored-by: Yisheng Gong <yishenggong9437@gmail.com>