refactor: Migrate to new UX2 for python launch #2003
Conversation
Walkthrough

The changes refactor TRTLLM backend launch scripts and internal logic to use Python module-based invocations (`python -m dynamo.frontend` and `python -m dynamo.trtllm`) in place of direct script execution.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Script as Shell Script
    participant Module as Python Module (dynamo.frontend / dynamo.trtllm)
    participant Main as Main Entrypoint
    participant Logic as Worker/Frontend Logic
    User->>Script: Launch backend (e.g., agg.sh)
    Script->>Module: python -m dynamo.frontend / dynamo.trtllm [args]
    Module->>Main: Import and call main()
    Main->>Logic: Initialize with router_mode (from CLI)
    Logic-->>Main: Start service (router mode governs behavior)
```
Actionable comments posted: 4
🔭 Outside diff range comments (1)
components/backends/trtllm/launch/disagg.sh (1)
**40-45: Decode worker isn't cleaned up on SIGINT**

The script traps and kills `$DYNAMO_PID` and `$PREFILL_PID` but not the foreground decode worker. A Ctrl-C will leave it running. A minimal fix:

```diff
-wait $DYNAMO_PID $PREFILL_PID 2>/dev/null || true
+kill $DECODE_PID 2>/dev/null || true
+wait $DYNAMO_PID $PREFILL_PID $DECODE_PID 2>/dev/null || true
```

or launch decode in the background and use `exec` for simpler semantics.
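For intuition, the same track-and-reap pattern can be sketched in Python. This is a hedged illustration, not the script's code: the sleeper commands stand in for the frontend, prefill, and decode workers, and every child (including the "foreground" one) is registered for cleanup.

```python
# Sketch: register every child PID and terminate/reap them all on interrupt,
# mirroring the shell trap the review asks for. Commands are placeholders.
import signal
import subprocess
import sys

children: list[subprocess.Popen] = []

def cleanup() -> None:
    """Terminate and reap every tracked child, like the shell trap + wait."""
    for proc in children:
        if proc.poll() is None:
            proc.terminate()
    for proc in children:
        proc.wait()

def handle_sigint(signum, frame):
    cleanup()
    sys.exit(130)  # conventional exit status after SIGINT

signal.signal(signal.SIGINT, handle_sigint)

# stand-ins for the frontend, prefill, and decode workers
for _ in range(3):
    children.append(
        subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
    )

cleanup()  # normally reached via the signal handler or at script end
print(all(proc.returncode is not None for proc in children))
```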
🧹 Nitpick comments (6)
pyproject.toml (1)

**82-82: Long package list is getting brittle**

Manually enumerating every `src/dynamo` folder is error-prone and easy to forget during future moves. Consider switching to:

```toml
[tool.hatch.build.targets.wheel]
packages = ["**/src/dynamo"]
```

or declare a single top-level `packages = ["dynamo"]` with an appropriate `packages.find` include rule.

components/backends/trtllm/multinode/start_frontend_services.sh (1)
**16-16: Use `exec` and explicit host binding for cleaner signal handling**

`python3 -m dynamo.frontend --http-port 8000` runs as a child process; SIGTERM to the script won't propagate correctly, and PID 1 reparenting can leave zombies when this is run in a container. Replacing the shell process with the Python process fixes that:

```diff
-python3 -m dynamo.frontend --http-port 8000
+exec python3 -m dynamo.frontend --host 0.0.0.0 --http-port 8000
```

(The extra `--host 0.0.0.0` ensures the service is reachable from other pods/nodes; the previous dynamo-run defaulted to that.)

components/backends/trtllm/launch/disagg.sh (1)
**26-28: Same `exec`/host comment applies to the frontend launch**

Consider:

```diff
-python3 -m dynamo.frontend --http-port=8000 &
+exec python3 -m dynamo.frontend --host 0.0.0.0 --http-port=8000 &
```

to avoid orphaned children in k8s jobs.
components/backends/trtllm/launch/agg.sh (1)

**22-24: Frontend launch: apply the same `exec`/host pattern**

```diff
-python3 -m dynamo.frontend --http-port 8000 &
+exec python3 -m dynamo.frontend --host 0.0.0.0 --http-port 8000 &
```

Keeps signal handling and reachability consistent across all launch scripts.
components/backends/trtllm/src/dynamo/trtllm/main.py (1)

**42-51: Consider using a dictionary for router mode mapping**

The `get_router_mode` function could be more concise using a dictionary mapping. Also, the else clause should be unreachable if the enum is properly validated during config parsing.

```diff
-def get_router_mode(router_mode: ConfigRouterMode) -> RouterMode:
-    if router_mode == ConfigRouterMode.KV:
-        return RouterMode.KV
-    elif router_mode == ConfigRouterMode.ROUND_ROBIN:
-        return RouterMode.RoundRobin
-    elif router_mode == ConfigRouterMode.RANDOM:
-        return RouterMode.Random
-    else:
-        raise ValueError(f"Invalid router mode: {router_mode}")
+def get_router_mode(router_mode: ConfigRouterMode) -> RouterMode:
+    mapping = {
+        ConfigRouterMode.KV: RouterMode.KV,
+        ConfigRouterMode.ROUND_ROBIN: RouterMode.RoundRobin,
+        ConfigRouterMode.RANDOM: RouterMode.Random,
+    }
+    return mapping[router_mode]
```

This approach is more maintainable and will raise a `KeyError` if an unmapped value is passed, which shouldn't happen with proper enum usage.
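The suggested dictionary mapping, as a self-contained sketch. Both enums here are stand-ins for the real `ConfigRouterMode` in trtllm_utils.py and the runtime's `RouterMode`; member values are assumed for illustration.

```python
# Dictionary-based enum-to-enum mapping: one lookup instead of an if/elif chain.
from enum import Enum

class ConfigRouterMode(Enum):      # stand-in for the CLI-facing enum
    KV = "kv"
    ROUND_ROBIN = "round_robin"
    RANDOM = "random"

class RouterMode(Enum):            # stand-in for the runtime-facing enum
    KV = "KV"
    RoundRobin = "RoundRobin"
    Random = "Random"

_MAPPING = {
    ConfigRouterMode.KV: RouterMode.KV,
    ConfigRouterMode.ROUND_ROBIN: RouterMode.RoundRobin,
    ConfigRouterMode.RANDOM: RouterMode.Random,
}

def get_router_mode(mode: ConfigRouterMode) -> RouterMode:
    # A KeyError here signals an unmapped member, which argparse-level
    # validation should already have ruled out.
    return _MAPPING[mode]

print(get_router_mode(ConfigRouterMode.ROUND_ROBIN).name)
```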
components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py (1)

**14-18: Consider consistent casing for RouterMode enum values**

The enum values use inconsistent casing:

- `ROUND_ROBIN = "RoundRobin"` (PascalCase)
- `KV = "KV"` (uppercase)
- `RANDOM = "Random"` (PascalCase)

This inconsistency might be causing the kv/KV confusion seen in the shell scripts. Consider using consistent casing across all enum values for better maintainability.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)

- components/backends/trtllm/launch/agg.sh (1 hunks)
- components/backends/trtllm/launch/agg_router.sh (1 hunks)
- components/backends/trtllm/launch/disagg.sh (2 hunks)
- components/backends/trtllm/launch/disagg_router.sh (2 hunks)
- components/backends/trtllm/multinode/start_frontend_services.sh (1 hunks)
- components/backends/trtllm/multinode/start_trtllm_worker.sh (1 hunks)
- components/backends/trtllm/src/dynamo/trtllm/__init__.py (1 hunks)
- components/backends/trtllm/src/dynamo/trtllm/__main__.py (1 hunks)
- components/backends/trtllm/src/dynamo/trtllm/main.py (5 hunks)
- components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (1 hunks)
- components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handlers.py (1 hunks)
- components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py (5 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧠 Learnings (10)
📓 Common learnings
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1412
File: lib/bindings/python/src/dynamo/runtime/logging.py:100-100
Timestamp: 2025-06-06T21:48:35.214Z
Learning: In the Dynamo codebase, BentoML has been completely removed from all executable code, with only documentation and attribution references remaining. The error_loggers configuration in lib/bindings/python/src/dynamo/runtime/logging.py should not include "bentoml" since those modules no longer exist.
components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (1)
Learnt from: tanmayv25
PR: ai-dynamo/dynamo#1391
File: examples/tensorrt_llm/common/base_engine.py:171-176
Timestamp: 2025-06-05T01:10:51.865Z
Learning: In examples/tensorrt_llm/common/base_engine.py, the _init_engine method is called only once during initialization, so direct mutation of the _default_sampling_params object during setup is safe and appropriate.
pyproject.toml (1)
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1412
File: lib/bindings/python/src/dynamo/runtime/logging.py:100-100
Timestamp: 2025-06-06T21:48:35.214Z
Learning: In the Dynamo codebase, BentoML has been completely removed from all executable code, with only documentation and attribution references remaining. The error_loggers configuration in lib/bindings/python/src/dynamo/runtime/logging.py should not include "bentoml" since those modules no longer exist.
components/backends/trtllm/multinode/start_trtllm_worker.sh (2)
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/job_script_template.j2:59-59
Timestamp: 2025-07-02T13:20:28.800Z
Learning: In the SLURM job script template at examples/sglang/slurm_jobs/job_script_template.j2, the `--total_nodes` parameter represents the total nodes per worker type (prefill or decode), not the total nodes in the entire cluster. Each worker type needs to know its own group size for distributed coordination.
components/backends/trtllm/multinode/start_frontend_services.sh (2)
Learnt from: GuanLuo
PR: ai-dynamo/dynamo#1371
File: examples/llm/benchmarks/vllm_multinode_setup.sh:18-25
Timestamp: 2025-06-05T01:46:15.509Z
Learning: In multi-node setups with head/worker architecture, the head node typically doesn't need environment variables pointing to its own services (like NATS_SERVER, ETCD_ENDPOINTS) because local processes can access them via localhost. Only worker nodes need these environment variables to connect to the head node's external IP address.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
components/backends/trtllm/launch/agg.sh (1)
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
components/backends/trtllm/launch/disagg.sh (1)
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
components/backends/trtllm/launch/agg_router.sh (1)
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
components/backends/trtllm/src/dynamo/trtllm/main.py (1)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/utils/metrics.py:30-32
Timestamp: 2025-07-01T13:55:03.940Z
Learning: The `@dynamo_worker()` decorator in the dynamo codebase returns a wrapper that automatically injects the `runtime` parameter before calling the wrapped function. This means callers only need to provide the non-runtime parameters, while the decorator handles injecting the runtime argument automatically. For example, a function with signature `async def get_metrics(runtime, log_dir)` decorated with `@dynamo_worker()` can be called as `get_metrics(log_dir)` because the decorator wrapper injects the runtime parameter.
components/backends/trtllm/launch/disagg_router.sh (1)
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
🧬 Code Graph Analysis (4)
components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (1)

- components/backends/trtllm/src/dynamo/trtllm/utils/disagg_utils.py (1)
  - DisaggregatedParamsCodec (21-64)

components/backends/trtllm/src/dynamo/trtllm/__main__.py (1)

- components/backends/trtllm/src/dynamo/trtllm/main.py (1)
  - main (188-189)
components/backends/trtllm/src/dynamo/trtllm/main.py (7)

- lib/bindings/python/rust/lib.rs (1)
  - _core (61-115)
- components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py (5)
  - RouterMode (14-17)
  - Config (29-62)
  - cmd_line_args (95-205)
  - is_first_worker (65-80)
  - parse_endpoint (83-92)
- lib/bindings/python/src/dynamo/_core.pyi (1)
  - DistributedRuntime (42-65)
- lib/bindings/python/src/dynamo/runtime/__init__.py (1)
  - dynamo_worker (34-60)
- lib/bindings/python/src/dynamo/runtime/logging.py (1)
  - configure_dynamo_logging (77-105)
- components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (1)
  - RequestHandlerConfig (46-57)
- components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handlers.py (1)
  - RequestHandlerFactory (14-48)
components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py (1)

- components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (2)
  - DisaggregationMode (34-37)
  - DisaggregationStrategy (40-42)
🔇 Additional comments (12)
components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handler_base.py (1)

**26-29: LGTM! Import path correctly updated to absolute imports.**

The change from relative to absolute imports (`dynamo.trtllm.utils.disagg_utils`) aligns with the package restructuring and follows Python best practices for explicit import paths.

components/backends/trtllm/src/dynamo/trtllm/utils/request_handlers/handlers.py (1)

**6-11: LGTM! Import path correctly standardized to absolute imports.**

The update to use the fully qualified import path `dynamo.trtllm.utils.request_handlers.handler_base` is consistent with the package restructuring and maintains all required imports.

components/backends/trtllm/src/dynamo/trtllm/__main__.py (1)

**1-6: LGTM! Proper `__main__.py` implementation for module execution.**

The file correctly implements the module entry point pattern, enabling `python -m dynamo.trtllm` execution as intended by the UX2 migration. The structure is clean and follows Python conventions.

components/backends/trtllm/src/dynamo/trtllm/__init__.py (1)

**1-2: LGTM! Proper package initialization file.**

The minimal `__init__.py` correctly establishes the `dynamo.trtllm` package namespace, enabling the module-based execution pattern introduced in this refactor.

components/backends/trtllm/launch/disagg.sh (1)
**31-37: Verify module entry point and engine-arg quoting**

- Ensure `dynamo/trtllm/__main__.py` is present; otherwise `python -m dynamo.trtllm` will fail at runtime.
- Does `$PREFILL_ENGINE_ARGS` contain a YAML path with spaces? If so, it needs quoting inside the flag to avoid word-splitting: `--extra-engine-args="$PREFILL_ENGINE_ARGS"`. (The current line has one space before the variable and splits correctly only if the variable has no spaces.)
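A runnable illustration of why `__main__.py` matters for `python -m`. This builds a throwaway package (the name `demo_pkg` and its contents are hypothetical, chosen only to mirror the `__main__.py` → `main()` pattern this PR adds) and runs it as a module:

```python
# Build a minimal package with __init__.py, main.py, and __main__.py, then
# invoke it with `python -m` — without __main__.py, the -m invocation fails.
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "demo_pkg"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "main.py").write_text("def main():\n    print('worker started')\n")
    # __main__.py mirrors the PR's pattern: import main() and call it
    (pkg / "__main__.py").write_text(
        "from demo_pkg.main import main\n\nif __name__ == '__main__':\n    main()\n"
    )
    out = subprocess.run(
        [sys.executable, "-m", "demo_pkg"],
        cwd=tmp,            # `python -m` prepends the cwd to sys.path
        capture_output=True,
        text=True,
    )
    print(out.stdout.strip())
```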
components/backends/trtllm/launch/agg.sh (1)

**27-30: Confirm new worker CLI parity**

components/worker.py exposed flags like `--tensor-parallel-size`. Double-check that `dynamo.trtllm` surfaces every flag used by existing example notebooks and docs to avoid silent regressions.

components/backends/trtllm/launch/agg_router.sh (1)
**28-33: Router mode value casing may break CLI parsing**

`--router-mode KV` is uppercase while the enum in Python code is usually lowercase (`kv`). If the parser uses `RouterMode[value.upper()]` this works; otherwise it raises `ValueError`. Please verify and standardise the casing across all scripts.

components/backends/trtllm/launch/disagg_router.sh (1)
components/backends/trtllm/launch/disagg_router.sh (1)
42-58: Module invocation changes look goodThe migration from running
components/worker.pydirectly to usingpython3 -m dynamo.trtllmaligns well with the PR objectives and follows Python best practices for module execution.components/backends/trtllm/src/dynamo/trtllm/main.py (2)
**113-123: Router mode condition changes are correct**

The replacement of `config.publish_events_and_metrics` with `config.router_mode == ConfigRouterMode.KV` is consistent throughout the file and properly maintains the same logic for KV cache event publishing.

Also applies to: 168-186

**188-192: Clean refactoring with `main()` function**

The addition of the `main()` function wrapper provides a cleaner entry point and follows Python best practices.

components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py (2)
**133-138: Well-implemented router mode argument parsing**

The command line argument implementation:

- properly validates input against enum values
- provides clear help text
- correctly converts string input to enum type

This is a clean replacement for the previous boolean flag approach.

Also applies to: 203-203
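The validated, enum-converting argument described above can be sketched with argparse. This is an illustrative stand-in, not the exact trtllm_utils.py code: the enum name, member values, and default are assumptions.

```python
# Enum-backed --router-mode flag: argparse converts the string to an enum
# member and rejects anything outside the enum's values.
import argparse
from enum import Enum

class RouterMode(Enum):            # hypothetical stand-in enum
    KV = "kv"
    ROUND_ROBIN = "round_robin"
    RANDOM = "random"

parser = argparse.ArgumentParser()
parser.add_argument(
    "--router-mode",
    type=RouterMode,               # converts "kv" -> RouterMode.KV
    choices=list(RouterMode),      # validates against the enum members
    default=RouterMode.ROUND_ROBIN,
    help="Routing strategy for the frontend: kv, round_robin, or random.",
)
args = parser.parse_args(["--router-mode", "kv"])
print(args.router_mode)
```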
**22-22: Confirm intentional switch to chat model variant**

The default model path was changed from the instruction-tuned `"TinyLlama-1.1B-Instruct"` (optimized for single-turn, structured responses) to the chat-tuned `"TinyLlama/TinyLlama-1.1B-Chat-v1.0"` (optimized for multi-turn conversational AI). Please verify that you intended to switch to the chat-oriented model, as its behavior and performance differ significantly from the instruct variant.

- Location: components/backends/trtllm/src/dynamo/trtllm/utils/trtllm_utils.py, line 22
- Old value: `"TinyLlama-1.1B-Instruct"`
- New value: `"TinyLlama/TinyLlama-1.1B-Chat-v1.0"`
Why does the worker have a routing mode?
```text
commit e330d96 Author: Yan Ru Pei <yanrpei@gmail.com> Date: Fri Jul 18 13:40:54 2025 -0700 feat: enable / disable chunked prefill for mockers (#2015) Signed-off-by: Yan Ru Pei <yanrpei@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
commit 353146e Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Date: Fri Jul 18 13:33:36 2025 -0700 feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990) Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Co-authored-by: krishung5 <krish@nvidia.com>
commit 1f07dab Author: Jacky <18255193+kthui@users.noreply.github.com> Date: Fri Jul 18 13:04:20 2025 -0700 feat: Add migration to LLM requests (#1930)
commit 5f17918 Author: Tanmay Verma <tanmayv@nvidia.com> Date: Fri Jul 18 12:59:34 2025 -0700 refactor: Migrate to new UX2 for python launch (#2003)
commit fc12436 Author: Graham King <grahamk@nvidia.com> Date: Fri Jul 18 14:52:57 2025 -0400 feat(frontend): router-mode settings (#2001)
commit dc75cf1 Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com> Date: Fri Jul 18 18:47:28 2025 +0200 chore: Move NIXL repo clone to Dockerfiles (#2009)
commit f6f392c Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com> Date: Thu Jul 17 18:44:17 2025 -0700 Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006) Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
commit cc90ca6 Author: atchernych <atchernych@nvidia.com> Date: Thu Jul 17 18:34:40 2025 -0700 feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)
commit 267b422 Author: Greg Clark <grclark@nvidia.com> Date: Thu Jul 17 20:44:21 2025 -0400 chore: loosed python requirement versions (#1998) Signed-off-by: Greg Clark <grclark@nvidia.com>
commit b8474e5 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Thu Jul 17 16:35:05 2025 -0700 chore: update cmake and gap installation and sgl in wideep container (#1991)
commit 157a3b0 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 15:38:12 2025 -0700 fix: incorrect helm upgrade command (#2000)
commit 0dfca2c Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 15:33:33 2025 -0700 ci: Update trtllm gitlab triggers for new components directory and test script (#1992)
commit f3fb09e Author: Kris Hung <krish@nvidia.com> Date: Thu Jul 17 14:59:59 2025 -0700 fix: Fix syntax for tokio-console (#1997)
commit dacffb8 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 14:57:10 2025 -0700 fix: use non-dev golang image for operator (#1993)
commit 2b29a0a Author: zaristei <zaristei@berkeley.edu> Date: Thu Jul 17 13:10:42 2025 -0700 fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)
commit 2430d89 Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 12:57:46 2025 -0700 test: Add trtllm kv router tests (#1988)
commit 1eadc01 Author: Graham King <grahamk@nvidia.com> Date: Thu Jul 17 15:07:41 2025 -0400 feat(runtime): Support tokio-console (#1986)
commit b62e633 Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Date: Thu Jul 17 11:16:28 2025 -0700 feat: support separate chat_template.jinja file (#1853)
commit 8ae3719 Author: Hongkuan Zhou <tedzhouhk@gmail.com> Date: Thu Jul 17 11:12:35 2025 -0700 chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978) Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com> Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>
commit 08891ff Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 10:57:42 2025 -0700 fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)
commit 49b7a0d Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Thu Jul 17 08:35:04 2025 -0600 feat: record + analyze logprobs (#1957)
commit 6d2be14 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 00:17:58 2025 -0700 refactor: replace vllm with vllm_v1 container (#1953) Co-authored-by: alec-flowers <aflowers@nvidia.com>
commit 4d2a31a Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Wed Jul 16 18:04:09 2025 -0700 chore: add port reservation to utils (#1980)
commit 1e3e4a0 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Wed Jul 16 15:54:04 2025 -0700 fix: port race condition through deterministic ports (#1937)
commit 4ad281f Author: Tanmay Verma <tanmayv@nvidia.com> Date: Wed Jul 16 14:33:51 2025 -0700 refactor: Move TRTLLM example to the component/backends (#1976)
commit 57d24a1 Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com> Date: Wed Jul 16 14:10:24 2025 -0700 build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)
commit 182d3b5 Author: Graham King <grahamk@nvidia.com> Date: Wed Jul 16 16:12:40 2025 -0400 chore(bindings): Remove mistralrs / llama.cpp (#1970)
commit def6eaa Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Date: Wed Jul 16 15:50:23 2025 -0400 feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)
commit f31732a Author: Yan Ru Pei <yanrpei@gmail.com> Date: Wed Jul 16 11:22:15 2025 -0700 feat: integrate mocker with dynamo-run and python cli (#1927)
commit aba6099 Author: Graham King <grahamk@nvidia.com> Date: Wed Jul 16 12:26:32 2025 -0400 perf(router): Remove lock from router hot path (#1963)
commit b212103 Author: Hongkuan Zhou <tedzhouhk@gmail.com> Date: Wed Jul 16 08:55:33 2025 -0700 docs: add notes in docs to deprecate local connector (#1959)
commit 7b325ee Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 18:52:00 2025 -0700 fix: vllm router examples (#1942)
commit a50be1a Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com> Date: Tue Jul 15 17:58:01 2025 -0700 feat: update CODEOWNERS (#1926)
commit e260fdf Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Date: Tue Jul 15 18:49:21 2025 -0400 feat: add bitnami helm chart attribution (#1943) Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
commit 1c03404 Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 14:26:24 2025 -0700 fix: update inference gateway deployment instructions (#1940)
commit 5ca570f Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:54:03 2025 -0400 chore: Rename dynamo.ingress to dynamo.frontend (#1944)
commit 7b9182f Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:33:07 2025 -0400 chore: Move examples/cli to lib/bindings/examples/cli (#1952)
commit 40d40dd Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:02:19 2025 -0400 chore(multi-modal): Rename frontend.py to web.py (#1951)
commit a9e0891 Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Tue Jul 15 12:30:30 2025 -0600 feat: adding http clients and recorded response stream (#1919)
commit 4128d58 Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 10:30:47 2025 -0700 feat: allow helm upgrade using deploy script (#1936)
commit 4da078b Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 12:57:38 2025 -0400 fix: Remove OpenSSL dependency, use Rust TLS (#1945)
commit fc004d4 Author: jthomson04 <jwillthomson19@gmail.com> Date: Tue Jul 15 08:45:42 2025 -0700 fix: Fix TRT-LLM container build when using a custom pip wheel (#1825)
commit 3c6fc6f Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 22:35:20 2025 -0700 chore: fix typo (#1938)
commit de7fe38 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Mon Jul 14 21:47:12 2025 -0700 feat: add vllm e2e integration tests (#1935)
commit 860f3f7 Author: Keiven C <213854356+keivenchang@users.noreply.github.com> Date: Mon Jul 14 21:44:19 2025 -0700 chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934) Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com>
commit fc402a3 Author: Biswa Panda <biswa.panda@gmail.com> Date: Mon Jul 14 21:21:20 2025 -0700 feat: configurable namespace for vllm v1 example (#1909)
commit df40d2c Author: ZichengMa <zichengma1225@gmail.com> Date: Mon Jul 14 21:11:29 2025 -0700 docs: fix typo and add mount-workspace to vllm doc (#1931) Signed-off-by: ZichengMa <zichengma1225@gmail.com> Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
commit 901715b Author: Tanmay Verma <tanmayv@nvidia.com> Date: Mon Jul 14 20:14:51 2025 -0700 refactor: Refactor the TRTLLM examples remove dynamo SDK (#1884)
commit 5bf23d5 Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com> Date: Mon Jul 14 18:29:19 2025 -0700 feat: update DynamoGraphDeployments for vllm_v1 (#1890) Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu>
commit 9e76590 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 17:29:56 2025 -0700 docs: organize sglang readme (#1910)
commit ef59ac8 Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Date: Mon Jul 14 16:16:44 2025 -0700 docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828) Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
commit 053041e Author: Jorge António <matroid@outlook.com> Date: Tue Jul 15 00:06:38 2025 +0100 fix: resolve incorrect finish reason propagation (#1857)
commit 3733f58 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 19:04:22 2025 -0400 feat(backends): Python llama.cpp engine (#1925)
commit 6a1350c Author: Tushar Sharma <tusharma@nvidia.com> Date: Mon Jul 14 14:56:36 2025 -0700 build: minor improvements to sglang dockerfile (#1917)
commit e2a619b Author: Neelay Shah <neelays@nvidia.com> Date: Mon Jul 14 14:52:53 2025 -0700 fix: remove environment variable passing (#1911) Signed-off-by: Neelay Shah <neelays@nvidia.com> Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com>
commit 3d17a49 Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com> Date: Mon Jul 14 14:41:56 2025 -0700 refactor: remove dynamo build (#1778) Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>
commit 3e0cb07 Author: Anant Sharma <anants@nvidia.com> Date: Mon Jul 14 15:43:48 2025 -0400 fix: copy attributions and license to trtllm runtime container (#1916)
commit fc36bf5 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 12:31:49 2025 -0700 feat: receive kvmetrics from sglang scheduler (#1789) Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com>
commit df91fce Author: Yan Ru Pei <yanrpei@gmail.com> Date: Mon Jul 14 12:24:04 2025 -0700 feat: prefill aware routing (#1895)
commit ad8ad66 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:20:35 2025 -0400 feat: Shrink the ai-dynamo wheel by 35 MiB (#1918) Remove http and llmctl binaries. They have been unused for a while.
commit 480b41d Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:06:45 2025 -0400 feat: Python frontend / ingress node (#1912)
```
```text
commit cb6de94 Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com> Date: Sun Jul 20 22:34:50 2025 +0200 chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010) Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com> Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com> Co-authored-by: alec-flowers <aflowers@nvidia.com>
commit fe63c17 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Fri Jul 18 17:45:08 2025 -0700 fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)
commit bf1998f Author: jthomson04 <jwillthomson19@gmail.com> Date: Fri Jul 18 17:23:50 2025 -0700 fix: Don't detokenize twice in TRT-LLM examples (#1955)
commit 343a481 Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Fri Jul 18 16:22:43 2025 -0600 feat: http disconnects (#2014)
```
Tue Jul 15 16:54:03 2025 -0400 chore: Rename dynamo.ingress to dynamo.frontend (#1944) commit 7b9182f Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:33:07 2025 -0400 chore: Move examples/cli to lib/bindings/examples/cli (#1952) commit 40d40dd Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:02:19 2025 -0400 chore(multi-modal): Rename frontend.py to web.py (#1951) commit a9e0891 Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Tue Jul 15 12:30:30 2025 -0600 feat: adding http clients and recorded response stream (#1919) commit 4128d58 Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 10:30:47 2025 -0700 feat: allow helm upgrade using deploy script (#1936) commit 4da078b Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 12:57:38 2025 -0400 fix: Remove OpenSSL dependency, use Rust TLS (#1945) commit fc004d4 Author: jthomson04 <jwillthomson19@gmail.com> Date: Tue Jul 15 08:45:42 2025 -0700 fix: Fix TRT-LLM container build when using a custom pip wheel (#1825) commit 3c6fc6f Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 22:35:20 2025 -0700 chore: fix typo (#1938) commit de7fe38 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Mon Jul 14 21:47:12 2025 -0700 feat: add vllm e2e integration tests (#1935) commit 860f3f7 Author: Keiven C <213854356+keivenchang@users.noreply.github.com> Date: Mon Jul 14 21:44:19 2025 -0700 chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934) Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com> commit fc402a3 Author: Biswa Panda <biswa.panda@gmail.com> Date: Mon Jul 14 21:21:20 2025 -0700 feat: configurable namespace for vllm v1 example (#1909) commit df40d2c Author: ZichengMa <zichengma1225@gmail.com> Date: Mon Jul 14 21:11:29 2025 -0700 docs: fix typo and add mount-workspace to vllm doc (#1931) Signed-off-by: ZichengMa <zichengma1225@gmail.com> Co-authored-by: Alec 
<35311602+alec-flowers@users.noreply.github.com> commit 901715b Author: Tanmay Verma <tanmayv@nvidia.com> Date: Mon Jul 14 20:14:51 2025 -0700 refactor: Refactor the TRTLLM examples remove dynamo SDK (#1884) commit 5bf23d5 Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com> Date: Mon Jul 14 18:29:19 2025 -0700 feat: update DynamoGraphDeployments for vllm_v1 (#1890) Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu> commit 9e76590 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 17:29:56 2025 -0700 docs: organize sglang readme (#1910) commit ef59ac8 Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Date: Mon Jul 14 16:16:44 2025 -0700 docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828) Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com> commit 053041e Author: Jorge António <matroid@outlook.com> Date: Tue Jul 15 00:06:38 2025 +0100 fix: resolve incorrect finish reason propagation (#1857) commit 3733f58 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 19:04:22 2025 -0400 feat(backends): Python llama.cpp engine (#1925) commit 6a1350c Author: Tushar Sharma <tusharma@nvidia.com> Date: Mon Jul 14 14:56:36 2025 -0700 build: minor improvements to sglang dockerfile (#1917) commit e2a619b Author: Neelay Shah <neelays@nvidia.com> Date: Mon Jul 14 14:52:53 2025 -0700 fix: remove environment variable passing (#1911) Signed-off-by: Neelay Shah <neelays@nvidia.com> Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com> commit 3d17a49 Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com> Date: Mon Jul 14 14:41:56 2025 -0700 refactor: remove dynamo build (#1778) Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com> commit 3e0cb07 Author: Anant Sharma <anants@nvidia.com> Date: Mon Jul 14 
15:43:48 2025 -0400 fix: copy attributions and license to trtllm runtime container (#1916) commit fc36bf5 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 12:31:49 2025 -0700 feat: receive kvmetrics from sglang scheduler (#1789) Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com> commit df91fce Author: Yan Ru Pei <yanrpei@gmail.com> Date: Mon Jul 14 12:24:04 2025 -0700 feat: prefill aware routing (#1895) commit ad8ad66 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:20:35 2025 -0400 feat: Shrink the ai-dynamo wheel by 35 MiB (#1918) Remove http and llmctl binaries. They have been unused for a while. commit 480b41d Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:06:45 2025 -0400 feat: Python frontend / ingress node (#1912)
commit d4b5414 Author: atchernych <atchernych@nvidia.com> Date: Mon Jul 21 13:10:24 2025 -0700 fix: mypy error (#2029)
commit 79337c7 Author: Ryan McCormick <rmccormick@nvidia.com> Date: Mon Jul 21 12:12:16 2025 -0700 build: support custom TRTLLM build for commits not on main branch (#2021)
commit 95dd942 Author: atchernych <atchernych@nvidia.com> Date: Mon Jul 21 12:09:33 2025 -0700 docs: Post-Merge cleanup of the deploy documentation (#1922)
commit cb6de94 Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com> Date: Sun Jul 20 22:34:50 2025 +0200 chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010) Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com> Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com> Co-authored-by: alec-flowers <aflowers@nvidia.com>
commit fe63c17 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Fri Jul 18 17:45:08 2025 -0700 fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)
commit bf1998f Author: jthomson04 <jwillthomson19@gmail.com> Date: Fri Jul 18 17:23:50 2025 -0700 fix: Don't detokenize twice in TRT-LLM examples (#1955)
commit 343a481 Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Fri Jul 18 16:22:43 2025 -0600 feat: http disconnects (#2014)
commit e330d96 Author: Yan Ru Pei <yanrpei@gmail.com> Date: Fri Jul 18 13:40:54 2025 -0700 feat: enable / disable chunked prefill for mockers (#2015) Signed-off-by: Yan Ru Pei <yanrpei@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
commit 353146e Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Date: Fri Jul 18 13:33:36 2025 -0700 feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990) Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Co-authored-by: krishung5 <krish@nvidia.com>
commit 1f07dab Author: Jacky <18255193+kthui@users.noreply.github.com> Date: Fri Jul 18 13:04:20 2025 -0700 feat: Add migration to LLM requests (#1930)
commit 5f17918 Author: Tanmay Verma <tanmayv@nvidia.com> Date: Fri Jul 18 12:59:34 2025 -0700 refactor: Migrate to new UX2 for python launch (#2003)
commit fc12436 Author: Graham King <grahamk@nvidia.com> Date: Fri Jul 18 14:52:57 2025 -0400 feat(frontend): router-mode settings (#2001)
commit dc75cf1 Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com> Date: Fri Jul 18 18:47:28 2025 +0200 chore: Move NIXL repo clone to Dockerfiles (#2009)
commit f6f392c Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com> Date: Thu Jul 17 18:44:17 2025 -0700 Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006) Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
commit cc90ca6 Author: atchernych <atchernych@nvidia.com> Date: Thu Jul 17 18:34:40 2025 -0700 feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)
commit 267b422 Author: Greg Clark <grclark@nvidia.com> Date: Thu Jul 17 20:44:21 2025 -0400 chore: loosed python requirement versions (#1998) Signed-off-by: Greg Clark <grclark@nvidia.com>
commit b8474e5 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Thu Jul 17 16:35:05 2025 -0700 chore: update cmake and gap installation and sgl in wideep container (#1991)
commit 157a3b0 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 15:38:12 2025 -0700 fix: incorrect helm upgrade command (#2000)
commit 0dfca2c Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 15:33:33 2025 -0700 ci: Update trtllm gitlab triggers for new components directory and test script (#1992)
commit f3fb09e Author: Kris Hung <krish@nvidia.com> Date: Thu Jul 17 14:59:59 2025 -0700 fix: Fix syntax for tokio-console (#1997)
commit dacffb8 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 14:57:10 2025 -0700 fix: use non-dev golang image for operator (#1993)
commit 2b29a0a Author: zaristei <zaristei@berkeley.edu> Date: Thu Jul 17 13:10:42 2025 -0700 fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)
commit 2430d89 Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 12:57:46 2025 -0700 test: Add trtllm kv router tests (#1988)
commit 1eadc01 Author: Graham King <grahamk@nvidia.com> Date: Thu Jul 17 15:07:41 2025 -0400 feat(runtime): Support tokio-console (#1986)
commit b62e633 Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com> Date: Thu Jul 17 11:16:28 2025 -0700 feat: support separate chat_template.jinja file (#1853)
commit 8ae3719 Author: Hongkuan Zhou <tedzhouhk@gmail.com> Date: Thu Jul 17 11:12:35 2025 -0700 chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978) Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com> Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>
commit 08891ff Author: Ryan McCormick <rmccormick@nvidia.com> Date: Thu Jul 17 10:57:42 2025 -0700 fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)
commit 49b7a0d Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Thu Jul 17 08:35:04 2025 -0600 feat: record + analyze logprobs (#1957)
commit 6d2be14 Author: Biswa Panda <biswa.panda@gmail.com> Date: Thu Jul 17 00:17:58 2025 -0700 refactor: replace vllm with vllm_v1 container (#1953) Co-authored-by: alec-flowers <aflowers@nvidia.com>
commit 4d2a31a Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Wed Jul 16 18:04:09 2025 -0700 chore: add port reservation to utils (#1980)
commit 1e3e4a0 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Wed Jul 16 15:54:04 2025 -0700 fix: port race condition through deterministic ports (#1937)
commit 4ad281f Author: Tanmay Verma <tanmayv@nvidia.com> Date: Wed Jul 16 14:33:51 2025 -0700 refactor: Move TRTLLM example to the component/backends (#1976)
commit 57d24a1 Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com> Date: Wed Jul 16 14:10:24 2025 -0700 build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)
commit 182d3b5 Author: Graham King <grahamk@nvidia.com> Date: Wed Jul 16 16:12:40 2025 -0400 chore(bindings): Remove mistralrs / llama.cpp (#1970)
commit def6eaa Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Date: Wed Jul 16 15:50:23 2025 -0400 feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)
commit f31732a Author: Yan Ru Pei <yanrpei@gmail.com> Date: Wed Jul 16 11:22:15 2025 -0700 feat: integrate mocker with dynamo-run and python cli (#1927)
commit aba6099 Author: Graham King <grahamk@nvidia.com> Date: Wed Jul 16 12:26:32 2025 -0400 perf(router): Remove lock from router hot path (#1963)
commit b212103 Author: Hongkuan Zhou <tedzhouhk@gmail.com> Date: Wed Jul 16 08:55:33 2025 -0700 docs: add notes in docs to deprecate local connector (#1959)
commit 7b325ee Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 18:52:00 2025 -0700 fix: vllm router examples (#1942)
commit a50be1a Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com> Date: Tue Jul 15 17:58:01 2025 -0700 feat: update CODEOWNERS (#1926)
commit e260fdf Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Date: Tue Jul 15 18:49:21 2025 -0400 feat: add bitnami helm chart attribution (#1943) Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
commit 1c03404 Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 14:26:24 2025 -0700 fix: update inference gateway deployment instructions (#1940)
commit 5ca570f Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:54:03 2025 -0400 chore: Rename dynamo.ingress to dynamo.frontend (#1944)
commit 7b9182f Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:33:07 2025 -0400 chore: Move examples/cli to lib/bindings/examples/cli (#1952)
commit 40d40dd Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 16:02:19 2025 -0400 chore(multi-modal): Rename frontend.py to web.py (#1951)
commit a9e0891 Author: Ryan Olson <ryanolson@users.noreply.github.com> Date: Tue Jul 15 12:30:30 2025 -0600 feat: adding http clients and recorded response stream (#1919)
commit 4128d58 Author: Biswa Panda <biswa.panda@gmail.com> Date: Tue Jul 15 10:30:47 2025 -0700 feat: allow helm upgrade using deploy script (#1936)
commit 4da078b Author: Graham King <grahamk@nvidia.com> Date: Tue Jul 15 12:57:38 2025 -0400 fix: Remove OpenSSL dependency, use Rust TLS (#1945)
commit fc004d4 Author: jthomson04 <jwillthomson19@gmail.com> Date: Tue Jul 15 08:45:42 2025 -0700 fix: Fix TRT-LLM container build when using a custom pip wheel (#1825)
commit 3c6fc6f Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 22:35:20 2025 -0700 chore: fix typo (#1938)
commit de7fe38 Author: Alec <35311602+alec-flowers@users.noreply.github.com> Date: Mon Jul 14 21:47:12 2025 -0700 feat: add vllm e2e integration tests (#1935)
commit 860f3f7 Author: Keiven C <213854356+keivenchang@users.noreply.github.com> Date: Mon Jul 14 21:44:19 2025 -0700 chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934) Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com>
commit fc402a3 Author: Biswa Panda <biswa.panda@gmail.com> Date: Mon Jul 14 21:21:20 2025 -0700 feat: configurable namespace for vllm v1 example (#1909)
commit df40d2c Author: ZichengMa <zichengma1225@gmail.com> Date: Mon Jul 14 21:11:29 2025 -0700 docs: fix typo and add mount-workspace to vllm doc (#1931) Signed-off-by: ZichengMa <zichengma1225@gmail.com> Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
commit 901715b Author: Tanmay Verma <tanmayv@nvidia.com> Date: Mon Jul 14 20:14:51 2025 -0700 refactor: Refactor the TRTLLM examples remove dynamo SDK (#1884)
commit 5bf23d5 Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com> Date: Mon Jul 14 18:29:19 2025 -0700 feat: update DynamoGraphDeployments for vllm_v1 (#1890) Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu>
commit 9e76590 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 17:29:56 2025 -0700 docs: organize sglang readme (#1910)
commit ef59ac8 Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Date: Mon Jul 14 16:16:44 2025 -0700 docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828) Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com> Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
commit 053041e Author: Jorge António <matroid@outlook.com> Date: Tue Jul 15 00:06:38 2025 +0100 fix: resolve incorrect finish reason propagation (#1857)
commit 3733f58 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 19:04:22 2025 -0400 feat(backends): Python llama.cpp engine (#1925)
commit 6a1350c Author: Tushar Sharma <tusharma@nvidia.com> Date: Mon Jul 14 14:56:36 2025 -0700 build: minor improvements to sglang dockerfile (#1917)
commit e2a619b Author: Neelay Shah <neelays@nvidia.com> Date: Mon Jul 14 14:52:53 2025 -0700 fix: remove environment variable passing (#1911) Signed-off-by: Neelay Shah <neelays@nvidia.com> Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com>
commit 3d17a49 Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com> Date: Mon Jul 14 14:41:56 2025 -0700 refactor: remove dynamo build (#1778) Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>
commit 3e0cb07 Author: Anant Sharma <anants@nvidia.com> Date: Mon Jul 14 15:43:48 2025 -0400 fix: copy attributions and license to trtllm runtime container (#1916)
commit fc36bf5 Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com> Date: Mon Jul 14 12:31:49 2025 -0700 feat: receive kvmetrics from sglang scheduler (#1789) Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com>
commit df91fce Author: Yan Ru Pei <yanrpei@gmail.com> Date: Mon Jul 14 12:24:04 2025 -0700 feat: prefill aware routing (#1895)
commit ad8ad66 Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:20:35 2025 -0400 feat: Shrink the ai-dynamo wheel by 35 MiB (#1918) Remove http and llmctl binaries. They have been unused for a while.
commit 480b41d Author: Graham King <grahamk@nvidia.com> Date: Mon Jul 14 15:06:45 2025 -0400 feat: Python frontend / ingress node (#1912)
Overview:
Follow-up changes to support the `python3 -m dynamo.trtllm` style of launching components.
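The `python3 -m <package>` launch style relies on Python importing the package's `__main__` module and executing it, which is why each component now exposes a `main()` entrypoint reachable that way. A minimal, self-contained sketch of the mechanism (the `demo_backend` package name and its `--router-mode` flag are invented for illustration and are not part of Dynamo):

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a throwaway package that mimics the module-entrypoint layout:
# a package whose __main__.py parses CLI args and calls main().
with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "demo_backend"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "__main__.py").write_text(textwrap.dedent("""
        import argparse

        def main():
            parser = argparse.ArgumentParser()
            parser.add_argument("--router-mode", default="round-robin")
            args = parser.parse_args()
            print(f"router_mode={args.router_mode}")

        if __name__ == "__main__":
            main()
    """))
    # Equivalent of the new launch style: python3 -m demo_backend --router-mode kv
    out = subprocess.run(
        [sys.executable, "-m", "demo_backend", "--router-mode", "kv"],
        cwd=tmp, capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # router_mode=kv
```

The launch scripts swap their previous invocations for exactly this pattern, so router mode and other settings flow in as ordinary CLI flags parsed inside `main()`.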