test: Add dynamo serve TRTLLM example to pytest #1417

Merged
tanmayv25 merged 5 commits into main from tanmayv-tests
Jun 7, 2025

Conversation


@tanmayv25 tanmayv25 commented Jun 6, 2025

Overview:

Updates the pytest suite to test the tensorrt_llm examples.

As a next step, the test will be added to the GitLab CI pipeline.

Important note: TensorRT-LLM-specific tests run only when the tensorrtllm marker is specified.

Details:

Within the Dynamo + TensorRT-LLM container, on current main:

pytest -v -m tensorrtllm --ignore=benchmarks/data_generator/tests/ --ignore=lib/bindings/python/tests/
=================================== test session starts ===================================
platform linux -- Python 3.12.3, pytest-8.4.0, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
benchmark: 5.1.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/workspace/.hypothesis/examples'))
rootdir: /workspace
configfile: pyproject.toml
plugins: cov-6.1.1, asyncio-1.0.0, mypy-1.0.1, md-report-0.7.0, timeout-2.4.0, benchmark-5.1.0, pytest_codeblocks-0.17.0, anyio-4.9.0, hypothesis-6.130.8, xdist-3.6.1, xdoctest-1.0.2, rerunfailures-15.0, shard-0.1.2, flakefinder-1.1.0, typeguard-4.3.0
asyncio: mode=Mode.AUTO, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 222 items / 218 deselected / 4 selected
Running 4 items in this shard: tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg_router], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg_router]

tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg] PASSED [ 25%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg_router] PASSED [ 50%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg] PASSED [ 75%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg_router] PASSED [100%]

====================== 4 passed, 218 deselected in 249.13s (0:04:09) ======================

Summary by CodeRabbit

  • New Features
    • Added new deployment graph test configurations for TensorRT LLM variants, including support for delayed server start.
  • Tests
    • Enhanced test coverage for serve deployments with additional TensorRT LLM scenarios and improved response validation.
    • Introduced configurable delay before health checks in managed test processes.
    • Adjusted test collection behavior so that TensorRT LLM tests are only run when explicitly requested.


coderabbitai Bot commented Jun 6, 2025

Walkthrough

The changes introduce a mechanism to selectively skip tests marked with tensorrtllm unless explicitly requested, add new deployment graph configurations for TensorRT LLM variants in the test suite, and implement a configurable start delay for managed deployment processes. Helper functions and fields are added to support these enhancements.

Changes

  • tests/conftest.py: Added a pytest hook to skip tensorrtllm-marked tests unless -m tensorrtllm is specified.
  • tests/serve/test_dynamo_serve.py: Added new trtllm deployment graphs with router variants and delayed start, updated process handling, refactored response validation with multi-endpoint support and repeated requests.
  • tests/utils/deployment_graph.py: Replaced single endpoint and response_handler with lists; added delayed_start field; updated Payload with separate chat and completions payloads plus a repeat count; renamed and added response handlers.
  • tests/utils/managed_process.py: Added delayed_start field to the ManagedProcess dataclass and implemented delay logic in process startup before health checks.

Sequence Diagram(s)

sequenceDiagram
    participant Pytest as Pytest Runner
    participant Conftest as pytest_collection_modifyitems
    participant Test as Test Item

    Pytest->>Conftest: Collect test items
    Conftest->>Test: Check for 'tensorrtllm' marker
    alt '-m tensorrtllm' not specified
        Conftest->>Test: Mark as skipped with message
    else '-m tensorrtllm' specified
        Conftest->>Test: No change
    end
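The collection hook in the diagram above can be sketched roughly as follows. Only the tensorrtllm marker name comes from the PR; the rest is an assumed minimal implementation, not the repository's exact code:

```python
import pytest


def pytest_collection_modifyitems(config, items):
    """Skip tests marked 'tensorrtllm' unless '-m tensorrtllm' was requested."""
    markexpr = config.getoption("-m", default="") or ""
    if "tensorrtllm" in markexpr:
        return  # marker explicitly requested: leave the selected tests alone
    skip_trtllm = pytest.mark.skip(
        reason="TensorRT-LLM tests run only when '-m tensorrtllm' is specified"
    )
    for item in items:
        if "tensorrtllm" in item.keywords:
            item.add_marker(skip_trtllm)
```

Tests without the marker pass through untouched, so the default `pytest` invocation never pulls in the GPU-heavy TensorRT-LLM cases.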
sequenceDiagram
    participant Test as test_serve_deployment
    participant DeploymentGraph as DeploymentGraph
    participant ManagedProcess as ManagedProcess

    Test->>DeploymentGraph: Select deployment config (may include delayed_start)
    Test->>ManagedProcess: Start process with delayed_start
    ManagedProcess->>ManagedProcess: Start process
    ManagedProcess->>ManagedProcess: Sleep for delayed_start seconds
    ManagedProcess->>ManagedProcess: Perform health checks
    Test->>ManagedProcess: Send request to endpoint (chat or completions)
    ManagedProcess->>Test: Return response
    Test->>Test: Validate response
    loop repeat_count - 1 times
        Test->>ManagedProcess: Send repeated request
        ManagedProcess->>Test: Return response
        Test->>Test: Validate response
    end
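The delayed-start behavior in the second diagram amounts to a sleep between spawning the process and the first health check. A minimal sketch follows; apart from the delayed_start field, the method and field names are assumptions, not the repository's actual ManagedProcess:

```python
import time
from dataclasses import dataclass, field


@dataclass
class ManagedProcess:
    command: list = field(default_factory=list)
    delayed_start: int = 0  # seconds to wait before the first health check

    def start(self):
        self._spawn()
        if self.delayed_start > 0:
            # Give slow-registering components (e.g. a KV router) time to
            # come up before health checks begin.
            time.sleep(self.delayed_start)
        self._wait_for_healthy()

    def _spawn(self):
        pass  # placeholder: launch the subprocess here

    def _wait_for_healthy(self):
        pass  # placeholder: poll readiness/health endpoints here
```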

Poem

In the warren where test cases dwell,
Some bunnies hop, some quietly quell—
With a pause before the race,
And markers hiding in their place,
New graphs bloom in trtllm spring,
While rabbits cheer for everything!
🐇✨




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/utils/managed_process.py (1)

63-64: Use proper logging instead of print statement.

The delayed start implementation is correct, but consider using the existing logger instead of print for better consistency with the rest of the codebase.

-            print(f"delaying start: {self.delayed_start}")
+            self._logger.info("Delaying start: %s seconds", self.delayed_start)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2019a7d and 7855f7d.

📒 Files selected for processing (4)
  • tests/conftest.py (1 hunks)
  • tests/serve/test_dynamo_serve.py (5 hunks)
  • tests/utils/deployment_graph.py (1 hunks)
  • tests/utils/managed_process.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Mirror Repository to GitLab
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (7)
tests/utils/deployment_graph.py (1)

32-32: LGTM!

The delayed_start field addition is well-implemented with appropriate type annotation and sensible default value.

tests/conftest.py (1)

35-46: Excellent implementation of selective test execution.

The pytest hook correctly implements the intended behavior to skip TensorRT-LLM tests unless explicitly requested via -m tensorrtllm. The logic is sound and follows pytest conventions.

tests/serve/test_dynamo_serve.py (5)

152-199: Verify the delayed start times are intentional.

The new TensorRT-LLM deployment graphs are well-configured with appropriate marks and directories. However, the delayed start times of 60 and 120 seconds seem quite long.

Please confirm that these extended delayed start times are necessary for TensorRT-LLM router variants:

  • trtllm_agg_router: 60 seconds
  • trtllm_disagg_router: 120 seconds

These delays significantly extend test execution time, so verification that they're truly required would be valuable.


226-226: LGTM!

Correctly passes the delayed_start attribute from the deployment graph to the managed process, completing the integration chain.


248-257: LGTM!

The new pytest parameters are properly marked with tensorrtllm and appropriate GPU requirements, enabling selective execution via the pytest hook.


282-290: Excellent refactoring with the helper function.

The check_response function effectively consolidates response validation logic, improving code reusability and maintainability. The implementation correctly handles status code validation, content extraction, and expected response verification.
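As a rough illustration of what such a helper looks like, here is a sketch; the signature and the expected-content check are assumptions based on this summary, not the PR's exact code:

```python
def check_response(response, response_handler, expected_fragments=()):
    """Validate status, extract content via the handler, check expected text."""
    assert response.status_code == 200, (
        f"unexpected status code: {response.status_code}"
    )
    content = response_handler(response.json())
    assert content, "empty response content"
    for fragment in expected_fragments:
        assert fragment in content, f"missing expected text: {fragment!r}"
    return content
```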


344-353: Good addition for comprehensive validation.

The second request validates that the server remains healthy after the initial request and ensures router mode activation. This enhances test coverage and reliability.

@tanmayv25 tanmayv25 changed the title from "tests: Add TRTLLM example pytests" to "test: Add TRTLLM example pytests" Jun 6, 2025
@github-actions github-actions Bot added the test label Jun 6, 2025
@tanmayv25 tanmayv25 changed the title from "test: Add TRTLLM example pytests" to "test: Add dynamo serve TRTLLM example to pytest" Jun 6, 2025

@nnshah1 nnshah1 left a comment


Left a comment about moving the repeat request count into a parameter.

@tanmayv25 tanmayv25 requested a review from rmccorm4 June 7, 2025 00:45
@tanmayv25 tanmayv25 enabled auto-merge (squash) June 7, 2025 00:46

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/serve/test_dynamo_serve.py (1)

408-414: Verify repeat count testing logic

The repeat request loop correctly uses payload.repeat_count but may not provide optimal coverage. Consider logging the iteration count or response variations to ensure the repeated requests are actually exercising different code paths.

-            for _ in range(payload.repeat_count):
+            for i in range(payload.repeat_count):
                 response = requests.post(
                     url,
                     json=request_body,
                     timeout=deployment_graph.timeout - elapsed,
                 )
+                logger.info("Repeat request %d/%d completed", i+1, payload.repeat_count)
                 check_response(response, response_handler)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 21ae4f8 and 2595e7a.

📒 Files selected for processing (2)
  • tests/serve/test_dynamo_serve.py (13 hunks)
  • tests/utils/deployment_graph.py (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/serve/test_dynamo_serve.py (2)
tests/utils/deployment_graph.py (4)
  • chat_completions_response_handler (49-60)
  • completions_response_handler (63-73)
  • Payload (37-46)
  • DeploymentGraph (21-33)
tests/utils/managed_process.py (1)
  • ManagedProcess (30-232)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (13)
tests/utils/deployment_graph.py (4)

29-32: Enhancement: Multi-endpoint support with delayed start capability

The transition from single endpoint/response_handler to lists enables testing multiple API endpoints per deployment graph, which aligns well with the PR objective of comprehensive TensorRT-LLM testing. The delayed_start field addresses the race condition mentioned in the past review comments for KV router scenarios.


42-46: Enhanced payload structure for dual API support

The separation of payload_chat and payload_completions properly supports both OpenAI chat completions and completions API formats. The repeat_count field addresses the past review comment requesting the ability to control request repetition for robustness testing.
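The reworked structure described here might look like the following sketch; the field names follow this summary, while the types, defaults, and example values are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Payload:
    payload_chat: Dict[str, Any]         # request body for v1/chat/completions
    payload_completions: Dict[str, Any]  # request body for v1/completions
    expected_response: List[str] = field(default_factory=list)
    repeat_count: int = 1                # times to re-send each request


# Hypothetical usage; model name and prompts are illustrative only.
payload = Payload(
    payload_chat={
        "model": "example-model",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    payload_completions={"model": "example-model", "prompt": "Hello"},
    expected_response=["Hello"],
    repeat_count=10,
)
```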


49-61: Clear function naming for chat completions API

Renaming to chat_completions_response_handler improves clarity about which API format this function handles, distinguishing it from the new completions handler below.


63-73: Robust completions API response validation

The new handler correctly validates the completions API response structure, which differs from chat completions by expecting text in choices rather than message.content. This addresses the past review comment about having sanity tests for v1/completions.
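The structural difference between the two API formats can be sketched as follows. The key point, choices[0].message.content versus choices[0].text, comes from this review; the handler signatures themselves are assumptions:

```python
def chat_completions_response_handler(body: dict) -> str:
    """Extract generated text from a v1/chat/completions response."""
    choices = body.get("choices", [])
    assert choices, "no choices in chat completions response"
    return choices[0]["message"]["content"]


def completions_response_handler(body: dict) -> str:
    """Extract generated text from a v1/completions response."""
    choices = body.get("choices", [])
    assert choices, "no choices in completions response"
    return choices[0]["text"]
```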

tests/serve/test_dynamo_serve.py (9)

27-28: Updated imports reflect function rename

The import correctly references the renamed chat_completions_response_handler function, maintaining consistency with the changes in deployment_graph.py.


35-57: Enhanced multimodal payload structure

The updated structure properly separates chat and completions payloads while adding repeat count control. The payload content remains appropriate for multimodal testing.


72-82: Comprehensive payload support for both API formats

Excellent addition of payload_completions with appropriate parameters. The repeat_count=10 provides good robustness testing for the text scenario, addressing the past review comment about request repetition control.


183-196: TensorRT-LLM aggregated deployment configuration

The new deployment graph correctly uses the TensorRT-LLM directory and appropriate markers. The configuration aligns with existing patterns for aggregated deployments.


197-214: Router deployment with delayed start workaround

The delayed_start=60 with detailed FIXME comment properly addresses the KV router race condition mentioned in past review comments. This temporary solution prevents the non-recoverable state until the underlying issue is resolved.


229-246: Disaggregated router with extended delay

The longer delayed_start=120 for disaggregated router configuration suggests this scenario needs more time for endpoint registration, which makes sense given the distributed nature of disaggregated deployments.


273-273: Proper delayed start parameter propagation

The delayed_start parameter is correctly passed through to the ManagedProcess base class, ensuring the timing workaround functions as intended.


329-337: Extract response validation into helper function

Good refactoring that centralizes response validation logic, improving maintainability and consistency across multiple endpoint testing.


338-407: Robust multi-endpoint testing with comprehensive error handling

The refactored test logic properly handles multiple endpoints with appropriate payload selection, retry mechanisms for various error conditions, and timeout management. The specific handling of "no instances" and "Model not found" errors demonstrates good understanding of the deployment failure modes.

@tanmayv25 tanmayv25 merged commit cdddaed into main Jun 7, 2025
8 of 9 checks passed
@tanmayv25 tanmayv25 deleted the tanmayv-tests branch June 7, 2025 01:23
3 participants