[metrics] Add in queue metrics #4412
Closed
hebiao064 wants to merge 7 commits into sgl-project:main
Conversation
merrymercy reviewed Mar 14, 2025
```python
total_queue_latency = 0
avg_queue_latency = 0
for req in can_run_list:
    print(req.queue_time_start, req.queue_time_end)
```
Contributor
Remove the debug statement, and protect this under `if self.enable_metrics`.
Collaborator
Author
Thanks, I fixed the comments and moved this piece of code under `if self.enable_metrics`.
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
…gl-project#3964) Signed-off-by: wangyu <wangyu.steph@bytedance.com>
Collaborator
Author
After I rebased my branch, it seems GitHub was failing to pull it, so I will close this PR and create a new one. @merrymercy

Motivation
When serving LLMs at scale, understanding where time is spent during request processing is crucial for optimization. The current metrics don't provide enough granularity to identify specific bottlenecks in the request lifecycle.
Note about performance concern
We only emit these metrics when `--enable-metrics` is specified.
Future Work
If this approach is well-received, I plan to implement additional latency breakdowns for:
Metrics Result
Benchmark Result
As expected, the instrumentation adds negligible overhead (within normal benchmark fluctuation). This confirms that the metrics collection doesn't impact performance while providing valuable insights.
Modifications
This PR takes a minimalist approach by focusing only on queue latency as a first step. We're setting queue_time_start when requests enter the queue and queue_time_end when they're selected for processing, then calculating the average latency across all requests in a batch.
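The flow described above can be sketched as a small standalone example. Note this is an illustrative sketch, not sglang's actual implementation: the `Req` class, `enqueue`, and `on_selected_for_batch` helpers are hypothetical stand-ins, while the `queue_time_start`/`queue_time_end` fields, the `can_run_list` name, and the `enable_metrics` guard come from the PR discussion.

```python
import time


class Req:
    """Minimal stand-in for a request object (illustrative only)."""

    def __init__(self):
        self.queue_time_start = None
        self.queue_time_end = None


def enqueue(req):
    # Stamp the moment the request enters the waiting queue.
    req.queue_time_start = time.perf_counter()


def on_selected_for_batch(can_run_list, enable_metrics):
    """Stamp queue_time_end for each request pulled into a batch and
    return the average queue latency, or None when metrics are disabled."""
    if not enable_metrics:
        # Skip all bookkeeping unless metrics are explicitly enabled,
        # so the hot path pays no cost by default.
        return None
    now = time.perf_counter()
    total_queue_latency = 0.0
    for req in can_run_list:
        req.queue_time_end = now
        total_queue_latency += req.queue_time_end - req.queue_time_start
    return total_queue_latency / len(can_run_list) if can_run_list else 0.0
```

Guarding the whole computation behind the flag (rather than only the final emit) is what keeps the overhead negligible when metrics are off.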
Checklist