Support spans count metrics #19293
Conversation
Documentation preview for 94d0c38 is available.
Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
Pull request overview
This PR adds support for COUNT aggregation metrics on spans and refactors the metric query utilities to support both traces and spans views using a unified query function.
Key Changes:
- Added SPANS view type support with COUNT aggregation for the "span" metric grouped by span_type dimension
- Refactored `query_metrics_for_traces_view` into a unified `query_metrics` function that handles both TRACES and SPANS view types
- Extracted view-specific logic into helper functions (`_apply_view_initial_join`, `_apply_dimension_to_query`, `_apply_metric_specific_joins`, `_get_aggregation_column`)
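To make the unified entry point concrete, here is a minimal sketch of per-view config dispatch. The `SPANS_METRICS_CONFIGS` name comes from the PR description, but the config contents, the `ViewType` enum, and the `validate_metric_request` helper are illustrative assumptions, not MLflow's actual implementation.

```python
from enum import Enum


class ViewType(Enum):
    TRACES = "traces"
    SPANS = "spans"


# Per-view metric configs: which aggregations and dimensions each metric supports.
# The exact entries here are guesses for illustration only.
TRACES_METRICS_CONFIGS = {
    "trace": {"aggregations": {"COUNT"}, "dimensions": {"status"}},
}
SPANS_METRICS_CONFIGS = {
    "span": {"aggregations": {"COUNT"}, "dimensions": {"span_type"}},
}


def _get_configs(view_type):
    """Pick the config table for the requested view."""
    return TRACES_METRICS_CONFIGS if view_type is ViewType.TRACES else SPANS_METRICS_CONFIGS


def validate_metric_request(view_type, metric_name, aggregation, dimensions=()):
    """Raise ValueError if the metric/aggregation/dimension combination is unsupported."""
    configs = _get_configs(view_type)
    if metric_name not in configs:
        raise ValueError(f"Unsupported metric {metric_name!r} for {view_type.value} view")
    config = configs[metric_name]
    if aggregation not in config["aggregations"]:
        raise ValueError(f"Unsupported aggregation {aggregation!r} for metric {metric_name!r}")
    unsupported = set(dimensions) - config["dimensions"]
    if unsupported:
        raise ValueError(f"Unsupported dimensions: {sorted(unsupported)}")
```

With this shape, adding a new view or metric only means extending a config dict rather than adding another conditional branch in the query path.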
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| tests/store/tracking/test_sqlalchemy_store_query_trace_metrics.py | Added comprehensive test coverage for span count metrics including: no dimensions, grouped by span_type, with time intervals, with filters, and across multiple traces |
| mlflow/store/tracking/utils/sql_trace_metrics_utils.py | Refactored metric query logic into view-agnostic functions, added SPANS_METRICS_CONFIGS, updated time bucket expression to handle spans timestamps (nanoseconds), and generalized dimension/aggregation handling |
| mlflow/store/tracking/sqlalchemy_store.py | Simplified query_trace_metrics by replacing view-specific conditional logic with unified query_metrics function call |
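The time-bucket change above has to account for spans storing timestamps in nanoseconds while traces use milliseconds. The arithmetic itself is simple; this sketch (helper name and `unit` parameter are assumptions, not MLflow's API) shows the flooring that a SQL time-bucket expression would perform:

```python
NANOS_PER_SECOND = 1_000_000_000
MILLIS_PER_SECOND = 1_000


def time_bucket_start(timestamp, interval_seconds, *, unit="ms"):
    """Floor a timestamp to the start of its time bucket.

    Per the PR description, trace timestamps are milliseconds and span
    timestamps are nanoseconds, so the interval must be scaled to the
    same unit before the modulo.
    """
    per_second = NANOS_PER_SECOND if unit == "ns" else MILLIS_PER_SECOND
    interval = interval_seconds * per_second
    return timestamp - (timestamp % interval)
```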
Diff (function rename):

- def query_metrics_for_traces_view(
+ def query_metrics(
Shall we separate this into one function that builds the query and another that runs it and converts the result? Since there is a lot of query-building logic, it would be easier to assert on the generated query directly rather than relying on many e2e tests.
Isn't verifying both more robust? IMO verifying the query instead of the result is more complex, given we have four database engine types to support.
Yeah, we want both. The problem is that currently we can only write e2e tests, where preparing real data for every potential case is cumbersome, so we tend to add less coverage because it is tedious. Not a blocker, but I think it is more future-proof to make this unit-testable, especially since we will likely add more filtering logic.
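The build/run split discussed above could look something like this. This is an illustrative sketch, not MLflow's actual code: the function names and the `spans` table schema are assumptions, and the real implementation builds SQLAlchemy expressions rather than SQL strings.

```python
import sqlite3


def build_span_count_query(group_by_span_type=False):
    """Return only the SQL text, so tests can assert on it without a database."""
    select = ["COUNT(*) AS cnt"]
    group_by = []
    if group_by_span_type:
        select.insert(0, "span_type")
        group_by.append("span_type")
    sql = f"SELECT {', '.join(select)} FROM spans"
    if group_by:
        sql += " GROUP BY " + ", ".join(group_by)
    return sql


def run_metric_query(conn, sql):
    """Execute the prebuilt SQL and convert rows to plain tuples."""
    return [tuple(row) for row in conn.execute(sql)]
```

A unit test can now assert `build_span_count_query()` produces the expected query text, while a smaller number of e2e tests exercise `run_metric_query` against a real database.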
B-Step62 left a comment:
LGTM, https://github.com/mlflow/mlflow/pull/19293/changes#r2622036483 is not a blocker but I recommend addressing this before the e2e test suite becomes too large.
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Support COUNT on spans.
Refactor the utils a bit so we can reuse the same function for the traces view and the spans view, since the main structure is identical and the only differences are which tables to join and which columns to fetch.
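The "same structure, only the tables and columns differ" idea can be reduced to a small dispatch table. This is a hedged sketch: the table and column names below are guesses for illustration, not MLflow's actual schema.

```python
# Map each view to its view-specific pieces; the shared query shape stays fixed.
VIEW_SOURCES = {
    "TRACES": {"table": "trace_info", "count_label": "trace_count"},
    "SPANS": {"table": "spans", "count_label": "span_count"},
}


def build_count_sql(view_type):
    """Build the shared COUNT query, varying only the view-specific table/label."""
    source = VIEW_SOURCES[view_type]
    return f"SELECT COUNT(*) AS {source['count_label']} FROM {source['table']}"
```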
How is this PR tested?
Does this PR require a documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.