Add token usage metadata to litellm judge adapter#21236

Merged
smoorjani merged 1 commit into mlflow:master from smoorjani:discovery/1-judge-tools
Mar 2, 2026

Conversation


@smoorjani smoorjani commented Mar 1, 2026

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

  • Add JUDGE_INPUT_TOKENS and JUDGE_OUTPUT_TOKENS to assessment metadata in the litellm adapter alongside existing JUDGE_COST
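As a rough illustration of the change described above, the sketch below shows how an adapter could fold token counts from a litellm-style `usage` object into assessment metadata alongside cost. The metadata key strings match the manual test output later in this PR; the `build_judge_metadata` helper and the stubbed response object are hypothetical, used here so the snippet runs without a model call, and may not match the actual adapter code.

```python
from types import SimpleNamespace

# Metadata keys as they appear in the manual test output below.
JUDGE_COST = "mlflow.assessment.judgeCost"
JUDGE_INPUT_TOKENS = "mlflow.assessment.judgeInputTokens"
JUDGE_OUTPUT_TOKENS = "mlflow.assessment.judgeOutputTokens"


def build_judge_metadata(response, cost):
    """Hypothetical helper: collect cost and token counts into metadata."""
    metadata = {JUDGE_COST: cost}
    usage = getattr(response, "usage", None)
    if usage is not None:
        # litellm usage objects expose prompt_tokens / completion_tokens.
        if getattr(usage, "prompt_tokens", None) is not None:
            metadata[JUDGE_INPUT_TOKENS] = usage.prompt_tokens
        if getattr(usage, "completion_tokens", None) is not None:
            metadata[JUDGE_OUTPUT_TOKENS] = usage.completion_tokens
    return metadata


# Stub standing in for a litellm completion response.
resp = SimpleNamespace(usage=SimpleNamespace(prompt_tokens=213, completion_tokens=59))
print(build_judge_metadata(resp, 0.00018))
```

If the response carries no usage information, only the cost key is emitted, so existing consumers of `judgeCost` are unaffected.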

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Test 1: Token metadata in string-valued feedback

import mlflow
from mlflow.genai.judges.instructions_judge import InstructionsJudge

mlflow.set_experiment("manual-test-token-metadata")

judge = InstructionsJudge(
    name="politeness_check",
    instructions="Given the inputs {{ inputs }} and the outputs {{ outputs }}, is the output polite and professional?",
    model="openai:/gpt-4.1-mini",
)

feedback = judge(
    inputs={"question": "How do I reset my password?"},
    outputs={"response": "Please navigate to Settings > Security > Reset Password."},
)

print(f"feedback.value = {feedback.value!r}")
print(f"feedback.metadata = {feedback.metadata!r}")

assert feedback.metadata is not None
assert isinstance(feedback.metadata["mlflow.assessment.judgeInputTokens"], int)
assert isinstance(feedback.metadata["mlflow.assessment.judgeOutputTokens"], int)
assert feedback.metadata["mlflow.assessment.judgeInputTokens"] > 0
assert feedback.metadata["mlflow.assessment.judgeOutputTokens"] > 0

Output:

feedback.value = 'Polite and professional'
feedback.metadata = {'mlflow.assessment.judgeCost': 0.00017959999999999997, 'mlflow.assessment.judgeInputTokens': 213, 'mlflow.assessment.judgeOutputTokens': 59}

Test 2: Token metadata with boolean feedback_value_type

import mlflow
from mlflow.genai.judges.instructions_judge import InstructionsJudge

mlflow.set_experiment("manual-test-token-metadata")

judge = InstructionsJudge(
    name="is_helpful",
    instructions="Given the inputs {{ inputs }} and the outputs {{ outputs }}, is the output helpful to the user?",
    model="openai:/gpt-4.1-mini",
    feedback_value_type=bool,
)

feedback = judge(
    inputs={"question": "What is 2+2?"},
    outputs={"response": "4"},
)

print(f"feedback.value = {feedback.value!r}")
print(f"feedback.metadata = {feedback.metadata!r}")

assert feedback.metadata is not None
assert isinstance(feedback.metadata["mlflow.assessment.judgeInputTokens"], int)
assert isinstance(feedback.metadata["mlflow.assessment.judgeOutputTokens"], int)
assert feedback.metadata["mlflow.assessment.judgeInputTokens"] > 0
assert feedback.metadata["mlflow.assessment.judgeOutputTokens"] > 0

Output:

feedback.value = True
feedback.metadata = {'mlflow.assessment.judgeCost': 0.000156, 'mlflow.assessment.judgeInputTokens': 206, 'mlflow.assessment.judgeOutputTokens': 46}
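One way the new keys could be consumed downstream (not part of this PR) is summing usage across a batch of feedbacks. This sketch reuses the two metadata dicts from the manual test outputs above; the aggregation itself is a hypothetical usage example.

```python
# Metadata dicts copied from the two manual test outputs above.
feedback_metadatas = [
    {
        "mlflow.assessment.judgeCost": 0.00017959999999999997,
        "mlflow.assessment.judgeInputTokens": 213,
        "mlflow.assessment.judgeOutputTokens": 59,
    },
    {
        "mlflow.assessment.judgeCost": 0.000156,
        "mlflow.assessment.judgeInputTokens": 206,
        "mlflow.assessment.judgeOutputTokens": 46,
    },
]

# Sum token counts and cost across the batch.
total_input = sum(m["mlflow.assessment.judgeInputTokens"] for m in feedback_metadatas)
total_output = sum(m["mlflow.assessment.judgeOutputTokens"] for m in feedback_metadatas)
total_cost = sum(m["mlflow.assessment.judgeCost"] for m in feedback_metadatas)

print(total_input, total_output)  # → 419 105
print(round(total_cost, 6))
```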

Does this PR require documentation update?

  • No. You can skip the rest of this section.

Does this PR require updating the MLflow Skills repository?

  • No. You can skip the rest of this section.

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section

Should this PR be included in the next patch release?

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)


github-actions bot commented Mar 1, 2026

🛠 DevTools 🛠

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/21236/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/21236/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/21236/merge

@github-actions github-actions bot added labels area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs), size/XL (Extra-large PR, 500+ LoC) Mar 1, 2026

github-actions bot commented Mar 1, 2026

Documentation preview for f7f734f is available at:


@smoorjani smoorjani force-pushed the discovery/1-judge-tools branch 2 times, most recently from 508ef81 to 33fdd35, March 2, 2026 01:41
@smoorjani smoorjani changed the title Refactor judge tools: trace_id interface and new tool tests Add token usage metadata to litellm judge adapter Mar 2, 2026
@github-actions github-actions bot added labels area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs), v3.10.1 and removed labels area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs) Mar 2, 2026
Add JUDGE_INPUT_TOKENS and JUDGE_OUTPUT_TOKENS to assessment metadata
in the litellm adapter alongside existing JUDGE_COST. Enable Databricks
default judge model as fallback for available tools extraction.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>

@serena-ruan serena-ruan left a comment


LGTM!

@harupy harupy requested a review from serena-ruan March 2, 2026 11:55
@github-actions github-actions bot added the size/S (Small PR, 10-49 LoC) label and removed the size/XL (Extra-large PR, 500+ LoC) label Mar 2, 2026
@smoorjani smoorjani added this pull request to the merge queue Mar 2, 2026
Merged via the queue into mlflow:master with commit 8ce990f Mar 2, 2026
63 of 65 checks passed
@smoorjani smoorjani deleted the discovery/1-judge-tools branch March 2, 2026 15:02
daniellok-db pushed a commit to daniellok-db/mlflow that referenced this pull request Mar 5, 2026
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>
daniellok-db pushed a commit to daniellok-db/mlflow that referenced this pull request Mar 5, 2026
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>
daniellok-db pushed a commit that referenced this pull request Mar 5, 2026
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Co-authored-by: Claude <noreply@anthropic.com>

Labels

area/evaluation (MLflow Evaluation), rn/none (List under Small Changes in Changelogs), size/S (Small PR, 10-49 LoC), v3.10.1
