
Refactor judge adapters to use a base interface #19029

Merged
smoorjani merged 17 commits into mlflow:master from smoorjani:gwt-mlflow-adapter-interface on Dec 9, 2025

Conversation

@smoorjani
Collaborator

@smoorjani smoorjani commented Nov 25, 2025

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19029/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/19029/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/19029/merge

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

As titled: instead of each adapter exposing its own ad-hoc functions, this introduces a unified BaseAdapter covering the different ways a judge can be invoked. This is not a functional change, just a refactor.

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

https://e2-dogfood.staging.cloud.databricks.com/editor/notebooks/2578569526778464?o=6051921418418893

import mlflow
from mlflow.genai import make_judge

mlflow.set_tracking_uri("databricks")
mlflow.set_experiment(experiment_id="3011836326718646")


def test_litellm_adapter_openai():
    judge = make_judge(
        name="test_litellm_judge",
        instructions="Evaluate if the {{ inputs }} matches {{ outputs }}",
        model="openai:/gpt-4o-mini",
    )
    result = judge(inputs="What is 2+2?", outputs="4")
    print(f"LiteLLM: {result}")


def test_databricks_serving_endpoint_adapter():
    judge = make_judge(
        name="test_databricks_endpoint_judge",
        instructions="Evaluate if the {{ inputs }} matches {{ outputs }}",
        model="databricks:/databricks-claude-sonnet-4-5",
    )
    result = judge(inputs="What is 2+2?", outputs="4")
    print(f"Databricks Endpoint: {result}")


def test_databricks_managed_judge_adapter():
    judge = make_judge(
        name="test_databricks_default_judge",
        instructions="Evaluate if the {{ inputs }} matches {{ outputs }}",
        model="databricks",
    )
    result = judge(inputs="What is 2+2?", outputs="4")
    print(f"Databricks Managed: {result}")


if __name__ == "__main__":
    test_litellm_adapter_openai()
    test_databricks_serving_endpoint_adapter()
    test_databricks_managed_judge_adapter()

output:

LiteLLM: Feedback(name='test_litellm_judge', source=AssessmentSource(source_type='LLM_JUDGE', source_id='openai:/gpt-4o-mini'), trace_id=None, run_id=None, rationale="The output correctly answers the query by providing the accurate result of the mathematical expression '2+2', which is indeed '4'. This matches the expected response for the given input.", metadata={'mlflow.assessment.judgeCost': 5.46e-05}, span_id=None, create_time_ms=1765291025597, last_update_time_ms=1765291025597, assessment_id=None, error=None, expectation=None, feedback=FeedbackValue(value='Pass', error=None), overrides=None, valid=True)
Databricks Endpoint: Feedback(name='test_databricks_endpoint_judge', source=AssessmentSource(source_type='LLM_JUDGE', source_id='databricks:/databricks-claude-sonnet-4-5'), trace_id=None, run_id=None, rationale='The input asks "What is 2+2?" and the output provides "4", which is the correct mathematical answer to this basic arithmetic question. The output accurately and directly answers the question posed in the input. This is a clear match between the question and its correct answer.', metadata={'mlflow.assessment.judgeCost': 0.00405599383}, span_id=None, create_time_ms=1765291029632, last_update_time_ms=1765291029632, assessment_id=None, error=None, expectation=None, feedback=FeedbackValue(value='Match', error=None), overrides=None, valid=True)
Databricks Managed: Feedback(name='test_databricks_default_judge', source=AssessmentSource(source_type='LLM_JUDGE', source_id='databricks'), trace_id=None, run_id=None, rationale="The output '4' is the correct answer to the mathematical query 'What is 2+2?'. The evaluation confirms that the AI agent accurately provided the expected result for the given input.", metadata=None, span_id=None, create_time_ms=1765291031518, last_update_time_ms=1765291031518, assessment_id=None, error=None, expectation=None, feedback=FeedbackValue(value='Correct', error=None), overrides=None, valid=True)

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
@github-actions github-actions bot added the labels v3.6.1, area/evaluation (MLflow Evaluation), and rn/none (List under Small Changes in Changelogs) on Nov 25, 2025
@github-actions
Contributor

github-actions bot commented Nov 25, 2025

Documentation preview for d57c1d2 is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

@serena-ruan
Collaborator

serena-ruan commented Nov 26, 2025

/review


🚀 Review workflow started


✅ Review completed.

Review Output

Review Complete

I've reviewed PR #19029 "Refactor judge adapters to use a base interface" and found 1 style violation:

Issue Found:

  • mlflow/genai/judges/utils/invocation_utils.py:111 - The try-catch block includes safe operations that should be outside according to the "Minimize Try-Catch Block Scope" style guideline. I've added a review comment with a suggestion to restructure the code so only the adapter.invoke() call is within the try block.

Overall Assessment:
The refactoring looks good overall - it successfully introduces a clean adapter pattern with:

  • Well-designed base interface (BaseJudgeAdapter)
  • Proper use of dataclasses for input/output
  • Good separation of concerns with individual adapter implementations
  • Appropriate factory pattern for adapter selection

The code follows most Python style guidelines, with just this one scope-related improvement needed.

Review comment on mlflow/genai/judges/utils/invocation_utils.py:

The try-catch block includes safe operations that should be outside the try block. According to the Python style guide, you should minimize try-catch block scope by wrapping only the specific operations that can raise exceptions.

The adapter.invoke() call is the operation that can fail, while the remaining lines (feedback = output.feedback, feedback.trace_id = ..., and telemetry recording) are safe operations that should be outside the try block.

Suggested change (replacing the original try: block):

try:
    output = adapter.invoke(input_params)
except Exception:
    # Record failure telemetry only when in Databricks
    if in_databricks:
        try:
            model_provider, model_name = _parse_model_uri(model_uri)
            _record_judge_model_usage_failure_databricks_telemetry(
                endpoint_name=model_name,
            )
        except Exception:
            # Ignore telemetry errors
            _logger.debug("Error tracking judge model usage failure telemetry", exc_info=True)
    raise
feedback = output.feedback
feedback.trace_id = trace.info.trace_id if trace is not None else None
# Record success telemetry only when in Databricks
if in_databricks:
    try:
        provider = "databricks" if model_provider == "endpoints" else model_provider
        _record_judge_model_usage_success_databricks_telemetry(
            request_id=output.request_id,
            model_provider=provider,
            endpoint_name=model_name,
            num_prompt_tokens=output.num_prompt_tokens,
            num_completion_tokens=output.num_completion_tokens,
        )
    except Exception:
        # Ignore telemetry errors
        _logger.debug("Error tracking judge model usage success telemetry", exc_info=True)

🤖 Generated with Claude Code
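The scope-minimization guideline behind this suggestion can be shown in a standalone form. The function and variable names below are purely illustrative, not taken from the MLflow codebase:

```python
import logging

logger = logging.getLogger("example")


def load_value_broad(raw: str) -> int:
    # Anti-pattern: safe operations (arithmetic, logging) sit inside the try
    # block, so an unrelated bug in them would be swallowed as a "parse error".
    try:
        value = int(raw)
        doubled = value * 2
        logger.debug("parsed %s -> %s", raw, doubled)
        return doubled
    except ValueError:
        return 0


def load_value_narrow(raw: str) -> int:
    # Preferred: only the call that can actually raise is wrapped; everything
    # else runs after the except clause can no longer hide its failures.
    try:
        value = int(raw)
    except ValueError:
        return 0
    doubled = value * 2
    logger.debug("parsed %s -> %s", raw, doubled)
    return doubled


if __name__ == "__main__":
    print(load_value_narrow("21"))    # prints: 42
    print(load_value_narrow("oops"))  # prints: 0
```

Both versions return the same results here, but the narrow form makes the failure mode explicit: only int() parsing is treated as fallible, which is the same restructuring the review suggests for adapter.invoke().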

def _create_litellm_message_from_databricks_response(
Collaborator Author
Note that this function, _serialize_messages_to_databricks_prompts, and _invoke_databricks_default_judge are copied verbatim from the ones in databricks_adapter.py.

@smoorjani smoorjani requested a review from serena-ruan December 6, 2025 00:09
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Collaborator
@serena-ruan serena-ruan left a comment
LGTM!

@smoorjani smoorjani enabled auto-merge December 9, 2025 14:37
@smoorjani smoorjani added this pull request to the merge queue Dec 9, 2025
Merged via the queue into mlflow:master with commit 363edbc Dec 9, 2025
49 checks passed
@smoorjani smoorjani deleted the gwt-mlflow-adapter-interface branch December 9, 2025 15:11
