Refactor judge adapters to use a base interface #19029
smoorjani merged 17 commits into mlflow:master
Conversation
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Documentation preview for d57c1d2 is available.
/review

✅ Review completed.

I've reviewed PR #19029 "Refactor judge adapters to use a base interface" and found 1 style violation.
Overall Assessment: The code follows most Python style guidelines, with just this one scope-related improvement needed.
The try/except block includes safe operations that should be outside it. According to the Python style guide, you should minimize the scope of a try block by wrapping only the specific operations that can raise exceptions.

Here, the `adapter.invoke()` call is the operation that can fail, while the remaining lines (`feedback = output.feedback`, `feedback.trace_id = ...`, and the telemetry recording) are safe operations that should be outside the try block.
```python
try:
    output = adapter.invoke(input_params)
except Exception:
    # Record failure telemetry only when in Databricks
    if in_databricks:
        try:
            model_provider, model_name = _parse_model_uri(model_uri)
            _record_judge_model_usage_failure_databricks_telemetry(
                endpoint_name=model_name,
            )
        except Exception:
            # Ignore telemetry errors
            _logger.debug("Error tracking judge model usage failure telemetry", exc_info=True)
    raise

feedback = output.feedback
feedback.trace_id = trace.info.trace_id if trace is not None else None

# Record success telemetry only when in Databricks
if in_databricks:
    try:
        provider = "databricks" if model_provider == "endpoints" else model_provider
        _record_judge_model_usage_success_databricks_telemetry(
            request_id=output.request_id,
            model_provider=provider,
            endpoint_name=model_name,
            num_prompt_tokens=output.num_prompt_tokens,
            num_completion_tokens=output.num_completion_tokens,
        )
    except Exception:
        # Ignore telemetry errors
        _logger.debug("Error tracking judge model usage success telemetry", exc_info=True)
```
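The narrow-scope principle the review describes can be shown in a minimal, self-contained form (all names here are illustrative, not from the PR):

```python
def risky_call() -> int:
    """Stand-in for the single operation that can actually raise."""
    return 42


def run() -> int:
    try:
        value = risky_call()  # only the fallible call lives inside the try
    except Exception:
        # record/handle the failure, then re-raise
        raise
    # Safe post-processing stays outside the try block, so a bug here is
    # surfaced directly instead of being swallowed by the handler above.
    return value + 1
```

Keeping the try body to one statement means the except clause can only ever be triggered by the call it was written for.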
🤖 Generated with Claude Code
mlflow/genai/judges/adapters/databricks_managed_judge_adapter.py
mlflow/genai/judges/adapters/databricks_serving_endpoint_adapter.py
`def _create_litellm_message_from_databricks_response(`
Note that this function, `_serialize_messages_to_databricks_prompts`, and `_invoke_databricks_default_judge` are copied verbatim from the ones in `databricks_adapter.py`.
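Since these helpers are copied verbatim between adapters, one hypothetical way to remove the duplication later would be to hoist them into a shared module that both adapters import. The module path and function below are illustrative only, not the PR's actual layout:

```python
# Hypothetical shared module, e.g. a _databricks_utils.py next to the adapters.
# Both databricks_adapter.py and databricks_serving_endpoint_adapter.py would
# import this instead of keeping their own verbatim copies.

def serialize_messages_to_prompts(messages: list[dict]) -> str:
    """Flatten chat-style messages into a single prompt string (toy logic)."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)
```

A usage sketch: `serialize_messages_to_prompts([{"role": "user", "content": "hi"}])` yields `"user: hi"`, and any future fix to the serialization logic lands in one place.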
🛠 DevTools 🛠
Install mlflow from this PR
For Databricks, use the following command:
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
As titled: instead of each adapter using its own assortment of functions, introduce a unified `BaseAdapter` interface for the different ways in which judges can be called. This is not a functional change, just a refactor.

How is this PR tested?
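As a rough illustration of the described refactor (class names, fields, and signatures here are assumptions for the sketch, not the PR's actual API), a unified base interface might look like:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class AdapterOutput:
    """Illustrative container for what a judge invocation returns."""
    feedback: str
    request_id: Optional[str] = None
    num_prompt_tokens: Optional[int] = None
    num_completion_tokens: Optional[int] = None


class BaseAdapter(ABC):
    """Hypothetical base interface unifying how judge adapters are invoked."""

    @abstractmethod
    def invoke(self, input_params: dict) -> AdapterOutput:
        """Call the underlying judge model and return a normalized output."""


class EchoAdapter(BaseAdapter):
    """Toy adapter used only to show the interface in action."""

    def invoke(self, input_params: dict) -> AdapterOutput:
        return AdapterOutput(feedback=f"echo: {input_params['prompt']}")
```

With a shared `invoke` contract, the calling code (including the telemetry wrapper discussed above) can treat every adapter uniformly instead of dispatching to per-adapter helper functions.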
https://e2-dogfood.staging.cloud.databricks.com/editor/notebooks/2578569526778464?o=6051921418418893
output:
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.