[3/4] Add support for multi-turn deepeval scorers #19263
smoorjani merged 27 commits into mlflow:master
Conversation
Signed-off-by: Samraj Moorjani <samraj.moorjani@databricks.com>
Documentation preview for f6f2472 is available at: More info
Force-pushed c3033da to 5a548bf
```python
assert result.error.error_code == "RuntimeError"
assert result.error.error_message == "Test error"
assert result.source.source_type == AssessmentSourceType.LLM_JUDGE
```
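The quoted assertions check the error-propagation shape of a scorer result. A minimal self-contained mock of that structure (all class names here are hypothetical stand-ins; the real classes live in MLflow and differ in detail):

```python
from dataclasses import dataclass

# Hypothetical stand-ins mirroring the fields asserted in the quoted test.
@dataclass
class AssessmentError:
    error_code: str
    error_message: str

@dataclass
class AssessmentSource:
    source_type: str

@dataclass
class ScorerResult:
    error: AssessmentError
    source: AssessmentSource

# A failed scorer run surfaces the exception class and message, plus
# the source type ("LLM_JUDGE" in the quoted test).
result = ScorerResult(
    error=AssessmentError("RuntimeError", "Test error"),
    source=AssessmentSource("LLM_JUDGE"),
)

assert result.error.error_code == "RuntimeError"
assert result.error.error_message == "Test error"
assert result.source.source_type == "LLM_JUDGE"
```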
This is perhaps out of scope for this PR, but are you planning on adding integration tests that install the latest deepeval and confirm that a single-turn and a multi-turn scorer work?
Good question. I think we'll need to do this for all integrations; did you have a specific code pointer/place in mind? I can file a follow-up ticket.
Yes, check out how langchain integration testing works: https://sourcegraph.prod.databricks-corp.com/mlflow/mlflow/-/tree/tests/langchain
From Claude:
Great question! Yes, integration tests that install the actual deepeval package would be valuable. Here's where they should go:
Location: tests/genai/scorers/deepeval/
You'd create a new directory structure similar to how other integrations are organized (e.g., tests/langchain/, tests/openai/). Based on MLflow's patterns, I'd recommend:
```
tests/genai/scorers/deepeval/
├── __init__.py
├── conftest.py                   # For fixtures and deepeval-specific setup
└── test_deepeval_integration.py  # Integration tests with real deepeval
```
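One concern for such a directory is that the suite should degrade to skips rather than errors when deepeval isn't installed locally. A minimal sketch of a detection helper a `conftest.py` could use (the helper name and skip strategy are assumptions, not an MLflow convention):

```python
import importlib.util

def has_package(name: str) -> bool:
    """Return True when a top-level package is importable in this environment."""
    return importlib.util.find_spec(name) is not None

# A deepeval-specific guard would then be:
HAS_DEEPEVAL = has_package("deepeval")
# In an actual conftest.py this would typically back a marker such as
# pytest.mark.skipif(not HAS_DEEPEVAL, reason="deepeval not installed"),
# or set collect_ignore_glob when the package is missing.
```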
CI Integration: The integration tests would run as part of the existing genai CI job in .github/workflows/master.yml (around line 380-420). You'd need to:
1. Add deepeval to the pip install line in the genai job:

```yaml
- name: Install dependencies
  run: |
    source ./dev/install-common-deps.sh
    pip install openai dspy deepeval  # Add deepeval here
```

2. The tests would then run automatically with `pytest tests/genai`.
Claude also commends you on your decision to include this in a follow-up PR :D
AveshCSingh left a comment:
Left one small comment, otherwise LGTM. Please track the follow-ups.
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Adding support for multi-turn deepeval scorers.
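To illustrate what the single-turn vs. multi-turn distinction means for a scorer, here is a toy, self-contained sketch (all names are hypothetical; the real scorers delegate to deepeval's conversational metrics rather than this logic):

```python
# Toy illustration of single-turn vs. multi-turn scoring.
def score_turn(question: str, answer: str) -> float:
    """Single-turn: score one (question, answer) pair in isolation."""
    return 1.0 if answer.strip() else 0.0

def score_conversation(turns: list[dict[str, str]]) -> float:
    """Multi-turn: score a whole conversation, e.g. the fraction of
    assistant turns that are non-empty."""
    assistant_turns = [t for t in turns if t["role"] == "assistant"]
    if not assistant_turns:
        return 0.0
    scores = [score_turn("", t["content"]) for t in assistant_turns]
    return sum(scores) / len(scores)

conversation = [
    {"role": "user", "content": "What is MLflow?"},
    {"role": "assistant", "content": "An open-source ML lifecycle platform."},
    {"role": "user", "content": "Does it support tracing?"},
    {"role": "assistant", "content": "Yes, via MLflow Tracing."},
]
print(score_conversation(conversation))  # 1.0
```

The point of the multi-turn variant is that the scorer receives the full turn list, so conversational metrics (consistency, topic adherence, etc.) can look across turns rather than at a single exchange.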
How is this PR tested?
outputs:
Does this PR require documentation update?
Will add a follow-up PR for this.
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.