[MLflow Demo] Add Prompt demo data #20047
Conversation
🛠 DevTools 🛠
Install mlflow from this PR. For Databricks, use the following command:
Pull request overview
This PR adds demo prompt data to the MLflow demo data framework, creating realistic examples of prompt evolution across three different domains (customer support, document summarization, and code review). The prompts demonstrate versioning, alias management, and linkage to traces.
Changes:
- Adds 3 demo prompts with 4 versions each showing realistic prompt evolution
- Creates 12 prompt-linked traces that demonstrate prompt-to-trace associations
- Implements PromptsDemoGenerator with full lifecycle management (create, check, delete)
- Adds comprehensive test coverage for prompt generation and validation
- Updates CI workflow to run demo tests in a separate job
Reviewed changes
Copilot reviewed 18 out of 19 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| mlflow/demo/data.py | Adds 3 prompt definitions (CUSTOMER_SUPPORT_PROMPT, DOCUMENT_SUMMARIZER_PROMPT, CODE_REVIEWER_PROMPT) with version history and 12 prompt-linked trace examples |
| mlflow/demo/generators/prompts.py | Implements PromptsDemoGenerator for creating, checking, and deleting prompt demo data |
| mlflow/demo/generators/traces.py | Adds _create_prompt_linked_trace method to link traces to specific prompt versions |
| mlflow/demo/generators/__init__.py | Registers PromptsDemoGenerator with the demo registry |
| tests/demo/test_prompts_generator.py | Unit tests for PromptsDemoGenerator functionality |
| tests/demo/test_demo_integration.py | Integration tests validating prompt generation against a real server |
| .github/workflows/master.yml | Adds separate CI job for demo tests |
tests/demo/test_traces_generator.py
Outdated
```python
TracesDemoGenerator.version = 2
assert generator.is_generated() is False

TracesDemoGenerator.version = 1
```
Missing cleanup after test modifies class variable. The version is reset to 1 at the end, but if the test fails before line 111, it will leave TracesDemoGenerator in an inconsistent state. Use a fixture with proper cleanup like in test_demo_integration.py (lines 48-52).
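A minimal sketch of the save/restore pattern the comment suggests. The generator body below would be wrapped with `@pytest.fixture` in the test module; the class here is a simplified stand-in for the real `TracesDemoGenerator`, and the fixture name is hypothetical:

```python
class TracesDemoGenerator:
    """Stand-in for the real class; only the class attribute matters here."""
    version = 1


def reset_generator_version():
    """Fixture body (wrap with @pytest.fixture in the test suite).

    Saving the value before the yield and restoring it in a finally block
    guarantees the class attribute is reset even if the test body fails
    partway through -- unlike a bare assignment at the end of the test.
    """
    original = TracesDemoGenerator.version
    try:
        yield
    finally:
        TracesDemoGenerator.version = original
```

The same pattern applies to the `EvaluationDemoGenerator` and `PromptsDemoGenerator` tests flagged below.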
```python
EvaluationDemoGenerator.version = 99
fresh_generator = EvaluationDemoGenerator()
assert fresh_generator.is_generated() is False

EvaluationDemoGenerator.version = 1
```
Missing cleanup after test modifies class variable. The version is reset to 1 at the end, but if the test fails before line 122, it will leave EvaluationDemoGenerator in an inconsistent state. Use a fixture with proper cleanup like in test_demo_integration.py (lines 56-60).
tests/demo/test_prompts_generator.py
Outdated
```python
PromptsDemoGenerator.version = 99
fresh_generator = PromptsDemoGenerator()
assert fresh_generator.is_generated() is False

PromptsDemoGenerator.version = 2
```
Missing cleanup after test modifies class variable. The version is reset to 2 at the end, but if the test fails before line 105, it will leave PromptsDemoGenerator in an inconsistent state. Use a fixture with proper cleanup like in test_demo_integration.py (lines 64-68).
mlflow/demo/generators/prompts.py
Outdated
```python
    navigation_url="#/prompts",
)

def _create_prompt_with_versions(self, prompt_def) -> int:
```
Missing type hint for prompt_def parameter. Should be annotated as DemoPromptDef for better code clarity and IDE support.
mlflow/demo/data.py
Outdated
```python
def get_expected_answers() -> dict[str, str]:
    """Build a dict mapping queries to expected responses for evaluation."""
    return {trace.query.lower(): trace.expected_response for trace in ALL_DEMO_TRACES}
```
Dictionary is rebuilt on every call. Since ALL_DEMO_TRACES is a constant list, consider caching this dictionary at module level to avoid repeated iteration and lowercasing operations.
Suggested change:

```python
_EXPECTED_ANSWERS_BY_QUERY: dict[str, str] = {
    trace.query.lower(): trace.expected_response for trace in ALL_DEMO_TRACES
}

def get_expected_answers() -> dict[str, str]:
    """Build a dict mapping queries to expected responses for evaluation."""
    return _EXPECTED_ANSWERS_BY_QUERY
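An equivalent fix keeps the lazy function but memoizes it with the standard library. This sketch uses a simplified, hypothetical stand-in for the real demo trace definition:

```python
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class DemoTrace:
    """Simplified stand-in for the real demo trace definition."""
    query: str
    expected_response: str


# Hypothetical sample data standing in for ALL_DEMO_TRACES.
ALL_DEMO_TRACES = (
    DemoTrace("How do I reset my password?", "Use the reset link."),
    DemoTrace("What is MLflow?", "An open-source MLOps platform."),
)


@lru_cache(maxsize=1)
def get_expected_answers() -> dict[str, str]:
    """Built once on the first call, then served from the cache."""
    return {t.query.lower(): t.expected_response for t in ALL_DEMO_TRACES}
```

One caveat of either approach: all callers share the same mutable dict, so a caller that modifies the result affects everyone else. Returning a copy (or a `MappingProxyType`) avoids that if it matters.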
Documentation preview for 5c2543c is available.
B-Step62
left a comment
LGTM w/ a minor suggestion
mlflow/demo/data.py
Outdated
```python
@dataclass
class PromptVersionDef:
    template: str | list[dict[str, str]]
    commit_message: str
    alias: str | None = None


@dataclass
class DemoPromptDef:
    name: str
    versions: list[PromptVersionDef]
```
Can we use the real PromptVersion object?
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Adds data for simulating prompt evolution, including example traces (with evaluation) on versions of prompts for 3 separate domains.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.