[MLflow Demo] Add mlflow demo cli command#20048
Conversation
Pull request overview
This PR adds a new mlflow demo CLI command that provides a frictionless way to explore MLflow's GenAI features. The command creates a temporary environment with pre-populated demo data including traces, evaluations, and prompts, then starts a local MLflow server and opens it in a browser.
Changes:
- Adds a new mlflow demo CLI command with port selection and browser control options
- Implements a demo data generation framework with versioning support for traces, evaluations, and prompts
- Adds utility functions for port availability checking
- Creates comprehensive test suite and integration tests
- Adds HTTP API endpoints for demo data generation and deletion
Reviewed changes
Copilot reviewed 25 out of 26 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| mlflow/cli/demo.py | New CLI command implementation that sets up ephemeral environment and launches server |
| mlflow/cli/__init__.py | Registers the new demo command |
| mlflow/utils/__init__.py | Adds is_port_available() helper function |
| mlflow/demo/ | Complete demo framework with generators, registry, and base classes |
| mlflow/server/handlers.py | API endpoints for programmatic demo generation/deletion |
| tests/demo/ | Comprehensive test suite including unit, integration, and CLI tests |
| .github/workflows/master.yml | CI configuration for running demo tests |
tests/demo/test_cli.py
Outdated
    max_results=100,
)

assert len(traces) == 70  # 35 v1 + 35 v2
The expected trace count appears incorrect based on the test data. According to test_traces_generator.py line 98, the total should be 82 traces (35 v1 + 47 v2), not 70. The v2 set includes 12 additional prompt-linked traces that are not present in v1.
Suggested change:
- assert len(traces) == 70  # 35 v1 + 35 v2
+ assert len(traces) == 82  # 35 v1 + 47 v2
mlflow/cli/demo.py
Outdated
def open_browser():
    time.sleep(1.5)
    # Open just the base URL - the React app will handle routing
    # Opening with fragment directly causes encoding issues
    webbrowser.open(f"http://127.0.0.1:{port}/")
The hardcoded 1.5 second delay before opening the browser may not be sufficient on slower systems or may be unnecessarily long on fast systems. Consider using a more robust approach such as polling the server's health endpoint until it responds successfully, or making the delay configurable.
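The polling approach the reviewer suggests could look like the sketch below. It is a minimal illustration, not the PR's implementation: the health endpoint path, timeout, and interval values are assumptions (MLflow's server does expose a /health route, but verify before relying on it).

```python
import time
import urllib.request
from urllib.error import URLError


def wait_until_ready(url: str, timeout: float = 30.0, interval: float = 0.25) -> bool:
    # Poll the server's health endpoint instead of sleeping a fixed 1.5 seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass  # Server not listening yet; retry after a short interval.
        time.sleep(interval)
    return False
```

The browser would then be opened only after wait_until_ready(f"http://127.0.0.1:{port}/health") returns True, making startup robust on both slow and fast machines.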
mlflow/cli/demo.py
Outdated
from urllib.parse import quote

encoded_experiment = quote(DEMO_EXPERIMENT_NAME, safe="")
The variable experiment_url is constructed but only used for display in a console message. Given the comment on line 88-89 about fragment encoding issues, consider clarifying why this URL is displayed but not used for the actual browser navigation.
Suggested change:
  encoded_experiment = quote(DEMO_EXPERIMENT_NAME, safe="")
+ # Construct a direct link to the demo experiment traces for display only.
+ # Note: We intentionally do not use this URL with `webbrowser.open()` because
+ # opening with a fragment causes encoding issues (see comment below).
tests/utils/test_utils.py
Outdated
port = find_free_port()
assert isinstance(port, int)
assert 1 <= port <= 65535
The port range check is too permissive. The assertion 1 <= port <= 65535 admits privileged ports (1-1023), which typically cannot be bound without elevated permissions. The assertion should be 1024 <= port <= 65535, or the test should handle potential permission errors when testing with low port numbers.
Suggested change:
- assert 1 <= port <= 65535
+ assert 1024 <= port <= 65535
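For context, the helpers under test can be sketched as below. The names mirror the ones this PR adds (is_port_available, find_free_port), but the bodies here are assumptions based on the common socket idiom, not the PR's actual code.

```python
import socket


def is_port_available(port: int) -> bool:
    # A port is considered available if a TCP socket can bind to it on localhost.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False


def find_free_port() -> int:
    # Binding to port 0 asks the OS for a free ephemeral port, which on common
    # systems falls in the unprivileged range (>= 1024) - hence the suggested
    # tighter assertion in the test.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```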
mlflow/demo/generators/traces.py
Outdated
# Also create the EntityAssociation record for proper filtering support
if trace_id and trace_def.linked_prompt:
    try:
        client = mlflow.MlflowClient()
        prompt_version = client.get_prompt_version(
            name=trace_def.linked_prompt.prompt_name,
            version=str(trace_def.linked_prompt.version),
        )
        client.link_prompt_versions_to_trace(
            prompt_versions=[prompt_version],
            trace_id=trace_id,
        )
    except Exception:
        _logger.debug("Failed to link prompt to trace", exc_info=True)
The broad except Exception clause silently swallows all errors. While this is intentional for demo data generation resilience, consider catching more specific exceptions (e.g., MlflowException) to avoid masking unexpected errors like AttributeError or KeyError that might indicate code bugs.
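The narrowing the reviewer asks for can be sketched generically. DemoGenerationError below is a hypothetical stand-in for mlflow.exceptions.MlflowException so the example stays dependency-free; the point is that only the expected failure is swallowed while programming errors still surface.

```python
import logging

_logger = logging.getLogger(__name__)


class DemoGenerationError(Exception):
    """Hypothetical stand-in for mlflow.exceptions.MlflowException."""


def link_safely(link_fn) -> bool:
    # Catch only the expected, recoverable failure; let programming errors
    # (AttributeError, KeyError, ...) propagate so code bugs are not masked.
    try:
        link_fn()
        return True
    except DemoGenerationError:
        _logger.debug("Failed to link prompt to trace", exc_info=True)
        return False
```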
def pass_fail_scorer(inputs, outputs) -> Feedback:
    content = str(inputs) + str(outputs)
    hash_input = f"{content}:{name}"
    hash_val = int(hashlib.md5(hash_input.encode(), usedforsecurity=False).hexdigest()[:8], 16)
The usedforsecurity=False parameter is correct for this demo use case (deterministic scoring based on content hashes). However, this parameter was added in Python 3.9. While MLflow likely requires Python 3.9+, consider verifying the minimum supported Python version to ensure compatibility.
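If broader interpreter compatibility were a concern, the hashing could be wrapped with a fallback. This is a sketch, not the PR's code; the function name is hypothetical, and the try/except guards against interpreters whose hashlib rejects the usedforsecurity keyword (it was added in Python 3.9).

```python
import hashlib


def deterministic_score(inputs, outputs, name: str) -> float:
    # Map content plus scorer name to a stable pseudo-random score in [0, 1].
    hash_input = f"{inputs}{outputs}:{name}".encode()
    try:
        digest = hashlib.md5(hash_input, usedforsecurity=False).hexdigest()
    except TypeError:
        # Python < 3.9 (or a backend without the flag) rejects the keyword.
        digest = hashlib.md5(hash_input).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF
```

The same inputs always produce the same score, which is exactly the property demo scorers need.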
deleted_features = []
for name in demo_registry.list_generators():
    generator = demo_registry.get(name)()
    if generator._data_exists():
The API endpoint is calling a private method _data_exists() on the generator. This breaks encapsulation. Consider exposing a public method or property for this check, or using the existing is_generated() method if appropriate.
Suggested change:
- if generator._data_exists():
+ if generator.is_generated():
--quiet --requires-ssh --ignore-flavors \
  --ignore=tests/examples \
  --ignore=tests/evaluate \
  --ignore=tests/genai \
The demo tests are excluded from the main test job but run in a separate job. Consider adding a comment explaining why demo tests are separated (e.g., for parallelization, different dependencies, or to manage test execution time).
Suggested change:
  --ignore=tests/genai \
+ # Demo tests are excluded from the main test job and executed in a separate job to keep this
+ # job's runtime manageable and to isolate their additional dependencies.
Documentation preview for f424136 is available at: More info
Force-pushed 6f6e5cd to a6f6926
.github/workflows/master.yml
Outdated
pytest tests/genai --ignore tests/genai/test_genai_import_without_agent_sdk.py \
  --ignore tests/genai/optimize --ignore tests/genai/prompts

demo:
is there a reason we need to separate this job?
Good call - there isn't a reason. I've updated the CI def to include it in the Python tests. Thanks for the call out!
Force-pushed a6f6926 to 8a97c4b
Force-pushed 70a0aa1 to 1495094
Force-pushed 6178743 to 5b91a6f
Force-pushed 04d0776 to 6858f52
mlflow/cli/demo.py
Outdated
if value is None:
    return None
if not value.startswith(("http://", "https://")):
    raise click.BadParameter("Tracking URI must start with 'http://' or 'https://'")
Why does this need to be HTTP? Does the demo generation logic break with a SQL tracking URI?
I actually wasn't even thinking of a local tracking server haha. Great point!
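A relaxed validator reflecting that feedback might look like the sketch below. The scheme allowlist is an assumption for illustration, and ValueError stands in for click.BadParameter to keep the example dependency-free.

```python
def validate_tracking_uri(value):
    # Accept http(s) servers as well as local database/file stores, rather
    # than rejecting everything that is not an HTTP server.
    if value is None:
        return None
    allowed = ("http://", "https://", "sqlite://", "postgresql://", "mysql://", "file://")
    if not value.startswith(allowed):
        raise ValueError(
            "Tracking URI must be an http(s) server or a local database/file URI"
        )
    return value
```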
mlflow/cli/demo.py
Outdated
if tracking_uri is None:
    tracking_uri = _get_tracking_uri_interactive(port)

if tracking_uri == _NEW_SERVER_SENTINEL:
Do we need _NEW_SERVER_SENTINEL? Does None work?
Nah None will work. Updated!
import mlflow
from mlflow.demo import generate_all_demos
from mlflow.demo.base import DEMO_EXPERIMENT_NAME
Yeah otherwise the eager load chain creates a huge import dependency that really messes up CI.
mlflow/cli/demo.py
Outdated
with tempfile.TemporaryDirectory() as tmpdir:
    tmpdir_path = Path(tmpdir)
    db_path = tmpdir_path / "mlflow.db"
    artifact_path = tmpdir_path / "artifacts"
    artifact_path.mkdir()
Is the intention of using a temporary directory to avoid persisting the demo data? I wonder if it would be useful to keep the demo data around; otherwise, users need to regenerate it every time they stop the demo command.
There wasn't a strong reason to do it other than to not pollute a user's machine if they just wanted to check it out. I wasn't strongly opinionated either way. Updated to use a reserved directory name for persistence.
@catch_mlflow_exception
@_disable_if_artifacts_only
def _generate_demo():
Shall we add a parameter taking the list of features to generate demo data for, defaulting to all?
Force-pushed 30b2c83 to 095ef9a
mlflow/cli/demo.py
Outdated
demo_dir = Path.home() / ".mlflow" / "demo"
demo_dir.mkdir(parents=True, exist_ok=True)
@BenWilson2 Is there any reason for not using the current directory? I think it is confusing that mlflow server and mlflow demo behave differently in terms of where they create the database.
Excellent point - there's really no need to have a different path here. Updated to use the standard path
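The persistent layout the thread converges on could be sketched as follows. The helper name and the injectable root parameter are hypothetical (added here for testability); the directory layout matches the ~/.mlflow path discussed above.

```python
from pathlib import Path


def get_demo_dir(root=None) -> Path:
    # Keep demo state under the standard ~/.mlflow root so `mlflow demo` and
    # `mlflow server` agree on where the database and artifacts live, and so
    # stopping the demo command does not discard the generated data.
    root = Path(root) if root is not None else Path.home()
    demo_dir = root / ".mlflow" / "demo"
    demo_dir.mkdir(parents=True, exist_ok=True)
    return demo_dir
```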
…plates
- Add timestamp distribution across 7 days (v1 in days 0-3.5, v2 in days 3.5-7)
- Add token count estimation as span attributes (SpanAttributeKey.CHAT_USAGE)
- Add prompt-based traces with template rendering and variables
- Restructure sessions to have 2-4 turns each across 3 sessions
- Fix trace metadata by using InMemoryTraceManager directly
- Update test expectations for new trace counts (34 total: 4 RAG, 4 agent, 12 prompt, 14 session)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Force-pushed 095ef9a to f424136
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com> Co-authored-by: Claude <noreply@anthropic.com>
🥞 Stacked PR
Related Issues/PRs: #xxx

What changes are proposed in this pull request?
Adds a new mlflow demo CLI command.

Log suppression is enabled for the ephemeral demo server to reduce noise in stdout. Logging can be turned back on by passing the --debug flag when starting the demo server.
There is an option to point the demo at an existing tracking server. When running the command, a prompt (default "no") asks whether a running tracking server is already available. If 'y' is selected, an additional prompt asks for the tracking server URL; the demos are then generated there and a link to the demo experiment is provided.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- "Yes" should be selected for bug fixes, documentation updates, and other small changes.
- "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.