
Log prompt version link to an active span#18890

Merged
TomeHirata merged 3 commits into mlflow:master from TomeHirata:feat/span-prompt/link
Dec 1, 2025
Conversation

@TomeHirata
Collaborator

@TomeHirata TomeHirata commented Nov 18, 2025

🛠 DevTools 🛠

Open in GitHub Codespaces

Install mlflow from this PR

# mlflow
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/18890/merge
# mlflow-skinny
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/18890/merge#subdirectory=libs/skinny

For Databricks, use the following command:

%sh curl -LsSf https://raw.githubusercontent.com/mlflow/mlflow/HEAD/dev/install-skinny.sh | sh -s pull/18890/merge

Related Issues/PRs

n/a

What changes are proposed in this pull request?

Log prompt version to an active span when load_prompt is called.
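The intended behavior can be sketched without a running MLflow backend as a minimal mock. The attribute key `mlflow.linkedPrompts` and the JSON payload shape below are assumptions inferred from the commit notes in this PR (the real constant is `LINKED_PROMPTS` in MLflow's tracing constants), not the exact implementation.

```python
import json

# Hypothetical attribute key; the actual constant lives in MLflow's
# tracing constants and may be spelled differently.
LINKED_PROMPTS_ATTR = "mlflow.linkedPrompts"

class FakeSpan:
    """Stand-in for an active MLflow span: just a dict of attributes."""
    def __init__(self):
        self.attributes = {}

    def get_attribute(self, key):
        return self.attributes.get(key)

    def set_attribute(self, key, value):
        self.attributes[key] = value

def link_prompt_to_span(span, name, version):
    """Append a {name, version} entry to the span's linked-prompts JSON
    list, mirroring what load_prompt does when a span is active."""
    existing = span.get_attribute(LINKED_PROMPTS_ATTR)
    entries = json.loads(existing) if existing else []
    entries.append({"name": name, "version": str(version)})
    span.set_attribute(LINKED_PROMPTS_ATTR, json.dumps(entries))

span = FakeSpan()
link_prompt_to_span(span, "qa_prompt", 1)
link_prompt_to_span(span, "summarizer", 3)
print(span.get_attribute(LINKED_PROMPTS_ATTR))
```

Storing the links as a single JSON-encoded list (rather than one attribute per prompt) lets several load_prompt calls within the same span accumulate without clobbering each other.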

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/tracking: Tracking Service, tracking client APIs, autologging
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
  • area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
  • area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
  • area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
  • area/projects: MLproject format, project running backends
  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

Signed-off-by: Tomu Hirata <tomu.hirata@gmail.com>
- Replace PROMPT_NAME and PROMPT_VERSION attributes with LINKED_PROMPTS in tracing constants.
- Modify load_prompt function to set linked prompts as a JSON list of prompt name and version.
- Enhance tests to validate linked prompts functionality for single and multiple prompts within the same span.

Signed-off-by: Tomu Hirata <tomu.hirata@gmail.com>
@github-actions github-actions bot added the v3.6.1, area/prompts, area/tracing, and rn/none labels Nov 18, 2025
@TomeHirata TomeHirata added the team-review label Nov 18, 2025
@github-actions
Contributor

github-actions bot commented Nov 18, 2025

Documentation preview for 871089e is available at:

More info
  • Ignore this comment if this PR does not change the documentation.
  • The preview is updated when a new commit is pushed to this PR.
  • This comment was created by this workflow run.
  • The documentation was built by this workflow run.

@B-Step62 B-Step62 self-assigned this Nov 19, 2025
Collaborator

@B-Step62 B-Step62 left a comment


LGTM with one suggestion to improve detection coverage.

prompt=prompt,
)

# Set prompt version information as span attributes if there's an active span
Collaborator


Q: Do we also want to trigger this in prompt.format, similar to the optimization? Feel free to address in a follow-up.

Collaborator Author


I thought about it, but with smart caching in place I think we should recommend that users call load_prompt during inference as well. If users call load_prompt at inference time, triggering the linkage in load_prompt should be sufficient.
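The recommended pattern here (calling load_prompt inside each inference call, relying on caching so repeated loads stay cheap) can be sketched with a hypothetical in-memory registry; the registry dict, prompt names, and cache below are illustrative stand-ins, not MLflow's actual smart-caching implementation.

```python
import functools

# Hypothetical registry lookup; stands in for the MLflow prompt registry.
REGISTRY = {("qa_prompt", 1): "Answer the question: {{question}}"}

@functools.lru_cache(maxsize=None)
def load_prompt(name, version):
    """Cached load: after the first call, repeated per-inference loads
    hit the cache instead of the registry."""
    return REGISTRY[(name, version)]

def answer(question):
    # Loading inside the inference call means an active span (if any)
    # observes the load and can record the prompt-version link.
    template = load_prompt("qa_prompt", 1)
    return template.replace("{{question}}", question)

print(answer("What is MLflow?"))
```

Loading at inference time, rather than once at agent construction, is what lets each trace's span pick up the link; this is the distinction the autologging question below is probing.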

Collaborator


I see. Calling load_prompt works in the DIY case, but does it also work for autologging? For example, can we check that this works with popular frameworks like LangGraph, OpenAI Agents, and Pydantic AI? My hunch is that these frameworks need the prompt pre-loaded when constructing the agent, not at invocation time.

@TomeHirata TomeHirata enabled auto-merge December 1, 2025 07:19
@TomeHirata TomeHirata added this pull request to the merge queue Dec 1, 2025
Merged via the queue into mlflow:master with commit d8f8c1a Dec 1, 2025
55 checks passed
@TomeHirata TomeHirata deleted the feat/span-prompt/link branch December 1, 2025 08:02
BenWilson2 pushed a commit to BenWilson2/mlflow that referenced this pull request Dec 4, 2025
Signed-off-by: Tomu Hirata <tomu.hirata@gmail.com>
BenWilson2 pushed a commit that referenced this pull request Dec 4, 2025
Signed-off-by: Tomu Hirata <tomu.hirata@gmail.com>

Labels

area/prompts, area/tracing, rn/none, team-review, v3.6.1, v3.7.0
