Log prompt version link to an active span (#18890)
Conversation
Signed-off-by: Tomu Hirata <tomu.hirata@gmail.com>
- Replace PROMPT_NAME and PROMPT_VERSION attributes with LINKED_PROMPTS in tracing constants.
- Modify the load_prompt function to set linked prompts as a JSON list of prompt name and version.
- Enhance tests to validate linked-prompt functionality for single and multiple prompts within the same span.
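The JSON-list mechanism described above can be sketched as follows. This is a minimal illustration, not the actual MLflow implementation: the Span class is a stand-in, and the LINKED_PROMPTS key name and link_prompt_to_span helper are assumptions for the sketch.

```python
import json


class Span:
    """Minimal stand-in for a tracing span (illustrative, not the MLflow class)."""

    def __init__(self):
        self.attributes = {}

    def get_attribute(self, key):
        return self.attributes.get(key)

    def set_attribute(self, key, value):
        self.attributes[key] = value


LINKED_PROMPTS = "mlflow.linkedPrompts"  # assumed attribute key name


def link_prompt_to_span(span, name, version):
    """Append a {name, version} entry to the span's linked-prompts JSON list."""
    existing = json.loads(span.get_attribute(LINKED_PROMPTS) or "[]")
    existing.append({"name": name, "version": version})
    span.set_attribute(LINKED_PROMPTS, json.dumps(existing))


span = Span()
link_prompt_to_span(span, "qa_prompt", "1")
link_prompt_to_span(span, "summarizer", "3")
print(span.get_attribute(LINKED_PROMPTS))
```

Storing the linkage as a single JSON list under one attribute key (rather than separate name/version attributes) is what allows multiple prompts to be linked to the same span, as the tests mentioned above validate.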
Documentation preview for 871089e is available.
B-Step62 left a comment:
LGTM with one suggestion to improve detection coverage.
Reviewed code context:

        prompt=prompt,
    )

    # Set prompt version information as span attributes if there's an active span
Q: Do we also want to trigger this in prompt.format, similarly to the optimization? Feel free to address in a follow-up.
I thought about it, but once smart caching is introduced we should recommend that users call load_prompt during inference as well. If users call load_prompt at inference time, triggering the linkage in load_prompt should be sufficient.
I see. I think calling load_prompt works in the DIY case, but does it work for autologging? For example, can we check whether this works with popular frameworks like LangGraph, OpenAI Agents, and Pydantic AI? My hunch is that these frameworks need the prompt pre-loaded when constructing the agent, not at invocation.
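The construction-time vs. invocation-time distinction raised here can be illustrated with a stub (hypothetical names, not the real MLflow or framework APIs): a prompt loaded while the agent is being constructed sees no active span, so no linkage can occur, while a prompt loaded inside a traced invocation does.

```python
# Stub contrasting load-at-construction with load-at-invocation.
# get_current_active_span and load_prompt are simplified stand-ins.
_ACTIVE_SPAN = None


def get_current_active_span():
    return _ACTIVE_SPAN


def load_prompt(name):
    span = get_current_active_span()
    linked = span is not None  # linkage only happens when a span is active
    return {"name": name, "linked": linked}


# Construction time: no trace is active, so no linkage occurs.
construction_prompt = load_prompt("agent_system_prompt")

# Invocation time: a span is active, so the prompt version can be linked.
_ACTIVE_SPAN = object()
invocation_prompt = load_prompt("agent_system_prompt")

print(construction_prompt["linked"], invocation_prompt["linked"])  # False True
```

This is why triggering linkage only in load_prompt may miss autologged frameworks that build their agents (and load prompts) before any span exists.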
🛠 DevTools 🛠
Install mlflow from this PR
For Databricks, use the following command:
Related Issues/PRs
n/a
What changes are proposed in this pull request?
Log prompt version to an active span when load_prompt is called.

How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
"Yes" should be selected for bug fixes, documentation updates, and other small changes. "No" should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.