Conversation
```python
# run_stream_sync was added in pydantic-ai 1.10.0
if hasattr(Agent, "run_stream_sync"):
    agent_methods.append("run_stream_sync")
```
Q: run_stream_sync just calls run_stream under the hood with the exact same inputs and outputs. I'm wondering if we only need to patch run_stream, and users can see whatever information they want to see. Having another run_stream_sync span with the same info might be redundant.
Even though run_stream_sync simply calls run_stream under the hood, patching only run_stream is not enough. Because the async generator pauses at the first yield, the context manager never exits, which means spans are started but never properly closed. If we don't patch run_stream_sync, the outputs won't be captured, which results in incomplete trace data.
Code:

```python
def run_stream_sync(self, ...):
    async def _consume_stream():
        async with self.run_stream(...) as stream_result:  # context manager enters here
            yield stream_result  # generator pauses here

    async_result = _utils.get_event_loop().run_until_complete(anext(_consume_stream()))
    return result.StreamedRunResultSync(async_result)
```
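To make the failure mode concrete, here is a minimal, self-contained sketch (not MLflow or pydantic-ai code; traced_stream and _consume_stream are illustrative names) showing that a context manager entered inside an async generator never runs its cleanup when the generator is only driven to its first yield:

```python
# Minimal repro of the span leak: the cleanup after `yield` inside the
# @asynccontextmanager never runs, because the consuming generator is
# paused at its own yield and __aexit__ is still pending.
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def traced_stream():
    print("span started")  # stands in for starting a span
    try:
        yield "stream_result"
    finally:
        print("span ended")  # stands in for closing the span

async def _consume_stream():
    async with traced_stream() as stream_result:
        yield stream_result  # pauses here; __aexit__ never reached

# Mirrors run_stream_sync: drive the generator one step, then return.
gen = _consume_stream()
result = asyncio.new_event_loop().run_until_complete(anext(gen))
print(result)  # "span started" has printed; "span ended" has not
```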
```python
@pytest.fixture
def test_model_agent():
```
Can we use the existing simple_agent and agent_with_tool?
```python
assert len(traces) == 1
spans = traces[0].data.spans

run_stream_span = next((s for s in spans if s.name == "Agent.run_stream"), None)
```
Can we assert that the necessary spans, like the LLM spans, are included in the trace, as in the other test cases?
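For reference, a sketch of the kind of assertion being suggested; "Agent.run_stream" comes from the diff above, but the LLM span check is an assumption about MLflow's span typing, not verified against this PR's output:

```python
# Assumed span names/types; adjust to match the actual autologged trace.
span_names = [s.name for s in spans]
assert "Agent.run_stream" in span_names
assert any(s.span_type == "LLM" for s in spans), "expected an LLM child span"
```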
```python
if _is_async_context_manager_factory(method):
    _patch_streaming_method(cls, method_name, patched_async_stream_call)
elif _returns_sync_streamed_result(method):
```
I tested run_stream_sync locally with a simple agent but it does not capture child spans. It might be related to how they run the async code inside a sync function?

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')
response = agent.run_stream_sync('What is the capital of the UK?')
print(response.get_output())
```
Yes, this is due to how they call run_stream inside the run_stream_sync function. run_stream is decorated with @asynccontextmanager. When called from run_stream_sync, the async generator pauses at yield stream_result, so run_stream's __aexit__ never executes, leaving spans without any parent trace. That is why child spans are not captured here.
Same reason as this: #19118 (comment)
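For context, the dispatch in the diff above relies on _is_async_context_manager_factory telling @asynccontextmanager-decorated methods (run_stream) apart from plain sync methods (run_stream_sync). A minimal sketch of how such a check could work; this is an illustration, not the PR's actual implementation:

```python
import inspect

def _is_async_context_manager_factory(func):
    # @asynccontextmanager applies functools.wraps, so the original async
    # generator function is reachable via __wrapped__.
    wrapped = getattr(func, "__wrapped__", None)
    return wrapped is not None and inspect.isasyncgenfunction(wrapped)
```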
@joelrobin18 @B-Step62 thank you very much for the work on this! Testing with my implementation in Databricks mlflow (
Great point. Capturing streaming events in the authoring framework is a bit of overkill, given that they propagate up the span hierarchy and are mostly the same as the token stream the LLM generates. As you mentioned, even the captured events in the LLM span might not be used by many people, so we may make this opt-in in the future to reduce trace size. For now, let's keep the pydantic-ai autologging behavior simple.
🛠 DevTools 🛠
Install mlflow from this PR
For Databricks, use the following command:
Related Issues/PRs
Fixes #18999
What changes are proposed in this pull request?
PydanticAI Streaming Support
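For reviewers trying this out, a minimal usage sketch of the streaming path this PR traces, assuming autologging is enabled via mlflow.pydantic_ai.autolog(); the model name and prompt are placeholders:

```python
import asyncio

import mlflow
from pydantic_ai import Agent

mlflow.pydantic_ai.autolog()  # enable tracing for pydantic-ai calls

agent = Agent("openai:gpt-4o")

async def main():
    # Async streaming path patched by this PR; the span should close
    # when the context manager exits.
    async with agent.run_stream("What is the capital of the UK?") as result:
        async for text in result.stream_text():
            print(text)

asyncio.run(main())
```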
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.