
Commit 30c9de6

feat(ai): experimental callbacks for streamText (#12708)
## Background

As a follow-up to #12654, we introduce the same callbacks for the `streamText` function.

## Summary

- Added callback `experimental_onStart` that exposes the data/events that happen at the very beginning
- Added callback `experimental_onStepStart`
- Added callback `experimental_onToolCallStart`
- Added callback `experimental_onToolCallFinish`

## Manual Verification

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- Add similar callbacks for the agent loop
1 parent 3395a75 commit 30c9de6

File tree

12 files changed

+2897
-408
lines changed


.changeset/clean-boxes-tie.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
'ai': patch
---

feat(ai): experimental callbacks for streamText

content/docs/03-ai-sdk-core/05-generating-text.mdx

Lines changed: 56 additions & 0 deletions
@@ -297,6 +297,62 @@ const result = streamText({
});
```

### Lifecycle callbacks (experimental)

<Note type="warning">
  Experimental callbacks are subject to breaking changes in incremental package
  releases.
</Note>

`streamText` provides several experimental lifecycle callbacks that let you hook into different phases of the streaming process.
These are useful for logging, observability, debugging, and custom telemetry.
Errors thrown inside these callbacks are silently caught and do not break the streaming flow.

```tsx
import { streamText } from 'ai';
__PROVIDER_IMPORT__;

const result = streamText({
  model: __MODEL__,
  prompt: 'What is the weather in San Francisco?',
  tools: {
    // ... your tools
  },

  experimental_onStart({ model, system, prompt, messages }) {
    console.log('Streaming started', { model, prompt });
  },

  experimental_onStepStart({ stepNumber, model, messages }) {
    console.log(`Step ${stepNumber} starting`, { model: model.modelId });
  },

  experimental_onToolCallStart({ toolCall }) {
    console.log(`Tool call starting: ${toolCall.toolName}`, {
      toolCallId: toolCall.toolCallId,
    });
  },

  experimental_onToolCallFinish({ toolCall, durationMs, success, error }) {
    console.log(`Tool call finished: ${toolCall.toolName} (${durationMs}ms)`, {
      success,
    });
  },

  onStepFinish({ finishReason, usage }) {
    console.log('Step finished', { finishReason, usage });
  },
});
```

The available lifecycle callbacks are:

- **`experimental_onStart`**: Called once when the `streamText` operation begins, before any LLM calls. Receives model info, prompt, settings, and telemetry metadata.
- **`experimental_onStepStart`**: Called before each step (LLM call). Receives the step number, model, messages being sent, tools, and prior steps.
- **`experimental_onToolCallStart`**: Called right before a tool's `execute` function runs. Receives the tool call object, messages, and context.
- **`experimental_onToolCallFinish`**: Called right after a tool's `execute` function completes or errors. Receives the tool call object, `durationMs`, and a discriminated union with `success`/`output` or `success`/`error`.
- **`onStepFinish`**: Called after each step finishes. Receives the finish reason, usage, and other step details.
### `fullStream` property

You can read a stream with all events using the `fullStream` property.
