
Commit ed6876b

chore(ai): remove all experimental embed events (#13693)
## Background

The changes from #13478 were backported when they shouldn't have been.

## Summary

Cherry pick and revert all the changes.

## Manual Verification

na

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
Parent: b1cff60 · Commit: ed6876b

File tree

16 files changed: +246 −1642 lines


.changeset/cyan-months-do.md

Lines changed: 5 additions & 0 deletions
````diff
@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+chore(ai): remove all experimental embed events
````

content/docs/03-ai-sdk-core/65-event-listeners.mdx

Lines changed: 10 additions & 213 deletions
````diff
@@ -1,15 +1,15 @@
 ---
 title: Event Callbacks
-description: Subscribe to lifecycle events in generateText, streamText, embed, and embedMany calls
+description: Subscribe to lifecycle events in generateText and streamText calls
 ---

 # Event Callbacks

-The AI SDK provides per-call event callbacks that you can pass to `generateText`, `streamText`, `embed`, and `embedMany` to observe lifecycle events. This is useful for building observability tools, logging systems, analytics, and debugging utilities.
+The AI SDK provides per-call event callbacks that you can pass to `generateText` and `streamText` to observe lifecycle events. This is useful for building observability tools, logging systems, analytics, and debugging utilities.

 ## Basic Usage

-Pass callbacks directly to `generateText`, `streamText`, `embed`, or `embedMany`:
+Pass callbacks directly to `generateText` or `streamText`:

 ```ts
 import { generateText } from 'ai';
@@ -28,8 +28,6 @@ const result = await generateText({

 ## Available Callbacks

-### `generateText` / `streamText`
-
 <PropertiesTable
 content={[
 {
@@ -67,30 +65,9 @@ const result = await generateText({
 ]}
 />

-### `embed` / `embedMany`
-
-<PropertiesTable
-content={[
-{
-name: 'experimental_onStart',
-type: '(event: EmbedOnStartEvent) => void | Promise<void>',
-description:
-'Called when the embedding operation begins, before the embedding model is called.',
-},
-{
-name: 'experimental_onFinish',
-type: '(event: EmbedOnFinishEvent) => void | Promise<void>',
-description:
-'Called when the embedding operation completes, after the embedding model returns.',
-},
-]}
-/>
-
 ## Event Reference

-### `generateText` / `streamText`
-
-#### `experimental_onStart`
+### `experimental_onStart`

 Called when the generation operation begins, before any LLM calls are made.

@@ -244,7 +221,7 @@ const result = await generateText({
 ]}
 />

-#### `experimental_onStepStart`
+### `experimental_onStepStart`

 Called before each step (LLM call) begins. Useful for tracking multi-step generations.

@@ -358,7 +335,7 @@ const result = await generateText({
 ]}
 />

-#### `experimental_onToolCallStart`
+### `experimental_onToolCallStart`

 Called before a tool's `execute` function runs.

@@ -450,7 +427,7 @@ const result = await generateText({
 ]}
 />

-#### `experimental_onToolCallFinish`
+### `experimental_onToolCallFinish`

 Called after a tool's `execute` function completes or errors. Uses a discriminated union on the `success` field.

````
````diff
@@ -571,7 +548,7 @@ const result = await generateText({
 ]}
 />

-#### `onStepFinish`
+### `onStepFinish`

 Called after each step (LLM call) completes. Provides the full `StepResult`.

@@ -712,7 +689,7 @@ const result = await generateText({
 ]}
 />

-#### `onFinish`
+### `onFinish`

 Called when the entire generation completes (all steps finished). Includes aggregated data.

@@ -864,164 +841,6 @@ const result = await generateText({
 ]}
 />

-### `embed` / `embedMany`
-
-#### `experimental_onStart`
-
-Called when the embedding operation begins, before the embedding model is called. Both `embed` and `embedMany` share the same event interface; the `operationId` field distinguishes them (`'ai.embed'` vs `'ai.embedMany'`), and the `value` field is a single string for `embed` or an array of strings for `embedMany`.
-
-```ts
-import { embed } from 'ai';
-
-const result = await embed({
-model: openai.embedding('text-embedding-3-small'),
-value: 'sunny day at the beach',
-experimental_onStart: event => {
-console.log('Operation:', event.operationId);
-console.log('Model:', event.model.modelId);
-},
-});
-```
-
-<PropertiesTable
-content={[
-{
-name: 'callId',
-type: 'string',
-description: 'Unique identifier for this embed call.',
-},
-{
-name: 'operationId',
-type: 'string',
-description:
-"Identifies the operation type ('ai.embed' or 'ai.embedMany').",
-},
-{
-name: 'model',
-type: '{ provider: string; modelId: string }',
-description: 'The embedding model being used.',
-},
-{
-name: 'value',
-type: 'string | Array<string>',
-description:
-'The value(s) being embedded. A single string for embed, or an array for embedMany.',
-},
-{
-name: 'maxRetries',
-type: 'number',
-description: 'Maximum number of retries for failed requests.',
-},
-{
-name: 'abortSignal',
-type: 'AbortSignal | undefined',
-description: 'Abort signal for cancelling the operation.',
-},
-{
-name: 'headers',
-type: 'Record<string, string | undefined> | undefined',
-description: 'Additional HTTP headers sent with the request.',
-},
-{
-name: 'providerOptions',
-type: 'ProviderOptions | undefined',
-description: 'Additional provider-specific options.',
-},
-{
-name: 'functionId',
-type: 'string | undefined',
-description:
-'Identifier from telemetry settings for grouping related operations.',
-},
-{
-name: 'metadata',
-type: 'Record<string, JSONValue> | undefined',
-description: 'Additional metadata from telemetry settings.',
-},
-]}
-/>
-
-#### `experimental_onFinish`
-
-Called when the embedding operation completes. For `embed`, `embedding` is a single vector and `response` is a single response object. For `embedMany`, `embedding` is an array of vectors and `response` is an array of response objects (one per chunk).
-
-```ts
-import { embedMany } from 'ai';
-
-const result = await embedMany({
-model: openai.embedding('text-embedding-3-small'),
-values: ['sunny day at the beach', 'rainy afternoon in the city'],
-experimental_onFinish: event => {
-console.log('Operation:', event.operationId);
-console.log('Usage:', event.usage);
-},
-});
-```
-
-<PropertiesTable
-content={[
-{
-name: 'callId',
-type: 'string',
-description: 'Unique identifier for this embed call.',
-},
-{
-name: 'operationId',
-type: 'string',
-description:
-"Identifies the operation type ('ai.embed' or 'ai.embedMany').",
-},
-{
-name: 'model',
-type: '{ provider: string; modelId: string }',
-description: 'The embedding model that was used.',
-},
-{
-name: 'value',
-type: 'string | Array<string>',
-description: 'The value(s) that were embedded.',
-},
-{
-name: 'embedding',
-type: 'Embedding | Array<Embedding>',
-description:
-'The resulting embedding(s). A single vector for embed, or an array for embedMany.',
-},
-{
-name: 'usage',
-type: 'EmbeddingModelUsage',
-description: 'Token usage for the embedding operation.',
-},
-{
-name: 'warnings',
-type: 'Array<Warning>',
-description: 'Warnings from the embedding model.',
-},
-{
-name: 'providerMetadata',
-type: 'ProviderMetadata | undefined',
-description: 'Optional provider-specific metadata.',
-},
-{
-name: 'response',
-type: '{ headers?: Record<string, string>; body?: unknown } | Array<{ headers?: Record<string, string>; body?: unknown } | undefined> | undefined',
-description:
-'Response data. A single response for embed, or an array for embedMany (one per chunk).',
-},
-{
-name: 'functionId',
-type: 'string | undefined',
-description:
-'Identifier from telemetry settings for grouping related operations.',
-},
-{
-name: 'metadata',
-type: 'Record<string, JSONValue> | undefined',
-description: 'Additional metadata from telemetry settings.',
-},
-]}
-/>
-
 ## Use Cases

 ### Logging and Debugging
@@ -1080,31 +899,9 @@ const result = await generateText({
 });
 ```

-### Embedding Observability
-
-```ts
-import { embedMany } from 'ai';
-
-const result = await embedMany({
-model: openai.embedding('text-embedding-3-small'),
-values: ['sunny day at the beach', 'rainy afternoon in the city'],
-experimental_onStart: event => {
-console.log(`Embedding started (${event.operationId})`, {
-model: event.model.modelId,
-valueCount: Array.isArray(event.value) ? event.value.length : 1,
-});
-},
-experimental_onFinish: event => {
-console.log(`Embedding complete (${event.operationId})`, {
-tokens: event.usage.tokens,
-});
-},
-});
-```
-
 ## Error Handling

-Errors thrown inside callbacks are caught and do not break the generation or embedding flow. This ensures that monitoring code cannot disrupt your application:
+Errors thrown inside callbacks are caught and do not break the generation flow. This ensures that monitoring code cannot disrupt your application:

 ```ts
 const result = await generateText({
````
0 commit comments
