Commit dad775f

feat (ai/core): add finish event and avg output tokens per second (telemetry) (#2872)
1 parent 4f1530f commit dad775f

12 files changed: +229 -49 lines changed

.changeset/hot-snails-clean.md
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+feat (ai/core): add finish event and avg output tokens per second (telemetry)

content/docs/03-ai-sdk-core/60-telemetry.mdx
Lines changed: 25 additions & 22 deletions

@@ -66,19 +66,19 @@ const result = await generateText({
 - `operation.name`: `ai.generateText` and the functionId that was set through `telemetry.functionId`
 - `ai.operationId`: `"ai.generateText"`
 - `ai.prompt`: the prompt that was used when calling `generateText`
-- `ai.result.text`: the text that was generated
-- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-- `ai.finishReason`: the reason why the generation finished
+- `ai.response.text`: the text that was generated
+- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+- `ai.response.finishReason`: the reason why the generation finished
 - `ai.settings.maxToolRoundtrips`: the maximum number of tool roundtrips that were set
 - `ai.generateText.doGenerate`: a provider doGenerate call. It can contain `ai.toolCall` spans.
   It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
 - `operation.name`: `ai.generateText.doGenerate` and the functionId that was set through `telemetry.functionId`
 - `ai.operationId`: `"ai.generateText.doGenerate"`
 - `ai.prompt.format`: the format of the prompt
 - `ai.prompt.messages`: the messages that were passed into the provider
-- `ai.result.text`: the text that was generated
-- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-- `ai.finishReason`: the reason why the generation finished
+- `ai.response.text`: the text that was generated
+- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+- `ai.response.finishReason`: the reason why the generation finished
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

 ### streamText function
@@ -90,9 +90,9 @@ const result = await generateText({
 - `operation.name`: `ai.streamText` and the functionId that was set through `telemetry.functionId`
 - `ai.operationId`: `"ai.streamText"`
 - `ai.prompt`: the prompt that was used when calling `streamText`
-- `ai.result.text`: the text that was generated
-- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-- `ai.finishReason`: the reason why the generation finished
+- `ai.response.text`: the text that was generated
+- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+- `ai.response.finishReason`: the reason why the generation finished
 - `ai.streamText.doStream`: a provider doStream call.
   This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans.
   It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
@@ -101,13 +101,16 @@ const result = await generateText({
 - `ai.operationId`: `"ai.streamText.doStream"`
 - `ai.prompt.format`: the format of the prompt
 - `ai.prompt.messages`: the messages that were passed into the provider
-- `ai.result.text`: the text that was generated
-- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-- `ai.finishReason`: the reason why the generation finished
-- `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+- `ai.response.text`: the text that was generated
+- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+- `ai.response.msToFirstChunk`: the time it took to receive the first chunk in milliseconds
+- `ai.response.msToFinish`: the time it took to receive the finish part of the LLM stream in milliseconds
+- `ai.response.avgCompletionTokensPerSecond`: the average number of completion tokens per second
+- `ai.response.finishReason`: the reason why the generation finished

 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
+- `ai.stream.finish` (event): an event that is emitted when the finish part of the LLM stream is received.
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

 It also records a `ai.stream.firstChunk` event when the first chunk of the stream is received.
@@ -124,7 +127,7 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.schema`: Stringified JSON schema version of the schema that was passed into the `generateObject` function
 - `ai.schema.name`: the name of the schema that was passed into the `generateObject` function
 - `ai.schema.description`: the description of the schema that was passed into the `generateObject` function
-- `ai.result.object`: the object that was generated (stringified JSON)
+- `ai.response.object`: the object that was generated (stringified JSON)
 - `ai.settings.mode`: the object generation mode, e.g. `json`
 - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
 - `ai.generateObject.doGenerate`: a provider doGenerate call.
@@ -133,9 +136,9 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.operationId`: `"ai.generateObject.doGenerate"`
 - `ai.prompt.format`: the format of the prompt
 - `ai.prompt.messages`: the messages that were passed into the provider
-- `ai.result.object`: the object that was generated (stringified JSON)
+- `ai.response.object`: the object that was generated (stringified JSON)
 - `ai.settings.mode`: the object generation mode
-- `ai.finishReason`: the reason why the generation finished
+- `ai.response.finishReason`: the reason why the generation finished

 ### streamObject function

@@ -149,7 +152,7 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.schema`: Stringified JSON schema version of the schema that was passed into the `streamObject` function
 - `ai.schema.name`: the name of the schema that was passed into the `streamObject` function
 - `ai.schema.description`: the description of the schema that was passed into the `streamObject` function
-- `ai.result.object`: the object that was generated (stringified JSON)
+- `ai.response.object`: the object that was generated (stringified JSON)
 - `ai.settings.mode`: the object generation mode, e.g. `json`
 - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
 - `ai.streamObject.doStream`: a provider doStream call.
@@ -159,12 +162,12 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.operationId`: `"ai.streamObject.doStream"`
 - `ai.prompt.format`: the format of the prompt
 - `ai.prompt.messages`: the messages that were passed into the provider
-- `ai.result.object`: the object that was generated (stringified JSON)
 - `ai.settings.mode`: the object generation mode
-- `ai.finishReason`: the reason why the generation finished
-- `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+- `ai.response.object`: the object that was generated (stringified JSON)
+- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
+- `ai.response.finishReason`: the reason why the generation finished
 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk

 ### embed function
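For reference, telemetry on these functions is opt-in per call. A minimal sketch of enabling it (the `experimental_telemetry` option and `functionId` come from this same docs page; the model and prompt are illustrative):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Opt in to telemetry for a single call. The resulting `ai.streamText`
// and `ai.streamText.doStream` spans then carry the renamed
// `ai.response.*` attributes and the new `ai.stream.finish` event.
const result = await streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a haiku about telemetry.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'haiku-demo', // appended to `operation.name`
  },
});
```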

packages/ai/core/generate-object/__snapshots__/generate-object.test.ts.snap
Lines changed: 12 additions & 0 deletions

@@ -8,6 +8,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.generateObject",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "json",
   "ai.settings.output": "object",
   "ai.usage.completionTokens": 20,
@@ -23,6 +24,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.generateObject.doGenerate",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "json",
   "ai.usage.completionTokens": 20,
   "ai.usage.promptTokens": 10,
@@ -49,6 +51,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.generateObject",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "tool",
   "ai.settings.output": "object",
   "ai.usage.completionTokens": 20,
@@ -64,6 +67,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.generateObject.doGenerate",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "tool",
   "ai.usage.completionTokens": 20,
   "ai.usage.promptTokens": 10,
@@ -93,6 +97,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
   "ai.prompt": "{"prompt":"prompt"}",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
   "ai.schema.description": "test description",
@@ -125,6 +131,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
   "ai.prompt.messages": "[{"role":"system","content":"JSON schema:\\n{\\"type\\":\\"object\\",\\"properties\\":{\\"content\\":{\\"type\\":\\"string\\"}},\\"required\\":[\\"content\\"],\\"additionalProperties\\":false,\\"$schema\\":\\"http://json-schema.org/draft-07/schema#\\"}\\nYou MUST answer with a JSON object that matches the JSON schema above."},{"role":"user","content":[{"type":"text","text":"prompt"}]}]",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{ "content": "Hello, world!" }",
   "ai.result.object": "{ "content": "Hello, world!" }",
   "ai.settings.frequencyPenalty": 0.3,
   "ai.settings.mode": "json",
@@ -169,6 +177,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
   "ai.prompt": "{"prompt":"prompt"}",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
   "ai.schema.description": "test description",
@@ -201,6 +211,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
   "ai.prompt.messages": "[{"role":"user","content":[{"type":"text","text":"prompt"}]}]",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{ "content": "Hello, world!" }",
   "ai.result.object": "{ "content": "Hello, world!" }",
   "ai.settings.frequencyPenalty": 0.3,
   "ai.settings.mode": "tool",

packages/ai/core/generate-object/__snapshots__/stream-object.test.ts.snap
Lines changed: 8 additions & 0 deletions

@@ -24,6 +24,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.streamObject.doStream",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "json",
   "ai.stream.msToFirstChunk": 0,
   "ai.usage.completionTokens": 10,
@@ -72,6 +73,7 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
   "ai.model.id": "mock-model-id",
   "ai.model.provider": "mock-provider",
   "ai.operationId": "ai.streamObject.doStream",
+  "ai.response.finishReason": "stop",
   "ai.settings.mode": "tool",
   "ai.stream.msToFirstChunk": 0,
   "ai.usage.completionTokens": 10,
@@ -108,6 +110,7 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
   "ai.prompt": "{"prompt":"prompt"}",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
   "ai.schema.description": "test description",
@@ -140,6 +143,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
   "ai.prompt.messages": "[{"role":"system","content":"JSON schema:\\n{\\"type\\":\\"object\\",\\"properties\\":{\\"content\\":{\\"type\\":\\"string\\"}},\\"required\\":[\\"content\\"],\\"additionalProperties\\":false,\\"$schema\\":\\"http://json-schema.org/draft-07/schema#\\"}\\nYou MUST answer with a JSON object that matches the JSON schema above."},{"role":"user","content":[{"type":"text","text":"prompt"}]}]",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.settings.frequencyPenalty": 0.3,
   "ai.settings.mode": "json",
@@ -191,6 +196,7 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
   "ai.prompt": "{"prompt":"prompt"}",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
   "ai.schema.description": "test description",
@@ -223,6 +229,8 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
   "ai.prompt.messages": "[{"role":"user","content":[{"type":"text","text":"prompt"}]}]",
   "ai.request.headers.header1": "value1",
   "ai.request.headers.header2": "value2",
+  "ai.response.finishReason": "stop",
+  "ai.response.object": "{"content":"Hello, world!"}",
   "ai.result.object": "{"content":"Hello, world!"}",
   "ai.settings.frequencyPenalty": 0.3,
   "ai.settings.mode": "tool",

packages/ai/core/generate-object/generate-object.ts
Lines changed: 20 additions & 3 deletions

@@ -352,10 +352,15 @@ export async function generateObject<SCHEMA, RESULT>({
       selectTelemetryAttributes({
         telemetry,
         attributes: {
-          'ai.finishReason': result.finishReason,
+          'ai.response.finishReason': result.finishReason,
+          'ai.response.object': { output: () => result.text },
+
           'ai.usage.promptTokens': result.usage.promptTokens,
           'ai.usage.completionTokens':
             result.usage.completionTokens,
+
+          // deprecated:
+          'ai.finishReason': result.finishReason,
           'ai.result.object': { output: () => result.text },

           // standardized gen-ai llm span attributes:
@@ -457,10 +462,15 @@ export async function generateObject<SCHEMA, RESULT>({
       selectTelemetryAttributes({
         telemetry,
         attributes: {
-          'ai.finishReason': result.finishReason,
+          'ai.response.finishReason': result.finishReason,
+          'ai.response.object': { output: () => objectText },
+
           'ai.usage.promptTokens': result.usage.promptTokens,
           'ai.usage.completionTokens':
             result.usage.completionTokens,
+
+          // deprecated:
+          'ai.finishReason': result.finishReason,
           'ai.result.object': { output: () => objectText },

           // standardized gen-ai llm span attributes:
@@ -519,9 +529,16 @@ export async function generateObject<SCHEMA, RESULT>({
       selectTelemetryAttributes({
         telemetry,
         attributes: {
-          'ai.finishReason': finishReason,
+          'ai.response.finishReason': finishReason,
+          'ai.response.object': {
+            output: () => JSON.stringify(validationResult.value),
+          },
+
           'ai.usage.promptTokens': usage.promptTokens,
           'ai.usage.completionTokens': usage.completionTokens,
+
+          // deprecated:
+          'ai.finishReason': finishReason,
           'ai.result.object': {
             output: () => JSON.stringify(validationResult.value),
           },
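The `{ output: () => ... }` wrappers above suggest that output-bearing attributes are evaluated lazily and dropped when output recording is disabled. A hypothetical simplification of that behavior (the shape is inferred from the call sites in this diff, not taken from the actual `selectTelemetryAttributes` implementation):

```ts
// Hypothetical sketch: resolve telemetry attributes, computing
// `{ output: () => ... }` values only when outputs may be recorded.
type AttributeValue = string | number | { output: () => string | undefined };

function resolveAttributes(
  attributes: Record<string, AttributeValue>,
  recordOutputs: boolean, // stands in for the real telemetry settings check
): Record<string, string | number> {
  const resolved: Record<string, string | number> = {};
  for (const [key, value] of Object.entries(attributes)) {
    if (typeof value === 'object') {
      if (!recordOutputs) continue; // skip without ever computing the output
      const output = value.output();
      if (output !== undefined) resolved[key] = output;
    } else {
      resolved[key] = value;
    }
  }
  return resolved;
}
```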

packages/ai/core/generate-object/stream-object.ts
Lines changed: 13 additions & 5 deletions

@@ -766,13 +766,18 @@ class DefaultStreamObjectResult<PARTIAL, RESULT, ELEMENT_STREAM>
             selectTelemetryAttributes({
               telemetry,
               attributes: {
-                'ai.finishReason': finishReason,
-                'ai.usage.promptTokens': finalUsage.promptTokens,
-                'ai.usage.completionTokens': finalUsage.completionTokens,
-                'ai.result.object': {
+                'ai.response.finishReason': finishReason,
+                'ai.response.object': {
                   output: () => JSON.stringify(object),
                 },

+                'ai.usage.promptTokens': finalUsage.promptTokens,
+                'ai.usage.completionTokens': finalUsage.completionTokens,
+
+                // deprecated
+                'ai.finishReason': finishReason,
+                'ai.result.object': { output: () => JSON.stringify(object) },
+
                 // standardized gen-ai llm span attributes:
                 'gen_ai.usage.input_tokens': finalUsage.promptTokens,
                 'gen_ai.usage.output_tokens': finalUsage.completionTokens,
@@ -791,9 +796,12 @@ class DefaultStreamObjectResult<PARTIAL, RESULT, ELEMENT_STREAM>
               attributes: {
                 'ai.usage.promptTokens': finalUsage.promptTokens,
                 'ai.usage.completionTokens': finalUsage.completionTokens,
-                'ai.result.object': {
+                'ai.response.object': {
                   output: () => JSON.stringify(object),
                 },
+
+                // deprecated
+                'ai.result.object': { output: () => JSON.stringify(object) },
               },
             }),
           );
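The commit title also mentions an average output tokens per second metric (`ai.response.avgCompletionTokensPerSecond` in the docs above); the code computing it lives in the streamText path, which is not part of this excerpt. A plausible derivation from the documented attributes (`ai.response.msToFinish` plus the completion token usage) would be:

```ts
// Hypothetical reconstruction: average completion tokens per second,
// derived from the stream duration and completion token usage.
function avgCompletionTokensPerSecond(
  completionTokens: number,
  msToFinish: number,
): number {
  if (msToFinish <= 0) return 0; // guard against division by zero
  return (completionTokens * 1000) / msToFinish;
}

// Example: 120 completion tokens over 3000 ms -> 40 tokens/second.
```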
