
Commit 4f1530f

feat (ai/core): add OpenTelemetry Semantic Conventions for GenAI operations to v1.27.0 of standard (#2866)
1 parent 4c9ee89 commit 4f1530f

14 files changed: +219 −67 lines changed

.changeset/chilly-jars-help.md

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+feat (ai/core): add OpenTelemetry Semantic Conventions for GenAI operations to v1.27.0 of standard

content/docs/03-ai-sdk-core/60-telemetry.mdx

Lines changed: 16 additions & 25 deletions

@@ -79,12 +79,6 @@ const result = await generateText({
 - `ai.result.text`: the text that was generated
 - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
 - `ai.finishReason`: the reason why the generation finished
-- standardized [gen_ai attributes](https://opentelemetry.io/docs/specs/semconv/gen-ai/llm-spans/)
-- `gen_ai.request.model`: the model that was used
-- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
-- `gen_ai.system`: the provider that was used
-- `gen_ai.usage.completion_tokens`: the number of completion tokens that were used
-- `gen_ai.usage.prompt_tokens`: the number of prompt tokens that were used
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

 ### streamText function
@@ -102,6 +96,7 @@ const result = await generateText({
 - `ai.streamText.doStream`: a provider doStream call.
 This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans.
 It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
+
 - `operation.name`: `ai.streamText.doStream` and the functionId that was set through `telemetry.functionId`
 - `ai.operationId`: `"ai.streamText.doStream"`
 - `ai.prompt.format`: the format of the prompt
@@ -110,12 +105,7 @@ const result = await generateText({
 - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
 - `ai.finishReason`: the reason why the generation finished
 - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
-- standardized [gen_ai attributes](https://opentelemetry.io/docs/specs/semconv/gen-ai/llm-spans/)
-- `gen_ai.request.model`: the model that was used
-- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
-- `gen_ai.system`: the provider that was used
-- `gen_ai.usage.completion_tokens`: the number of completion tokens that were used
-- `gen_ai.usage.prompt_tokens`: the number of prompt tokens that were used
+
 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
 - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.
@@ -146,12 +136,6 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.result.object`: the object that was generated (stringified JSON)
 - `ai.settings.mode`: the object generation mode
 - `ai.finishReason`: the reason why the generation finished
-- standardized [gen_ai attributes](https://opentelemetry.io/docs/specs/semconv/gen-ai/llm-spans/)
-- `gen_ai.request.model`: the model that was used
-- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
-- `gen_ai.system`: the provider that was used
-- `gen_ai.usage.completion_tokens`: the number of completion tokens that were used
-- `gen_ai.usage.prompt_tokens`: the number of prompt tokens that were used

 ### streamObject function

@@ -179,12 +163,6 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 - `ai.settings.mode`: the object generation mode
 - `ai.finishReason`: the reason why the generation finished
 - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
-- standardized [gen_ai attributes](https://opentelemetry.io/docs/specs/semconv/gen-ai/llm-spans/)
-- `gen_ai.request.model`: the model that was used
-- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
-- `gen_ai.system`: the provider that was used
-- `gen_ai.usage.completion_tokens`: the number of completion tokens that were used
-- `gen_ai.usage.prompt_tokens`: the number of prompt tokens that were used
 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
 - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk

@@ -229,6 +207,7 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
 Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`,
 `ai.generateObject`, `ai.generateObject.doGenerate`, `ai.streamObject`, `ai.streamObject.doStream`) contain the following attributes:

+- `resource.name`: the functionId that was set through `telemetry.functionId`
 - `ai.model.id`: the id of the model
 - `ai.model.provider`: the provider of the model
 - `ai.request.headers.*`: the request headers that were passed in through `headers`
@@ -237,7 +216,19 @@ Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.s
 - `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
 - `ai.usage.completionTokens`: the number of completion tokens that were used
 - `ai.usage.promptTokens`: the number of prompt tokens that were used
-- `resource.name`: the functionId that was set through `telemetry.functionId`
+- [Semantic Conventions for GenAI operations](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/)
+- `gen_ai.system`: the provider that was used
+- `gen_ai.request.model`: the model that was requested
+- `gen_ai.request.temperature`: the temperature that was set
+- `gen_ai.request.max_tokens`: the maximum number of tokens that were set
+- `gen_ai.request.frequency_penalty`: the frequency penalty that was set
+- `gen_ai.request.presence_penalty`: the presence penalty that was set
+- `gen_ai.request.top_k`: the topK parameter value that was set
+- `gen_ai.request.top_p`: the topP parameter value that was set
+- `gen_ai.request.stop_sequences`: the stop sequences
+- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
+- `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
+- `gen_ai.usage.output_tokens`: the number of completion tokens that were used

 ### Basic embedding span information
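The attributes documented above only appear on spans when telemetry is enabled for a call. As a point of reference, here is a minimal sketch of a call whose settings would surface as the new `gen_ai.request.*` attributes; it assumes the `experimental_telemetry` option described elsewhere in this telemetry doc and an arbitrary OpenAI model id, so treat the exact values as placeholders rather than part of this commit:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Sketch only: the settings below map onto gen_ai.request.temperature,
// gen_ai.request.top_p, gen_ai.request.frequency_penalty, and
// gen_ai.request.presence_penalty on the doGenerate span.
const result = await generateText({
  model: openai('gpt-4o'), // arbitrary model id for illustration
  prompt: 'Write a haiku about telemetry.',
  temperature: 0.5,
  topP: 0.2,
  frequencyPenalty: 0.3,
  presencePenalty: 0.4,
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'haiku-example', // recorded as resource.name and appended to operation.name
    metadata: { example: 'value' }, // recorded as ai.telemetry.metadata.*
  },
});
```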

packages/ai/core/generate-object/__snapshots__/generate-object.test.ts.snap

Lines changed: 34 additions & 4 deletions

@@ -72,8 +72,8 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 20,
-"gen_ai.usage.prompt_tokens": 10,
+"gen_ai.usage.input_tokens": 10,
+"gen_ai.usage.output_tokens": 20,
 "operation.name": "ai.generateObject.doGenerate",
 },
 "events": [],
@@ -97,8 +97,13 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
 "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
 "ai.schema.description": "test description",
 "ai.schema.name": "test-name",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "json",
 "ai.settings.output": "object",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
@@ -121,13 +126,23 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
 "ai.request.headers.header1": "value1",
 "ai.request.headers.header2": "value2",
 "ai.result.object": "{ "content": "Hello, world!" }",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "json",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
 "ai.usage.completionTokens": 20,
 "ai.usage.promptTokens": 10,
+"gen_ai.request.frequency_penalty": 0.3,
 "gen_ai.request.model": "mock-model-id",
+"gen_ai.request.presence_penalty": 0.4,
+"gen_ai.request.temperature": 0.5,
+"gen_ai.request.top_k": 0.1,
+"gen_ai.request.top_p": 0.2,
 "gen_ai.response.finish_reasons": [
 "stop",
 ],
@@ -158,8 +173,13 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
 "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
 "ai.schema.description": "test description",
 "ai.schema.name": "test-name",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "tool",
 "ai.settings.output": "object",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
@@ -182,19 +202,29 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
 "ai.request.headers.header1": "value1",
 "ai.request.headers.header2": "value2",
 "ai.result.object": "{ "content": "Hello, world!" }",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "tool",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
 "ai.usage.completionTokens": 20,
 "ai.usage.promptTokens": 10,
+"gen_ai.request.frequency_penalty": 0.3,
 "gen_ai.request.model": "mock-model-id",
+"gen_ai.request.presence_penalty": 0.4,
+"gen_ai.request.temperature": 0.5,
+"gen_ai.request.top_k": 0.1,
+"gen_ai.request.top_p": 0.2,
 "gen_ai.response.finish_reasons": [
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 20,
-"gen_ai.usage.prompt_tokens": 10,
+"gen_ai.usage.input_tokens": 10,
+"gen_ai.usage.output_tokens": 20,
 "operation.name": "ai.generateObject.doGenerate test-function-id",
 "resource.name": "test-function-id",
 },

packages/ai/core/generate-object/__snapshots__/stream-object.test.ts.snap

Lines changed: 38 additions & 8 deletions

@@ -33,8 +33,8 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 10,
-"gen_ai.usage.prompt_tokens": 3,
+"gen_ai.usage.input_tokens": 3,
+"gen_ai.usage.output_tokens": 10,
 "operation.name": "ai.streamObject.doStream",
 },
 "events": [
@@ -81,8 +81,8 @@ exports[`telemetry > should not record telemetry inputs / outputs when disabled
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 10,
-"gen_ai.usage.prompt_tokens": 3,
+"gen_ai.usage.input_tokens": 3,
+"gen_ai.usage.output_tokens": 10,
 "operation.name": "ai.streamObject.doStream",
 },
 "events": [
@@ -112,8 +112,13 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
 "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
 "ai.schema.description": "test description",
 "ai.schema.name": "test-name",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "json",
 "ai.settings.output": "object",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
@@ -136,20 +141,30 @@ exports[`telemetry > should record telemetry data when enabled with mode "json"
 "ai.request.headers.header1": "value1",
 "ai.request.headers.header2": "value2",
 "ai.result.object": "{"content":"Hello, world!"}",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "json",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.stream.msToFirstChunk": 0,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
 "ai.usage.completionTokens": 10,
 "ai.usage.promptTokens": 3,
+"gen_ai.request.frequency_penalty": 0.3,
 "gen_ai.request.model": "mock-model-id",
+"gen_ai.request.presence_penalty": 0.4,
+"gen_ai.request.temperature": 0.5,
+"gen_ai.request.top_k": 0.1,
+"gen_ai.request.top_p": 0.2,
 "gen_ai.response.finish_reasons": [
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 10,
-"gen_ai.usage.prompt_tokens": 3,
+"gen_ai.usage.input_tokens": 3,
+"gen_ai.usage.output_tokens": 10,
 "operation.name": "ai.streamObject.doStream test-function-id",
 "resource.name": "test-function-id",
 },
@@ -180,8 +195,13 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
 "ai.schema": "{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}",
 "ai.schema.description": "test description",
 "ai.schema.name": "test-name",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "tool",
 "ai.settings.output": "object",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
@@ -204,20 +224,30 @@ exports[`telemetry > should record telemetry data when enabled with mode "tool"
 "ai.request.headers.header1": "value1",
 "ai.request.headers.header2": "value2",
 "ai.result.object": "{"content":"Hello, world!"}",
+"ai.settings.frequencyPenalty": 0.3,
 "ai.settings.mode": "tool",
+"ai.settings.presencePenalty": 0.4,
+"ai.settings.temperature": 0.5,
+"ai.settings.topK": 0.1,
+"ai.settings.topP": 0.2,
 "ai.stream.msToFirstChunk": 0,
 "ai.telemetry.functionId": "test-function-id",
 "ai.telemetry.metadata.test1": "value1",
 "ai.telemetry.metadata.test2": false,
 "ai.usage.completionTokens": 10,
 "ai.usage.promptTokens": 3,
+"gen_ai.request.frequency_penalty": 0.3,
 "gen_ai.request.model": "mock-model-id",
+"gen_ai.request.presence_penalty": 0.4,
+"gen_ai.request.temperature": 0.5,
+"gen_ai.request.top_k": 0.1,
+"gen_ai.request.top_p": 0.2,
 "gen_ai.response.finish_reasons": [
 "stop",
 ],
 "gen_ai.system": "mock-provider",
-"gen_ai.usage.completion_tokens": 10,
-"gen_ai.usage.prompt_tokens": 3,
+"gen_ai.usage.input_tokens": 3,
+"gen_ai.usage.output_tokens": 10,
 "operation.name": "ai.streamObject.doStream test-function-id",
 "resource.name": "test-function-id",
 },

packages/ai/core/generate-object/generate-object.test.ts

Lines changed: 10 additions & 0 deletions

@@ -584,6 +584,11 @@ describe('telemetry', () => {
 schemaDescription: 'test description',
 mode: 'json',
 prompt: 'prompt',
+topK: 0.1,
+topP: 0.2,
+frequencyPenalty: 0.3,
+presencePenalty: 0.4,
+temperature: 0.5,
 headers: {
 header1: 'value1',
 header2: 'value2',
@@ -621,6 +626,11 @@ describe('telemetry', () => {
 schemaDescription: 'test description',
 mode: 'tool',
 prompt: 'prompt',
+topK: 0.1,
+topP: 0.2,
+frequencyPenalty: 0.3,
+presencePenalty: 0.4,
+temperature: 0.5,
 headers: {
 header1: 'value1',
 header2: 'value2',

packages/ai/core/generate-object/generate-object.ts

Lines changed: 10 additions & 4 deletions

@@ -317,10 +317,13 @@ export async function generateObject<SCHEMA, RESULT>({
 'ai.settings.mode': mode,

 // standardized gen-ai llm span attributes:
-'gen_ai.request.model': model.modelId,
 'gen_ai.system': model.provider,
+'gen_ai.request.model': model.modelId,
+'gen_ai.request.frequency_penalty': settings.frequencyPenalty,
 'gen_ai.request.max_tokens': settings.maxTokens,
+'gen_ai.request.presence_penalty': settings.presencePenalty,
 'gen_ai.request.temperature': settings.temperature,
+'gen_ai.request.top_k': settings.topK,
 'gen_ai.request.top_p': settings.topP,
 },
 }),
@@ -413,10 +416,13 @@ export async function generateObject<SCHEMA, RESULT>({
 'ai.settings.mode': mode,

 // standardized gen-ai llm span attributes:
-'gen_ai.request.model': model.modelId,
 'gen_ai.system': model.provider,
+'gen_ai.request.model': model.modelId,
+'gen_ai.request.frequency_penalty': settings.frequencyPenalty,
 'gen_ai.request.max_tokens': settings.maxTokens,
+'gen_ai.request.presence_penalty': settings.presencePenalty,
 'gen_ai.request.temperature': settings.temperature,
+'gen_ai.request.top_k': settings.topK,
 'gen_ai.request.top_p': settings.topP,
 },
 }),
@@ -459,8 +465,8 @@ export async function generateObject<SCHEMA, RESULT>({

 // standardized gen-ai llm span attributes:
 'gen_ai.response.finish_reasons': [result.finishReason],
-'gen_ai.usage.prompt_tokens': result.usage.promptTokens,
-'gen_ai.usage.completion_tokens':
+'gen_ai.usage.input_tokens': result.usage.promptTokens,
+'gen_ai.usage.output_tokens':
 result.usage.completionTokens,
 },
 }),
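
For readers less familiar with the convention, the renaming above (`gen_ai.usage.prompt_tokens`/`completion_tokens` to `gen_ai.usage.input_tokens`/`output_tokens`) and the new `gen_ai.request.*` attributes follow v1.27.0 of the OpenTelemetry GenAI semantic conventions. A minimal sketch of the resulting attribute shape, written against the plain `@opentelemetry/api` surface rather than the SDK's internal span helpers, with placeholder values:

```ts
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('ai');

// Placeholder values; in the SDK these come from the call settings and the provider result.
tracer.startActiveSpan('ai.generateObject.doGenerate', span => {
  span.setAttributes({
    'gen_ai.system': 'mock-provider',
    'gen_ai.request.model': 'mock-model-id',
    'gen_ai.request.max_tokens': 1024,
    'gen_ai.request.temperature': 0.5,
    'gen_ai.request.top_p': 0.2,
    'gen_ai.request.top_k': 0.1,
    'gen_ai.request.frequency_penalty': 0.3,
    'gen_ai.request.presence_penalty': 0.4,
  });

  // ... the provider call would happen here ...

  span.setAttributes({
    'gen_ai.response.finish_reasons': ['stop'],
    // renamed in this commit:
    'gen_ai.usage.input_tokens': 10,
    'gen_ai.usage.output_tokens': 20,
  });
  span.end();
});
```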

packages/ai/core/generate-object/stream-object.test.ts

Lines changed: 10 additions & 0 deletions

@@ -1394,6 +1394,11 @@ describe('telemetry', () => {
 schemaDescription: 'test description',
 mode: 'json',
 prompt: 'prompt',
+topK: 0.1,
+topP: 0.2,
+frequencyPenalty: 0.3,
+presencePenalty: 0.4,
+temperature: 0.5,
 headers: {
 header1: 'value1',
 header2: 'value2',
@@ -1476,6 +1481,11 @@ describe('telemetry', () => {
 schemaDescription: 'test description',
 mode: 'tool',
 prompt: 'prompt',
+topK: 0.1,
+topP: 0.2,
+frequencyPenalty: 0.3,
+presencePenalty: 0.4,
+temperature: 0.5,
 headers: {
 header1: 'value1',
 header2: 'value2',
