@@ -66,19 +66,19 @@ const result = await generateText({
   - `operation.name`: `ai.generateText` and the functionId that was set through `telemetry.functionId`
   - `ai.operationId`: `"ai.generateText"`
   - `ai.prompt`: the prompt that was used when calling `generateText`
-  - `ai.result.text`: the text that was generated
-  - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-  - `ai.finishReason`: the reason why the generation finished
+  - `ai.response.text`: the text that was generated
+  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+  - `ai.response.finishReason`: the reason why the generation finished
   - `ai.settings.maxToolRoundtrips`: the maximum number of tool roundtrips that were set
 - `ai.generateText.doGenerate`: a provider doGenerate call. It can contain `ai.toolCall` spans.
   It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
   - `operation.name`: `ai.generateText.doGenerate` and the functionId that was set through `telemetry.functionId`
   - `ai.operationId`: `"ai.generateText.doGenerate"`
   - `ai.prompt.format`: the format of the prompt
   - `ai.prompt.messages`: the messages that were passed into the provider
-  - `ai.result.text`: the text that was generated
-  - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-  - `ai.finishReason`: the reason why the generation finished
+  - `ai.response.text`: the text that was generated
+  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+  - `ai.response.finishReason`: the reason why the generation finished
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

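For context, enabling telemetry on a `generateText` call might look like the following sketch (the provider, model id, and `functionId` are illustrative placeholders, not part of this change). A call like this emits the `ai.generateText` and `ai.generateText.doGenerate` spans with the renamed `ai.response.*` attributes listed above.

```ts
import { openai } from '@ai-sdk/openai'; // assumed provider, for illustration
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'What is love?',
  experimental_telemetry: {
    isEnabled: true,
    // surfaces in `operation.name` alongside `ai.generateText`
    functionId: 'my-awesome-function',
  },
});

console.log(result.text); // recorded as `ai.response.text` on the span
```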
 ### streamText function
@@ -90,9 +90,9 @@ const result = await generateText({
   - `operation.name`: `ai.streamText` and the functionId that was set through `telemetry.functionId`
   - `ai.operationId`: `"ai.streamText"`
   - `ai.prompt`: the prompt that was used when calling `streamText`
-  - `ai.result.text`: the text that was generated
-  - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-  - `ai.finishReason`: the reason why the generation finished
+  - `ai.response.text`: the text that was generated
+  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+  - `ai.response.finishReason`: the reason why the generation finished
 - `ai.streamText.doStream`: a provider doStream call.
   This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans.
   It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
@@ -101,13 +101,16 @@ const result = await generateText({
   - `ai.operationId`: `"ai.streamText.doStream"`
   - `ai.prompt.format`: the format of the prompt
   - `ai.prompt.messages`: the messages that were passed into the provider
-  - `ai.result.text`: the text that was generated
-  - `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
-  - `ai.finishReason`: the reason why the generation finished
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.text`: the text that was generated
+  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk in milliseconds
+  - `ai.response.msToFinish`: the time it took to receive the finish part of the LLM stream in milliseconds
+  - `ai.response.avgCompletionTokensPerSecond`: the average number of completion tokens per second
+  - `ai.response.finishReason`: the reason why the generation finished

 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
+- `ai.stream.finish` (event): an event that is emitted when the finish part of the LLM stream is received.
 - `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

 It also records a `ai.stream.firstChunk` event when the first chunk of the stream is received.
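A comparable sketch for `streamText` (the same placeholder caveats apply). The new timing attributes are recorded on the `ai.streamText.doStream` span as the stream is consumed:

```ts
import { openai } from '@ai-sdk/openai'; // assumed provider, for illustration
import { streamText } from 'ai';

const result = await streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Write a haiku about tracing.',
  experimental_telemetry: { isEnabled: true, functionId: 'stream-demo' },
});

// `ai.response.msToFirstChunk` is set when the first chunk arrives;
// `ai.response.msToFinish` and `ai.response.avgCompletionTokensPerSecond`
// are set once the finish part of the stream is received.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```

Going by the attribute descriptions, `ai.response.avgCompletionTokensPerSecond` should work out to the completion token count divided by `ai.response.msToFinish / 1000`.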
@@ -124,7 +127,7 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
   - `ai.schema`: Stringified JSON schema version of the schema that was passed into the `generateObject` function
   - `ai.schema.name`: the name of the schema that was passed into the `generateObject` function
   - `ai.schema.description`: the description of the schema that was passed into the `generateObject` function
-  - `ai.result.object`: the object that was generated (stringified JSON)
+  - `ai.response.object`: the object that was generated (stringified JSON)
   - `ai.settings.mode`: the object generation mode, e.g. `json`
   - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
 - `ai.generateObject.doGenerate`: a provider doGenerate call.
@@ -133,9 +136,9 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
   - `ai.operationId`: `"ai.generateObject.doGenerate"`
   - `ai.prompt.format`: the format of the prompt
   - `ai.prompt.messages`: the messages that were passed into the provider
-  - `ai.result.object`: the object that was generated (stringified JSON)
+  - `ai.response.object`: the object that was generated (stringified JSON)
   - `ai.settings.mode`: the object generation mode
-  - `ai.finishReason`: the reason why the generation finished
+  - `ai.response.finishReason`: the reason why the generation finished

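For `generateObject`, a minimal sketch along the same lines (schema, names, and model are illustrative; the assumption here is that the `schemaName` and `schemaDescription` options are what surface as `ai.schema.name` and `ai.schema.description`):

```ts
import { openai } from '@ai-sdk/openai'; // assumed provider, for illustration
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4o'), // placeholder model id
  schema: z.object({ title: z.string(), tags: z.array(z.string()) }),
  schemaName: 'Article', // assumed to surface as `ai.schema.name`
  schemaDescription: 'A title with tags.', // assumed to surface as `ai.schema.description`
  experimental_telemetry: { isEnabled: true, functionId: 'object-demo' },
});

console.log(result.object); // recorded as `ai.response.object` (stringified JSON)
```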
 ### streamObject function

@@ -149,7 +152,7 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
   - `ai.schema`: Stringified JSON schema version of the schema that was passed into the `streamObject` function
   - `ai.schema.name`: the name of the schema that was passed into the `streamObject` function
   - `ai.schema.description`: the description of the schema that was passed into the `streamObject` function
-  - `ai.result.object`: the object that was generated (stringified JSON)
+  - `ai.response.object`: the object that was generated (stringified JSON)
   - `ai.settings.mode`: the object generation mode, e.g. `json`
   - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
 - `ai.streamObject.doStream`: a provider doStream call.
@@ -159,12 +162,12 @@ It also records a `ai.stream.firstChunk` event when the first chunk of the strea
   - `ai.operationId`: `"ai.streamObject.doStream"`
   - `ai.prompt.format`: the format of the prompt
   - `ai.prompt.messages`: the messages that were passed into the provider
-  - `ai.result.object`: the object that was generated (stringified JSON)
   - `ai.settings.mode`: the object generation mode
-  - `ai.finishReason`: the reason why the generation finished
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.object`: the object that was generated (stringified JSON)
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.finishReason`: the reason why the generation finished
 - `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
-  - `ai.stream.msToFirstChunk`: the time it took to receive the first chunk
+  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk

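And a matching sketch for `streamObject` (same placeholders). As with `streamText`, `ai.response.msToFirstChunk` is recorded on the `ai.streamObject.doStream` span once the first chunk arrives:

```ts
import { openai } from '@ai-sdk/openai'; // assumed provider, for illustration
import { streamObject } from 'ai';
import { z } from 'zod';

const result = await streamObject({
  model: openai('gpt-4o'), // placeholder model id
  schema: z.object({ steps: z.array(z.string()) }),
  experimental_telemetry: { isEnabled: true, functionId: 'stream-object-demo' },
});

// Consuming the partial-object stream drives the doStream span.
for await (const partialObject of result.partialObjectStream) {
  console.log(partialObject);
}
```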
 ### embed function
