feat(attributes): Add core web vital value attributes #229
Merged
Conversation
cleptric approved these changes on Jan 22, 2026
Force-pushed fcc2171 to c248972
Force-pushed c248972 to 18ea0e6
lcian approved these changes on Jan 23, 2026
Force-pushed 18ea0e6 to 4dd8316
Semver Impact of This PR: 🟡 Minor (new features)

📋 Changelog Preview (how your changes will appear in the changelog):
New Features ✨: Attributes

🤖 This preview updates automatically when you update the PR.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Python lcp entry placed out of alphabetical order. Added sorting logic to the Python generator to match the JavaScript generator, ensuring attributes are alphabetically ordered by constant name.
Or push these changes by commenting:
@cursor push 57b90b025b
Preview (57b90b025b)
diff --git a/python/src/sentry_conventions/attributes.py b/python/src/sentry_conventions/attributes.py
--- a/python/src/sentry_conventions/attributes.py
+++ b/python/src/sentry_conventions/attributes.py
@@ -5,13 +5,10 @@
import warnings
from dataclasses import dataclass
from enum import Enum
-from typing import Dict, List, Literal, Optional, TypedDict, Union
+from typing import List, Union, Literal, Optional, Dict, TypedDict
-AttributeValue = Union[
- str, int, float, bool, List[str], List[int], List[float], List[bool]
-]
+AttributeValue = Union[str, int, float, bool, List[str], List[int], List[float], List[bool]]
-
class AttributeType(Enum):
STRING = "string"
BOOLEAN = "boolean"
@@ -23,84 +20,75 @@
DOUBLE_ARRAY = "double[]"
ANY = "any"
-
class IsPii(Enum):
TRUE = "true"
FALSE = "false"
MAYBE = "maybe"
-
@dataclass
class PiiInfo:
"""Holds information about PII in an attribute's values."""
-
isPii: IsPii
reason: Optional[str] = None
-
class DeprecationStatus(Enum):
BACKFILL = "backfill"
NORMALIZE = "normalize"
-
@dataclass
class DeprecationInfo:
"""Holds information about a deprecation."""
-
replacement: Optional[str] = None
reason: Optional[str] = None
status: Optional[DeprecationStatus] = None
-
@dataclass
class ChangelogEntry:
"""A changelog entry tracking a change to an attribute."""
version: str
"""The sentry-conventions release version"""
-
+
prs: Optional[List[int]] = None
"""GitHub PR numbers"""
-
+
description: Optional[str] = None
"""Optional description of what changed"""
-
@dataclass
class AttributeMetadata:
"""The metadata for an attribute."""
brief: str
"""A description of the attribute"""
-
+
type: AttributeType
"""The type of the attribute value"""
-
+
pii: PiiInfo
"""If an attribute can have pii. Is either true, false or maybe. Optionally include a reason about why it has PII or not"""
-
+
is_in_otel: bool
"""Whether the attribute is defined in OpenTelemetry Semantic Conventions"""
-
+
has_dynamic_suffix: Optional[bool] = None
"""If an attribute has a dynamic suffix, for example http.response.header.<key> where <key> is dynamic"""
-
+
example: Optional[AttributeValue] = None
"""An example value of the attribute"""
-
+
deprecation: Optional[DeprecationInfo] = None
"""If an attribute was deprecated, and what it was replaced with"""
-
+
aliases: Optional[List[str]] = None
"""If there are attributes that alias to this attribute"""
-
+
sdks: Optional[List[str]] = None
"""If an attribute is SDK specific, list the SDKs that use this attribute. This is not an exhaustive list, there might be SDKs that send this attribute that are is not documented here."""
-
+
changelog: Optional[List[ChangelogEntry]] = None
"""Changelog entries tracking how this attribute has changed across versions"""
-
class _AttributeNamesMeta(type):
_deprecated_names = {
"AI_CITATIONS",
@@ -214,7 +202,6 @@
)
return super().__getattribute__(name)
-
class ATTRIBUTE_NAMES(metaclass=_AttributeNamesMeta):
"""Contains all attribute names as class attributes with their documentation."""
@@ -230,9 +217,7 @@
"""
# Path: model/attributes/ai/ai__completion_tokens__used.json
- AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = (
- "ai.completion_tokens.used"
- )
+ AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = "ai.completion_tokens.used"
"""The number of tokens used to respond to the message.
Type: int
@@ -336,6 +321,18 @@
Example: "{\"user_id\": 123, \"session_id\": \"abc123\"}"
"""
+ # Path: model/attributes/ai/ai__model_id.json
+ AI_MODEL_ID: Literal["ai.model_id"] = "ai.model_id"
+ """The vendor-specific ID of the model used.
+
+ Type: str
+ Contains PII: maybe
+ Defined in OTEL: No
+ Aliases: gen_ai.response.model
+ DEPRECATED: Use gen_ai.response.model instead
+ Example: "gpt-4"
+ """
+
# Path: model/attributes/ai/ai__model__provider.json
AI_MODEL_PROVIDER: Literal["ai.model.provider"] = "ai.model.provider"
"""The provider of the model.
@@ -348,18 +345,6 @@
Example: "openai"
"""
- # Path: model/attributes/ai/ai__model_id.json
- AI_MODEL_ID: Literal["ai.model_id"] = "ai.model_id"
- """The vendor-specific ID of the model used.
-
- Type: str
- Contains PII: maybe
- Defined in OTEL: No
- Aliases: gen_ai.response.model
- DEPRECATED: Use gen_ai.response.model instead
- Example: "gpt-4"
- """
-
# Path: model/attributes/ai/ai__pipeline__name.json
AI_PIPELINE_NAME: Literal["ai.pipeline.name"] = "ai.pipeline.name"
"""The name of the AI pipeline.
@@ -419,6 +404,17 @@
Example: true
"""
+ # Path: model/attributes/ai/ai__responses.json
+ AI_RESPONSES: Literal["ai.responses"] = "ai.responses"
+ """The response messages sent back by the AI model.
+
+ Type: List[str]
+ Contains PII: maybe
+ Defined in OTEL: No
+ DEPRECATED: Use gen_ai.response.text instead
+ Example: ["hello","world"]
+ """
+
# Path: model/attributes/ai/ai__response_format.json
AI_RESPONSE_FORMAT: Literal["ai.response_format"] = "ai.response_format"
"""For an AI model call, the format of the response
@@ -430,17 +426,6 @@
Example: "json_object"
"""
- # Path: model/attributes/ai/ai__responses.json
- AI_RESPONSES: Literal["ai.responses"] = "ai.responses"
- """The response messages sent back by the AI model.
-
- Type: List[str]
- Contains PII: maybe
- Defined in OTEL: No
- DEPRECATED: Use gen_ai.response.text instead
- Example: ["hello","world"]
- """
-
# Path: model/attributes/ai/ai__search_queries.json
AI_SEARCH_QUERIES: Literal["ai.search_queries"] = "ai.search_queries"
"""Queries used to search for relevant context or documents.
@@ -522,6 +507,17 @@
Example: ["Hello, how are you?","What is the capital of France?"]
"""
+ # Path: model/attributes/ai/ai__tools.json
+ AI_TOOLS: Literal["ai.tools"] = "ai.tools"
+ """For an AI model call, the functions that are available
+
+ Type: List[str]
+ Contains PII: maybe
+ Defined in OTEL: No
+ DEPRECATED: Use gen_ai.request.available_tools instead
+ Example: ["function_1","function_2"]
+ """
+
# Path: model/attributes/ai/ai__tool_calls.json
AI_TOOL_CALLS: Literal["ai.tool_calls"] = "ai.tool_calls"
"""For an AI model call, the tool calls that were made.
@@ -533,17 +529,6 @@
Example: ["tool_call_1","tool_call_2"]
"""
- # Path: model/attributes/ai/ai__tools.json
- AI_TOOLS: Literal["ai.tools"] = "ai.tools"
- """For an AI model call, the functions that are available
-
- Type: List[str]
- Contains PII: maybe
- Defined in OTEL: No
- DEPRECATED: Use gen_ai.request.available_tools instead
- Example: ["function_1","function_2"]
- """
-
# Path: model/attributes/ai/ai__top_k.json
AI_TOP_K: Literal["ai.top_k"] = "ai.top_k"
"""Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
@@ -655,9 +640,7 @@
"""
# Path: model/attributes/browser/browser__script__invoker_type.json
- BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = (
- "browser.script.invoker_type"
- )
+ BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = "browser.script.invoker_type"
"""Browser script entry point type.
Type: str
@@ -667,9 +650,7 @@
"""
# Path: model/attributes/browser/browser__script__source_char_position.json
- BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal[
- "browser.script.source_char_position"
- ] = "browser.script.source_char_position"
+ BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal["browser.script.source_char_position"] = "browser.script.source_char_position"
"""A number representing the script character position of the script.
Type: int
@@ -690,9 +671,7 @@
"""
# Path: model/attributes/browser/browser__web_vital__cls__value.json
- BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = (
- "browser.web_vital.cls.value"
- )
+ BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = "browser.web_vital.cls.value"
"""The value of the recorded Cumulative Layout Shift (CLS) web vital
Type: float
@@ -703,9 +682,7 @@
"""
# Path: model/attributes/browser/browser__web_vital__inp__value.json
- BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = (
- "browser.web_vital.inp.value"
- )
+ BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = "browser.web_vital.inp.value"
"""The value of the recorded Interaction to Next Paint (INP) web vital
Type: float
@@ -716,9 +693,7 @@
"""
# Path: model/attributes/browser/browser__web_vital__lcp__value.json
- BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = (
- "browser.web_vital.lcp.value"
- )
+ BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = "browser.web_vital.lcp.value"
"""The value of the recorded Largest Contentful Paint (LCP) web vital
Type: float
@@ -820,9 +795,7 @@
"""
# Path: model/attributes/cloudflare/cloudflare__d1__rows_read.json
- CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = (
- "cloudflare.d1.rows_read"
- )
+ CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = "cloudflare.d1.rows_read"
"""The number of rows read in a Cloudflare D1 operation.
Type: int
@@ -832,9 +805,7 @@
"""
# Path: model/attributes/cloudflare/cloudflare__d1__rows_written.json
- CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = (
- "cloudflare.d1.rows_written"
- )
+ CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = "cloudflare.d1.rows_written"
"""The number of rows written in a Cloudflare D1 operation.
Type: int
@@ -855,26 +826,26 @@
Example: 0.2361
"""
- # Path: model/attributes/code/code__file__path.json
- CODE_FILE_PATH: Literal["code.file.path"] = "code.file.path"
+ # Path: model/attributes/code/code__filepath.json
+ CODE_FILEPATH: Literal["code.filepath"] = "code.filepath"
"""The source code file name that identifies the code unit as uniquely as possible (preferably an absolute file path).
Type: str
Contains PII: maybe
Defined in OTEL: Yes
- Aliases: code.filepath
+ Aliases: code.file.path
+ DEPRECATED: Use code.file.path instead
Example: "/app/myapplication/http/handler/server.py"
"""
- # Path: model/attributes/code/code__filepath.json
- CODE_FILEPATH: Literal["code.filepath"] = "code.filepath"
+ # Path: model/attributes/code/code__file__path.json
+ CODE_FILE_PATH: Literal["code.file.path"] = "code.file.path"
"""The source code file name that identifies the code unit as uniquely as possible (preferably an absolute file path).
Type: str
Contains PII: maybe
Defined in OTEL: Yes
- Aliases: code.file.path
- DEPRECATED: Use code.file.path instead
+ Aliases: code.filepath
Example: "/app/myapplication/http/handler/server.py"
"""
@@ -901,26 +872,26 @@
Example: "server_request"
"""
- # Path: model/attributes/code/code__line__number.json
- CODE_LINE_NUMBER: Literal["code.line.number"] = "code.line.number"
+ # Path: model/attributes/code/code__lineno.json
+ CODE_LINENO: Literal["code.lineno"] = "code.lineno"
"""The line number in code.filepath best representing the operation. It SHOULD point within the code unit named in code.function
Type: int
Contains PII: maybe
Defined in OTEL: Yes
- Aliases: code.lineno
+ Aliases: code.line.number
+ DEPRECATED: Use code.line.number instead
Example: 42
"""
- # Path: model/attributes/code/code__lineno.json
- CODE_LINENO: Literal["code.lineno"] = "code.lineno"
+ # Path: model/attributes/code/code__line__number.json
+ CODE_LINE_NUMBER: Literal["code.line.number"] = "code.line.number"
"""The line number in code.filepath best representing the operation. It SHOULD point within the code unit named in code.function
Type: int
Contains PII: maybe
Defined in OTEL: Yes
- Aliases: code.line.number
- DEPRECATED: Use code.line.number instead
+ Aliases: code.lineno
Example: 42
"""
@@ -956,9 +927,7 @@
"""
# Path: model/attributes/culture/culture__is_24_hour_format.json
- CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = (
- "culture.is_24_hour_format"
- )
+ CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = "culture.is_24_hour_format"
"""Whether the culture uses 24-hour time format.
Type: bool
@@ -1044,9 +1013,7 @@
"""
# Path: model/attributes/db/db__query__parameter__[key].json
- DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = (
- "db.query.parameter.<key>"
- )
+ DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = "db.query.parameter.<key>"
"""A query parameter used in db.query.text, with <key> being the parameter name, and the attribute value being a string representation of the parameter value.
Type: str
@@ -1388,9 +1355,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__cost__input_tokens.json
- GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = (
- "gen_ai.cost.input_tokens"
- )
+ GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = "gen_ai.cost.input_tokens"
"""The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).
Type: float
@@ -1400,9 +1365,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__cost__output_tokens.json
- GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = (
- "gen_ai.cost.output_tokens"
- )
+ GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = "gen_ai.cost.output_tokens"
"""The cost of tokens used for creating the AI output in USD (without reasoning tokens).
Type: float
@@ -1412,9 +1375,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__cost__total_tokens.json
- GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = (
- "gen_ai.cost.total_tokens"
- )
+ GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = "gen_ai.cost.total_tokens"
"""The total cost for the tokens used.
Type: float
@@ -1425,9 +1386,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__embeddings__input.json
- GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = (
- "gen_ai.embeddings.input"
- )
+ GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = "gen_ai.embeddings.input"
"""The input to the embeddings model.
Type: str
@@ -1511,9 +1470,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__available_tools.json
- GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = (
- "gen_ai.request.available_tools"
- )
+ GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = "gen_ai.request.available_tools"
"""The available tools for the model. It has to be a stringified version of an array of objects.
Type: str
@@ -1524,9 +1481,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__frequency_penalty.json
- GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = (
- "gen_ai.request.frequency_penalty"
- )
+ GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = "gen_ai.request.frequency_penalty"
"""Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Type: float
@@ -1537,9 +1492,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__max_tokens.json
- GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = (
- "gen_ai.request.max_tokens"
- )
+ GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = "gen_ai.request.max_tokens"
"""The maximum number of tokens to generate in the response.
Type: int
@@ -1549,9 +1502,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__messages.json
- GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = (
- "gen_ai.request.messages"
- )
+ GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = "gen_ai.request.messages"
"""The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
Type: str
@@ -1573,9 +1524,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__presence_penalty.json
- GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = (
- "gen_ai.request.presence_penalty"
- )
+ GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = "gen_ai.request.presence_penalty"
"""Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
Type: float
@@ -1597,9 +1546,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__request__temperature.json
- GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = (
- "gen_ai.request.temperature"
- )
+ GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = "gen_ai.request.temperature"
"""For an AI model call, the temperature parameter. Temperature essentially means how random the output will be.
Type: float
@@ -1632,9 +1579,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__response__finish_reasons.json
- GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = (
- "gen_ai.response.finish_reasons"
- )
+ GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = "gen_ai.response.finish_reasons"
"""The reason why the model stopped generating.
Type: str
@@ -1667,9 +1612,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__response__streaming.json
- GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = (
- "gen_ai.response.streaming"
- )
+ GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = "gen_ai.response.streaming"
"""Whether or not the AI model call's response was streamed back asynchronously
Type: bool
@@ -1691,9 +1634,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__response__time_to_first_token.json
- GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal[
- "gen_ai.response.time_to_first_token"
- ] = "gen_ai.response.time_to_first_token"
+ GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal["gen_ai.response.time_to_first_token"] = "gen_ai.response.time_to_first_token"
"""Time in seconds when the first response content chunk arrived in streaming responses.
Type: float
@@ -1703,9 +1644,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__response__tokens_per_second.json
- GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = (
- "gen_ai.response.tokens_per_second"
- )
+ GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = "gen_ai.response.tokens_per_second"
"""The total output tokens per seconds throughput
Type: float
@@ -1715,9 +1654,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__response__tool_calls.json
- GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = (
- "gen_ai.response.tool_calls"
- )
+ GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = "gen_ai.response.tool_calls"
"""The tool calls in the model's response. It has to be a stringified version of an array of objects.
Type: str
@@ -1739,34 +1676,30 @@
Example: "openai"
"""
- # Path: model/attributes/gen_ai/gen_ai__system__message.json
- GEN_AI_SYSTEM_MESSAGE: Literal["gen_ai.system.message"] = "gen_ai.system.message"
+ # Path: model/attributes/gen_ai/gen_ai__system_instructions.json
+ GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = "gen_ai.system_instructions"
"""The system instructions passed to the model.
Type: str
- Contains PII: true
- Defined in OTEL: No
- DEPRECATED: Use gen_ai.system_instructions instead
+ Contains PII: maybe
+ Defined in OTEL: Yes
+ Aliases: ai.preamble
Example: "You are a helpful assistant"
"""
- # Path: model/attributes/gen_ai/gen_ai__system_instructions.json
- GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = (
- "gen_ai.system_instructions"
- )
+ # Path: model/attributes/gen_ai/gen_ai__system__message.json
+ GEN_AI_SYSTEM_MESSAGE: Literal["gen_ai.system.message"] = "gen_ai.system.message"
"""The system instructions passed to the model.
Type: str
- Contains PII: maybe
- Defined in OTEL: Yes
- Aliases: ai.preamble
+ Contains PII: true
+ Defined in OTEL: No
+ DEPRECATED: Use gen_ai.system_instructions instead
Example: "You are a helpful assistant"
"""
# Path: model/attributes/gen_ai/gen_ai__tool__call__arguments.json
- GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = (
- "gen_ai.tool.call.arguments"
- )
+ GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = "gen_ai.tool.call.arguments"
"""The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
Type: str
@@ -1777,9 +1710,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__tool__call__result.json
- GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = (
- "gen_ai.tool.call.result"
- )
+ GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = "gen_ai.tool.call.result"
"""The result of the tool call. It has to be a stringified version of the result of the tool.
Type: str
@@ -1790,9 +1721,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__tool__definitions.json
- GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = (
- "gen_ai.tool.definitions"
- )
+ GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = "gen_ai.tool.definitions"
"""The list of source system tool definitions available to the GenAI agent or model.
Type: str
@@ -1802,9 +1731,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__tool__description.json
- GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = (
- "gen_ai.tool.description"
- )
+ GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = "gen_ai.tool.description"
"""The description of the tool being used.
Type: str
@@ -1871,9 +1798,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__usage__completion_tokens.json
- GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = (
- "gen_ai.usage.completion_tokens"
- )
+ GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = "gen_ai.usage.completion_tokens"
"""The number of tokens used in the GenAI response (completion).
Type: int
@@ -1885,9 +1810,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__usage__input_tokens.json
- GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = (
- "gen_ai.usage.input_tokens"
- )
+ GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = "gen_ai.usage.input_tokens"
"""The number of tokens used to process the AI input (prompt) including cached input tokens.
Type: int
@@ -1897,34 +1820,28 @@
Example: 10
"""
- # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cache_write.json
- GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal[
- "gen_ai.usage.input_tokens.cache_write"
- ] = "gen_ai.usage.input_tokens.cache_write"
- """The number of tokens written to the cache when processing the AI input (prompt).
+ # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cached.json
+ GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = "gen_ai.usage.input_tokens.cached"
+ """The number of cached tokens used to process the AI input (prompt).
Type: int
Contains PII: maybe
Defined in OTEL: No
- Example: 100
+ Example: 50
"""
- # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cached.json
- GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = (
- "gen_ai.usage.input_tokens.cached"
- )
- """The number of cached tokens used to process the AI input (prompt).
+ # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cache_write.json
+ GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal["gen_ai.usage.input_tokens.cache_write"] = "gen_ai.usage.input_tokens.cache_write"
+ """The number of tokens written to the cache when processing the AI input (prompt).
Type: int
Contains PII: maybe
Defined in OTEL: No
- Example: 50
+ Example: 100
"""
# Path: model/attributes/gen_ai/gen_ai__usage__output_tokens.json
- GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = (
- "gen_ai.usage.output_tokens"
- )
+ GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = "gen_ai.usage.output_tokens"
"""The number of tokens used for creating the AI output (including reasoning tokens).
Type: int
@@ -1935,9 +1852,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__usage__output_tokens__reasoning.json
- GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal[
- "gen_ai.usage.output_tokens.reasoning"
- ] = "gen_ai.usage.output_tokens.reasoning"
+ GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal["gen_ai.usage.output_tokens.reasoning"] = "gen_ai.usage.output_tokens.reasoning"
"""The number of tokens used for reasoning to create the AI output.
Type: int
@@ -1947,9 +1862,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__usage__prompt_tokens.json
- GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = (
- "gen_ai.usage.prompt_tokens"
- )
+ GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = "gen_ai.usage.prompt_tokens"
"""The number of tokens used in the GenAI input (prompt).
Type: int
@@ -1961,9 +1874,7 @@
"""
# Path: model/attributes/gen_ai/gen_ai__usage__total_tokens.json
- GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = (
- "gen_ai.usage.total_tokens"
- )
+ GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = "gen_ai.usage.total_tokens"
"""The total number of tokens used to process the prompt. (input tokens plus output todkens)
Type: int
@@ -2006,9 +1917,7 @@
"""
# Path: model/attributes/http/http__decoded_response_content_length.json
- HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal[
- "http.decoded_response_content_length"
- ] = "http.decoded_response_content_length"
+ HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal["http.decoded_response_content_length"] = "http.decoded_response_content_length"
"""The decoded body size of the response (in bytes).
Type: int
@@ -2073,34 +1982,28 @@
Example: "?foo=bar&bar=baz"
"""
- # Path: model/attributes/http/http__request__connect_start.json
- HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = (
- "http.request.connect_start"
- )
- """The UNIX timestamp representing the time immediately before the user agent starts establishing the connection to the server to retrieve the resource.
+ # Path: model/attributes/http/http__request__connection_end.json
+ HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = "http.request.connection_end"
+ """The UNIX timestamp representing the time immediately after the browser finishes establishing the connection to the server to retrieve the resource. The timestamp value includes the time interval to establish the transport connection, as well as other time intervals such as TLS handshake and SOCKS authentication.
Type: float
Contains PII: maybe
Defined in OTEL: No
- Example: 1732829555.111
+ Example: 1732829555.15
"""
- # Path: model/attributes/http/http__request__connection_end.json
- HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = (
- "http.request.connection_end"
- )
- """The UNIX timestamp representing the time immediately after the browser finishes establishing the connection to the server to retrieve the resource. The timestamp value includes the time interval to establish the transport connection, as well as other time intervals such as TLS handshake and SOCKS authentication.
+ # Path: model/attributes/http/http__request__connect_start.json
+ HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = "http.request.connect_start"
+ """The UNIX timestamp representing the time immediately before the user agent starts establishing the connection to the server to retrieve the resource.
Type: float
Contains PII: maybe
Defined in OTEL: No
- Example: 1732829555.15
+ Example: 1732829555.111
"""
... diff truncated: showing 800 of 8889 lines
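The Bugbot fix sorts attribute entries alphabetically by the generated constant name rather than by the raw attribute key, which is why, for example, `ai.model_id` (constant `AI_MODEL_ID`) moves before `ai.model.provider` (constant `AI_MODEL_PROVIDER`) in the diff above. A minimal sketch of that ordering, assuming a simple key-to-constant-name derivation (the real generator's helper names are not shown in this PR):

```python
# Sketch of the generator's sort order: alphabetical by Python constant name.
# constant_name() is an illustrative assumption, not the generator's actual helper.
def constant_name(attribute_key: str) -> str:
    """Derive a constant name from an attribute key, e.g. 'ai.model_id' -> 'AI_MODEL_ID'."""
    return attribute_key.replace(".", "_").upper()

keys = ["ai.model.provider", "ai.model_id", "ai.responses", "ai.response_format"]

# Sorting by constant name (not by raw key) reproduces the reordering in the diff:
# AI_MODEL_ID sorts before AI_MODEL_PROVIDER, and AI_RESPONSES before AI_RESPONSE_FORMAT.
ordered = sorted(keys, key=constant_name)
```

Note that sorting by the raw dotted key would give a different order (`ai.model.provider` before `ai.model_id`), so the choice of sort key matters for matching the JavaScript generator.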

This PR adds attributes for Core Web Vitals measurements:

- browser.web_vital.cls.value
- browser.web_vital.lcp.value
- browser.web_vital.inp.value

These new attributes are inspired by OTel, which uses browser.web_vital.(lcp|inp|fid|cls) span event names for recording web vitals. I still marked "is_in_otel": false because in OTel they are span events, not attributes, but I'm open to changing this if reviewers prefer true here.

For backward compatibility, the original shorthand attributes (cls, lcp, inp) are also added but marked as deprecated with references to their new replacements.
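The new constants are usable as attribute keys directly, since each is a string `Literal`. A small self-contained sketch, with the constant names and values copied from the diff above and the measurement values purely illustrative:

```python
from typing import Literal

# Constants as generated in attributes.py (names/values taken from the diff above).
BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = "browser.web_vital.cls.value"
BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = "browser.web_vital.inp.value"
BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = "browser.web_vital.lcp.value"

# Hypothetical span-attribute payload keyed by the new constants; per the
# metadata in the diff, all three values are floats. The numbers here are
# example measurements, not part of the convention.
web_vital_attributes = {
    BROWSER_WEB_VITAL_LCP_VALUE: 2480.0,
    BROWSER_WEB_VITAL_INP_VALUE: 120.0,
    BROWSER_WEB_VITAL_CLS_VALUE: 0.05,
}
```

Because the constants are typed as `Literal[...]`, type checkers can verify that only the exact convention strings are used as keys.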