
feat(attributes): Add core web vital value attributes #229

Merged
Lms24 merged 9 commits into main from lms/feat-web-vital-attributes on Mar 9, 2026

Conversation

@Lms24 (Member)

@Lms24 Lms24 commented Jan 22, 2026

This PR adds attributes for Core Web Vitals measurements:

  • browser.web_vital.cls.value
  • browser.web_vital.lcp.value
  • browser.web_vital.inp.value

These new attributes are inspired by OTel, which uses browser.web_vital.(lcp|inp|fid|cls) span event names for recording web vitals. I still marked "is_in_otel": false because in OTel they are span events rather than attributes, but I'm open to changing this to true if reviewers prefer.

For backward compatibility, the original shorthand attributes (cls, lcp, inp) are also added but marked as deprecated with references to their new replacements.
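The backward-compatibility story above can be sketched in code: a deprecated shorthand key maps to its namespaced replacement. The attribute names below are the ones added in this PR; the `normalize_web_vitals` helper itself is illustrative and not part of sentry-conventions.

```python
# Deprecated shorthand -> replacement attribute, per this PR.
DEPRECATED_WEB_VITALS = {
    "cls": "browser.web_vital.cls.value",
    "lcp": "browser.web_vital.lcp.value",
    "inp": "browser.web_vital.inp.value",
}


def normalize_web_vitals(attributes: dict) -> dict:
    """Rewrite deprecated shorthand keys to their namespaced replacements,
    leaving all other keys untouched."""
    return {DEPRECATED_WEB_VITALS.get(key, key): value for key, value in attributes.items()}
```

For example, `normalize_web_vitals({"cls": 0.05})` yields `{"browser.web_vital.cls.value": 0.05}`, while already-namespaced keys pass through unchanged.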


@Lms24 Lms24 force-pushed the lms/feat-web-vital-attributes branch from fcc2171 to c248972 on January 23, 2026 at 09:58
@Lms24 Lms24 changed the title from "feat(attributes): Add first batch of web vital attributes" to "feat(attributes): Add core web vital value attributes" on Jan 23, 2026
@Lms24 Lms24 marked this pull request as ready for review January 23, 2026 10:01
@Lms24 Lms24 requested a review from lcian as a code owner January 23, 2026 10:01
@Lms24 Lms24 force-pushed the lms/feat-web-vital-attributes branch 2 times, most recently from c248972 to 18ea0e6, on January 23, 2026 at 10:03
@lcian lcian added Feature and removed Feature labels Jan 23, 2026
@Lms24 Lms24 force-pushed the lms/feat-web-vital-attributes branch from 18ea0e6 to 4dd8316 on March 6, 2026 at 12:25
@Lms24 Lms24 requested review from a team and mjq as code owners March 6, 2026 12:25
github-actions bot commented Mar 6, 2026

Semver Impact of This PR

🟡 Minor (new features)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Attributes

  • Add core web vital value attributes by Lms24 in #229
  • Add allow_any_value field to attribute schema by vgrozdanic in #272

Other

  • (http) Add http.server.request.time_in_queue attribute by dingsdax in #267
  • (resource) Add resource.deployment.environment by mjq in #266
  • Add sentry.timestamp.sequence attribute to the spec by logaretm in #262
  • Add changelog tracking to attribute definitions by ericapisani in #270

Bug Fixes 🐛

  • (attributes) Remove allow_any_value boolean attribute and allow any as type by vgrozdanic in #273
  • (gen_ai) Input and output token description by obostjancic in #261
  • Don't run changelog generation on yarn generate by Lms24 in #277
  • Avoid changelog generation recursion by Lms24 in #274

Documentation 📚

  • Update README with up-to-date links by ericapisani in #258

Internal Changes 🔧

Deps

  • Bump dompurify from 3.3.1 to 3.3.2 by dependabot in #278
  • Bump svgo from 3.3.2 to 3.3.3 by dependabot in #275
  • Bump svelte from 5.51.5 to 5.53.5 by dependabot in #271
  • Bump rollup from 4.40.1 to 4.59.0 by dependabot in #269
  • Bump svelte from 5.48.1 to 5.51.5 by dependabot in #260

Deps Dev

  • Bump tar from 7.5.8 to 7.5.10 by dependabot in #276
  • Bump tar from 7.5.7 to 7.5.8 by dependabot in #259

Other

  • (ai) Deprecate rest of ai.* attributes by constantinius in #264
  • (gen_ai) Deprecate gen_ai.tool.input, gen_ai.tool.message, gen_ai.tool.output by constantinius in #265

🤖 This preview updates automatically when you update the PR.

@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Python lcp entry placed out of alphabetical order
    • Added sorting logic to Python generator to match JavaScript generator, ensuring attributes are alphabetically ordered by constant name.
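The autofix above sorts by constant name rather than by raw attribute key, which matters because `.` and `_` sort differently: by raw key, `ai.model.provider` precedes `ai.model_id`, but their constant names `AI_MODEL_PROVIDER` and `AI_MODEL_ID` sort the other way. A minimal sketch of that ordering rule (the helper names are assumptions, not the generator's actual code):

```python
def constant_name(attribute_key: str) -> str:
    """Derive a Python constant name from an attribute key,
    e.g. 'ai.model_id' -> 'AI_MODEL_ID'."""
    return attribute_key.replace(".", "_").replace("<", "").replace(">", "").upper()


def sort_attributes(keys: list[str]) -> list[str]:
    """Order attribute keys by their generated constant name, matching
    the JavaScript generator's output order."""
    return sorted(keys, key=constant_name)
```

Sorting `["ai.model.provider", "ai.model_id"]` this way puts `ai.model_id` first, which is exactly the reordering visible in the diff below.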

Create PR

Or push these changes by commenting:

@cursor push 57b90b025b
Preview (57b90b025b)
diff --git a/python/src/sentry_conventions/attributes.py b/python/src/sentry_conventions/attributes.py
--- a/python/src/sentry_conventions/attributes.py
+++ b/python/src/sentry_conventions/attributes.py
@@ -5,13 +5,10 @@
 import warnings
 from dataclasses import dataclass
 from enum import Enum
-from typing import Dict, List, Literal, Optional, TypedDict, Union
+from typing import List, Union, Literal, Optional, Dict, TypedDict
 
-AttributeValue = Union[
-    str, int, float, bool, List[str], List[int], List[float], List[bool]
-]
+AttributeValue = Union[str, int, float, bool, List[str], List[int], List[float], List[bool]]
 
-
 class AttributeType(Enum):
     STRING = "string"
     BOOLEAN = "boolean"
@@ -23,84 +20,75 @@
     DOUBLE_ARRAY = "double[]"
     ANY = "any"
 
-
 class IsPii(Enum):
     TRUE = "true"
     FALSE = "false"
     MAYBE = "maybe"
 
-
 @dataclass
 class PiiInfo:
     """Holds information about PII in an attribute's values."""
-
     isPii: IsPii
     reason: Optional[str] = None
 
-
 class DeprecationStatus(Enum):
     BACKFILL = "backfill"
     NORMALIZE = "normalize"
 
-
 @dataclass
 class DeprecationInfo:
     """Holds information about a deprecation."""
-
     replacement: Optional[str] = None
     reason: Optional[str] = None
     status: Optional[DeprecationStatus] = None
 
-
 @dataclass
 class ChangelogEntry:
     """A changelog entry tracking a change to an attribute."""
 
     version: str
     """The sentry-conventions release version"""
-
+    
     prs: Optional[List[int]] = None
     """GitHub PR numbers"""
-
+    
     description: Optional[str] = None
     """Optional description of what changed"""
 
-
 @dataclass
 class AttributeMetadata:
     """The metadata for an attribute."""
 
     brief: str
     """A description of the attribute"""
-
+    
     type: AttributeType
     """The type of the attribute value"""
-
+    
     pii: PiiInfo
     """If an attribute can have pii. Is either true, false or maybe. Optionally include a reason about why it has PII or not"""
-
+    
     is_in_otel: bool
     """Whether the attribute is defined in OpenTelemetry Semantic Conventions"""
-
+    
     has_dynamic_suffix: Optional[bool] = None
     """If an attribute has a dynamic suffix, for example http.response.header.<key> where <key> is dynamic"""
-
+    
     example: Optional[AttributeValue] = None
     """An example value of the attribute"""
-
+    
     deprecation: Optional[DeprecationInfo] = None
     """If an attribute was deprecated, and what it was replaced with"""
-
+    
     aliases: Optional[List[str]] = None
     """If there are attributes that alias to this attribute"""
-
+    
     sdks: Optional[List[str]] = None
     """If an attribute is SDK specific, list the SDKs that use this attribute. This is not an exhaustive list, there might be SDKs that send this attribute that are is not documented here."""
-
+    
     changelog: Optional[List[ChangelogEntry]] = None
     """Changelog entries tracking how this attribute has changed across versions"""
 
-
 class _AttributeNamesMeta(type):
     _deprecated_names = {
         "AI_CITATIONS",
@@ -214,7 +202,6 @@
             )
         return super().__getattribute__(name)
 
-
 class ATTRIBUTE_NAMES(metaclass=_AttributeNamesMeta):
     """Contains all attribute names as class attributes with their documentation."""
 
@@ -230,9 +217,7 @@
     """
 
     # Path: model/attributes/ai/ai__completion_tokens__used.json
-    AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = (
-        "ai.completion_tokens.used"
-    )
+    AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = "ai.completion_tokens.used"
     """The number of tokens used to respond to the message.
 
     Type: int
@@ -336,6 +321,18 @@
     Example: "{\"user_id\": 123, \"session_id\": \"abc123\"}"
     """
 
+    # Path: model/attributes/ai/ai__model_id.json
+    AI_MODEL_ID: Literal["ai.model_id"] = "ai.model_id"
+    """The vendor-specific ID of the model used.
+
+    Type: str
+    Contains PII: maybe
+    Defined in OTEL: No
+    Aliases: gen_ai.response.model
+    DEPRECATED: Use gen_ai.response.model instead
+    Example: "gpt-4"
+    """
+
     # Path: model/attributes/ai/ai__model__provider.json
     AI_MODEL_PROVIDER: Literal["ai.model.provider"] = "ai.model.provider"
     """The provider of the model.
@@ -348,18 +345,6 @@
     Example: "openai"
     """
 
-    # Path: model/attributes/ai/ai__model_id.json
-    AI_MODEL_ID: Literal["ai.model_id"] = "ai.model_id"
-    """The vendor-specific ID of the model used.
-
-    Type: str
-    Contains PII: maybe
-    Defined in OTEL: No
-    Aliases: gen_ai.response.model
-    DEPRECATED: Use gen_ai.response.model instead
-    Example: "gpt-4"
-    """
-
     # Path: model/attributes/ai/ai__pipeline__name.json
     AI_PIPELINE_NAME: Literal["ai.pipeline.name"] = "ai.pipeline.name"
     """The name of the AI pipeline.
@@ -419,6 +404,17 @@
     Example: true
     """
 
+    # Path: model/attributes/ai/ai__responses.json
+    AI_RESPONSES: Literal["ai.responses"] = "ai.responses"
+    """The response messages sent back by the AI model.
+
+    Type: List[str]
+    Contains PII: maybe
+    Defined in OTEL: No
+    DEPRECATED: Use gen_ai.response.text instead
+    Example: ["hello","world"]
+    """
+
     # Path: model/attributes/ai/ai__response_format.json
     AI_RESPONSE_FORMAT: Literal["ai.response_format"] = "ai.response_format"
     """For an AI model call, the format of the response
@@ -430,17 +426,6 @@
     Example: "json_object"
     """
 
-    # Path: model/attributes/ai/ai__responses.json
-    AI_RESPONSES: Literal["ai.responses"] = "ai.responses"
-    """The response messages sent back by the AI model.
-
-    Type: List[str]
-    Contains PII: maybe
-    Defined in OTEL: No
-    DEPRECATED: Use gen_ai.response.text instead
-    Example: ["hello","world"]
-    """
-
     # Path: model/attributes/ai/ai__search_queries.json
     AI_SEARCH_QUERIES: Literal["ai.search_queries"] = "ai.search_queries"
     """Queries used to search for relevant context or documents.
@@ -522,6 +507,17 @@
     Example: ["Hello, how are you?","What is the capital of France?"]
     """
 
+    # Path: model/attributes/ai/ai__tools.json
+    AI_TOOLS: Literal["ai.tools"] = "ai.tools"
+    """For an AI model call, the functions that are available
+
+    Type: List[str]
+    Contains PII: maybe
+    Defined in OTEL: No
+    DEPRECATED: Use gen_ai.request.available_tools instead
+    Example: ["function_1","function_2"]
+    """
+
     # Path: model/attributes/ai/ai__tool_calls.json
     AI_TOOL_CALLS: Literal["ai.tool_calls"] = "ai.tool_calls"
     """For an AI model call, the tool calls that were made.
@@ -533,17 +529,6 @@
     Example: ["tool_call_1","tool_call_2"]
     """
 
-    # Path: model/attributes/ai/ai__tools.json
-    AI_TOOLS: Literal["ai.tools"] = "ai.tools"
-    """For an AI model call, the functions that are available
-
-    Type: List[str]
-    Contains PII: maybe
-    Defined in OTEL: No
-    DEPRECATED: Use gen_ai.request.available_tools instead
-    Example: ["function_1","function_2"]
-    """
-
     # Path: model/attributes/ai/ai__top_k.json
     AI_TOP_K: Literal["ai.top_k"] = "ai.top_k"
     """Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
@@ -655,9 +640,7 @@
     """
 
     # Path: model/attributes/browser/browser__script__invoker_type.json
-    BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = (
-        "browser.script.invoker_type"
-    )
+    BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = "browser.script.invoker_type"
     """Browser script entry point type.
 
     Type: str
@@ -667,9 +650,7 @@
     """
 
     # Path: model/attributes/browser/browser__script__source_char_position.json
-    BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal[
-        "browser.script.source_char_position"
-    ] = "browser.script.source_char_position"
+    BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal["browser.script.source_char_position"] = "browser.script.source_char_position"
     """A number representing the script character position of the script.
 
     Type: int
@@ -690,9 +671,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__cls__value.json
-    BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = (
-        "browser.web_vital.cls.value"
-    )
+    BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = "browser.web_vital.cls.value"
     """The value of the recorded Cumulative Layout Shift (CLS) web vital
 
     Type: float
@@ -703,9 +682,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__inp__value.json
-    BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = (
-        "browser.web_vital.inp.value"
-    )
+    BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = "browser.web_vital.inp.value"
     """The value of the recorded Interaction to Next Paint (INP) web vital
 
     Type: float
@@ -716,9 +693,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__value.json
-    BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = (
-        "browser.web_vital.lcp.value"
-    )
+    BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = "browser.web_vital.lcp.value"
     """The value of the recorded Largest Contentful Paint (LCP) web vital
 
     Type: float
@@ -820,9 +795,7 @@
     """
 
     # Path: model/attributes/cloudflare/cloudflare__d1__rows_read.json
-    CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = (
-        "cloudflare.d1.rows_read"
-    )
+    CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = "cloudflare.d1.rows_read"
     """The number of rows read in a Cloudflare D1 operation.
 
     Type: int
@@ -832,9 +805,7 @@
     """
 
     # Path: model/attributes/cloudflare/cloudflare__d1__rows_written.json
-    CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = (
-        "cloudflare.d1.rows_written"
-    )
+    CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = "cloudflare.d1.rows_written"
     """The number of rows written in a Cloudflare D1 operation.
 
     Type: int
@@ -855,26 +826,26 @@
     Example: 0.2361
     """
 
-    # Path: model/attributes/code/code__file__path.json
-    CODE_FILE_PATH: Literal["code.file.path"] = "code.file.path"
+    # Path: model/attributes/code/code__filepath.json
+    CODE_FILEPATH: Literal["code.filepath"] = "code.filepath"
     """The source code file name that identifies the code unit as uniquely as possible (preferably an absolute file path).
 
     Type: str
     Contains PII: maybe
     Defined in OTEL: Yes
-    Aliases: code.filepath
+    Aliases: code.file.path
+    DEPRECATED: Use code.file.path instead
     Example: "/app/myapplication/http/handler/server.py"
     """
 
-    # Path: model/attributes/code/code__filepath.json
-    CODE_FILEPATH: Literal["code.filepath"] = "code.filepath"
+    # Path: model/attributes/code/code__file__path.json
+    CODE_FILE_PATH: Literal["code.file.path"] = "code.file.path"
     """The source code file name that identifies the code unit as uniquely as possible (preferably an absolute file path).
 
     Type: str
     Contains PII: maybe
     Defined in OTEL: Yes
-    Aliases: code.file.path
-    DEPRECATED: Use code.file.path instead
+    Aliases: code.filepath
     Example: "/app/myapplication/http/handler/server.py"
     """
 
@@ -901,26 +872,26 @@
     Example: "server_request"
     """
 
-    # Path: model/attributes/code/code__line__number.json
-    CODE_LINE_NUMBER: Literal["code.line.number"] = "code.line.number"
+    # Path: model/attributes/code/code__lineno.json
+    CODE_LINENO: Literal["code.lineno"] = "code.lineno"
     """The line number in code.filepath best representing the operation. It SHOULD point within the code unit named in code.function
 
     Type: int
     Contains PII: maybe
     Defined in OTEL: Yes
-    Aliases: code.lineno
+    Aliases: code.line.number
+    DEPRECATED: Use code.line.number instead
     Example: 42
     """
 
-    # Path: model/attributes/code/code__lineno.json
-    CODE_LINENO: Literal["code.lineno"] = "code.lineno"
+    # Path: model/attributes/code/code__line__number.json
+    CODE_LINE_NUMBER: Literal["code.line.number"] = "code.line.number"
     """The line number in code.filepath best representing the operation. It SHOULD point within the code unit named in code.function
 
     Type: int
     Contains PII: maybe
     Defined in OTEL: Yes
-    Aliases: code.line.number
-    DEPRECATED: Use code.line.number instead
+    Aliases: code.lineno
     Example: 42
     """
 
@@ -956,9 +927,7 @@
     """
 
     # Path: model/attributes/culture/culture__is_24_hour_format.json
-    CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = (
-        "culture.is_24_hour_format"
-    )
+    CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = "culture.is_24_hour_format"
     """Whether the culture uses 24-hour time format.
 
     Type: bool
@@ -1044,9 +1013,7 @@
     """
 
     # Path: model/attributes/db/db__query__parameter__[key].json
-    DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = (
-        "db.query.parameter.<key>"
-    )
+    DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = "db.query.parameter.<key>"
     """A query parameter used in db.query.text, with <key> being the parameter name, and the attribute value being a string representation of the parameter value.
 
     Type: str
@@ -1388,9 +1355,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__input_tokens.json
-    GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = (
-        "gen_ai.cost.input_tokens"
-    )
+    GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = "gen_ai.cost.input_tokens"
     """The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).
 
     Type: float
@@ -1400,9 +1365,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__output_tokens.json
-    GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = (
-        "gen_ai.cost.output_tokens"
-    )
+    GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = "gen_ai.cost.output_tokens"
     """The cost of tokens used for creating the AI output in USD (without reasoning tokens).
 
     Type: float
@@ -1412,9 +1375,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__total_tokens.json
-    GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = (
-        "gen_ai.cost.total_tokens"
-    )
+    GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = "gen_ai.cost.total_tokens"
     """The total cost for the tokens used.
 
     Type: float
@@ -1425,9 +1386,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__embeddings__input.json
-    GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = (
-        "gen_ai.embeddings.input"
-    )
+    GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = "gen_ai.embeddings.input"
     """The input to the embeddings model.
 
     Type: str
@@ -1511,9 +1470,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__available_tools.json
-    GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = (
-        "gen_ai.request.available_tools"
-    )
+    GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = "gen_ai.request.available_tools"
     """The available tools for the model. It has to be a stringified version of an array of objects.
 
     Type: str
@@ -1524,9 +1481,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__frequency_penalty.json
-    GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = (
-        "gen_ai.request.frequency_penalty"
-    )
+    GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = "gen_ai.request.frequency_penalty"
     """Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
 
     Type: float
@@ -1537,9 +1492,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__max_tokens.json
-    GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = (
-        "gen_ai.request.max_tokens"
-    )
+    GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = "gen_ai.request.max_tokens"
     """The maximum number of tokens to generate in the response.
 
     Type: int
@@ -1549,9 +1502,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__messages.json
-    GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = (
-        "gen_ai.request.messages"
-    )
+    GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = "gen_ai.request.messages"
     """The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
 
     Type: str
@@ -1573,9 +1524,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__presence_penalty.json
-    GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = (
-        "gen_ai.request.presence_penalty"
-    )
+    GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = "gen_ai.request.presence_penalty"
     """Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
 
     Type: float
@@ -1597,9 +1546,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__temperature.json
-    GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = (
-        "gen_ai.request.temperature"
-    )
+    GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = "gen_ai.request.temperature"
     """For an AI model call, the temperature parameter. Temperature essentially means how random the output will be.
 
     Type: float
@@ -1632,9 +1579,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__finish_reasons.json
-    GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = (
-        "gen_ai.response.finish_reasons"
-    )
+    GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = "gen_ai.response.finish_reasons"
     """The reason why the model stopped generating.
 
     Type: str
@@ -1667,9 +1612,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__streaming.json
-    GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = (
-        "gen_ai.response.streaming"
-    )
+    GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = "gen_ai.response.streaming"
     """Whether or not the AI model call's response was streamed back asynchronously
 
     Type: bool
@@ -1691,9 +1634,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__time_to_first_token.json
-    GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal[
-        "gen_ai.response.time_to_first_token"
-    ] = "gen_ai.response.time_to_first_token"
+    GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal["gen_ai.response.time_to_first_token"] = "gen_ai.response.time_to_first_token"
     """Time in seconds when the first response content chunk arrived in streaming responses.
 
     Type: float
@@ -1703,9 +1644,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__tokens_per_second.json
-    GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = (
-        "gen_ai.response.tokens_per_second"
-    )
+    GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = "gen_ai.response.tokens_per_second"
     """The total output tokens per seconds throughput
 
     Type: float
@@ -1715,9 +1654,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__tool_calls.json
-    GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = (
-        "gen_ai.response.tool_calls"
-    )
+    GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = "gen_ai.response.tool_calls"
     """The tool calls in the model's response. It has to be a stringified version of an array of objects.
 
     Type: str
@@ -1739,34 +1676,30 @@
     Example: "openai"
     """
 
-    # Path: model/attributes/gen_ai/gen_ai__system__message.json
-    GEN_AI_SYSTEM_MESSAGE: Literal["gen_ai.system.message"] = "gen_ai.system.message"
+    # Path: model/attributes/gen_ai/gen_ai__system_instructions.json
+    GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = "gen_ai.system_instructions"
     """The system instructions passed to the model.
 
     Type: str
-    Contains PII: true
-    Defined in OTEL: No
-    DEPRECATED: Use gen_ai.system_instructions instead
+    Contains PII: maybe
+    Defined in OTEL: Yes
+    Aliases: ai.preamble
     Example: "You are a helpful assistant"
     """
 
-    # Path: model/attributes/gen_ai/gen_ai__system_instructions.json
-    GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = (
-        "gen_ai.system_instructions"
-    )
+    # Path: model/attributes/gen_ai/gen_ai__system__message.json
+    GEN_AI_SYSTEM_MESSAGE: Literal["gen_ai.system.message"] = "gen_ai.system.message"
     """The system instructions passed to the model.
 
     Type: str
-    Contains PII: maybe
-    Defined in OTEL: Yes
-    Aliases: ai.preamble
+    Contains PII: true
+    Defined in OTEL: No
+    DEPRECATED: Use gen_ai.system_instructions instead
     Example: "You are a helpful assistant"
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__call__arguments.json
-    GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = (
-        "gen_ai.tool.call.arguments"
-    )
+    GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = "gen_ai.tool.call.arguments"
     """The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
 
     Type: str
@@ -1777,9 +1710,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__call__result.json
-    GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = (
-        "gen_ai.tool.call.result"
-    )
+    GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = "gen_ai.tool.call.result"
     """The result of the tool call. It has to be a stringified version of the result of the tool.
 
     Type: str
@@ -1790,9 +1721,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__definitions.json
-    GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = (
-        "gen_ai.tool.definitions"
-    )
+    GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = "gen_ai.tool.definitions"
     """The list of source system tool definitions available to the GenAI agent or model.
 
     Type: str
@@ -1802,9 +1731,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__description.json
-    GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = (
-        "gen_ai.tool.description"
-    )
+    GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = "gen_ai.tool.description"
     """The description of the tool being used.
 
     Type: str
@@ -1871,9 +1798,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__completion_tokens.json
-    GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = (
-        "gen_ai.usage.completion_tokens"
-    )
+    GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = "gen_ai.usage.completion_tokens"
     """The number of tokens used in the GenAI response (completion).
 
     Type: int
@@ -1885,9 +1810,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens.json
-    GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = (
-        "gen_ai.usage.input_tokens"
-    )
+    GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = "gen_ai.usage.input_tokens"
     """The number of tokens used to process the AI input (prompt) including cached input tokens.
 
     Type: int
@@ -1897,34 +1820,28 @@
     Example: 10
     """
 
-    # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cache_write.json
-    GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal[
-        "gen_ai.usage.input_tokens.cache_write"
-    ] = "gen_ai.usage.input_tokens.cache_write"
-    """The number of tokens written to the cache when processing the AI input (prompt).
+    # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cached.json
+    GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = "gen_ai.usage.input_tokens.cached"
+    """The number of cached tokens used to process the AI input (prompt).
 
     Type: int
     Contains PII: maybe
     Defined in OTEL: No
-    Example: 100
+    Example: 50
     """
 
-    # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cached.json
-    GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = (
-        "gen_ai.usage.input_tokens.cached"
-    )
-    """The number of cached tokens used to process the AI input (prompt).
+    # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cache_write.json
+    GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal["gen_ai.usage.input_tokens.cache_write"] = "gen_ai.usage.input_tokens.cache_write"
+    """The number of tokens written to the cache when processing the AI input (prompt).
 
     Type: int
     Contains PII: maybe
     Defined in OTEL: No
-    Example: 50
+    Example: 100
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__output_tokens.json
-    GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = (
-        "gen_ai.usage.output_tokens"
-    )
+    GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = "gen_ai.usage.output_tokens"
     """The number of tokens used for creating the AI output (including reasoning tokens).
 
     Type: int
@@ -1935,9 +1852,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__output_tokens__reasoning.json
-    GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal[
-        "gen_ai.usage.output_tokens.reasoning"
-    ] = "gen_ai.usage.output_tokens.reasoning"
+    GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal["gen_ai.usage.output_tokens.reasoning"] = "gen_ai.usage.output_tokens.reasoning"
     """The number of tokens used for reasoning to create the AI output.
 
     Type: int
@@ -1947,9 +1862,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__prompt_tokens.json
-    GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = (
-        "gen_ai.usage.prompt_tokens"
-    )
+    GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = "gen_ai.usage.prompt_tokens"
     """The number of tokens used in the GenAI input (prompt).
 
     Type: int
@@ -1961,9 +1874,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__total_tokens.json
-    GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = (
-        "gen_ai.usage.total_tokens"
-    )
+    GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = "gen_ai.usage.total_tokens"
     """The total number of tokens used to process the prompt. (input tokens plus output todkens)
 
     Type: int
@@ -2006,9 +1917,7 @@
     """
 
     # Path: model/attributes/http/http__decoded_response_content_length.json
-    HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal[
-        "http.decoded_response_content_length"
-    ] = "http.decoded_response_content_length"
+    HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal["http.decoded_response_content_length"] = "http.decoded_response_content_length"
     """The decoded body size of the response (in bytes).
 
     Type: int
@@ -2073,34 +1982,28 @@
     Example: "?foo=bar&bar=baz"
     """
 
-    # Path: model/attributes/http/http__request__connect_start.json
-    HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = (
-        "http.request.connect_start"
-    )
-    """The UNIX timestamp representing the time immediately before the user agent starts establishing the connection to the server to retrieve the resource.
+    # Path: model/attributes/http/http__request__connection_end.json
+    HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = "http.request.connection_end"
+    """The UNIX timestamp representing the time immediately after the browser finishes establishing the connection to the server to retrieve the resource. The timestamp value includes the time interval to establish the transport connection, as well as other time intervals such as TLS handshake and SOCKS authentication.
 
     Type: float
     Contains PII: maybe
     Defined in OTEL: No
-    Example: 1732829555.111
+    Example: 1732829555.15
     """
 
-    # Path: model/attributes/http/http__request__connection_end.json
-    HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = (
-        "http.request.connection_end"
-    )
-    """The UNIX timestamp representing the time immediately after the browser finishes establishing the connection to the server to retrieve the resource. The timestamp value includes the time interval to establish the transport connection, as well as other time intervals such as TLS handshake and SOCKS authentication.
+    # Path: model/attributes/http/http__request__connect_start.json
+    HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = "http.request.connect_start"
+    """The UNIX timestamp representing the time immediately before the user agent starts establishing the connection to the server to retrieve the resource.
 
     Type: float
     Contains PII: maybe
     Defined in OTEL: No
-    Example: 1732829555.15
+    Example: 1732829555.111
     """
... diff truncated: showing 800 of 8889 lines

@Lms24 Lms24 merged commit fe22d96 into main Mar 9, 2026
12 checks passed