Summary
After enabling `diagnostics-otel`, Anthropic and `openai-codex` metrics flow correctly, but no metrics appear for models on the `openai` provider (e.g. `openai/gpt-5.4`).
Code investigation
The `emitDiagnosticEvent` instrumentation in the source is provider-agnostic — it fires based on `providerUsed`/`modelUsed` dynamically. The `diagnostics-otel` plugin registers its listener on `globalThis.__openclawDiagnosticEventsState`.
Hypothesis: models invoked via `sessions_spawn` subagents may run in a subprocess that does not share the `globalThis` listener store, causing `emitDiagnosticEvent` calls to be dispatched to an empty listener set in the subprocess. This would explain why `openai-codex` metrics appear (from cron isolated sessions, which likely run embedded in the gateway process), but `openai` provider metrics do not (only tested via `sessions_spawn`).
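A minimal sketch of the suspected failure mode, assuming the store is keyed on `globalThis` as described above (the event/listener shapes and helper names here are hypothetical, not the real implementation):

```typescript
// Hypothetical event and listener shapes; only the globalThis key name
// (__openclawDiagnosticEventsState) comes from the bug report.
type DiagnosticEvent = { provider: string; model: string; tokens: number };
type Listener = (e: DiagnosticEvent) => void;

interface EventsState { listeners: Set<Listener> }

function getState(): EventsState {
  const g = globalThis as any;
  // Lazily create the per-process store; a spawned subprocess gets a
  // fresh globalThis, so its store starts out empty.
  g.__openclawDiagnosticEventsState ??= { listeners: new Set<Listener>() };
  return g.__openclawDiagnosticEventsState;
}

function emitDiagnosticEvent(e: DiagnosticEvent): number {
  let delivered = 0;
  for (const fn of getState().listeners) { fn(e); delivered++; }
  return delivered; // 0 when no listener was registered in THIS process
}

// In the gateway process the plugin has registered a listener, so
// events deliver; in a subprocess this registration never happened,
// and the same emit call would silently deliver to nobody.
getState().listeners.add((e) => console.log("metric", e.provider, e.tokens));
console.log(emitDiagnosticEvent({ provider: "openai", model: "gpt-5.4", tokens: 10 })); // 1
```

If this is the cause, events from `sessions_spawn` subprocesses would need to be forwarded to the gateway process (e.g. over IPC) rather than dispatched locally.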
A secondary possibility: the `usage` object from the OpenAI Chat Completions API (`prompt_tokens`, `completion_tokens`, `total_tokens`) may not be normalized before the `hasNonzeroUsage` check, causing the guard to return false and skip event emission.
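To illustrate this second hypothesis: if the guard inspects normalized field names, the raw Chat Completions shape would fail it. A sketch under that assumption (the normalized field names `input`/`output` and both function bodies are guesses, not the real code):

```typescript
// Raw usage as returned by the OpenAI Chat Completions API.
interface ChatCompletionsUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Assumed internal shape; the real field names may differ.
interface NormalizedUsage { input: number; output: number }

function normalizeUsage(u: ChatCompletionsUsage): NormalizedUsage {
  return { input: u.prompt_tokens ?? 0, output: u.completion_tokens ?? 0 };
}

// Hypothesized guard: it sees only `input`/`output`, which are
// undefined on the raw shape, so emission would be skipped.
function hasNonzeroUsage(u: Partial<NormalizedUsage>): boolean {
  return (u.input ?? 0) > 0 || (u.output ?? 0) > 0;
}

const raw = { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 };
console.log(hasNonzeroUsage(raw as any));          // false: raw shape fails the guard
console.log(hasNonzeroUsage(normalizeUsage(raw))); // true after normalization
```

If this is the cause, logging the `usage` object just before the guard on an `openai` provider call should show the snake_case fields reaching it un-normalized.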
To reproduce
- Enable `diagnostics-otel` with a working OTLP endpoint
- Run `sessions_spawn` with `model: openai/gpt-5.4` (or any `openai` provider model)
- Confirm the model responds successfully
- Query Prometheus — no `openclaw_provider="openai"` series appear
Expected
All provider backends emit `openclaw_tokens_total` and related metrics regardless of invocation path.
Environment
- `diagnostics-otel` plugin enabled, OTLP/HTTP configured
- `openai:default` auth profile configured, calls succeed
- `openai-codex` metrics appear correctly from cron runs