## Summary
A recent OpenClaw version appears to spend significant time before streaming begins, even when using a minimal tool profile. The delay is reproducible outside of Discord/message ingress and appears to occur inside OpenClaw's agent/runtime preparation path.
The main contributors observed were:
- eager construction/discovery for built-in media/web/capability tools
- repeated provider/plugin hook resolution during stream setup
The affected paths appear to be stock OpenClaw runtime/tooling paths, not custom user tools or unusual third-party scripts.
## Environment
- OS: macOS
- OpenClaw affected version: 2026.4.29
- Node: 22.22.2
- Model/provider: `openai-codex/gpt-5.5`
- Tool profile: `minimal`
- Gateway mode: LaunchAgent/Gateway
- Reproduced outside Discord ingress/typing path
A downgrade to OpenClaw 2026.4.23 appeared materially faster in limited testing, but this should be treated as supporting evidence rather than a fully controlled regression bisect.
## Observed behavior
On OpenClaw 2026.4.29, there was a consistent delay before streaming/first useful output. Typical embedded prep time was roughly 10-11 seconds before stream-ready.
Representative timing breakdown:
- `core-plugin-tools`: ~4.6-4.8s
- `bundle-tools`: ~0.5-0.6s
- `system-prompt`: ~2.0-2.7s
- `session-resource-loader`: ~0.5-0.8s
- `stream-setup`: ~2.36-2.47s
Additional startup-before-stream time was also sometimes observed, commonly around 9-13.5s, involving runtime plugins, model resolution, auth, and attempt dispatch.
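The per-phase numbers above can be collected with a simple wrapper around each setup phase. This is a hypothetical sketch, not an OpenClaw API; `timePhase` and the phase label are illustrative:

```typescript
// Hypothetical timing helper (not an OpenClaw API): wraps an async setup
// phase and records its duration under a label like "core-plugin-tools".
const phaseTimings = new Map<string, number>();

async function timePhase<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    phaseTimings.set(label, performance.now() - start);
  }
}

// Stand-in for an expensive phase; the real report measured seconds here.
await timePhase("core-plugin-tools", async () => {
  await new Promise((resolve) => setTimeout(resolve, 50));
});

for (const [label, ms] of phaseTimings) {
  console.log(`${label}: ${ms.toFixed(0)}ms`);
}
```

Wrapping each phase this way makes it easy to confirm which stage dominates before blaming the model or the ingress path.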
## Troubleshooting performed
- Confirmed the delay occurs before streaming and is not caused by Discord ingress, typing indicators, or model first-token latency.
- Confirmed `tools.profile: minimal` was already in use.
- Tested alternate tool profiles; minimal was best, but still showed the delay.
- Tested OpenAI-only plugin configuration; this did not materially improve timing.
- Removing plugin entries made tool timing worse, so that was not a valid workaround.
- Direct `openai/...` model IDs were not actionable in this environment without an `OPENAI_API_KEY`; the active path used OpenAI Codex OAuth.
- Node version alone was not proven as the cause.
- Source inspection showed the slow paths are part of stock OpenClaw built-in tool/runtime setup.
## Detailed findings
### 1. Eager built-in tool/provider construction
Most of core-plugin-tools time came from eager construction/discovery for built-in capability/media/web tools, including:
- image analysis
- image generation
- video generation
- music generation
- PDF
- web search
An experimental skip of capability-backed/media/web constructors reduced core-plugin-tools from roughly ~4.8s to ~158ms.
That is not a recommended fix as-is, because it removes or undermines useful tools, but it strongly suggests the cost is in eager provider/tool discovery rather than actual tool invocation.
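A lazy-construction pattern along these lines would keep the tool inventory listed while deferring discovery to first invocation. Everything here (`makeLazyTool`, `expensiveProviderDiscovery`) is an illustrative assumption, not OpenClaw code:

```typescript
// Hypothetical sketch of lazy tool construction; names are illustrative,
// not OpenClaw internals.
type LazyTool = { name: string; invoke: (input: string) => Promise<string> };

let discoveryRuns = 0; // counts how often the expensive path actually runs

async function expensiveProviderDiscovery(
  name: string,
): Promise<(s: string) => string> {
  discoveryRuns++; // stand-in for capability checks / provider probing
  return (s) => `${name}:${s}`;
}

function makeLazyTool(name: string): LazyTool {
  let impl: Promise<(s: string) => string> | undefined;
  return {
    name,
    async invoke(input) {
      impl ??= expensiveProviderDiscovery(name); // first invocation only
      return (await impl)(input);
    },
  };
}

// Building the inventory lists every tool but triggers no discovery.
const tools = ["image-analysis", "web-search"].map(makeLazyTool);
```

With this shape, the ~4.8s of eager discovery would only be paid when (and if) a media/web tool is actually invoked, which matches the ~158ms result from the experimental skip.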
### 2. Repeated provider hook/plugin resolution during stream setup
Stream setup repeatedly spent time in provider/plugin hook paths such as:
- `registerProviderStreamForModel`
- `resolveProviderTextTransforms`
- `runtimePlan.resolveExtraParams`
- `applyExtraParamsToAgent`
A cache probe reduced stream setup from roughly ~2.4s to ~3ms, suggesting repeated uncached provider-hook/plugin resolution is a major contributor.
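The cache probe behaves like a per-model memoization of the hook lookup. A minimal sketch of that idea, assuming the resolution result is stable for a given model (the function bodies here are stand-ins, not OpenClaw's resolution code):

```typescript
// Hypothetical memoization sketch; hook names mirror those observed
// above, but this is not OpenClaw's actual resolution path.
let resolutions = 0;

function resolveProviderHooks(model: string): string[] {
  resolutions++; // stand-in for walking the plugin registry
  return [
    `registerProviderStreamForModel:${model}`,
    `resolveProviderTextTransforms:${model}`,
  ];
}

const hookCache = new Map<string, string[]>();

function resolveProviderHooksCached(model: string): string[] {
  let hooks = hookCache.get(model);
  if (hooks === undefined) {
    hooks = resolveProviderHooks(model);
    hookCache.set(model, hooks);
  }
  return hooks;
}
```

The ~2.4s → ~3ms improvement from the probe is consistent with stream setup re-resolving these hooks on every run rather than reusing a prior result.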
## Expected behavior
Tool definitions should remain available, but provider discovery and expensive capability checks should ideally be lazy, cached, or reused across runs where safe.
In particular:
- media/web/capability provider discovery should not add multiple seconds before every stream if the tools are not invoked
- provider hook/plugin lookup used during stream setup should be memoized or reused where possible
- `tools.profile: minimal` should avoid unnecessary expensive setup work where practical
## Actual behavior
Even with `tools.profile: minimal`, OpenClaw 2026.4.29 spent several seconds in stock tool/provider setup before streaming began.
## Possible fix direction
Potential areas to investigate:
- lazy-load or cache provider discovery for built-in media/web/capability tools
- avoid constructing expensive provider-backed tool inventories until the tool is invoked or explicitly listed
- memoize provider-hook/plugin lookup used by stream setup
- reuse provider stream/text transform/extra-param resolution where inputs have not changed
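For the last point, one safe way to reuse resolution "where inputs have not changed" is to key the cache on the inputs that actually feed the resolution. This is a hypothetical sketch; `resolveExtraParamsCached` and its inputs are assumptions for illustration:

```typescript
// Hypothetical sketch of input-keyed reuse: recompute extra-param
// resolution only when the inputs that feed it actually change.
import { createHash } from "node:crypto";

let computes = 0;
const paramCache = new Map<string, Record<string, unknown>>();

function cacheKey(model: string, pluginConfig: Record<string, unknown>): string {
  return createHash("sha256")
    .update(model)
    .update(JSON.stringify(pluginConfig))
    .digest("hex");
}

function resolveExtraParamsCached(
  model: string,
  pluginConfig: Record<string, unknown>,
): Record<string, unknown> {
  const key = cacheKey(model, pluginConfig);
  let result = paramCache.get(key);
  if (result === undefined) {
    computes++; // stand-in for the expensive resolution
    result = { model, ...pluginConfig };
    paramCache.set(key, result);
  }
  return result;
}
```

Keying on a digest of the inputs avoids the correctness risk of a blanket cache: any change to the model or plugin config naturally produces a fresh resolution.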
## Related issues
Possibly related at a high level, but not a duplicate:
I did not find an existing open issue that specifically documents eager media/web provider discovery plus uncached provider hook resolution causing pre-stream delay.