Bug Description
Long assistant responses in the webchat control-UI are silently truncated at approximately 4000 characters. The content streams correctly during generation, but the final rendered message is cut off. There is no error, no indication of truncation, and no configuration option to change this limit.
Environment
- OpenClaw version: 2026.3.13
- Platform: macOS (arm64), Node v25.8.1
- Channel: webchat (control-UI)
- Model: anthropic/claude-opus-4-6 (maxTokens: 32000)
- Config: blockStreamingDefault: "on"
Steps to Reproduce
- Configure a model with maxTokens: 32000 (a minimal config sketch follows these steps)
- Use a system prompt that instructs the model to generate a long response (~3000-5000 words)
- Observe in webchat control-UI: the response streams fully, then the final rendered message is truncated at ~4000 characters
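For reference, a minimal sketch of the configuration used to reproduce this. Only maxTokens, blockStreamingDefault, and the model id come from the environment above; the file layout and key nesting are assumptions, not OpenClaw's actual schema.

```jsonc
// Hypothetical config sketch: the nesting below is an assumption, not OpenClaw's schema.
{
  "model": {
    "id": "anthropic/claude-opus-4-6",
    "maxTokens": 32000
  },
  "blockStreamingDefault": "on"
  // No textChunkLimit is set; as shown below, setting one would not affect webchat anyway.
}
```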
Expected Behavior
The full response should render in webchat, or if a limit is necessary, it should be configurable via textChunkLimit (like other channels) or a webchat-specific setting.
Actual Behavior
The response is silently truncated at ~4000 characters. The user sees the full content streaming, then it disappears and only ~4000 chars remain.
Root Cause Investigation
I traced through the OpenClaw source code and found the following:
1. DEFAULT_CHUNK_LIMIT is hardcoded at 4000 characters
In the compiled dist files (e.g. reply-*.js):
```js
const DEFAULT_CHUNK_LIMIT = 4e3;
```
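To make the user-visible effect concrete, here is a tiny, purely illustrative snippet; it is not the OpenClaw chunking code, just a demonstration of what a hard 4000-character cap does to a long reply.

```js
// Purely illustrative: demonstrates the effect of a hard 4000-character cap,
// not OpenClaw's actual chunking/truncation implementation.
const DEFAULT_CHUNK_LIMIT = 4e3;
const reply = "word ".repeat(4000); // roughly 20,000 characters of generated text
const rendered = reply.slice(0, DEFAULT_CHUNK_LIMIT);
console.log(rendered.length); // 4000: the remaining ~16,000 characters are silently dropped
```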
2. Webchat bypasses provider-specific textChunkLimit config
The resolveTextChunkLimit function explicitly skips webchat:
```js
if (!provider || provider === "webchat") return DEFAULT_CHUNK_LIMIT;
```
This means that even if you set textChunkLimit in the config, it has no effect on the webchat channel. Other channels (Telegram, Discord, WhatsApp) respect provider-specific chunk limits, but webchat is hardcoded.
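For context, a plausible reconstruction of the shape of this logic. This is not the actual OpenClaw source: only resolveTextChunkLimit, DEFAULT_CHUNK_LIMIT, and the webchat early return are taken from the dist code quoted above; the config-lookup shape is assumed.

```js
// Illustrative reconstruction, not the actual OpenClaw source.
const DEFAULT_CHUNK_LIMIT = 4e3;

function resolveTextChunkLimit(provider, config) {
  // This early return is the bypass: webchat never consults the config at all.
  if (!provider || provider === "webchat") return DEFAULT_CHUNK_LIMIT;
  // Other channels (Telegram, Discord, WhatsApp) fall through to their
  // provider-specific textChunkLimit (lookup shape assumed here).
  return config?.[provider]?.textChunkLimit ?? DEFAULT_CHUNK_LIMIT;
}
```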
3. Block streaming also bypasses webchat
Similarly, resolveChunkMode returns the default for webchat:
```js
if (!provider || provider === "webchat") return DEFAULT_CHUNK_MODE;
```
4. The webchat dispatch handler itself does NOT truncate
The createReplyDispatcher for webchat collects all finalReplyParts and joins them with \n\n — there is no truncation here. The truncation happens upstream in the reply/chunking layer before the webchat dispatcher receives the content.
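To illustrate point 4, a minimal sketch of the dispatch step as described above. Only createReplyDispatcher, finalReplyParts, and the "\n\n" join are from this report; the function shape is an assumption.

```js
// Sketch of the webchat dispatch step described in point 4 (illustrative only).
// It concatenates whatever parts it receives, with no slicing or length checks,
// so the ~4000-character cut must already have happened upstream in the chunking layer.
function dispatchWebchatReply(finalReplyParts, send) {
  const message = finalReplyParts.join("\n\n"); // no truncation here
  send(message);
}
```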
5. The frontend does NOT truncate
The control-UI frontend (index-*.js) renders whatever it receives with no length limit.
Impact
This is particularly problematic for educational/tutoring use cases where the model needs to generate long-form teaching content (2000-5000 words). The workaround is splitting responses into multiple shorter messages using tool-call separators between text blocks, which is fragile and non-obvious.
Proposed Fix
One or more of:
- Make webchat chunk limit configurable — Allow textChunkLimit to work for webchat like it does for other channels, or add a webchat-specific override (e.g. channels.webchat.textChunkLimit); a config sketch follows this list
- Increase the default — 4000 chars is quite low for a local-only UI with no API message size constraints. Consider 16000+ or unlimited for webchat since there are no upstream API limits
- Remove the webchat bypass in resolveTextChunkLimit so it respects the same config as other channels
- At minimum, indicate truncation — if truncation must happen, show a clear indicator (e.g. "... [message truncated]") so users know content was lost
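A hedged sketch of what the first option could look like from the user's side. The channels.webchat.textChunkLimit key is the proposal above, not an existing setting, and the surrounding structure and semantics (0 meaning "no limit") are assumptions.

```jsonc
// Proposed, not currently supported; key nesting and semantics are illustrative.
{
  "textChunkLimit": 16000,
  "channels": {
    "webchat": {
      "textChunkLimit": 0 // proposed: 0 (or a very large value) = no truncation in webchat
    }
  }
}
```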
Workaround
Currently the only reliable workaround is to insert trivial tool calls (e.g. reading 1 line of a file) between text blocks to force separate message bubbles, each under the 4000 char limit. This works because each text block between tool calls renders as a separate message in webchat.
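As an illustration, one way the workaround might be phrased in a system prompt; the exact wording and the 3000-character spacing are examples, not a tested recipe.

```text
When writing a long explanation, pause roughly every 3000 characters and make a
trivial tool call (for example, read a single line of any file) before continuing.
Each text block between tool calls renders as its own webchat message, keeping
every bubble under the 4000-character limit.
```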
Related
- textChunkLimit config works correctly for Telegram, Discord, WhatsApp
- maxTokens config (model-level) is correctly set and not the cause
- Block streaming settings have no effect on webchat