Bug type
Regression (worked before, now fails)
Summary
Bug Report: kimi-coding/k2p5 crashes on turn 2+ since v2026.3.7 (preserveSignatures regression)
kimi-coding with model k2p5 crashes on every multi-turn session since v2026.3.7.
It worked correctly on v2026.3.1 and earlier.
Root cause
The following change in src/agents/transcript-policy.ts between v2026.3.1 and v2026.3.7:
- preserveSignatures: false,
+ preserveSignatures: isAnthropic,
kimi-coding uses modelApi: "anthropic-messages", so isAnthropic = true.
In v2026.3.1, thinking block signatures were always stripped before re-sending context.
In v2026.3.7, kimi-coding now receives its own thinkingSignature blobs back in context — which it cannot handle, causing two distinct crashes:
Crash 1 (@anthropic-ai/sdk lib/MessageStream.js):
_accumulateMessage calls partialParse(jsonBuf) with no try/catch. The kimi API appends trailing bytes after the closing } in input_json_delta events, causing JSON.parse to throw "Unexpected non-whitespace character after JSON at position N".
Crash 2 (@anthropic-ai/sdk core/streaming.js):
yield JSON.parse(sse.data) in fromSSEResponse() — catch block logs and rethrows with no recovery. Same trailing-byte defect, but at the outer SSE event level before MessageStream is reached.
Suggested fix
// transcript-policy.ts
preserveSignatures: isCopilotClaude, // narrower than isAnthropic: kimi-coding can't handle re-sent signatures
Alternatively, extend dropThinkingBlocks to include kimi-coding:
const dropThinkingBlocks = isCopilotClaude || provider === "kimi-coding";
(dropping blocks makes preserveSignatures irrelevant for kimi-coding regardless)
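The effect of the dropThinkingBlocks route can be sketched as follows. The ContentBlock/TranscriptMessage shapes and the stripThinkingBlocks helper are illustrative assumptions, not the actual types in transcript-policy.ts:

```typescript
// Hypothetical transcript shapes, for illustration only.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string; thinkingSignature?: string };

interface TranscriptMessage {
  role: "user" | "assistant";
  content: ContentBlock[];
}

// What the suggested fix amounts to: before re-sending prior turns, drop
// thinking blocks entirely, so preserveSignatures never matters for
// providers (like kimi-coding) that cannot accept echoed signatures.
function stripThinkingBlocks(messages: TranscriptMessage[]): TranscriptMessage[] {
  return messages.map((m) => ({
    ...m,
    content: m.content.filter((b) => b.type !== "thinking"),
  }));
}
```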
Versions
- OpenClaw: v2026.3.7 (broken) vs v2026.3.1 (working)
- @anthropic-ai/sdk: 0.73.0 (same in both versions — SDK is not the root cause)
- Model: kimi-coding / k2p5
Commit range
git diff 2a8ac974e 42a1394c5 -- src/agents/transcript-policy.ts
Workaround applied locally
Three runtime patches applied to node_modules in the running container — see Upgrade_procedure.md section 3.3 for full re-apply procedure:
- patches/openclaw/patch-messagestream-kimi-trailingjson.js — wraps partialParse(jsonBuf) in try/catch in MessageStream.js
- patches/openclaw/patch-json-parse-kimi-trailingjson.js — prefix extraction in json-parse.js before partialParse
- patches/openclaw/patch-streaming-kimi-trailingjson.js — tolerant fallback in both yield JSON.parse(sse.data) catch blocks in streaming.js
- dropThinkingBlocks dist bundle patch — const dropThinkingBlocks = isCopilotClaude || provider === "kimi-coding";
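The shape of the tolerant fallback in the streaming patch, sketched with a hypothetical parseSseData helper (not the SDK's actual code):

```typescript
// Illustrative sketch of the streaming.js fallback: in the catch around
// JSON.parse(sse.data), retry on the longest plausible prefix before
// rethrowing the original error.
function parseSseData(data: string): unknown {
  try {
    return JSON.parse(data);
  } catch (err) {
    // Heuristic: try the prefix up to the last closing brace. This recovers
    // kimi's payloads, which are valid JSON followed by trailing bytes.
    const end = data.lastIndexOf("}");
    if (end !== -1) {
      try {
        return JSON.parse(data.slice(0, end + 1));
      } catch {
        /* fall through and rethrow the original error */
      }
    }
    throw err;
  }
}
```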
GitHub issue
File at: https://github.com/openclaw/openclaw/issues/new
Suggested title:
kimi-coding/k2p5 crashes on turn 2+ since v2026.3.7 (preserveSignatures regression)
Steps to reproduce
- Configure a model using provider kimi-coding, model ID k2p5
- Run any agent session (cron, manual, or spawned subagent) that requires more than one LLM turn — e.g. a task that calls a tool and then continues
- Observe the session crash on turn 2
Minimal example: a cron prompt that calls web_fetch once and then continues reasoning. The crash occurs as soon as the assistant's turn-1 response (which contains a thinking block with a thinkingSignature) is re-sent as context in turn 2.
Expected behavior
The session continues across multiple turns as it did on v2026.3.1. Thinking blocks from prior turns are stripped or handled gracefully before being re-sent to the kimi API.
Actual behavior
The session crashes on turn 2 with:
Unexpected non-whitespace character after JSON at position N
where N equals the exact byte length of the last tool call's argument JSON. The error originates in @anthropic-ai/sdk@0.73.0 in two separate call sites:
- lib/MessageStream.js — partialParse(jsonBuf) called with no surrounding try/catch in _accumulateMessage
- core/streaming.js — yield JSON.parse(sse.data) in fromSSEResponse(), catch block rethrows with no recovery
The session is marked isError=true and terminates with stopReason: "error".
OpenClaw version
v2026.3.7
Operating system
Linux (Docker container, Ubuntu-based) on macOS host
Install method
No response
Logs, screenshots, and evidence
No response
Impact and severity
No response
Additional information
No response