fix: suppress partial NO_REPLY tokens at lifecycle boundary#23761
Open
kami-saia wants to merge 2 commits into openclaw:main from
Conversation
When the model streams `NO_REPLY` as multiple tokens (e.g. `"NO"` then `"_REPLY"`), the `lifecycle:end` event can fire while the buffer only contains the partial prefix. `isSilentReplyText()` returns `false` for `"NO"`, causing it to leak to connected clients as a real assistant message. This is particularly visible on node clients (VS Code extensions, mobile nodes) that display every chat-final message as a bubble. Fix: add a partial-prefix check. If the buffered text is a strict prefix of `SILENT_REPLY_TOKEN` and contains only `[A-Z_]` characters, treat it as a silent reply and suppress it. Fixes openclaw#3340
src/gateway/server-chat.ts
Outdated
```ts
text.length >= 2 &&
text.length < SILENT_REPLY_TOKEN.length &&
SILENT_REPLY_TOKEN.startsWith(text.toUpperCase()) &&
/^[A-Z_]+$/i.test(text);
```
Contributor
The `/i` flag makes `[A-Z_]` match lowercase too; if the intent is uppercase-only, remove the `/i` flag.
Suggested change
```diff
- /^[A-Z_]+$/i.test(text);
+ /^[A-Z_]+$/.test(text);
```
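To illustrate the reviewer's point, here is a minimal standalone check (not the project's code) showing how the two regexes differ on lowercase input:

```typescript
// With /i, lowercase text like "no" would be classified as a
// silent-reply prefix; without it, only uppercase passes.
const caseInsensitive = /^[A-Z_]+$/i;
const uppercaseOnly = /^[A-Z_]+$/;

console.log(caseInsensitive.test("no")); // true  (lowercase matches)
console.log(uppercaseOnly.test("no"));   // false (lowercase rejected)
console.log(uppercaseOnly.test("NO_"));  // true  (uppercase + underscore)
```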
Author
Good catch, removed the `/i` flag.
This pull request has been automatically marked as stale due to inactivity.
Author
Bumping.
Problem

When the model streams `NO_REPLY` as multiple tokens (e.g. `"NO"` then `"_REPLY"`), the `lifecycle:end` event can fire while the buffer only contains the partial prefix. `isSilentReplyText()` returns `false` for `"NO"`, causing it to leak to connected clients as a real assistant message.

This is particularly visible on node clients (VS Code extensions, mobile nodes) that display every `chat-final` message as a bubble: users see random `"NO"` or `"NO_"` messages appear.

Fix

Add a partial-prefix check in `emitChatFinal`: if the buffered text is a strict prefix of `SILENT_REPLY_TOKEN`, is shorter than the full token, and contains only `[A-Z_]` characters, treat it as a silent reply and suppress it.

The check is deliberately conservative (complete tokens are still handled by `isSilentReplyText`).

Testing

Tested in production with a VS Code node client (Pawr) over multiple days. No more leaked partial tokens.

Fixes #3340
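Based on the description above, the partial-prefix check could look like the following sketch. The helper name `isPartialSilentReplyPrefix` is an assumption for illustration, not the exact code from this PR; only `SILENT_REPLY_TOKEN` and the conditions come from the diff.

```typescript
const SILENT_REPLY_TOKEN = "NO_REPLY";

// True when `text` looks like an incomplete NO_REPLY token caught at the
// lifecycle boundary: at least 2 chars, a strict prefix of the full token,
// and uppercase/underscore only (so real replies like "No" still pass through).
function isPartialSilentReplyPrefix(text: string): boolean {
  return (
    text.length >= 2 &&
    text.length < SILENT_REPLY_TOKEN.length &&
    SILENT_REPLY_TOKEN.startsWith(text) &&
    /^[A-Z_]+$/.test(text)
  );
}

console.log(isPartialSilentReplyPrefix("NO"));       // true  (suppressed)
console.log(isPartialSilentReplyPrefix("NO_"));      // true  (suppressed)
console.log(isPartialSilentReplyPrefix("NO_REPLY")); // false (full token, handled elsewhere)
console.log(isPartialSilentReplyPrefix("No"));       // false (lowercase, real text)
```

A full token falls through to the existing `isSilentReplyText()` path, so this check only ever fires on strict prefixes.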
Greptile Summary
Adds logic to suppress partial
NO_REPLYtokens that leak when the model streams the token across multiple chunks andlifecycle:endfires before the complete token arrives. The fix checks if buffered text is a strict prefix ofSILENT_REPLY_TOKENbefore emittingchat-finalevents."NO"or"NO_"from appearing as assistant messages in client UIsisSilentReplyText()which handles complete tokensConfidence Score: 4/5
Last reviewed commit: abe3c10
🤖 AI-assisted (Claude Opus/Sonnet via OpenClaw agent). Fully tested in production.