fix: strip `<relevant-memories>` injected by memory plugin from user messages in WebUI #59697
Conversation
Greptile Summary: This PR correctly fixes a bug where the
Confidence Score: 4/5
This is a comment left during a code review.
Path: src/gateway/chat-sanitize.ts
Line: 53
Comment:
**`trimStart()` applied unconditionally to all user messages**
`.trimStart()` is now unconditionally applied to every user message, not just those from which memory tags were actually removed. If a user message begins with intentional leading whitespace (e.g. a code snippet pasted directly, or a message from a channel adapter that prefixes content with spaces), that whitespace will be silently stripped even when no `<relevant-memories>` block was present.
A safer approach would be to conditionally trim only when the strip actually modified the text:
```suggestion
? (() => { const s = stripRelevantMemoriesTags(stripMessageIdHints(stripEnvelope(inboundStripped))); return s !== inboundStripped ? s.trimStart() : s; })()
```
Alternatively, `trimStart()` could be applied only inside `stripRelevantMemoriesTags` when it actually removes a block (which would require a small change to that function's return logic). The same concern applies to lines 86 and 101 in this file, and to `message-extract.ts` line 15.
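The conditional-trim idea in that suggestion can be sketched in isolation. This is an illustration only: both function bodies here are assumptions, not the project's actual implementation of `stripRelevantMemoriesTags` or the surrounding sanitize chain.

```typescript
// Hypothetical stand-in for the project's stripRelevantMemoriesTags helper:
// removes injected <relevant-memories>...</relevant-memories> blocks.
function stripRelevantMemoriesTags(text: string): string {
  return text.replace(/<relevant-memories>[\s\S]*?<\/relevant-memories>/g, "");
}

// Trim leading whitespace only when a block was actually removed, so
// intentional leading whitespace (e.g. pasted code) survives untouched.
function sanitizeUserText(text: string): string {
  const stripped = stripRelevantMemoriesTags(text);
  return stripped !== text ? stripped.trimStart() : stripped;
}
```

With this shape, a message that never contained a memory block passes through byte-for-byte, which is the property the review comment is asking for.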
How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "fix: strip relevant-memories from user m..."
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: bc9f21c885
…tory Memory plugin prepends `<relevant-memories>...</relevant-memories>` blocks to user prompts via the `before_agent_start` hook. These blocks were only stripped from assistant messages (via `stripAssistantInternalScaffolding`) but never from user-role messages, causing them to appear in the WebUI user message bubble and in `chat.history` responses.

Changes:
- Export `stripRelevantMemoriesTags` from `assistant-visible-text.ts`
- `chat-sanitize.ts`: apply `stripRelevantMemoriesTags` when stripping user messages in `stripEnvelopeFromMessage` (covers `chat.history` server path)
- `message-extract.ts`: apply `stripRelevantMemoriesTags` for user-role messages in `processMessageText` (covers WebUI rendering path)
- Add regression tests in both `chat-sanitize.test.ts` and `message-extract.test.ts`

Fixes openclaw#59568

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed from bc9f21c to a1939e4 (Compare)
ProjectClownfish pushed a narrow repair to this branch so the original contributor path can stay canonical. Source PR: #59697
Codex review: needs changes before merge.

What this changes: The PR adds a shared injected-memory-prefix sanitizer, wires it into Gateway chat history and Control UI user-message extraction/normalization, adds regression tests, and updates the changelog.

Required change before merge: A concrete, narrow PR blocker remains in the helper and tests; an automated repair can adjust the sanitizer without product or security judgment.

Security review: cleared. The PR changes sanitizer logic, tests, and changelog text only; it does not touch workflows, dependency sources, package resolution, secrets, permissions, or artifact execution paths.

Review findings:
Review details

Best possible solution: Land a narrow shared sanitizer that recognizes only the memory plugin's injected leading prefix and generated separator, preserves ordinary user-authored tags and prompt indentation, and covers Gateway history plus both Control UI rendering paths with regression tests.

Do we have a high-confidence way to reproduce the issue? Yes. A high-confidence static reproduction exists: current main has

Is this the best way to solve the issue? No, not quite. The PR has the right scope and surfaces, but the helper should remove only the injected block plus generated separator instead of arbitrary whitespace after the block.

Full review comments:
Overall correctness: patch is incorrect

Acceptance criteria:
What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against a256745323ab.
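The narrower strip Codex argues for could look roughly like this. It is a sketch under stated assumptions: the real helper name and the exact separator the memory plugin emits are not confirmed by this PR page, so the single trailing newline below is a guess.

```typescript
// Remove only a leading injected memory block plus at most one generated
// separator newline (an assumed separator). Anything else, including
// user-authored indentation and non-leading tags, is preserved.
function stripInjectedMemoryPrefix(text: string): string {
  return text.replace(/^<relevant-memories>[\s\S]*?<\/relevant-memories>\n?/, "");
}
```

Because the pattern is anchored with `^`, a `<relevant-memories>` tag that a user typed mid-message is left alone, which addresses the "preserves ordinary user-authored tags" criterion above.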
Summary
- The `before_agent_start` hook prepends `<relevant-memories>...</relevant-memories>` to user prompts before they are stored in session history
- `stripEnvelopeFromMessages()` (server) and `processMessageText` (WebUI) only stripped these tags from assistant messages, never from user messages

Root cause
`stripRelevantMemoriesTags` was only reachable via `stripAssistantInternalScaffolding`, which is only called for `role === "assistant"`. User messages went through `stripInboundMetadata` + `stripEnvelope`, neither of which handles `<relevant-memories>` XML tags.

Fix
- Export `stripRelevantMemoriesTags` from `assistant-visible-text.ts`
- `chat-sanitize.ts` (`stripEnvelopeFromMessage`): apply `stripRelevantMemoriesTags` to user-role message content; fixes the `chat.history` RPC path
- `message-extract.ts` (`processMessageText`): apply `stripRelevantMemoriesTags` for user-role messages; fixes the WebUI rendering path
- Add regression tests in `chat-sanitize.test.ts` and `message-extract.test.ts` for both string content and array content shapes
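A minimal sketch of the routing gap and the fix follows. Function names echo the PR description, but the bodies are illustrative stand-ins, not the real code from `chat-sanitize.ts` or `message-extract.ts`.

```typescript
type Role = "user" | "assistant";

// Illustrative stand-in for the shared memory-tag stripper.
function stripMemories(text: string): string {
  return text.replace(/<relevant-memories>[\s\S]*?<\/relevant-memories>\s*/g, "");
}

// Before the fix: only the assistant branch ran the scaffolding strip,
// so user messages kept the injected tags.
function sanitizeBefore(role: Role, text: string): string {
  return role === "assistant" ? stripMemories(text) : text;
}

// After the fix: both roles run the memory-tag strip.
function sanitizeAfter(_role: Role, text: string): string {
  return stripMemories(text);
}
```

The design point is that the fix changes routing, not the stripper itself: the existing helper is simply made reachable from the user-role paths.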
- `npx vitest run src/gateway/chat-sanitize.test.ts`: 11 tests pass
- `npx vitest run ui/src/ui/chat/message-extract.test.ts`: 6 tests pass
- Manual check with the `memory-lancedb` extension enabled: send a message, verify the user bubble shows only the actual message text

Fixes #59568
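The two content shapes those tests exercise (plain string vs. array of parts) could be handled along these lines. The type names and the stripper body are assumptions for illustration, not the repo's actual definitions.

```typescript
type TextPart = { type: "text"; text: string };
type Content = string | TextPart[];

// Illustrative stand-in for the shared memory-tag stripper.
function stripMemories(text: string): string {
  return text.replace(/<relevant-memories>[\s\S]*?<\/relevant-memories>\s*/g, "");
}

// Sanitize both content shapes: a bare string is rewritten directly; in an
// array of typed parts, only the text parts are rewritten.
function sanitizeContent(content: Content): Content {
  if (typeof content === "string") return stripMemories(content);
  return content.map((p) =>
    p.type === "text" ? { ...p, text: stripMemories(p.text) } : p,
  );
}
```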
🤖 Generated with Claude Code