## Problem
After context compaction, the assistant's outbound message dedup guard resets because it only tracks duplicates within a single assistantMessageIndex. When the model re-generates text from the compaction summary as new output, it gets delivered to Telegram as a duplicate message.
## Reproduction
- Have a long conversation that triggers compaction (`compaction-safeguard` logs: `new content uses X% of context; dropped N older chunk(s)`)
- The compaction summary includes the assistant's previous reply text verbatim
- After compaction, the model sometimes re-outputs that same text as its new response
- OpenClaw delivers it as a new Telegram message — user sees the same content twice
## Root Cause
In `src/agents/pi-embedded-subscribe.ts`, `shouldSkipAssistantText()` only dedupes within the same `assistantMessageIndex`:
```ts
const shouldSkipAssistantText = (text: string) => {
  if (state.lastAssistantTextMessageIndex !== state.assistantMessageIndex) {
    return false; // <-- resets on new turn, no cross-turn dedup
  }
  // ...
};
```
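To make the failure mode concrete, here is a minimal standalone model of the guard above (the `State` shape and `shouldSkip` name are illustrative, not the actual OpenClaw internals). Once compaction starts a new turn and `assistantMessageIndex` advances, the index check fails first and identical text is no longer skipped:

```typescript
// Illustrative model of the per-turn dedup guard (not the real OpenClaw state).
type State = {
  assistantMessageIndex: number;
  lastAssistantTextMessageIndex: number;
  lastAssistantText: string;
};

function shouldSkip(state: State, text: string): boolean {
  if (state.lastAssistantTextMessageIndex !== state.assistantMessageIndex) {
    return false; // new turn: the guard resets, so a cross-turn duplicate slips through
  }
  return state.lastAssistantText === text;
}

const state: State = {
  assistantMessageIndex: 1,
  lastAssistantTextMessageIndex: 1,
  lastAssistantText: "same reply",
};
shouldSkip(state, "same reply"); // true — deduped within the same turn
state.assistantMessageIndex = 2; // compaction produces a new assistant turn
shouldSkip(state, "same reply"); // false — the duplicate is delivered again
```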
The `messagingToolSentTexts` dedup in `filterMessagingToolDuplicates` only covers messages sent via the `message` tool, not the main assistant reply path.
## Suggested Fix
Add a small rolling hash cache (e.g., last 20 delivered text hashes, 1h TTL) that persists across turns within the same session. Before delivering an assistant reply chunk, check if a normalized hash of the first ~200 chars matches any recently delivered hash. If so, skip delivery.
This could live in `rememberAssistantText` / `shouldSkipAssistantText` as an additional cross-turn check, or as a lightweight filter in the Telegram send path.
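A minimal sketch of the proposed cache, assuming a per-session module-level store (all names here are hypothetical, not existing OpenClaw APIs): hash a normalized prefix of each delivered text, keep the last 20 hashes with a 1-hour TTL, and check before delivery.

```typescript
import { createHash } from "node:crypto";

// Hypothetical cross-turn dedup cache: last N delivered text hashes, with TTL.
const MAX_ENTRIES = 20;
const TTL_MS = 60 * 60 * 1000; // 1 hour

type Entry = { hash: string; at: number };
const delivered: Entry[] = [];

// Normalize the first ~200 chars so whitespace/case differences
// in the re-generated text do not defeat the match.
function textKey(text: string): string {
  const normalized = text.slice(0, 200).toLowerCase().replace(/\s+/g, " ").trim();
  return createHash("sha256").update(normalized).digest("hex");
}

export function wasRecentlyDelivered(text: string, now = Date.now()): boolean {
  // Evict expired entries (oldest first, since pushes are chronological).
  while (delivered.length > 0 && now - delivered[0].at > TTL_MS) {
    delivered.shift();
  }
  return delivered.some((e) => e.hash === textKey(text));
}

export function rememberDelivered(text: string, now = Date.now()): void {
  delivered.push({ hash: textKey(text), at: now });
  if (delivered.length > MAX_ENTRIES) delivered.shift();
}
```

The send path would then call `wasRecentlyDelivered(chunk)` before delivery and `rememberDelivered(chunk)` after a successful send. Hashing only a normalized prefix keeps the cache cheap and tolerant of trailing differences, at the cost of rare false positives when two distinct replies share their first ~200 characters.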
## Impact
- Affects all channels but most visible on Telegram (each chunk = separate message)
- Users report receiving the same reply twice after long conversations
- Workaround: system prompt instruction to not re-output summary text (behavioral, unreliable)
## Environment
- OpenClaw 2026.3.3
- Compaction mode: safeguard
- Channel: Telegram (direct chat)