fix(telegram): preserve forwarded message metadata during debounce batching #8368

PatrickBauer wants to merge 1 commit into openclaw:main
Conversation
Force-pushed 22c7b0a to f07cd40
```typescript
export function buildForwardPrefix(fwd: TelegramForwardedContext): string {
  return `[Forwarded from ${fwd.from}${fwd.date ? ` at ${new Date(fwd.date * 1000).toISOString()}` : ""}]\n`;
}
```
[P2] `buildForwardPrefix()` returning a trailing newline makes it easy to misuse
`buildForwardPrefix()` always appends `\n`, which forces call sites to remember to `trimEnd()` when they want a standalone prefix (as in debounce batching). That’s already bitten this PR once and makes future call sites prone to accidental extra blank lines. Consider returning the prefix without a trailing newline and letting callers add separators explicitly.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Path: src/telegram/bot/helpers.ts, lines 452–454
@greptileai Fair point. However, changing `buildForwardPrefix()` to not include the trailing newline would be a behavioral change for the existing call site in `bot-message-context.ts`, where the newline is expected (it gets concatenated directly with the body text). Since this PR is scoped to fixing the debounce batching bug, I kept the helper consistent with its existing behavior and handled the edge case at the call site with `.trimEnd()`. A follow-up to normalize this would make sense though — happy to do that as a separate PR if the maintainers prefer.
You're right that changing the behavior mid-PR would expand the scope. The .trimEnd() workaround is fine for now — it's explicit about handling the edge case and keeps this PR focused on the debounce fix.
If you do a follow-up, consider that the current bot-message-context.ts usage concatenates prefix + body, so returning no newline and letting callers write buildForwardPrefix(fwd) + '\n' + body would make the separator explicit everywhere. But that's a refactor for another day.
For this PR, the approach is reasonable and the comment explains the tradeoff. ✅
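If the follow-up lands, the helper could drop the trailing newline roughly like this (a hedged sketch; the `TelegramForwardedContext` shape below is a simplified stand-in, not the repo's actual type):

```typescript
// Sketch of the proposed follow-up: no trailing newline, callers add separators.
// TelegramForwardedContext here is a simplified assumption about the real type.
interface TelegramForwardedContext {
  from: string;
  date?: number; // Unix timestamp in seconds, as the Telegram API reports it
}

function buildForwardPrefix(fwd: TelegramForwardedContext): string {
  const when = fwd.date ? ` at ${new Date(fwd.date * 1000).toISOString()}` : "";
  return `[Forwarded from ${fwd.from}${when}]`;
}

// Callers make the separator explicit:
// `${buildForwardPrefix(fwd)}\n${body}`
```

This shape would let the debounce batching call site drop its `.trimEnd()` workaround entirely.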
```diff
  const combinedText = entries
-   .map((entry) => entry.msg.text ?? entry.msg.caption ?? "")
+   .map((entry) => {
+     const text = entry.msg.text ?? entry.msg.caption ?? "";
+     const fwd = normalizeForwardedContext(entry.msg);
+     if (fwd) {
+       const prefix = buildForwardPrefix(fwd).trimEnd();
+       return text ? `${prefix}\n${text}` : prefix;
+     }
```
[P2] Consider asserting batch invariants when adding per-entry forward attribution
This batching still builds the synthetic message from `first.msg` (minus forward fields) while `combinedText` now embeds per-entry forward attribution. If `debounceKey` ever groups messages across different conversations/senders (buggy keying or unexpected ctx fields), the resulting combined text could contain misleading forward headers in a single synthetic message context. It may be worth adding a lightweight assertion/log that `entries` share the same chat/thread/sender IDs before composing `combinedText` (or at least in verbose logging).
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Path: src/telegram/bot-handlers.ts, lines 94–101
@greptileai The `debounceKey` is built from chat ID + thread ID (see `buildKey` in the debouncer setup), so entries within a batch are already guaranteed to belong to the same conversation. Cross-chat mixing would be a bug in the debouncer itself, not in this code path.
That said, a verbose-level log confirming batch composition could be useful for debugging — but that feels out of scope for this bugfix PR. The existing `onError` handler already catches flush failures.
You're right—if debounceKey construction already enforces chat+thread identity, then batch invariants are guaranteed upstream and cross-chat mixing can't happen here.
A verbose log would help debugging, but you're also right that it's out of scope for this fix. The change is correct as-is.
I'm satisfied with the explanation. No action needed.
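If such a guard were ever wanted, it could be as small as the following sketch (illustrative only; the entry shape and field names are assumptions modeled on the Telegram Bot API, not the repo's types):

```typescript
// Illustrative guard: verify all batched entries share one chat/thread before
// composing combinedText. In production this might be a verbose log, not a throw.
interface BatchEntry {
  msg: { chat: { id: number }; message_thread_id?: number };
}

function assertSameConversation(entries: BatchEntry[]): void {
  const first = entries[0]?.msg;
  if (!first) return;
  const mixed = entries.some(
    (e) =>
      e.msg.chat.id !== first.chat.id ||
      e.msg.message_thread_id !== first.message_thread_id,
  );
  if (mixed) {
    throw new Error("debounce batch mixed entries from different conversations");
  }
}
```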
Force-pushed 7ddf183 to a8dc0bb
…tching

When multiple Telegram messages arrive within the debounceMs window and get batched, forward metadata (forward_origin, forward_from, etc.) was lost for all messages except the first one, since the synthetic message only spread first.msg properties.

Changes:
- Extract shared buildForwardPrefix() helper into bot/helpers.ts
- Inline [Forwarded from ...] prefix per entry in combined text, including forwarded messages without text/caption
- Strip forward fields from synthetic message via destructuring to prevent double-prefixing by bot-message-context.ts

Relates to openclaw#6263
Force-pushed a8dc0bb to 82dcbef
Force-pushed bfc1ccb to f92900f
This pull request has been automatically marked as stale due to inactivity.
Problem

When multiple Telegram messages arrive within the `messages.inbound.debounceMs` window and get batched, forward metadata (`forward_origin`, `forward_from`, `forward_from_chat`, etc.) is lost for all messages except the first one.

Root Cause

In `bot-handlers.ts`, the `onFlush` callback builds a `syntheticMessage` by spreading only `first.msg`, so forward metadata from `entries[1..n]` is discarded.

Fix

- Extract a shared `buildForwardPrefix()` helper into `bot/helpers.ts` — avoids format duplication between `bot-message-context.ts` and `bot-handlers.ts`
- Check each batched entry with `normalizeForwardedContext()`. If present, a `[Forwarded from ...]` prefix is prepended (including forwarded messages without text/caption).
- Strip forward fields from `first.msg` to prevent `bot-message-context.ts` from double-prefixing the first entry.

Testing

Forward 3+ messages from different senders quickly (within the debounce window). Before this fix, only the first message retained its `[Forwarded from ...]` header. After the fix, all messages preserve their attribution.

Relates to #6263
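The "strip forward fields via destructuring" step can be pictured with a small sketch (the field names follow the Telegram Bot API; the exact set the PR strips lives in `bot-handlers.ts` and is an assumption here):

```typescript
// Sketch: remove forward_* fields before spreading into the synthetic message,
// so bot-message-context.ts does not prefix the first entry a second time.
// The message shape is simplified; field names follow the Telegram Bot API.
type AnyMsg = Record<string, unknown>;

function stripForwardFields(msg: AnyMsg): AnyMsg {
  // Underscore renames mark the discarded fields as intentionally unused.
  const {
    forward_origin: _origin,
    forward_from: _from,
    forward_from_chat: _fromChat,
    forward_date: _date,
    ...rest
  } = msg;
  return rest;
}
```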
Greptile Overview

Greptile Summary

This PR fixes Telegram inbound debounce batching so forwarded-message attribution is preserved for each message in a batch instead of only the first one. It does this by extracting a shared `buildForwardPrefix()` helper into `src/telegram/bot/helpers.ts`, using `normalizeForwardedContext()` per entry when composing the batched `combinedText`, and stripping `forward_*` fields from the synthetic message to avoid double-prefixing when the synthetic message is later processed by `buildTelegramMessageContext()`.

These changes fit into the existing Telegram pipeline by keeping the synthetic-message approach (batched messages are still processed as a single `TelegramMessage`), while ensuring the user-visible forward attribution survives batching.

Confidence Score: 4/5
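For readers new to this pipeline, the flush-time composition the summary describes can be sketched end to end. All shapes and helper bodies below are simplified assumptions for illustration, not the actual openclaw code:

```typescript
// Simplified sketch of the flush-time text composition described in this PR.
// Entry/message shapes and helper bodies are assumptions; the real code lives
// in src/telegram/bot-handlers.ts and src/telegram/bot/helpers.ts.
interface Fwd { from: string }
interface Msg { text?: string; caption?: string; forward_from?: string }
interface Entry { msg: Msg }

// Hypothetical stand-ins for the helpers referenced in the PR:
const normalizeForwardedContext = (m: Msg): Fwd | undefined =>
  m.forward_from ? { from: m.forward_from } : undefined;
const buildForwardPrefix = (f: Fwd): string => `[Forwarded from ${f.from}]\n`;

function composeCombinedText(entries: Entry[]): string {
  return entries
    .map((entry) => {
      const text = entry.msg.text ?? entry.msg.caption ?? "";
      const fwd = normalizeForwardedContext(entry.msg);
      if (fwd) {
        const prefix = buildForwardPrefix(fwd).trimEnd();
        return text ? `${prefix}\n${text}` : prefix;
      }
      return text;
    })
    // Dropping empty segments is an assumption here, not shown in the PR diff.
    .filter((t) => t.length > 0)
    .join("\n");
}
```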