
fix(slack): flush draft stream before final reply to preserve message ordering #20248

Open
Utkarshbhimte wants to merge 4 commits into openclaw:main from PROPAGANDAnow:fix/slack-draft-stream-ordering

Conversation


@Utkarshbhimte Utkarshbhimte commented Feb 18, 2026

Summary

• Problem: When streaming is enabled, the draft stream loop runs independently of the dispatcher delivery chain. On final reply, throttled partial-reply text can flush after the final message, causing messages to appear out of order.
• Why it matters: Users see the final answer first, then get a dump of intermediate "thinking" text after — confusing and unprofessional.
• What changed: Added a draftStream.flush() + stop() call before delivering the final reply payload, ensuring all pending streamed text is sent before the final message.
• What did NOT change: Non-streaming delivery path unchanged. No changes to block reply behavior or partial reply throttling logic.
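As a hedged sketch of the fix, the ordering problem and the flush-before-final pattern can be modeled as below. The class shape, method names other than `flush()`/`stop()`, and message formats are illustrative assumptions, not the real dispatch.ts code:

```typescript
// Minimal model of the race: a throttled draft stream buffers partial text
// independently of final delivery. All names here are illustrative.
class DraftStream {
  private pendingText = "";
  private stopped = false;
  constructor(private readonly sent: string[]) {}

  // Throttled partial-reply updates accumulate instead of posting at once.
  append(text: string): void {
    if (!this.stopped) this.pendingText += text;
  }

  // Send whatever is still buffered; a no-op when nothing is pending.
  async flush(): Promise<void> {
    if (this.pendingText) {
      this.sent.push(`draft: ${this.pendingText}`);
      this.pendingText = "";
    }
  }

  // Halt the loop and clear the buffer.
  stop(): void {
    this.stopped = true;
    this.pendingText = "";
  }
}

// The fix: drain the draft stream *before* the final reply goes out,
// so intermediate text can never land after the conclusive answer.
async function deliverFinal(
  sent: string[],
  stream: DraftStream,
  finalText: string,
): Promise<void> {
  await stream.flush();
  stream.stop();
  sent.push(`final: ${finalText}`);
}
```

With buffered text, `deliverFinal` produces `draft: …` followed by `final: …` in that order; without the flush/stop pair, the independent draft loop could post its buffer after the final entry.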

Change Type

• [x] Bug fix

Scope

• Slack monitor (dispatch.ts)

Greptile Summary

This PR fixes a message ordering bug in Slack's non-streaming draft stream delivery path. When the final reply is dispatched, throttled partial-reply text from the independent draft stream loop could flush after the final message, causing confusing out-of-order delivery. The fix adds draftStream.flush() + stop() before delivering the final reply payload when kind === "final".

  • The core fix is correct: flushing the draft stream before the final deliverReplies call ensures all pending partial text is sent first.
  • The { kind } destructuring with a default value is safe but unnecessary — createReplyDispatcherWithTyping always passes { kind } to the deliver callback.
  • There's an edge case in the canFinalizeViaPreviewEdit fallback path where stop() is called (line 243) before flush() would run, making the new flush block a no-op in that scenario. This is pre-existing behavior but worth noting.
  • No test coverage was added for the new flush-before-final behavior.

Confidence Score: 4/5

  • This PR is safe to merge — the fix is targeted, the flush/stop pattern is well-established in the codebase, and the change only affects the non-streaming draft stream delivery path.
  • The fix correctly addresses the race condition described. The flush() and stop() calls are safe (idempotent, no-op when no pending data). The only concern is a minor edge case in the preview-edit fallback path where the flush becomes a no-op, but that's a pre-existing issue not introduced by this PR. No tests were added, but the change is minimal and low-risk.
  • No files require special attention beyond the single changed file src/slack/monitor/message-handler/dispatch.ts.

Last reviewed commit: eb8cdc9

… ordering

When the agent generates multiple text segments with tool calls between them,
the draft stream (partial reply preview) accumulates throttled updates
independently of the reply dispatcher. Without this fix, the final reply
can be delivered before pending draft stream text has flushed, causing
intermediate/thinking messages to appear AFTER the result in Slack threads.

This change flushes and stops the draft stream before delivering final
reply payloads, ensuring all preview text is sent before the conclusive
response.
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@openclaw-barnacle openclaw-barnacle bot added channel: slack Channel integration: slack size: XS labels Feb 18, 2026

@greptile-apps greptile-apps bot left a comment


1 file reviewed, 1 comment


Comment on lines +277 to +285
```typescript
// Flush any pending draft stream text before delivering the final reply.
// Without this, throttled partial-reply text can appear AFTER the final
// message because the draft stream loop is independent of the dispatcher
// delivery chain.
if (kind === "final" && hasStreamedMessage) {
  await draftStream.flush();
  draftStream.stop();
  hasStreamedMessage = false;
}
```

Flush becomes a no-op on preview-edit fallback path

When canFinalizeViaPreviewEdit is true (line 242), draftStream?.stop() is called at line 243 before the chat.update attempt. If that chat.update fails (catch at line 252), execution falls through to this new block. At that point, stopped is already true and loop.pendingText was cleared by loop.stop(), so flush() here becomes a no-op — any throttled partial-reply text that was pending when stop() was called at line 243 is silently lost.

This is a pre-existing issue (the preview-edit path already called stop() without flush()), but this new block won't rescue that edge case as one might expect from reading the code. Consider calling flush() before stop() in the canFinalizeViaPreviewEdit branch as well:

Suggested change

```typescript
// Flush any pending draft stream text before delivering the final reply.
// Without this, throttled partial-reply text can appear AFTER the final
// message because the draft stream loop is independent of the dispatcher
// delivery chain.
if (kind === "final" && hasStreamedMessage) {
  await draftStream.flush();
  draftStream.stop();
  hasStreamedMessage = false;
}
```

Address review comment: when canFinalizeViaPreviewEdit is true, the
previous code called stop() without flush(), silently dropping any
throttled partial-reply text. Now flush() is called before stop() to
ensure all pending text is delivered.
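The reordering described in this commit can be sketched as follows; `finalizePreviewEdit` and the `DraftStreamLike` interface are placeholders standing in for the real dispatch.ts symbols, not its actual code:

```typescript
// Sketch only: `draftStream` stands in for the real object in dispatch.ts.
interface DraftStreamLike {
  flush(): Promise<void>;
  stop(): void;
}

// In the canFinalizeViaPreviewEdit branch, drain before stopping so
// throttled partial-reply text is not silently dropped.
async function finalizePreviewEdit(
  draftStream: DraftStreamLike | undefined,
): Promise<void> {
  await draftStream?.flush(); // deliver any pending partial text first
  draftStream?.stop(); // then halt the draft loop with an empty buffer
  // ...the chat.update attempt follows; if it fails and control reaches the
  // later flush block, that block is a harmless no-op since nothing is pending.
}
```

The key design point is simply the call order: `flush()` before `stop()`, since `stop()` clears the pending buffer that `flush()` would otherwise deliver.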
@openclaw-barnacle

This pull request has been automatically marked as stale due to inactivity.
Please add updates or it will be closed.

@openclaw-barnacle openclaw-barnacle bot added the stale Marked as stale due to inactivity label Feb 28, 2026

Labels

channel: slack Channel integration: slack size: XS stale Marked as stale due to inactivity
