fix(feishu): suppress duplicate final delivers from thinking models #50822

Closed

forhope1122 wants to merge 1 commit into openclaw:main from forhope1122:fix/feishu-thinking-model-duplicate-finals

Conversation

@forhope1122

Summary

  • Problem: Thinking models (e.g. kimi-k2.5) emit one final deliver per tool-call round, not just at the end of the full response. The existing deliveredFinalTexts dedup only filters identical text, so each intermediate final (with different content) was sent as a separate streaming card message.
  • Why it matters: Users saw the agent's intermediate tool-call results re-sent as duplicate messages after the final streaming card closed, producing 5 cards instead of 1 for a 4-tool-call session.
  • What changed: All final delivers are now deferred (pendingFinalPayload). Intermediate finals update the streaming card content in place; onIdle sends only the last one and closes the card.
  • What did NOT change: Non-thinking models (single final deliver) are unaffected. block, partial, media delivery, and all other paths are unchanged.
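
The deferral described above can be sketched as a small last-write-wins buffer. This is illustrative only; `createFinalDeferrer`, `deliverFinal`, and the `Payload` shape are hypothetical names, not the dispatcher's actual API (which stores `pendingFinalPayload`/`pendingFinalMediaList`):

```typescript
// Illustrative sketch of the defer-and-flush pattern used by this PR.
// Every "final" deliver is stashed; only the last one is sent on idle.
type Payload = { text: string };

function createFinalDeferrer(send: (p: Payload) => void) {
  let pending: Payload | null = null;
  return {
    // Each "final" deliver from a thinking model lands here; last one wins.
    deliverFinal(p: Payload) {
      pending = p;
    },
    // onIdle flushes exactly one message and clears the buffer.
    onIdle() {
      if (pending) {
        send(pending);
        pending = null;
      }
    },
  };
}
```

With four tool-call rounds this sends one message instead of four, matching the replies=5 to replies=1 change described below.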

Change Type

  • Bug fix

Scope

  • Integrations

Linked Issue/PR

  • Closes # (none yet — discovered during local testing)

User-visible / Behavior Changes

Before: multi-tool-call thinking agents sent N streaming card messages (one per tool-call round).
After: only 1 streaming card message is sent, showing live progress throughout.

Security Impact

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No
  • New/changed network calls? No
  • Command/tool execution surface changed? No
  • Data access scope changed? No

Repro + Verification

Environment

  • OS: macOS
  • Runtime/container: Node.js v24.13.1
  • Model/provider: bailian/kimi-k2.5
  • Integration/channel: Feishu (websocket mode, renderMode: card, streaming: true)
  • Relevant config: channels.feishu.renderMode: "card"

Steps

  1. Configure a Feishu channel with streaming card mode enabled
  2. Send a message that requires multiple tool calls (e.g. "look up xxx's info, extract the xxx field, then look up xxxx's info")
  3. Observe the number of messages sent in the Feishu chat

Expected

  • 1 streaming card message, updated live as each tool call completes

Actual (before fix)

  • 5 streaming card messages sent sequentially after the response completed (replies=5)

Evidence

  • Trace/log snippets

Before:

feishu[work-bot] Started streaming: cardId=...
feishu[work-bot] Closed streaming: cardId=...
feishu[work-bot] Started streaming: cardId=...   ← duplicate
feishu[work-bot] Closed streaming: cardId=...
... (×5)
dispatch complete (queuedFinal=true, replies=5)

After:

feishu[work-bot] Started streaming: cardId=...
feishu[work-bot] Closed streaming: cardId=...
dispatch complete (queuedFinal=true, replies=1)

Human Verification

  • Verified scenarios: multi-tool-call query (4 tool calls) with kimi-k2.5 thinking model on Feishu websocket, streaming card mode
  • Edge cases checked: single tool call (replies=1, unaffected); no tool call (direct final, unaffected)
  • What I did not verify: webhook mode; non-card render modes; models without thinking (Claude, GPT)

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No

Failure Recovery

  • How to revert: restore reply-dispatcher.ts.bck in extensions/feishu/src/
  • Known bad symptoms: if replies=0 in logs after fix, onIdle may not be firing — check pendingFinalPayload is being consumed

Risks and Mitigations

  • Risk: If onIdle never fires (e.g. agent errors mid-run), the pending final is dropped and no message is sent.
    • Mitigation: onError calls closeStreaming() which handles the streaming card; however pendingFinalPayload is not flushed on error. A follow-up could flush pending in onError as well.

Thinking models (e.g. kimi-k2.5) emit one 'final' deliver per tool-call
round rather than only at the end. The previous dedup logic only filtered
identical text, so each intermediate final was sent as a separate streaming
card message.

Fix: defer all final payloads and only deliver the last one when onIdle
fires. Intermediate finals still update the streaming card via
onPartialReply so live progress remains visible.

Fixes: replies=5 -> replies=1 for multi-tool-call thinking agents.
Copilot AI review requested due to automatic review settings March 20, 2026 04:00
@openclaw-barnacle Bot added the `channel: feishu` (Channel integration: feishu) and `size: M` labels Mar 20, 2026

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 05ae4acfb6


Comment on lines +342 to +344
```typescript
pendingFinalPayload = payload;
pendingFinalMediaList = mediaList;
return;
```

P1: Fallback to text send when streaming start fails

If streaming.start() fails (for example due to bad app credentials or a transient Feishu API error), this branch still stores the final payload and returns early. onIdle then follows the streaming-card path and only calls closeStreaming(), so no non-stream message is ever sent and the user can get no final reply at all. We should preserve the previous fallback behavior and send chunked text when streaming is not active.


```diff
   startStreaming();
 }
-await typingCallbacks?.onReplyStart?.();
+void typingCallbacks.onReplyStart?.();
```

P2: Await typing startup before continuing dispatch

Calling typingCallbacks.onReplyStart with void introduces a race on fast responses: onIdle can run before addTypingIndicator finishes, so stop() sees no typingState and does nothing, then the start promise resolves and leaves a stale typing reaction with no later cleanup. Awaiting onReplyStart avoids this indicator leak and restores the prior ordering guarantees.
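
The ordering hazard can be reproduced in isolation. This is a standalone sketch; the `demo` helper and its simulated timing are illustrative, not the actual dispatcher code:

```typescript
// Demonstrates why `void onReplyStart()` can leak a typing indicator:
// stop() may run before the async start finishes setting the typing state.
async function demo(awaitStart: boolean): Promise<boolean> {
  let typingActive = false;
  const onReplyStart = async () => {
    await Promise.resolve(); // stands in for the async addTypingIndicator call
    typingActive = true;
  };
  const stop = () => {
    if (typingActive) typingActive = false; // no-op if start hasn't finished yet
  };

  if (awaitStart) {
    await onReplyStart(); // prior behavior: ordering guaranteed
  } else {
    void onReplyStart(); // fire-and-forget: races with stop()
  }
  stop(); // the onIdle path
  await Promise.resolve(); // let a pending start settle
  return typingActive; // true means a stale indicator was left behind
}
```

Awaiting the start restores the old ordering guarantee; the fire-and-forget variant leaves the indicator set after idle.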


@greptile-apps
Contributor

greptile-apps Bot commented Mar 20, 2026

Greptile Summary

This PR fixes a real user-visible bug where thinking models (e.g. kimi-k2.5) would produce N streaming card messages (one per tool-call round) instead of a single live-updating card. The fix defers all final payloads via pendingFinalPayload and flushes only the last one in onIdle, which is the right approach.

Issues found:

  • Silent content drop when streaming fails to start (P1): In onIdle, the if (streamingEnabled && useCard) branch calls closeStreaming() and marks the text as delivered — but if streaming is null because startStreaming() threw an error, closeStreaming() is a no-op and the final text is never actually sent. The original code fell through to a direct sendStructuredCardFeishu call in this scenario. A fallback send (e.g. sendChunkedTextReply({ text, useCard: true, ... })) is needed when !streaming?.isActive().

  • streamingStartPromise not reset after start failure (P1): The removal of streamingStartPromise = null from the startStreaming() catch block means any subsequent call to startStreaming() in the same response lifecycle is silently skipped (the guard if (!streamingEnabled || streamingStartPromise || streaming) returns early). This directly amplifies the content-drop bug above, since a transient API error at stream start guarantees the final payload is never delivered.

  • pendingFinalPayload not flushed in onError (P2): Acknowledged by the PR author — if the agent throws before onIdle fires, the deferred final payload is silently discarded. Worth addressing as a follow-up or within this PR.

Confidence Score: 2/5

  • Not safe to merge as-is — two regressions can cause the final message to be silently dropped when streaming fails to start.
  • The core fix (deferring finals to onIdle) is sound and solves the stated problem. However, the PR introduces two regressions: (1) streamingStartPromise is not reset after a startStreaming() failure, so retries within the same response lifecycle are blocked; and (2) onIdle lacks a fallback send path when streaming is inactive — meaning any streaming start failure silently drops the final text. These two issues compound each other and can cause complete message loss, which is worse than the original duplicate-message bug being fixed.
  • extensions/feishu/src/reply-dispatcher.ts — specifically the onIdle flush logic (lines 413–424) and the startStreaming() error catch block (lines 207–210).

Last reviewed commit: "fix(feishu): suppres..."

Comment on lines +413 to +424
```typescript
if (hasText) {
  if (streamingEnabled && useCard) {
    // streaming.update was already called in deliver; just close now
    if (streaming?.isActive()) {
      streamText = mergeStreamingText(streamText, text);
    }
    await closeStreaming();
    deliveredFinalTexts.add(text);
  } else {
    await sendChunkedTextReply({ text, useCard, infoKind: "final" });
    deliveredFinalTexts.add(text);
  }
```

P1 Silent content drop when streaming fails to start

When streamingEnabled && useCard is true but streaming is null (e.g., the startStreaming() call failed with a network/API error), this branch calls closeStreaming() — which is a no-op since streaming is already null — and then adds text to deliveredFinalTexts without ever sending it to the user.

The original code handled this via a fallback: after the if (streaming?.isActive()) early-return block in deliver, the code fell through to sendStructuredCardFeishu/sendMessageFeishu when streaming was null. The new deferred-final approach moves all delivery to onIdle but only provides a send-fallback in the else branch (i.e., when !streamingEnabled || !useCard), leaving the failure case uncovered.

A minimal fix is to check streaming before deciding whether to close or send directly:

```typescript
if (hasText) {
  if (streamingEnabled && useCard) {
    if (streaming?.isActive()) {
      streamText = mergeStreamingText(streamText, text);
      await closeStreaming();
    } else {
      // Streaming failed to start — fall back to a direct card send
      await sendChunkedTextReply({ text, useCard: true, infoKind: "final" });
    }
    deliveredFinalTexts.add(text);
  } else {
    await sendChunkedTextReply({ text, useCard, infoKind: "final" });
    deliveredFinalTexts.add(text);
  }
}
```

Comment on lines 207 to 210
```typescript
} catch (error) {
  params.runtime.error?.(`feishu: streaming start failed: ${String(error)}`);
  streaming = null;
  streamingStartPromise = null; // allow retry on next deliver
}
```

P1 streamingStartPromise not cleared on error — blocks all subsequent streaming attempts

The old code reset streamingStartPromise = null in the catch block specifically so that the next deliver call could retry streaming. That reset was removed in this PR.

startStreaming() guards itself with:

```typescript
if (!streamingEnabled || streamingStartPromise || streaming) {
  return;
}
```

After the error, streaming is null but streamingStartPromise still points to a resolved (caught) Promise, so every subsequent call to startStreaming() silently returns early. For the rest of the response lifecycle, no new streaming session can be established.

Combined with the deferred-final design (content only sent in onIdle), this directly amplifies the silent-drop bug above: a transient start() failure guarantees that the pending final text is never delivered.

Suggested change:

```diff
 } catch (error) {
   params.runtime.error?.(`feishu: streaming start failed: ${String(error)}`);
   streaming = null;
+  streamingStartPromise = null; // allow retry on next deliver
 }
```
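
The guard behavior is easy to verify standalone: a settled Promise is still a truthy object, so the early-return keeps firing after a failure. A minimal sketch (the variable names mirror the report, but the harness itself is hypothetical):

```typescript
// A caught/settled Promise remains truthy, so this guard keeps
// short-circuiting after a failed start unless the promise is reset to null.
let streamingStartPromise: Promise<void> | null = null;

function startStreaming(start: () => Promise<void>): boolean {
  if (streamingStartPromise) {
    return false; // guard skips the retry even though start() already failed
  }
  streamingStartPromise = start().catch(() => {
    // The bug being described: omitting `streamingStartPromise = null` here
    // blocks every later attempt for the rest of the response lifecycle.
  });
  return true;
}
```

Resetting the promise to `null` in the catch handler is what makes the second attempt run.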

Comment on lines 394 to +399
```diff
 onError: async (error, info) => {
   params.runtime.error?.(
     `feishu[${account.accountId}] ${info.kind} reply failed: ${String(error)}`,
   );
   await closeStreaming();
-  typingCallbacks?.onIdle?.();
+  typingCallbacks.onIdle?.();
```

P2 pendingFinalPayload not flushed on error

The PR description acknowledges this explicitly, but it's worth calling out for reviewers: if onError fires before onIdle (e.g., the agent errors mid-run after stashing a final payload), pendingFinalPayload is not consumed. The user would see the streaming card closed (via closeStreaming()) but would receive no final message text.

A simple mitigation is to attempt to flush the pending payload in onError before closing the stream:

```typescript
onError: async (error, info) => {
  params.runtime.error?.(
    `feishu[${account.accountId}] ${info.kind} reply failed: ${String(error)}`,
  );
  // Flush deferred final if it hasn't been sent yet
  if (pendingFinalPayload) {
    const text = pendingFinalPayload.text ?? "";
    streamText = mergeStreamingText(streamText, text);
    pendingFinalPayload = null;
    pendingFinalMediaList = [];
  }
  await closeStreaming();
  typingCallbacks.onIdle?.();
},
```

This is a follow-up candidate the PR author flagged — flagging here so it doesn't get lost.



Copilot AI left a comment


Pull request overview

This PR aims to fix Feishu streaming-card behavior for “thinking” models that emit multiple final delivers (one per tool-call round) by deferring final delivery until onIdle, so only the last final is sent and intermediate finals only update the existing streaming card.

Changes:

  • Defer all final delivers into a pendingFinalPayload and flush only on onIdle (last final wins).
  • Add deduping for block payload text to avoid duplicate post-stream deliveries.
  • Refactor Feishu dispatcher typing + sending logic (notably switching card send helper and removing reasoning streaming hooks).

Comment on lines 1 to +8
```diff
 import {
   resolveSendableOutboundReplyParts,
   resolveTextChunksWithFallback,
   sendMediaWithLeadingCaption,
 } from "openclaw/plugin-sdk/reply-payload";
 import {
   createChannelReplyPipeline,
   createReplyPrefixContext,
   createTypingCallbacks,
   logTypingFailure,
   type ClawdbotConfig,
   type OutboundIdentity,
   type ReplyPayload,
   type RuntimeEnv,
-} from "../runtime-api.js";
+} from "openclaw/plugin-sdk/feishu";
```

Copilot AI Mar 20, 2026


createTypingCallbacks is imported from openclaw/plugin-sdk/feishu, but that subpath does not currently export it (src/plugin-sdk/feishu.ts has no such export). This will fail to compile unless the export is added or the import is changed back to createChannelReplyPipeline/another exported helper.

```typescript
  });
} catch (error) {
  params.runtime.error?.(`feishu: streaming start failed: ${String(error)}`);
  streaming = null;
```

Copilot AI Mar 20, 2026


In startStreaming() the catch block nulls streaming but leaves streamingStartPromise set. Because startStreaming() bails out when streamingStartPromise is truthy, a single start failure will permanently prevent future streaming retries for the lifetime of this dispatcher. Reset streamingStartPromise (and/or clear streaming) on failure so later delivers can retry.

Suggested change:

```diff
 streaming = null;
+streamingStartPromise = null;
```
Comment on lines 201 to 206
```typescript
try {
  const cardHeader = resolveCardHeader(agentId, identity);
  const cardNote = resolveCardNote(agentId, identity, prefixContext.prefixContext);
  await streaming.start(chatId, resolveReceiveIdType(chatId), {
    replyToMessageId,
    replyInThread: effectiveReplyInThread,
    rootId,
    header: cardHeader,
    note: cardNote,
  });
```

Copilot AI Mar 20, 2026


Streaming card start no longer passes header/note options, even though FeishuStreamingSession.start() supports them (and previous behavior/tests rely on them for agent identity + model/provider context). This is a user-visible regression and also contradicts the PR description (“all other paths unchanged”). Either restore header/note generation/passing or update the PR scope + tests accordingly.

Comment on lines +330 to +339
```typescript
if (streaming?.isActive()) {
  await streaming.update(streamText);
} else {
  startStreaming();
  if (streamingStartPromise) {
    await streamingStartPromise;
  }
  if (streaming?.isActive()) {
    await streaming.update(streamText);
  }
```

Copilot AI Mar 20, 2026


The final branch updates the streaming card via direct await streaming.update(...) calls instead of going through partialUpdateQueue (used by queueStreamingUpdate). This can cause concurrent/out-of-order update requests relative to queued partial/block updates. Consider routing these updates through the same queue to guarantee serialized updates to the Feishu API.

Suggested change:

```diff
-if (streaming?.isActive()) {
-  await streaming.update(streamText);
-} else {
-  startStreaming();
-  if (streamingStartPromise) {
-    await streamingStartPromise;
-  }
-  if (streaming?.isActive()) {
-    await streaming.update(streamText);
-  }
+// Ensure streaming is started before enqueueing the update
+startStreaming();
+if (streamingStartPromise) {
+  await streamingStartPromise;
+}
+if (streaming?.isActive()) {
+  // Route updates through the same queue used for partial/block updates
+  queueStreamingUpdate(streamText, { mode: "replace" });
```

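The serialized-queue idea in the suggestion above can be sketched as a simple promise chain. This is illustrative; `createUpdateQueue` is a hypothetical name, not the dispatcher's actual `partialUpdateQueue`:

```typescript
// Each enqueued update awaits the previous one, so calls to the (simulated)
// card-update API are applied strictly in submission order.
function createUpdateQueue() {
  let tail: Promise<void> = Promise.resolve();
  const applied: string[] = [];
  return {
    enqueue(text: string): Promise<void> {
      tail = tail.then(async () => {
        await Promise.resolve(); // stands in for the network round-trip
        applied.push(text);
      });
      return tail;
    },
    applied, // exposed for inspection in this sketch
  };
}
```

Routing final updates through the same chain as partial/block updates removes the possibility of out-of-order writes to the card.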
Comment on lines +321 to +349
```typescript
if (info?.kind === "final") {
  // Thinking-model agents (e.g. kimi-k2.5) emit one "final" deliver
  // per tool-call round, not just at the end. We defer all finals and
  // only send the last one when onIdle fires. Each intermediate final
  // still updates the streaming card via onPartialReply so the user
  // sees live progress.
  if (streamingEnabled && useCard) {
    // Keep streaming card up-to-date with latest text
    streamText = mergeStreamingText(streamText, text);
    if (streaming?.isActive()) {
      await streaming.update(streamText);
    } else {
      startStreaming();
      if (streamingStartPromise) {
        await streamingStartPromise;
      }
      if (streaming?.isActive()) {
        await streaming.update(streamText);
      }
    }
    // Stash as pending — onIdle will close streaming and actually deliver
    pendingFinalPayload = payload;
    pendingFinalMediaList = mediaList;
    return;
  }
  // Non-streaming path: defer to onIdle as well to avoid duplicates
  pendingFinalPayload = payload;
  pendingFinalMediaList = mediaList;
  return;
```

Copilot AI Mar 20, 2026


All final payloads are deferred to onIdle even when streaming is disabled. That changes the delivery contract (deliver() no longer sends the final immediately), and if onIdle is not reached (e.g. error path), the final can be dropped. If the intent is only to suppress duplicate finals for streaming card mode, consider limiting deferral to that case, or ensure deferred finals are flushed in onError as well.

```typescript
onError: async (error, info) => {
  params.runtime.error?.(
    `feishu[${account.accountId}] ${info.kind} reply failed: ${String(error)}`,
  );
```

Copilot AI Mar 20, 2026


onError closes the streaming card but does not flush pendingFinalPayload. If an error occurs after a thinking model has emitted a deferred final, the user may receive no final message at all. Consider flushing the pending final (best-effort) in onError before/after closeStreaming().

Suggested change:

```diff
 );
+// Best-effort flush of any deferred final payload so the user still sees it.
+if (pendingFinalPayload) {
+  try {
+    const finalPayload = pendingFinalPayload;
+    const finalMediaList = pendingFinalMediaList;
+    pendingFinalPayload = null;
+    pendingFinalMediaList = [];
+    const text = finalPayload.text ?? "";
+    const hasText = Boolean(text.trim());
+    const useCard =
+      renderMode === "card" || (renderMode === "auto" && shouldUseCard(text));
+    if (hasText) {
+      if (streamingEnabled && useCard) {
+        // streaming.update was already called in deliver; just merge and let closeStreaming() finish.
+        if (streaming?.isActive()) {
+          streamText = mergeStreamingText(streamText, text);
+        }
+        deliveredFinalTexts.add(text);
+      } else {
+        await sendChunkedTextReply({ text, useCard, infoKind: "final" });
+        deliveredFinalTexts.add(text);
+      }
+    }
+    for (const mediaUrl of finalMediaList) {
+      await sendMediaFeishu({
+        cfg,
+        to: chatId,
+        mediaUrl,
+        replyToMessageId: sendReplyToMessageId,
+        replyInThread: effectiveReplyInThread,
+        accountId,
+      });
+    }
+  } catch (flushError) {
+    params.runtime.error?.(
+      `feishu[${account.accountId}] error while flushing pending final on error: ${String(
+        flushError,
+      )}`,
+    );
+  }
+}
```

Comment on lines 449 to 466
```typescript
return {
  dispatcher,
  replyOptions: {
    ...replyOptions,
    onModelSelected: prefixContext.onModelSelected,
    disableBlockStreaming: true,
    onPartialReply: streamingEnabled
      ? (payload: ReplyPayload) => {
          if (!payload.text) {
            return;
          }
          queueStreamingUpdate(payload.text, {
            dedupeWithLastPartial: true,
            mode: "snapshot",
          });
        }
      : undefined,
    onReasoningStream: streamingEnabled
      ? (payload: ReplyPayload) => {
          if (!payload.text) {
            return;
          }
          startStreaming();
          queueReasoningUpdate(payload.text);
        }
      : undefined,
    onReasoningEnd: streamingEnabled ? () => {} : undefined,
  },
```

Copilot AI Mar 20, 2026


replyOptions no longer provides onReasoningStream / onReasoningEnd when streaming is enabled. Existing Feishu behavior/tests expect reasoning to be streamed into the card, and the PR description claims reasoning/block/partial paths are unchanged. Either restore the reasoning callbacks (and formatting) or update the PR description + tests to reflect the new behavior.

@vincentkoc added the `dedupe:child` (Duplicate issue/PR child in dedupe cluster) and `close:duplicate` (Closed as duplicate) labels Apr 25, 2026
@vincentkoc vincentkoc self-assigned this Apr 25, 2026
@vincentkoc
Member

I’m closing this as covered by #71523. The duplicate-card root cause is handled by deferring the final card close until idle and keeping only one live streaming card per turn, while preserving fallback delivery when streaming startup fails.

If a thinking-model turn still emits multiple Feishu cards on current main, share the final-deliver sequence and I’ll re-check it.

@vincentkoc vincentkoc closed this Apr 25, 2026