fix(feishu): finish streaming card closeout #71542
Conversation
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
## 🔒 Aisle Security Analysis

We found 3 potential security issue(s) in this PR:
### 1. 🟡 Reasoning/ tags may leak via final or block streaming card updates (Feishu)

**Description**

In the Feishu reply dispatcher, partial snapshot updates are cleaned with `stripReasoningTagsFromText`, but the `final` and `block` update paths are not. Affected flows: if an upstream runtime/model emits reasoning tags in a `final` or `block` update, they reach the rendered card unstripped.

**Vulnerable code (final path):**

```ts
if (info?.kind === "final") {
  streamText = text;
  snapshotBaseText = "";
  lastSnapshotTextLength = text.length;
  flushStreamingCardUpdate(buildCombinedStreamText(reasoningText, streamText));
}
```

**Recommendation**

Apply `stripReasoningTagsFromText` to the `block` and `final` paths as well. Example fix:

```ts
const cleaned = stripReasoningTagsFromText(text, { mode: "strict", trim: "both" });
if (!cleaned) return;
if (info?.kind === "block") {
  queueStreamingUpdate(cleaned, { mode: "delta", dedupeWithLastPartial: true });
}
if (info?.kind === "final") {
  streamText = cleaned;
  snapshotBaseText = "";
  lastSnapshotTextLength = cleaned.length;
  flushStreamingCardUpdate(buildCombinedStreamText(reasoningText, streamText));
}
```

Alternatively, enforce (and document) a strong upstream invariant that reasoning tags are already stripped before `final`/`block` updates reach the dispatcher.

### 2. 🟡 Feishu streaming card markdown/mention injection via unescaped tool status line
**Description**

The streaming status line appended to Feishu card markdown is built from `payload.name ?? payload.phase`, which is interpolated without escaping.

**Vulnerable code:**

```ts
updateStreamingStatusLine(
  `🔧 **Using: ${payload.name ?? payload.phase ?? "tool"}...**`,
);
```

**Recommendation**

Treat tool/phase labels as untrusted when rendering into Feishu card markdown. Options:
```ts
function escapeLarkMdInline(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/[\r\n]+/g, " ")
    .trim();
}

const toolLabel = escapeLarkMdInline(payload.name ?? payload.phase ?? "tool");
updateStreamingStatusLine(`🔧 **Using: ${toolLabel}...**`);
```
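To illustrate the escaping approach, here is a standalone sketch (not from the PR; the `<at …>` mention syntax is shown only as an example of hostile input) showing how a malicious tool label is neutralized:

```typescript
// Standalone copy of the escape helper for demonstration purposes.
function escapeLarkMdInline(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/[\r\n]+/g, " ")
    .trim();
}

// A hostile "tool name" carrying an angle-bracket mention and a line break:
const hostile = "**bold**\n<at id=all></at>";
console.log(escapeLarkMdInline(hostile));
// Angle brackets become entities and the newline collapses to a space,
// so the label can no longer break out of the status line.
```

Note that this only neutralizes angle brackets and line breaks; markdown emphasis markers such as `**` still pass through, which is why an allowlist may be the stronger option.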
Also consider an allowlist for tool labels (e.g., …).

### 3. 🟡 Unbounded streaming buffer growth in Feishu reply dispatcher (`snapshotBaseText`) can cause memory/CPU DoS
**Description**

The Feishu streaming reply path maintains `streamText` and `snapshotBaseText` buffers that grow without bound as updates arrive.
**Vulnerable code:**

```ts
const currentSnapshotText = snapshotBaseText
  ? streamText.slice(snapshotBaseText.length)
  : streamText;
...
if (startsNewSnapshotBlock) {
  snapshotBaseText = streamText;
  streamText = `${snapshotBaseText}${nextText}`;
} else {
  streamText = `${snapshotBaseText}${mergeStreamingText(currentSnapshotText, nextText)}`;
}
```

**Recommendation**

Introduce hard limits and/or trimming on streaming buffers, and avoid repeated full-string concatenations. Suggested mitigations:
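One way to avoid the repeated full-string concatenation (a hedged sketch of the general technique; `StreamBuffer` and its methods are illustrative names, not code from this PR) is to accumulate chunks in an array and join only when flushing, so each append costs O(chunk) rather than re-copying the whole buffer:

```typescript
// Hypothetical chunked stream buffer with a hard character cap.
class StreamBuffer {
  private chunks: string[] = [];
  private length = 0;

  constructor(private readonly maxChars = 50_000) {}

  append(chunk: string): void {
    this.chunks.push(chunk);
    this.length += chunk.length;
    // Drop the oldest chunks once past the cap, keeping recent output.
    while (this.length > this.maxChars && this.chunks.length > 1) {
      this.length -= this.chunks[0].length;
      this.chunks.shift();
    }
  }

  // Join lazily, only when a card update is actually flushed.
  toText(): string {
    const joined = this.chunks.join("");
    this.chunks = [joined]; // collapse so repeated reads stay cheap
    this.length = joined.length;
    return joined.slice(Math.max(0, joined.length - this.maxChars));
  }
}
```

This keeps append cost proportional to the incoming chunk and bounds total memory, at the price of occasionally losing the oldest output.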
Example (simple cap):

```ts
const MAX_STREAM_CHARS = 50_000;

function clampStream(text: string): string {
  if (text.length <= MAX_STREAM_CHARS) return text;
  // Keep the tail to preserve the most recent output
  return text.slice(text.length - MAX_STREAM_CHARS);
}

// After updating streamText:
streamText = clampStream(streamText);
// If you clamp, also adjust snapshotBaseText accordingly or reset it:
if (snapshotBaseText.length > streamText.length) snapshotBaseText = "";
```

Also consider dropping …

Analyzed PR: #71542 at commit … Last updated on: 2026-04-25T10:54:12Z
## Greptile Summary

This PR completes the Feishu streaming-card closeout by stripping leaked …

**Confidence Score: 4/5**

Safe to merge; the logic is well-tested and the two findings are narrow edge-case quality concerns. No P0 or P1 issues found. Two P2 items: the typing-indicator reaction can persist if `streaming.close()` throws (a pre-existing pattern, now slightly more reachable), and the multi-reset snapshot-base guard checks only the current slice. Both are unlikely to affect normal usage, and the overall change is a clear improvement over the previous state. No files require special attention beyond the two P2 notes in `reply-dispatcher.ts`.
```ts
} else {
  const currentSnapshotText = snapshotBaseText
    ? streamText.slice(snapshotBaseText.length)
    : streamText;
  const startsNewSnapshotBlock =
    lastSnapshotTextLength >= 20 &&
    nextText.length < lastSnapshotTextLength * 0.5 &&
    !currentSnapshotText.includes(nextText);
  if (startsNewSnapshotBlock) {
    snapshotBaseText = streamText;
    streamText = `${snapshotBaseText}${nextText}`;
  } else {
    streamText = `${snapshotBaseText}${mergeStreamingText(currentSnapshotText, nextText)}`;
  }
  lastSnapshotTextLength = nextText.length;
}
```
**Snapshot-block heuristic only checks `currentSnapshotText`, not the accumulated base**

The `!currentSnapshotText.includes(nextText)` guard looks only at the *current block's* slice (`streamText.slice(snapshotBaseText.length)`). If there have already been multiple block resets, `snapshotBaseText` can itself be several blocks long. A new snapshot that happens to duplicate a phrase that appears only in `snapshotBaseText` (not the current block) will not be detected as a repeat, which is fine — but if `mergeStreamingText` on the current block returns an inflated string that *does* include it, the base grows incorrectly.
This is a narrow edge case (requires a repeated phrase spanning multiple inter-tool resets), but worth noting if longer multi-tool conversations exhibit duplicated content in streaming cards.
**Prompt To Fix With AI**
This is a comment left during a code review.
Path: extensions/feishu/src/reply-dispatcher.ts
Line: 290-305
How can I resolve this? If you propose a fix, please make it concise.
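One minimal direction (a sketch under the reviewer's framing, not the maintainers' chosen fix) is to test the repeat against the full accumulated buffer rather than only the current block's slice:

```typescript
// Hypothetical standalone version of the guard; variable names mirror the diff.
function startsNewSnapshotBlock(
  streamText: string, // full accumulated text, including snapshotBaseText
  lastSnapshotTextLength: number,
  nextText: string,
): boolean {
  return (
    lastSnapshotTextLength >= 20 &&
    nextText.length < lastSnapshotTextLength * 0.5 &&
    // Check the whole buffer, not just the current slice, so a phrase that
    // appears only in an earlier block is still treated as a repeat.
    !streamText.includes(nextText)
  );
}
```

The trade-off is a slightly more expensive substring scan on each update, bounded if the buffers are capped as suggested in the security analysis.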
**Summary**

Finishes the Feishu streaming card closeout in the main card header, shows transient tool/compaction status, and resets streaming state even when `close` fails. This is the canonical closeout PR.
Supersedes:
Thanks @sesame437, @Vicky-v7, @maoku-family, @Pengxiao-Wang, and @Maple778.
**Test plan**

Note: `check:changed` passed before the final rebase. After rebasing again onto a moving `origin/main`, the focused Feishu tests were blocked behind another worktree's heavy-check lock for several minutes; the branch is now clean and GitHub CI is the final gate.