chat: enhance final response rendering with pinning logic and repositioning#293597
justschen merged 1 commit into microsoft:main
Conversation
Pull request overview
This PR addresses a VS Code Chat rendering issue where the final response content can end up nested inside the "thinking" container, leading to collapsed or cut-off output. It fixes this by forcing a re-render when the final answer markdown is detected inside the thinking DOM, and by skipping pinning behavior for that final answer part.
Changes:
- Detect the “final answer” markdown part during progressive rendering and avoid pinning/combining it into thinking content.
- Update diffing to force a re-render of the final answer markdown if its rendered DOM is currently inside a `.chat-thinking-box`.
- Add a DOM helper to detect whether an existing rendered part is nested under the thinking container.
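The DOM helper in the last bullet can be sketched roughly as follows. The name and node interface here are illustrative, not taken from the PR; real VS Code code would operate on `HTMLElement` and could use `element.closest('.chat-thinking-box')` directly:

```typescript
// Minimal structural stand-in for a DOM node, so the walk is testable
// outside a browser. In the actual renderer this would be HTMLElement.
interface DomNodeLike {
	classList: { contains(cls: string): boolean };
	parentElement: DomNodeLike | null;
}

// Hypothetical helper: walk up from the rendered part's root element and
// report whether any ancestor is the thinking container.
function isNestedInThinkingContainer(node: DomNodeLike): boolean {
	for (let cur: DomNodeLike | null = node; cur; cur = cur.parentElement) {
		if (cur.classList.contains('chat-thinking-box')) {
			return true;
		}
	}
	return false;
}
```

When this returns `true` for the final answer's rendered DOM, the diff logic treats the part as changed so it gets re-rendered outside the thinking box.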
```ts
private renderChatContentDiff(partsToRender: ReadonlyArray<IChatRendererContent | null>, contentForThisTurn: ReadonlyArray<IChatRendererContent>, element: IChatResponseViewModel, elementIndex: number, templateData: IChatListItemTemplate): void {
	const renderedParts = templateData.renderedParts ?? [];
	templateData.renderedParts = renderedParts;
	const lastMarkdownIndex = partsToRender.findLastIndex(part => part?.kind === 'markdownContent');
```
lastMarkdownIndex is computed from partsToRender, which is the diff array containing many null entries. When the final markdown part is already rendered (so it’s null in the diff) but an earlier markdown part updates in this tick, this will incorrectly treat that earlier part as the “final answer” and disable pinning/combining for it. Compute the last markdown index from the full contentForThisTurn (or pass the lastMarkdownContentIndex from diff(...)) so the final-answer check is stable across ticks.
Suggested change:

```diff
- const lastMarkdownIndex = partsToRender.findLastIndex(part => part?.kind === 'markdownContent');
+ const lastMarkdownIndex = contentForThisTurn.findLastIndex(part => part.kind === 'markdownContent');
```
```ts
const elementIsComplete = isResponseVM(element) && element.isComplete;
const lastMarkdownContentIndex = contentToRender.findLastIndex(part => part.kind === 'markdownContent');
for (let i = 0; i < contentToRender.length; i++) {
	const content = contentToRender[i];
	const renderedPart = renderedParts[i];
	const isFinalAnswerPart = content.kind === 'markdownContent' && i === lastMarkdownContentIndex && elementIsComplete;
```
There is now duplicated "final answer" detection logic (`findLastIndex` + `isFinalAnswerPart`) in both `renderChatContentDiff` and `diff`. This increases the risk of the two drifting apart and producing different pin/detach behavior. Consider extracting a small helper that derives the final-answer markdown index from the full content, and reuse it in both places.
cb51e64 to 324a1ca
justschen left a comment:
thanks for taking a look!
This PR ensures the final answer part is properly detached from the thinking container by forcing a re-render when it was previously nested within the thought process. Fixes #291423