Conversation
Add Cursor IDE as the 7th supported platform. Parses agent-transcript JSONL files from `~/.cursor/projects/*/agent-transcripts/`, extracts conversation history, tool usage (Anthropic-style `tool_use`/`tool_result` blocks), and thinking/reasoning highlights.

- New parser: `src/parsers/cursor.ts`
- Strips `<user_query>` tags from Cursor's wrapped user messages
- Derives working directory from the project slug path
- Full integration: types, index, resume, CLI quick-command, colors
- Test fixtures and 42 cross-tool conversion paths (7×6) all passing
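The `<user_query>` unwrapping mentioned above can be sketched as follows. The helper name and regex here are assumptions for illustration, not the actual code in `src/parsers/cursor.ts`:

```typescript
// Illustrative sketch only. Cursor wraps user turns as
// <user_query>…</user_query>; the parser keeps the inner text.
export function stripUserQueryTags(raw: string): string {
  const match = raw.match(/<user_query>([\s\S]*?)<\/user_query>/);
  return (match ? match[1] : raw).trim();
}
```

Messages without the wrapper pass through unchanged, so the same helper works for both wrapped and plain user turns.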
📝 Walkthrough

This PR introduces Cursor AI as a new session source by adding a comprehensive transcript parser, extending type definitions, integrating CLI commands, and updating test coverage to support Cursor agent-transcript discovery, parsing, context extraction, and cross-tool handoffs.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI Interface
    participant Discover as File Discovery
    participant Parser as Session Parser
    participant Extractor as Context Extractor
    participant Resume as Resume Handler
    participant CursorApp as Cursor App

    User->>CLI: cursor [n] or pick cursor
    CLI->>Discover: findTranscriptFiles(~/.cursor/projects)
    Discover->>Discover: traverse dirs, collect *.jsonl
    Discover-->>Parser: file paths
    Parser->>Parser: readAllLines, parse metadata
    Parser->>Parser: extract summary from first user message
    Parser-->>CLI: UnifiedSession[]
    CLI->>User: display sessions (sorted by updatedAt)
    User->>CLI: select session
    CLI->>Extractor: extractCursorContext(session)
    Extractor->>Extractor: read lines, collect user/assistant messages
    Extractor->>Extractor: extractToolData (tool_use/tool_result blocks)
    Extractor->>Extractor: build tool summaries (grep, read/write files)
    Extractor->>Extractor: generateHandoffMarkdown
    Extractor-->>Resume: SessionContext (recentMessages, filesModified, toolSummaries)
    Resume->>CursorApp: launch via Cursor CLI with cwd
    CursorApp-->>User: resume session in Cursor
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/__tests__/conversions.test.ts (2)
Lines 112-113: ⚠️ Potential issue | 🟡 Minor — Stale test descriptions: counts don't match the actual source count.

What breaks? Nothing functionally — the assertions use `ALL_SOURCES.length` dynamically. But the test output will say "20 paths" / "all 5 sources" / "all 6 sources" when there are actually 42 paths and 7 sources. Misleading for anyone debugging test failures.

Blast radius? Test readability only.

✏️ Fix stale descriptions

```diff
-describe('Cross-Tool Conversions (20 paths)', () => {
+describe('Cross-Tool Conversions (42 paths)', () => {

-  it('all 5 sources produce different session IDs', () => {
+  it('all sources produce different session IDs', () => {

-  it('all 6 sources produce markdown with correct source attribution', () => {
+  it('all sources produce markdown with correct source attribution', () => {
```

Using dynamic phrasing like "all sources" avoids this going stale again when an 8th tool is added.
Also applies to: 166-166, 177-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/conversions.test.ts` around lines 112 - 113, The test descriptions are hard-coded ("Cross-Tool Conversions (20 paths)" and similar) and can become stale; update the describe/it titles in src/__tests__/conversions.test.ts (e.g., the describe that currently reads "Cross-Tool Conversions (20 paths)") to use dynamic phrasing instead (e.g., "Cross-Tool Conversions — all sources" or include ALL_SOURCES.length programmatically) so the output reflects the actual ALL_SOURCES.length/number of paths; also adjust the other stale titles referenced (around the blocks that use ALL_SOURCES.length at the identified spots) to use the same dynamic or generic wording.
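The dynamic-title idea from this comment can be sketched as below. The `ALL_SOURCES` literal is assumed from the review's description of 7 sources, not copied from the test file:

```typescript
// Derive counts from the source list so titles can't go stale again.
const ALL_SOURCES = ['claude', 'gemini', 'codex', 'opencode', 'copilot', 'droid', 'cursor'];
// Each source converts to every other source: 7 × 6 = 42 paths.
const pathCount = ALL_SOURCES.length * (ALL_SOURCES.length - 1);

// e.g. describe(`Cross-Tool Conversions (${pathCount} paths)`, () => { … })
const title = `Cross-Tool Conversions (${pathCount} paths)`;
```

Adding an 8th tool would then update the title to 56 paths automatically.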
Lines 84-89: ⚠️ Potential issue | 🟠 Major — Tests assert strings that don't exist in the generated markdown; they will fail.

What breaks? Line 86 asserts `expect(md).toContain('## Original Session')` but `generateHandoffMarkdown` generates `'## Session Overview'` (line 32 in markdown.ts). Line 89 asserts `expect(md).toContain('Continue this session')` but the actual output is `'**You are continuing this session. Pick up exactly where it left off — review the conversation above, check pending tasks, and keep going.**'` (line 126 in markdown.ts). The exact string `'Continue this session'` never appears.

Blast radius? This breaks the entire `validateConversion` helper, which runs against every source in the test. Any dev with sessions on disk will see consistent failures for all sources tested.

Startup-critical? CI passes only because no real sessions exist there. Locally, this fails immediately for any developer with prior sessions, blocking local development workflows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/conversions.test.ts` around lines 84 - 89, The test assertions in conversions.test.ts expect strings that don't match the actual output from generateHandoffMarkdown (e.g., the test expects '## Original Session' and 'Continue this session' while generateHandoffMarkdown emits '## Session Overview' and the full sentence '**You are continuing this session. Pick up exactly where it left off — review the conversation above, check pending tasks, and keep going.**'); update the assertions in the test (or the validateConversion helper) to assert against the real strings produced by generateHandoffMarkdown (use the exact header '## Session Overview' and either the full sentence or a stable substring like 'You are continuing this session.' or a regex match) so the test matches the actual output of generateHandoffMarkdown.

src/__tests__/unit-conversions.test.ts (1)
Lines 1-5: ⚠️ Potential issue | 🟡 Minor — Stale comments: "30 paths" and "20 paths" should be "42 paths".
Line 2 says "30 cross-tool conversion paths", line 593 says "All 20 Conversion Path Tests" — both are stale after adding the 7th source (7×6 = 42).
Fix
```diff
-/**
- * Fixture-based unit tests for all 30 cross-tool conversion paths.
+/**
+ * Fixture-based unit tests for all 42 cross-tool conversion paths.

-// ─── All 20 Conversion Path Tests ──────────────────────────────────────────
+// ─── All 42 Conversion Path Tests ──────────────────────────────────────────
```

Also applies to: 593-595
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/unit-conversions.test.ts` around lines 1 - 5, Update the stale numeric counts in the test file comments to reflect 42 conversion paths: change the top comment that reads "30 cross-tool conversion paths" to "42 cross-tool conversion paths" and update the section header/comment that reads "All 20 Conversion Path Tests" (and the nearby "20 paths" mention) to "All 42 Conversion Path Tests" so the descriptive text matches the current 7-source (7×6=42) fixtures; locate these strings in src/__tests__/unit-conversions.test.ts (top comment block and the section header around the conversion path tests) and replace the numeric literals accordingly.src/__tests__/e2e-conversions.test.ts (3)
Line 177: ⚠️ Potential issue | 🟡 Minor — Stale comments: "all 5 sources" and "20 Cross-Tool" should reflect 7 sources / 42 paths.
```diff
-// ─── Test: handoff markdowns were extracted from all 5 sources ──────────────
+// ─── Test: handoff markdowns were extracted from all 7 sources ──────────────

-// ─── Test: All 20 conversion paths via live CLI injection ──────────────────
+// ─── Test: All 42 conversion paths via live CLI injection ──────────────────

-describe('E2E: 20 Cross-Tool Conversion Paths', () => {
+describe('E2E: 42 Cross-Tool Conversion Paths', () => {
```

Also applies to: 195-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e-conversions.test.ts` at line 177, Update the stale test comment strings: find the comment containing "handoff markdowns were extracted from all 5 sources" and change it to "handoff markdowns were extracted from all 7 sources", and update any occurrence of "20 Cross-Tool" (or similar "20" path counts) to reflect "42" (e.g., "42 Cross-Tool" or "42 paths"); there are multiple occurrences of these phrases in the e2e test file—update all instances to match the new 7 sources / 42 paths count so the test descriptions are accurate.
Lines 96-115: ⚠️ Potential issue | 🟠 Major — Missing `cursor` case in `runTool`; E2E tests will throw for cursor-as-target.

What breaks? The `switch` in `runTool` has no `'cursor'` case. When any E2E test routes to cursor as the target tool (e.g., `claude → cursor`), line 114 throws `Error('Unknown tool: cursor')`. The test catches it and records `ERROR: Unknown tool: cursor`, then the assertion at line 235 (`expect(output).not.toMatch(/^ERROR: Command failed/)`) might still pass since the message says "Unknown tool" not "Command failed" — but line 249's acknowledgment check will definitely fail.

Blast radius? 6 E2E test paths (every `X → cursor` conversion). They'll be marked as failures or produce confusing results.

Other locations? Only this file; the non-test `crossToolResume` in `resume.ts` does handle cursor.

Startup-critical? E2E test reliability for the new feature. Not prod-breaking, but it undermines confidence in the cursor integration.

Suggested fix

```diff
 case 'copilot':
   // Copilot doesn't have a clean print mode; use -i with stdin
   cmd = `cat "${tmpFile}" | copilot -i "$(cat ${tmpFile} | head -c 4000)" --no-ask-user 2>&1 | head -100`;
   break;
+case 'cursor':
+  // Cursor doesn't accept prompts; skip E2E injection test
+  return 'HANDOFF_RECEIVED — Cursor does not support non-interactive prompt injection. Context saved to .continues-handoff.md.';
 default:
   throw new Error(`Unknown tool: ${tool}`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e-conversions.test.ts` around lines 96 - 115, The switch inside runTool in the e2e test is missing a 'cursor' case causing Unknown tool: cursor errors; add a case 'cursor' in that switch (in src/__tests__/e2e-conversions.test.ts, inside runTool) that sets cmd to invoke the cursor CLI reading the tmpFile (similar pattern to 'opencode'/'copilot' — pipe or redirect tmpFile into the cursor runner), include the same stderr/stdout handling/trimming used by other cases so output formatting matches tests, and ensure the default error remains for truly unknown tools.
Lines 96-115: ⚠️ Potential issue | 🟠 Major — Missing `droid` and `cursor` cases in the `runTool` function will crash E2E tests.

`droid` and `cursor` are both listed in `ALL_SOURCES` (line 28) and have corresponding parser/extractor entries (lines 30-48), but the `runTool` switch statement (lines 96-115) only handles `claude`, `gemini`, `codex`, `opencode`, and `copilot`. Any E2E test that tries to run these tools will hit the default case and throw `Unknown tool: droid` or `Unknown tool: cursor`.

The test's `beforeAll` hook iterates over `ALL_SOURCES` and will successfully parse/extract sessions for all tools, but execution fails when attempting to invoke them via `runTool`. This blocks verification that droid and cursor integration actually works.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e-conversions.test.ts` around lines 96 - 115, The runTool switch in the test is missing handlers for 'droid' and 'cursor', causing Unknown tool errors when ALL_SOURCES includes them; update the runTool function to add cases for 'droid' and 'cursor' (alongside existing 'claude','gemini','codex','opencode','copilot') and provide appropriate shell commands for each tool invocation (mirroring the pattern used for other tools) so tests can execute parsed sessions for droid and cursor without hitting the default error.
🧹 Nitpick comments (2)
src/parsers/cursor.ts (2)
Lines 68-101: `findTranscriptFiles` uses synchronous `readdirSync`/`existsSync` inside an `async` function.

What breaks? With many projects or slow filesystems (network mounts), synchronous I/O blocks the event loop during discovery. It won't crash, but it will stall the CLI.

Blast radius? Only during `buildIndex` — called once per 5-minute TTL. For most users this is a handful of dirs.

Startup-critical? Not a blocker — other parsers likely have the same pattern. Just flagging for awareness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parsers/cursor.ts` around lines 68 - 101, findTranscriptFiles is performing synchronous filesystem calls (fs.existsSync, fs.readdirSync) inside an async function which blocks the event loop; replace the sync calls with the async fs.promises APIs (e.g., fs.promises.readdir with { withFileTypes: true } and fs.promises.stat or fs.promises.access) and await them, preserving the same directory checks for CURSOR_PROJECTS_DIR, projectDirs, transcriptsDir and sessionDirs and keeping the same skip-on-error behavior (try/catch). Update any uses of existsSync to use a non-blocking pattern (await stat/access with try/catch) and ensure files.push still receives the jsonlPath when the async existence check succeeds; this prevents blocking during buildIndex while keeping logic unchanged.
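A minimal sketch of the non-blocking variant the prompt describes, assuming the layout from this review (`<projects>/<slug>/agent-transcripts/*.jsonl`). It is not the project's actual implementation:

```typescript
import { readdir } from 'node:fs/promises';
import * as path from 'node:path';

// Async directory walk: no readdirSync/existsSync, so the event loop stays free.
async function findTranscriptFiles(projectsDir: string): Promise<string[]> {
  const files: string[] = [];
  let projectDirs: string[] = [];
  try {
    projectDirs = (await readdir(projectsDir, { withFileTypes: true }))
      .filter((d) => d.isDirectory())
      .map((d) => d.name);
  } catch {
    return files; // projects dir missing: same skip-on-error behavior as before
  }
  for (const project of projectDirs) {
    const transcriptsDir = path.join(projectsDir, project, 'agent-transcripts');
    try {
      for (const entry of await readdir(transcriptsDir)) {
        if (entry.endsWith('.jsonl')) files.push(path.join(transcriptsDir, entry));
      }
    } catch {
      // no agent-transcripts dir for this project: skip it
    }
  }
  return files;
}
```

Awaiting `readdir` and catching the rejection replaces the `existsSync` pre-check, which also avoids the check-then-read race.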
Lines 306-307: Overly broad MCP detection: `name.includes('-')` matches non-MCP tools.

What breaks? Tool names like `"web-search"`, `"read-file"`, or any hyphenated name that isn't MCP gets routed through `mcpSummary` instead of its proper handler. The summary output will look wrong (MCP format wrapping a non-MCP tool).

Blast radius? Any Cursor tool with a hyphen that isn't already matched by an earlier branch. Since the earlier branches use exact string matching (e.g., `'web_search'`), a hypothetical `'web-search'` variant would fall through to MCP.

Startup-critical? Low severity — cosmetic summary corruption, not data loss. But it degrades the quality of handoff context.

♻️ Tighten MCP heuristic

```diff
- } else if (name.startsWith('mcp__') || name.includes('___') || name.includes('-')) {
+ } else if (name.startsWith('mcp__') || name.includes('___')) {
```

If you still want to catch MCP tools with hyphens (e.g. `github-mcp-server-list_issues`), consider a more specific pattern like checking for multiple hyphens or a known MCP prefix rather than any single hyphen.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parsers/cursor.ts` around lines 306 - 307, The current MCP detection branch in src/parsers/cursor.ts uses name.includes('-') which wrongly routes hyphenated non‑MCP tools into mcpSummary; update the condition that calls collector.add(..., mcpSummary(...)) to stop using a blanket hyphen check and instead use a stricter MCP heuristic (e.g., require an explicit MCP prefix/token such as "mcp" or "mcp__" or a pattern that indicates an MCP tool like multiple hyphens/underscores) so only true MCP tool names hit mcpSummary; adjust the condition around the variables name, input and result that wraps the collector.add call to use the new pattern.
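The tightened heuristic can be expressed as a small predicate. The function name is illustrative, not from the codebase:

```typescript
// Treat a tool name as MCP only on explicit MCP markers, never on a bare hyphen.
function looksLikeMcpTool(name: string): boolean {
  return name.startsWith('mcp__') || name.includes('___');
}
```

With this predicate, `mcp__github__list_issues` is still classified as MCP while a hypothetical `web-search` tool falls through to its proper handler.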
```typescript
async function readAllLines(filePath: string): Promise<CursorTranscriptLine[]> {
  return new Promise((resolve) => {
    const lines: CursorTranscriptLine[] = [];
    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });

    rl.on('line', (line) => {
      try {
        lines.push(JSON.parse(line) as CursorTranscriptLine);
      } catch {
        // Skip invalid lines
      }
    });

    rl.on('close', () => resolve(lines));
    rl.on('error', () => resolve(lines));
  });
}
```
🛠️ Refactor suggestion | 🟠 Major
`readAllLines` loads the entire transcript into memory — violates the streaming guideline.

- What breaks? Large Cursor agent-transcripts (heavy tool-use sessions can be 10k+ lines, multi-MB) get fully buffered in an array. In a `buildIndex` + `extractContext` flow, multiple sessions could spike heap.
- Blast radius? Every `extractCursorContext` call goes through this. If a user has long-running Cursor sessions, this is the bottleneck.
- Other locations? `parseSessionInfo` already does it right with streaming readline. The same pattern should be applied here — process tool data and messages in a single streaming pass.
- Startup-critical? Yes — users with beefy Cursor transcripts will hit this in the happy path of `continues show` / `continues resume`.

As per coding guidelines: "Stream JSONL files with readline.createInterface to avoid loading entire files into memory when parsing sessions."
♻️ Sketch: stream-based approach for extractCursorContext
Instead of readAllLines → two-pass extractToolData → single-pass message extraction, do a single streaming pass that accumulates tool results, tool summaries, messages, and session notes incrementally. The toolResultMap first-pass can be folded in by deferring tool_use summary finalization until the corresponding tool_result arrives (or at stream close).
```diff
-async function readAllLines(filePath: string): Promise<CursorTranscriptLine[]> {
-  return new Promise((resolve) => {
-    const lines: CursorTranscriptLine[] = [];
-    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
-    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
-
-    rl.on('line', (line) => {
-      try {
-        lines.push(JSON.parse(line) as CursorTranscriptLine);
-      } catch {
-        // Skip invalid lines
-      }
-    });
-
-    rl.on('close', () => resolve(lines));
-    rl.on('error', () => resolve(lines));
-  });
-}
+// Replace with a streaming approach inside extractCursorContext that
+// processes each line as it arrives, accumulating collector/messages/notes
+// without retaining every parsed line in memory.
```

🤖 Prompt for AI Agents
+// without retaining every parsed line in memory.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/parsers/cursor.ts` around lines 232 - 249, readAllLines currently buffers
the entire JSONL transcript into memory; refactor it (and callers like
extractCursorContext) to a single-pass streaming parser using
readline.createInterface over the file stream (same pattern as parseSessionInfo)
so you never push all lines into an array. While streaming, parse each JSON
line, incrementally build messages, accumulate toolResultMap and tool summaries
and defer finalizing tool_use summaries until their corresponding tool_result
arrives (or finalize at stream close), and resolve the promise on 'close' or
'error'; ensure function name readAllLines is replaced/rewritten to emit or
return incremental results compatible with extractCursorContext to avoid
full-file buffering.
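The streaming pattern the prompt asks for can be sketched generically. The `streamJsonl` helper and its callback shape are assumptions for illustration; the real refactor would fold message, `toolResultMap`, and summary accumulation into the `onLine` handler:

```typescript
import * as fs from 'node:fs';
import * as readline from 'node:readline';

// Process each JSONL record as it arrives instead of buffering the whole file.
function streamJsonl<T>(filePath: string, onLine: (record: T) => void): Promise<void> {
  return new Promise((resolve) => {
    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });

    rl.on('line', (line) => {
      try {
        onLine(JSON.parse(line) as T);
      } catch {
        // Skip invalid lines, matching readAllLines' current behavior
      }
    });

    rl.on('close', () => resolve());
    stream.on('error', () => rl.close()); // e.g. file vanished mid-read: finish gracefully
  });
}
```

Memory stays bounded by the accumulated context rather than the transcript size, which is the point of the guideline.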
```typescript
const trimmed = recentMessages.slice(-10);
```
`slice(-10)` doesn't guarantee at least one user message is included.

- What breaks? If the last 10 messages are all assistant messages (e.g., a long chain of tool-use responses), the handoff context has zero user messages — the receiving tool has no idea what the user actually asked.
- Blast radius? Every cross-tool handoff from a Cursor session with a long assistant tool-use tail.
- Other locations? Check how other parsers (Claude, Droid, etc.) handle this — they likely have the same trimming logic with the user-message guarantee.

As per coding guidelines: "Keep only the last ~10 messages in recentMessages for handoff while ensuring at least one user message is included."
🛡️ Proposed fix

```diff
-  const trimmed = recentMessages.slice(-10);
+  let trimmed = recentMessages.slice(-10);
+  if (trimmed.length > 0 && !trimmed.some(m => m.role === 'user')) {
+    const lastUser = recentMessages.findLast(m => m.role === 'user');
+    if (lastUser) trimmed = [lastUser, ...trimmed];
+  }
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
let trimmed = recentMessages.slice(-10);
if (trimmed.length > 0 && !trimmed.some(m => m.role === 'user')) {
  const lastUser = recentMessages.findLast(m => m.role === 'user');
  if (lastUser) trimmed = [lastUser, ...trimmed];
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/parsers/cursor.ts` around lines 375 - 376, The current trimming using
recentMessages.slice(-10) (assigned to trimmed) can produce a window with zero
user messages; change the logic to still keep ~10 most-recent items but ensure
at least one user message is included by checking for a user message in trimmed
and, if none, find the last user-message index in recentMessages (e.g.,
lastIndex where message.role === 'user') and extend the slice start to include
that message (or replace the earliest item in trimmed with that user message) so
that the resulting trimmed array always contains at least one user message while
remaining close to the 10-message limit.
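The guarantee can be sketched as a self-contained helper. A reverse loop stands in for `Array.prototype.findLast` so the sketch also runs on Node versions before 18; the type and function names are illustrative:

```typescript
type Msg = { role: 'user' | 'assistant'; content: string };

// Keep the ~limit newest messages, but pull in the most recent user
// message if the tail window contains none.
function trimWithUserGuarantee(messages: Msg[], limit = 10): Msg[] {
  let trimmed = messages.slice(-limit);
  if (trimmed.length > 0 && !trimmed.some((m) => m.role === 'user')) {
    for (let i = messages.length - 1; i >= 0; i--) {
      if (messages[i].role === 'user') {
        trimmed = [messages[i], ...trimmed];
        break;
      }
    }
  }
  return trimmed;
}
```

The window can end up one message over the limit in the rescue case, which matches the review's "close to the 10-message limit" wording.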
```typescript
case 'cursor':
  // Cursor CLI doesn't accept inline prompts; open the project with handoff file
  await runCommand('cursor', [cwd], cwd);
  break;
```
Cross-tool handoff to Cursor silently drops the prompt context.

- What breaks? When a user does `continues resume <id> --in cursor`, the handoff markdown is built (lines 71-73) but never passed to the Cursor CLI. Other targets get the prompt injected (`claude [prompt]`, `codex [prompt]`, etc.). Cursor just opens the project directory — the user has no idea they should look at `.continues-handoff.md`.
- Blast radius? User-facing: anyone doing cross-tool handoff to Cursor loses the seamless context injection that's the whole point of the tool. The file is saved (lines 64-65) but there's zero indication to the user.
- Other locations? `nativeResume` (lines 42-44) has the same limitation, but that's "resume same tool" so it's less critical.
- Startup-critical? Yes — this is the core feature. A silent data loss on handoff erodes trust.
Consider at minimum logging a message telling the user about the handoff file:

Suggested improvement

```diff
 case 'cursor':
   // Cursor CLI doesn't accept inline prompts; open the project with handoff file
+  console.log(`Handoff context saved to: ${localPath}`);
+  console.log('Open .continues-handoff.md in Cursor to see the session context.');
   await runCommand('cursor', [cwd], cwd);
   break;
```
break;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/resume.ts` around lines 101 - 104, The Cursor handoff branch (case
'cursor' using runCommand) currently only opens the project and drops the
generated handoff markdown (.continues-handoff.md); update the 'cursor' branch
in resume.ts to surface the handoff to the user: either pass the handoff content
to Cursor if its CLI accepts a prompt argument, or at minimum print a clear
message after saving the file that tells the user the handoff file path and next
steps (e.g., "Handoff saved to .continues-handoff.md — open it in Cursor"), and
apply the same messaging change in nativeResume if applicable; reference
runCommand, the 'cursor' case, nativeResume, and the .continues-handoff.md
handoff file when making the change.
lgtm, I was not using codex-agent in my workflow but it seems like the data structure is correct. Thanks for the contribution @Evrim267
Summary
- New Cursor parser (`src/parsers/cursor.ts`) that discovers and parses agent-transcript JSONL files from `~/.cursor/projects/*/agent-transcripts/`
- Extracts conversation history, tool usage (Anthropic-style `tool_use`/`tool_result` blocks), and thinking/reasoning highlights
- Strips `<user_query>` wrapper tags from Cursor's user messages
- Full integration: quick-resume command (`continues cursor`), source colors

Files Changed
- `src/parsers/cursor.ts`
- `src/types/index.ts` — adds `'cursor'` to the `SessionSource` union
- `src/parsers/index.ts`
- `src/utils/index.ts` — `buildIndex()` and `extractContext()`
- `src/utils/markdown.ts` — adds `'Cursor AI'` to `SOURCE_LABELS`
- `src/utils/resume.ts`
- `src/cli.ts` — source color (`blueBright`), quick-resume command, help texts
- `src/__tests__/fixtures/index.ts` — `createCursorFixture()`
- `src/__tests__/unit-conversions.test.ts`
- `src/__tests__/conversions.test.ts`
- `src/__tests__/e2e-conversions.test.ts`
- `src/__tests__/extract-handoffs.ts`

Test plan
- `tsc --noEmit`
- `vitest run`
- `tsc`

Summary by CodeRabbit
Release Notes
New Features
- Cursor AI session discovery and parsing from `~/.cursor/projects/*/agent-transcripts/`
- `cursor [n]` CLI command to resume the Nth newest Cursor session.

Tests