
feat: add Cursor AI support#2

Merged
yigitkonur merged 1 commit into yigitkonur:main from
Evrim267:feat/cursor-support
Feb 20, 2026

Conversation

@Evrim267 (Contributor) commented Feb 19, 2026

Summary

  • Add Cursor AI as the 7th supported platform
  • New parser (src/parsers/cursor.ts) that discovers and parses agent-transcript JSONL files from ~/.cursor/projects/*/agent-transcripts/
  • Extracts conversation history, tool usage (Anthropic-style tool_use/tool_result blocks), and thinking/reasoning highlights
  • Strips <user_query> wrapper tags from Cursor's user messages
  • Derives working directory from project slug path
  • Full integration across all modules: types, index, resume, CLI quick-command (continues cursor), source colors
  • Test fixtures and 42 cross-tool conversion paths (7×6) all passing (111 tests)
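The `<user_query>` stripping mentioned above can be sketched as follows. This is a hypothetical illustration — the function name and regex are assumptions; the real logic lives in src/parsers/cursor.ts and may differ.

```typescript
// Hypothetical sketch of unwrapping Cursor's <user_query> tags from user
// messages. Assumes the wrapper surrounds the entire message exactly once.
function stripUserQueryWrapper(text: string): string {
  const match = /^\s*<user_query>([\s\S]*?)<\/user_query>\s*$/.exec(text);
  return match ? match[1].trim() : text;
}
```

Messages without the wrapper pass through unchanged, so the same code path handles both wrapped and plain user messages.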

Files Changed

  • src/parsers/cursor.ts: New — Cursor session parser
  • src/types/index.ts: Add 'cursor' to SessionSource union
  • src/parsers/index.ts: Barrel export
  • src/utils/index.ts: Wire into buildIndex() and extractContext()
  • src/utils/markdown.ts: Add 'Cursor AI' to SOURCE_LABELS
  • src/utils/resume.ts: Native resume, cross-tool handoff, tool detection
  • src/cli.ts: Color (blueBright), quick-resume command, help texts
  • src/__tests__/fixtures/index.ts: createCursorFixture()
  • src/__tests__/unit-conversions.test.ts: Cursor parser tests + 42 conversion paths
  • src/__tests__/conversions.test.ts: Cursor integration
  • src/__tests__/e2e-conversions.test.ts: Cursor integration
  • src/__tests__/extract-handoffs.ts: Cursor integration

Test plan

  • TypeScript compiles with no errors (tsc --noEmit)
  • All 111 unit tests pass (vitest run)
  • Build succeeds (tsc)
  • Manual test with real Cursor agent-transcripts on a machine with Cursor installed

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for Cursor AI as a new session source. Parse and extract context from Cursor transcripts located at ~/.cursor/projects/*/agent-transcripts/.
    • Introduced a new cursor [n] CLI command to resume the Nth newest Cursor session.
    • Enabled cross-tool handoffs between Cursor and other supported tools.
    • Cursor sessions now appear in session listing and picking workflows.
  • Tests

    • Extended test coverage to verify Cursor source support across all conversion and handoff scenarios.

Add Cursor IDE as the 7th supported platform. Parses agent-transcript
JSONL files from ~/.cursor/projects/*/agent-transcripts/, extracts
conversation history, tool usage (Anthropic-style tool_use/tool_result
blocks), and thinking/reasoning highlights.

- New parser: src/parsers/cursor.ts
- Strips <user_query> tags from Cursor's wrapped user messages
- Derives working directory from project slug path
- Full integration: types, index, resume, CLI quick-command, colors
- Test fixtures and 42 cross-tool conversion paths (7×6) all passing

coderabbitai bot commented Feb 19, 2026

📝 Walkthrough

This PR introduces Cursor AI as a new session source by adding a comprehensive transcript parser, extending type definitions, integrating CLI commands, and updating test coverage to support Cursor agent-transcript discovery, parsing, context extraction, and cross-tool handoffs.

Changes

  • Type & Core Exports (src/types/index.ts, src/parsers/index.ts): Extended the SessionSource union type to include 'cursor'; exported new parseCursorSessions and extractCursorContext functions.
  • Cursor Parser Implementation (src/parsers/cursor.ts): New 395-line parser module discovering Cursor agent-transcript JSONL files under ~/.cursor/projects, parsing session metadata, extracting tool usage (grep, glob, read/write file ops), and generating handoff markdown with session context.
  • Utility & Resume Integration (src/utils/index.ts, src/utils/markdown.ts, src/utils/resume.ts): Extended buildIndex to parse Cursor sessions; added extractCursorContext delegation; introduced the 'Cursor AI' label in SOURCE_LABELS; wired cursor tool support into native/cross-tool resume flows with CLI launch commands.
  • CLI Commands (src/cli.ts): Added the cursor [n] command to resume the Nth newest Cursor session; extended color mapping (blueBright), help text, and source/target filtering to include Cursor across pick, list, and resume flows.
  • Test Fixtures & Coverage (src/__tests__/fixtures/index.ts, src/__tests__/conversions.test.ts, src/__tests__/e2e-conversions.test.ts, src/__tests__/extract-handoffs.ts, src/__tests__/unit-conversions.test.ts): Added the createCursorFixture helper; extended ALL_SOURCES and parser/extractor mappings across five test files; verified cursor attribution in markdown, cross-tool handoff paths, and fixture-based conversions (30→42 paths).

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as CLI Interface
    participant Discover as File Discovery
    participant Parser as Session Parser
    participant Extractor as Context Extractor
    participant Resume as Resume Handler
    participant CursorApp as Cursor App

    User->>CLI: cursor [n] or pick cursor
    CLI->>Discover: findTranscriptFiles(~/.cursor/projects)
    Discover->>Discover: traverse dirs, collect *.jsonl
    Discover-->>Parser: file paths
    Parser->>Parser: readAllLines, parse metadata
    Parser->>Parser: extract summary from first user message
    Parser-->>CLI: UnifiedSession[]
    CLI->>User: display sessions (sorted by updatedAt)
    User->>CLI: select session
    CLI->>Extractor: extractCursorContext(session)
    Extractor->>Extractor: read lines, collect user/assistant messages
    Extractor->>Extractor: extractToolData (tool_use/tool_result blocks)
    Extractor->>Extractor: build tool summaries (grep, read/write files)
    Extractor->>Extractor: generateHandoffMarkdown
    Extractor-->>Resume: SessionContext (recentMessages, filesModified, toolSummaries)
    Resume->>CursorApp: launch via Cursor CLI with cwd
    CursorApp-->>User: resume session in Cursor

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Breakdown by concern (per tone):

  • What breaks? Nothing—purely additive feature; existing sources untouched. Zero blast radius to production paths.
  • Blast radius? Isolated to the Cursor ecosystem. If the Cursor parser fails silently or parses over-aggressively, it could skip Cursor sessions during buildIndex — non-fatal, graceful degradation. The CLI command is only invoked if the user explicitly targets cursor.
  • Other locations to look? Cursor file discovery guards (unreadable dirs), JSON parse error handling in transcript parsing, tool_use/tool_result edge cases. SummaryCollector logic for specialized handlers (grep, web fetch, mcp). Test fixtures relying on agent-transcripts structure.
  • Startup-critical? Only if company ships Cursor-first UX. Money/scale: adding a new tool category doesn't block scaling—it's metadata aggregation. User-facing only if Cursor users are target audience; non-Cursor users see zero impact.

Poem

🚀 Cursor joins the fold today,
Agent transcripts light the way,
Through ~/.cursor we now explore,
Tool summaries galore, and more!
Handoffs flow across the sky,
Seven sources now, standing by. ✨

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed. The title accurately captures the primary change: adding Cursor AI as a new supported platform/source across the entire codebase.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 84.00%, above the required threshold of 80.00%.






@coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
src/__tests__/conversions.test.ts (2)

112-113: ⚠️ Potential issue | 🟡 Minor

Stale test descriptions — counts don't match actual source count.

  1. What breaks? Nothing functionally — the assertions use ALL_SOURCES.length dynamically. But the test output will say "20 paths" / "all 5 sources" / "all 6 sources" when there are actually 42 paths and 7 sources. Misleading for anyone debugging test failures.

  2. Blast radius? Test readability only.

✏️ Fix stale descriptions
-describe('Cross-Tool Conversions (20 paths)', () => {
+describe('Cross-Tool Conversions (42 paths)', () => {
-  it('all 5 sources produce different session IDs', () => {
+  it('all sources produce different session IDs', () => {
-  it('all 6 sources produce markdown with correct source attribution', () => {
+  it('all sources produce markdown with correct source attribution', () => {

Using dynamic phrasing like "all sources" avoids this going stale again when an 8th tool is added.

Also applies to: 166-166, 177-177
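One way to make the titles self-updating is to derive the count from the source list itself. A sketch, where the ALL_SOURCES literal is assumed from the test files described in this PR:

```typescript
// Derive the path count from ALL_SOURCES so test titles never go stale.
// The source list here is an assumption based on the PR's seven tools.
const ALL_SOURCES = ['claude', 'gemini', 'codex', 'opencode', 'copilot', 'droid', 'cursor'] as const;

// n sources give n * (n - 1) directed cross-tool conversion paths.
const pathCount = ALL_SOURCES.length * (ALL_SOURCES.length - 1);

// describe(`Cross-Tool Conversions (${pathCount} paths)`, () => { ... });
```

With seven sources this yields 42, and an eighth tool would bump the title to 56 automatically.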

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/conversions.test.ts` around lines 112 - 113, The test
descriptions are hard-coded ("Cross-Tool Conversions (20 paths)" and similar)
and can become stale; update the describe/it titles in
src/__tests__/conversions.test.ts (e.g., the describe that currently reads
"Cross-Tool Conversions (20 paths)") to use dynamic phrasing instead (e.g.,
"Cross-Tool Conversions — all sources" or include ALL_SOURCES.length
programmatically) so the output reflects the actual ALL_SOURCES.length/number of
paths; also adjust the other stale titles referenced (around the blocks that use
ALL_SOURCES.length at the identified spots) to use the same dynamic or generic
wording.

84-89: ⚠️ Potential issue | 🟠 Major

Tests assert for strings that don't exist in the generated markdown — they will fail.

  1. What breaks? Line 86 asserts expect(md).toContain('## Original Session') but generateHandoffMarkdown generates '## Session Overview' (line 32 in markdown.ts). Line 89 asserts expect(md).toContain('Continue this session') but the actual output is '**You are continuing this session. Pick up exactly where it left off — review the conversation above, check pending tasks, and keep going.**' (line 126 in markdown.ts). The exact string 'Continue this session' never appears.

  2. Blast radius? This breaks the entire validateConversion helper which runs against every source in the test. Any dev with sessions on disk will see consistent failures for all sources tested.

  3. Startup-critical? CI passes only because no real sessions exist there. Locally, this fails immediately for any developer with prior sessions, blocking local development workflows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/conversions.test.ts` around lines 84 - 89, The test assertions
in conversions.test.ts expect strings that don't match the actual output from
generateHandoffMarkdown (e.g., the test expects '## Original Session' and
'Continue this session' while generateHandoffMarkdown emits '## Session
Overview' and the full sentence '**You are continuing this session. Pick up
exactly where it left off — review the conversation above, check pending tasks,
and keep going.**'); update the assertions in the test (or the
validateConversion helper) to assert against the real strings produced by
generateHandoffMarkdown (use the exact header '## Session Overview' and either
the full sentence or a stable substring like 'You are continuing this session.'
or a regex match) so the test matches the actual output of
generateHandoffMarkdown.
src/__tests__/unit-conversions.test.ts (1)

1-5: ⚠️ Potential issue | 🟡 Minor

Stale comments: "30 paths" and "20 paths" should be "42 paths".

Line 2 says "30 cross-tool conversion paths", line 593 says "All 20 Conversion Path Tests" — both are stale after adding the 7th source (7×6 = 42).

Fix
-/**
- * Fixture-based unit tests for all 30 cross-tool conversion paths.
+/**
+ * Fixture-based unit tests for all 42 cross-tool conversion paths.
-// ─── All 20 Conversion Path Tests ──────────────────────────────────────────
+// ─── All 42 Conversion Path Tests ──────────────────────────────────────────

Also applies to: 593-595

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/unit-conversions.test.ts` around lines 1 - 5, Update the stale
numeric counts in the test file comments to reflect 42 conversion paths: change
the top comment that reads "30 cross-tool conversion paths" to "42 cross-tool
conversion paths" and update the section header/comment that reads "All 20
Conversion Path Tests" (and the nearby "20 paths" mention) to "All 42 Conversion
Path Tests" so the descriptive text matches the current 7-source (7×6=42)
fixtures; locate these strings in src/__tests__/unit-conversions.test.ts (top
comment block and the section header around the conversion path tests) and
replace the numeric literals accordingly.
src/__tests__/e2e-conversions.test.ts (3)

177-177: ⚠️ Potential issue | 🟡 Minor

Stale comments: "all 5 sources" and "20 Cross-Tool" should reflect 7 sources / 42 paths.

-// ─── Test: handoff markdowns were extracted from all 5 sources ──────────────
+// ─── Test: handoff markdowns were extracted from all 7 sources ──────────────
-// ─── Test: All 20 conversion paths via live CLI injection ──────────────────
+// ─── Test: All 42 conversion paths via live CLI injection ──────────────────

-describe('E2E: 20 Cross-Tool Conversion Paths', () => {
+describe('E2E: 42 Cross-Tool Conversion Paths', () => {

Also applies to: 195-195

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/e2e-conversions.test.ts` at line 177, Update the stale test
comment strings: find the comment containing "handoff markdowns were extracted
from all 5 sources" and change it to "handoff markdowns were extracted from all
7 sources", and update any occurrence of "20 Cross-Tool" (or similar "20" path
counts) to reflect "42" (e.g., "42 Cross-Tool" or "42 paths"); there are
multiple occurrences of these phrases in the e2e test file—update all instances
to match the new 7 sources / 42 paths count so the test descriptions are
accurate.

96-115: ⚠️ Potential issue | 🟠 Major

Missing cursor case in runTool — E2E tests will throw for cursor-as-target.

  1. What breaks? The switch in runTool has no 'cursor' case. When any E2E test routes to cursor as the target tool (e.g., claude → cursor), line 114 throws Error('Unknown tool: cursor'). The test catches it and records ERROR: Unknown tool: cursor, then the assertion at line 235 (expect(output).not.toMatch(/^ERROR: Command failed/)) might still pass since the message says "Unknown tool" not "Command failed" — but line 249's acknowledgment check will definitely fail.

  2. Blast radius? 6 E2E test paths (every X → cursor conversion). They'll be marked as failures or produce confusing results.

  3. Other locations? Only this file; the non-test crossToolResume in resume.ts does handle cursor.

  4. Startup-critical? E2E test reliability for the new feature. Not prod-breaking but undermines confidence in the cursor integration.

Suggested fix
       case 'copilot':
         // Copilot doesn't have a clean print mode; use -i with stdin
         cmd = `cat "${tmpFile}" | copilot -i "$(cat ${tmpFile} | head -c 4000)" --no-ask-user 2>&1 | head -100`;
         break;
+      case 'cursor':
+        // Cursor doesn't accept prompts; skip E2E injection test
+        return 'HANDOFF_RECEIVED — Cursor does not support non-interactive prompt injection. Context saved to .continues-handoff.md.';
       default:
         throw new Error(`Unknown tool: ${tool}`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/e2e-conversions.test.ts` around lines 96 - 115, The switch
inside runTool in the e2e test is missing a 'cursor' case causing Unknown tool:
cursor errors; add a case 'cursor' in that switch (in
src/__tests__/e2e-conversions.test.ts, inside runTool) that sets cmd to invoke
the cursor CLI reading the tmpFile (similar pattern to 'opencode'/'copilot' —
pipe or redirect tmpFile into the cursor runner), include the same stderr/stdout
handling/trimming used by other cases so output formatting matches tests, and
ensure the default error remains for truly unknown tools.

96-115: ⚠️ Potential issue | 🟠 Major

Missing droid and cursor cases in runTool function will crash E2E tests.

droid and cursor are both listed in ALL_SOURCES (line 28) and have corresponding parser/extractor entries (lines 30-48), but the runTool switch statement (lines 96-115) only handles claude, gemini, codex, opencode, and copilot. Any E2E test that tries to run these tools will hit the default case and throw Unknown tool: droid or Unknown tool: cursor.

The test's beforeAll hook iterates over ALL_SOURCES and will successfully parse/extract sessions for all tools, but execution fails when attempting to invoke them via runTool. This blocks verification that droid and cursor integration actually works.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/e2e-conversions.test.ts` around lines 96 - 115, The runTool
switch in the test is missing handlers for 'droid' and 'cursor', causing Unknown
tool errors when ALL_SOURCES includes them; update the runTool function to add
cases for 'droid' and 'cursor' (alongside existing
'claude','gemini','codex','opencode','copilot') and provide appropriate shell
commands for each tool invocation (mirroring the pattern used for other tools)
so tests can execute parsed sessions for droid and cursor without hitting the
default error.
🧹 Nitpick comments (2)
src/parsers/cursor.ts (2)

68-101: findTranscriptFiles uses synchronous readdirSync/existsSync inside an async function.

  1. What breaks? With many projects or slow filesystems (network mounts), synchronous I/O blocks the event loop during discovery. Won't crash, but will stall the CLI.

  2. Blast radius? Only during buildIndex — called once per 5-minute TTL. For most users this is a handful of dirs.

  3. Startup-critical? Not a blocker — other parsers likely have the same pattern. Just flagging for awareness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/parsers/cursor.ts` around lines 68 - 101, findTranscriptFiles is
performing synchronous filesystem calls (fs.existsSync, fs.readdirSync) inside
an async function which blocks the event loop; replace the sync calls with the
async fs.promises APIs (e.g., fs.promises.readdir with { withFileTypes: true }
and fs.promises.stat or fs.promises.access) and await them, preserving the same
directory checks for CURSOR_PROJECTS_DIR, projectDirs, transcriptsDir and
sessionDirs and keeping the same skip-on-error behavior (try/catch). Update any
uses of existsSync to use a non-blocking pattern (await stat/access with
try/catch) and ensure files.push still receives the jsonlPath when the async
existence check succeeds; this prevents blocking during buildIndex while keeping
logic unchanged.
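A minimal sketch of the non-blocking discovery described above, using fs.promises. The directory layout follows the PR description (~/.cursor/projects/*/agent-transcripts/); the real parser's handling of nested per-session subdirectories is omitted here, so treat this as an assumption-laden illustration rather than the actual implementation.

```typescript
import { readdir } from 'node:fs/promises';
import { join } from 'node:path';

// Non-blocking transcript discovery: awaits fs.promises.readdir instead of
// calling readdirSync, so buildIndex never stalls the event loop.
async function findTranscriptFilesAsync(projectsRoot: string): Promise<string[]> {
  const files: string[] = [];
  let projects: string[] = [];
  try {
    projects = await readdir(projectsRoot);
  } catch {
    return files; // projects dir missing or unreadable: no Cursor sessions
  }
  for (const project of projects) {
    const transcriptsDir = join(projectsRoot, project, 'agent-transcripts');
    try {
      for (const entry of await readdir(transcriptsDir)) {
        if (entry.endsWith('.jsonl')) files.push(join(transcriptsDir, entry));
      }
    } catch {
      // Skip projects without a readable transcripts dir,
      // mirroring the original skip-on-error behavior.
    }
  }
  return files;
}
```

The try/catch around each readdir preserves the graceful-degradation behavior the review flags as important: an unreadable directory skips that project rather than failing discovery.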

306-307: Overly broad MCP detection: name.includes('-') matches non-MCP tools.

  1. What breaks? Tool names like "web-search", "read-file", or any hyphenated name that isn't MCP gets routed through mcpSummary instead of its proper handler. The summary output will look wrong (MCP format wrapping a non-MCP tool).

  2. Blast radius? Any Cursor tool with a hyphen that isn't already matched by an earlier branch. Since the earlier branches use exact string matching (e.g., 'web_search'), a hypothetical 'web-search' variant would fall through to MCP.

  3. Startup-critical? Low severity — cosmetic summary corruption, not data loss. But it degrades the quality of handoff context.

♻️ Tighten MCP heuristic
-     } else if (name.startsWith('mcp__') || name.includes('___') || name.includes('-')) {
+     } else if (name.startsWith('mcp__') || name.includes('___')) {

If you still want to catch MCP tools with hyphens (e.g. github-mcp-server-list_issues), consider a more specific pattern like checking for multiple hyphens or a known MCP prefix rather than any single hyphen.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/parsers/cursor.ts` around lines 306 - 307, The current MCP detection
branch in src/parsers/cursor.ts uses name.includes('-') which wrongly routes
hyphenated non‑MCP tools into mcpSummary; update the condition that calls
collector.add(..., mcpSummary(...)) to stop using a blanket hyphen check and
instead use a stricter MCP heuristic (e.g., require an explicit MCP prefix/token
such as "mcp" or "mcp__" or a pattern that indicates an MCP tool like multiple
hyphens/underscores) so only true MCP tool names hit mcpSummary; adjust the
condition around the variables name, input and result that wraps the
collector.add call to use the new pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/parsers/cursor.ts`:
- Around line 232-249: readAllLines currently buffers the entire JSONL
transcript into memory; refactor it (and callers like extractCursorContext) to a
single-pass streaming parser using readline.createInterface over the file stream
(same pattern as parseSessionInfo) so you never push all lines into an array.
While streaming, parse each JSON line, incrementally build messages, accumulate
toolResultMap and tool summaries and defer finalizing tool_use summaries until
their corresponding tool_result arrives (or finalize at stream close), and
resolve the promise on 'close' or 'error'; ensure function name readAllLines is
replaced/rewritten to emit or return incremental results compatible with
extractCursorContext to avoid full-file buffering.
- Around line 375-376: The current trimming using recentMessages.slice(-10)
(assigned to trimmed) can produce a window with zero user messages; change the
logic to still keep ~10 most-recent items but ensure at least one user message
is included by checking for a user message in trimmed and, if none, find the
last user-message index in recentMessages (e.g., lastIndex where message.role
=== 'user') and extend the slice start to include that message (or replace the
earliest item in trimmed with that user message) so that the resulting trimmed
array always contains at least one user message while remaining close to the
10-message limit.

In `@src/utils/resume.ts`:
- Around line 101-104: The Cursor handoff branch (case 'cursor' using
runCommand) currently only opens the project and drops the generated handoff
markdown (.continues-handoff.md); update the 'cursor' branch in resume.ts to
surface the handoff to the user: either pass the handoff content to Cursor if
its CLI accepts a prompt argument, or at minimum print a clear message after
saving the file that tells the user the handoff file path and next steps (e.g.,
"Handoff saved to .continues-handoff.md — open it in Cursor"), and apply the
same messaging change in nativeResume if applicable; reference runCommand, the
'cursor' case, nativeResume, and the .continues-handoff.md handoff file when
making the change.


Comment on lines +232 to +249
async function readAllLines(filePath: string): Promise<CursorTranscriptLine[]> {
  return new Promise((resolve) => {
    const lines: CursorTranscriptLine[] = [];
    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });

    rl.on('line', (line) => {
      try {
        lines.push(JSON.parse(line) as CursorTranscriptLine);
      } catch {
        // Skip invalid lines
      }
    });

    rl.on('close', () => resolve(lines));
    rl.on('error', () => resolve(lines));
  });
}

🛠️ Refactor suggestion | 🟠 Major

readAllLines loads the entire transcript into memory — violates the streaming guideline.

  1. What breaks? Large Cursor agent-transcripts (heavy tool-use sessions can be 10k+ lines, multi-MB) get fully buffered in an array. In a buildIndex + extractContext flow, multiple sessions could spike heap.

  2. Blast radius? Every extractCursorContext call goes through this. If a user has long-running Cursor sessions, this is the bottleneck.

  3. Other locations? parseSessionInfo already does it right with streaming readline. The same pattern should be applied here — process tool data and messages in a single streaming pass.

  4. Startup-critical? Yes — users with beefy Cursor transcripts will hit this in the happy path of continues show / continues resume.

As per coding guidelines: "Stream JSONL files with readline.createInterface to avoid loading entire files into memory when parsing sessions."

♻️ Sketch: stream-based approach for extractCursorContext

Instead of readAllLines → two-pass extractToolData → single-pass message extraction, do a single streaming pass that accumulates tool results, tool summaries, messages, and session notes incrementally. The toolResultMap first-pass can be folded in by deferring tool_use summary finalization until the corresponding tool_result arrives (or at stream close).

-async function readAllLines(filePath: string): Promise<CursorTranscriptLine[]> {
-  return new Promise((resolve) => {
-    const lines: CursorTranscriptLine[] = [];
-    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
-    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
-
-    rl.on('line', (line) => {
-      try {
-        lines.push(JSON.parse(line) as CursorTranscriptLine);
-      } catch {
-        // Skip invalid lines
-      }
-    });
-
-    rl.on('close', () => resolve(lines));
-    rl.on('error', () => resolve(lines));
-  });
-}
+// Replace with a streaming approach inside extractCursorContext that
+// processes each line as it arrives, accumulating collector/messages/notes
+// without retaining every parsed line in memory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/parsers/cursor.ts` around lines 232 - 249, readAllLines currently buffers
the entire JSONL transcript into memory; refactor it (and callers like
extractCursorContext) to a single-pass streaming parser using
readline.createInterface over the file stream (same pattern as parseSessionInfo)
so you never push all lines into an array. While streaming, parse each JSON
line, incrementally build messages, accumulate toolResultMap and tool summaries
and defer finalizing tool_use summaries until their corresponding tool_result
arrives (or finalize at stream close), and resolve the promise on 'close' or
'error'; ensure function name readAllLines is replaced/rewritten to emit or
return incremental results compatible with extractCursorContext to avoid
full-file buffering.
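A concrete version of that streaming sketch, assuming callers accumulate state in a callback; `streamTranscript`, `onLine`, and the minimal `CursorTranscriptLine` shape here are illustrative, not the PR's actual types:

```typescript
import * as fs from 'fs';
import * as readline from 'readline';

// Illustrative minimal shape; the real type lives in src/parsers/cursor.ts.
interface CursorTranscriptLine {
  type?: string;
  [key: string]: unknown;
}

// Single streaming pass: each parsed JSONL entry is handed to the caller's
// callback instead of being pushed into an array, so memory stays flat
// regardless of transcript size.
function streamTranscript(
  filePath: string,
  onLine: (entry: CursorTranscriptLine) => void
): Promise<void> {
  return new Promise((resolve) => {
    const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
    const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });

    rl.on('line', (line) => {
      try {
        onLine(JSON.parse(line) as CursorTranscriptLine);
      } catch {
        // Skip invalid lines, matching the existing parser's behavior
      }
    });

    rl.on('close', () => resolve());
    rl.on('error', () => resolve());
  });
}
```

`extractCursorContext` would then pass a callback that updates its tool-result map and message accumulators per line, finalizing any pending `tool_use` summaries when the stream closes.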

Comment on lines +375 to +376
const trimmed = recentMessages.slice(-10);

⚠️ Potential issue | 🟡 Minor

slice(-10) doesn't guarantee at least one user message is included.

  1. What breaks? If the last 10 messages are all assistant messages (e.g., a long chain of tool-use responses), the handoff context has zero user messages — the receiving tool has no idea what the user actually asked.

  2. Blast radius? Every cross-tool handoff from a Cursor session with a long assistant tool-use tail.

  3. Other locations? Check how other parsers (Claude, Droid, etc.) handle this — they likely have the same trimming logic with the user-message guarantee.

As per coding guidelines: "Keep only the last ~10 messages in recentMessages for handoff while ensuring at least one user message is included."

🛡️ Proposed fix
- const trimmed = recentMessages.slice(-10);
+ let trimmed = recentMessages.slice(-10);
+ if (trimmed.length > 0 && !trimmed.some(m => m.role === 'user')) {
+   const lastUser = recentMessages.findLast(m => m.role === 'user');
+   if (lastUser) trimmed = [lastUser, ...trimmed];
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/parsers/cursor.ts` around lines 375 - 376, The current trimming using
recentMessages.slice(-10) (assigned to trimmed) can produce a window with zero
user messages; change the logic to still keep ~10 most-recent items but ensure
at least one user message is included by checking for a user message in trimmed
and, if none, find the last user-message index in recentMessages (e.g.,
lastIndex where message.role === 'user') and extend the slice start to include
that message (or replace the earliest item in trimmed with that user message) so
that the resulting trimmed array always contains at least one user message while
remaining close to the 10-message limit.
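The guarantee can be illustrated standalone; `trimForHandoff` and the `Msg` shape are illustrative names, not the PR's actual code, and the reverse-then-find is equivalent to `Array.prototype.findLast` (Node 18+):

```typescript
// Illustrative message shape; the real type lives in the PR's types module.
interface Msg {
  role: 'user' | 'assistant';
  text: string;
}

// Keep the last ~`limit` messages for handoff, but guarantee at least one
// user message survives the trim so the receiving tool sees the user's ask.
function trimForHandoff(messages: Msg[], limit = 10): Msg[] {
  let trimmed = messages.slice(-limit);
  if (trimmed.length > 0 && !trimmed.some((m) => m.role === 'user')) {
    // Equivalent to messages.findLast(m => m.role === 'user')
    const lastUser = [...messages].reverse().find((m) => m.role === 'user');
    if (lastUser) trimmed = [lastUser, ...trimmed];
  }
  return trimmed;
}
```

With one user message followed by twelve assistant messages, the plain `slice(-10)` window would contain zero user messages; this version prepends the last user message, yielding eleven entries.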

Comment on lines +101 to +104
case 'cursor':
// Cursor CLI doesn't accept inline prompts; open the project with handoff file
await runCommand('cursor', [cwd], cwd);
break;
⚠️ Potential issue | 🟡 Minor

Cross-tool handoff to Cursor silently drops the prompt context.

  1. What breaks? When a user runs continues resume <id> --in cursor, the handoff markdown is built (lines 71-73) but never passed to the Cursor CLI. Other targets get the prompt injected (claude [prompt], codex [prompt], etc.). Cursor just opens the project directory, and the user has no indication they should look at .continues-handoff.md.

  2. Blast radius? User-facing: anyone doing cross-tool handoff to Cursor loses the seamless context injection that's the whole point of the tool. The file is saved (line 64-65) but there's zero indication to the user.

  3. Other locations? nativeResume (line 42-44) has the same limitation but that's "resume same tool" so it's less critical.

  4. Startup-critical? Yes — this is the core feature. Silent loss of context on handoff erodes trust.

Consider at minimum logging a message telling the user about the handoff file:

Suggested improvement
     case 'cursor':
       // Cursor CLI doesn't accept inline prompts; open the project with handoff file
+      console.log(`Handoff context saved to: ${localPath}`);
+      console.log('Open .continues-handoff.md in Cursor to see the session context.');
       await runCommand('cursor', [cwd], cwd);
       break;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/resume.ts` around lines 101 - 104, The Cursor handoff branch (case
'cursor' using runCommand) currently only opens the project and drops the
generated handoff markdown (.continues-handoff.md); update the 'cursor' branch
in resume.ts to surface the handoff to the user: either pass the handoff content
to Cursor if its CLI accepts a prompt argument, or at minimum print a clear
message after saving the file that tells the user the handoff file path and next
steps (e.g., "Handoff saved to .continues-handoff.md — open it in Cursor"), and
apply the same messaging change in nativeResume if applicable; reference
runCommand, the 'cursor' case, nativeResume, and the .continues-handoff.md
handoff file when making the change.

yigitkonur merged commit c34caf8 into yigitkonur:main on Feb 20, 2026 (1 check passed)

@yigitkonur (Owner)
lgtm. I wasn't using codex-agent in my workflow, but the data structure looks correct. Thanks for the contribution @Evrim267
