fix: parallel workflow reliability and recovery (#328) #335
Conversation
Walkthrough

Adds explicit target-branch support and preflight validation for parallel runs, atomic session/lock and summary file writes, preserved recovery worktrees, new parallel/sequential summary types and helpers, MergeEngine rollback APIs, worktree cleanup options, and a TUI overlay to display completion summaries.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant RunCmd as Run command
    participant Executor as ParallelExecutor
    participant Merge as MergeEngine
    participant WT as WorktreeManager
    participant FS as Filesystem
    participant TUI as TUI
    CLI->>RunCmd: parse args (incl. --target-branch)
    RunCmd->>Executor: start parallel run (sessionBranchName / preflight)
    Executor->>Merge: initializeSessionBranch(explicitBranchName?)
    Merge-->>Executor: branch info or error
    Executor->>WT: create/manage worktrees
    Executor->>Merge: schedule merges / handle conflicts
    Merge-->>Executor: events (conflict, rolled-back, completed)
    Executor-->>RunCmd: completion metrics & preserved worktrees
    RunCmd->>FS: writeParallelRunSummary(summary) (atomic)
    RunCmd->>TUI: provide parallelCompletionSummaryLines/path/error
    TUI->>User: display overlay with summary
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/parallel/worktree-manager.ts (1)
275-281: ⚠️ Potential issue | 🟠 Major

`return preserved` is unreachable when cleanup errors occur, silently discarding recovery information. When any non-preserved worktree fails to clean up, `errors.length > 0` causes an early `throw`, bypassing `return preserved`. The caller in `ParallelExecutor.cleanup` catches the error and falls back to `this.preservedRecoveryWorktrees = []` (see `src/parallel/index.ts` lines 863–871), so `getPreservedRecoveryWorktrees()` will return an empty list even though the preserved worktrees still exist on disk. This directly undercuts the PR's goal of surfacing recovery branches to the user.

The fix belongs at the caller: pre-compute the preserved list before delegating to `cleanupAll`, so cleanup errors for non-preserved worktrees can't suppress it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/worktree-manager.ts` around lines 275 - 281, The current flow lets cleanupAll throw before returning the preserved list, causing ParallelExecutor.cleanup to drop preservedRecoveryWorktrees; to fix, in ParallelExecutor.cleanup call the worktree-manager method that lists preserved recovery worktrees (e.g., getPreservedRecoveryWorktrees or equivalent) and store that result into this.preservedRecoveryWorktrees before invoking cleanupAll; then call cleanupAll (or other cleanup methods) and let errors propagate, ensuring preservedRecoveryWorktrees remains populated even if cleanupAll throws.
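A minimal sketch of the caller-side fix this comment describes: capture the preserved list before delegating to `cleanupAll`, so a cleanup failure cannot erase it. The method names follow the review text, but the surrounding types are simplified stubs, not the repo's real classes:

```typescript
// Hypothetical stand-ins for the real WorktreeManager/ParallelExecutor shapes.
interface WorktreeInfo {
  branch: string;
  path: string;
}

class WorktreeManagerStub {
  constructor(
    private preserved: WorktreeInfo[],
    private failCleanup: boolean
  ) {}

  getPreservedRecoveryWorktrees(): WorktreeInfo[] {
    return this.preserved;
  }

  cleanupAll(_options: { preserveBranches: string[] }): void {
    // Simulates a non-preserved worktree failing to clean up.
    if (this.failCleanup) throw new Error('cleanup failed');
  }
}

class ExecutorCleanupSketch {
  preservedRecoveryWorktrees: WorktreeInfo[] = [];

  constructor(private worktreeManager: WorktreeManagerStub) {}

  cleanup(): void {
    // Capture the preserved list FIRST, so it survives a cleanupAll failure.
    this.preservedRecoveryWorktrees =
      this.worktreeManager.getPreservedRecoveryWorktrees();
    try {
      this.worktreeManager.cleanupAll({
        preserveBranches: this.preservedRecoveryWorktrees.map((w) => w.branch),
      });
    } catch {
      // Cleanup errors are reported elsewhere; they no longer clobber
      // the preserved list used for recovery guidance.
    }
  }
}
```

With this ordering, `getPreservedRecoveryWorktrees()` keeps returning the on-disk worktrees even when `cleanupAll` throws.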
🧹 Nitpick comments (13)
src/parallel/worktree-manager.ts (1)
13-19: Consider exporting `CleanupAllOptions` for API completeness.

`cleanupAll` is a public method; callers who want to construct the options in a strongly-typed manner currently can't import the parameter type.

♻️ Proposed fix

```diff
-interface CleanupAllOptions {
+export interface CleanupAllOptions {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/worktree-manager.ts` around lines 13 - 19, The interface CleanupAllOptions used as the parameter type for the public method cleanupAll is not exported, preventing callers from importing the type; export the interface by making CleanupAllOptions a named export (e.g., export interface CleanupAllOptions) so external callers can import and construct strongly-typed options for cleanupAll.

src/session/persistence.ts (1)
271: `mkdir` call is now redundant.

`writeJsonAtomic` already calls `mkdir(dirname(filePath), { recursive: true })` internally. The explicit call at line 271 duplicates that work. Removing it also allows the `mkdir` import to be dropped from the import block.

♻️ Proposed fix

```diff
-  // Ensure directory exists
-  await mkdir(dirname(filePath), { recursive: true });
-
   // Update timestamp
   const updatedState: PersistedSessionState = {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/session/persistence.ts` at line 271, Remove the redundant explicit directory creation: delete the call "await mkdir(dirname(filePath), { recursive: true })" since writeJsonAtomic already performs mkdir(dirname(filePath), { recursive: true }) internally; after removing that line, also remove the unused "mkdir" import from the module's import block to avoid dead imports and keep imports consistent with the functions used (references: mkdir, dirname, filePath, writeJsonAtomic).

src/parallel/merge-engine.ts (1)
161-183: `validateGitRef` is called twice on the explicit branch name.

For the `explicitBranchName` path, line 163 validates the ref and then line 183 validates it again unconditionally. The unconditional call at line 183 already covers both paths (for the auto-generated name it's the only call). Removing the inline call inside the `if` block eliminates the redundancy.

♻️ Proposed fix

```diff
     if (explicitBranchName) {
       branchName = explicitBranchName.trim();
-      validateGitRef(branchName, 'sessionBranch');
       if (this.branchExists(branchName)) {
         throw new Error(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/merge-engine.ts` around lines 161 - 183, The code calls validateGitRef(branchName, 'sessionBranch') twice when an explicitBranchName is provided; remove the first call inside the explicitBranchName branch so validation is only performed once after branchName selection. In the block that handles explicitBranchName (references: explicitBranchName, branchName, validateGitRef, branchExists), keep the existence check and error throw but delete the inline validateGitRef there so the final unconditional validateGitRef(branchName, 'sessionBranch') remains the single validation point for both explicit and auto-generated names.

src/parallel/index.ts (2)
966-975: Unnecessary `try/catch` around a non-throwing method.

`mergeEngine.markOperationRolledBack` returns a `boolean` and never throws. The `try/catch` adds no safety and obscures intent.

♻️ Proposed fix

```diff
   private markConflictOperationRolledBack(
     operationId: string,
     reason: string
   ): void {
-    try {
-      this.mergeEngine.markOperationRolledBack(operationId, reason);
-    } catch {
-      // Best effort state sync.
-    }
+    this.mergeEngine.markOperationRolledBack(operationId, reason);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/index.ts` around lines 966 - 975, The try/catch around the non-throwing call is unnecessary—remove the try/catch wrapper in markConflictOperationRolledBack and call this.mergeEngine.markOperationRolledBack(operationId, reason) directly; optionally, inspect the returned boolean if you want to act on success/failure (e.g., log a debug message) but do not catch exceptions since the method never throws.
660-662: `removePendingConflictByOperationId` call in Phase 2 success path is a no-op.

`enqueuePendingConflict` (which populates `this.pendingConflicts`) is only called on the failure-and-exhausted-retries path (line 690). In the success branch the operation was never added to `this.pendingConflicts`, so this removal is always a no-op and can be deleted to reduce confusion.

♻️ Proposed fix

```diff
       if (allResolved) {
         // Conflict resolution succeeded - mark task as complete
-        this.removePendingConflictByOperationId(operation.id);
         try {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/index.ts` around lines 660 - 662, The call to removePendingConflictByOperationId(operation.id) in the Phase 2 success branch is a no-op because enqueuePendingConflict only populates this.pendingConflicts on the failure/exhausted-retries path; remove the call to removePendingConflictByOperationId from the success branch (the block where allResolved is true) to avoid confusion and dead code, leaving the failure path that calls enqueuePendingConflict intact and unchanged (refer to enqueuePendingConflict, pendingConflicts, and removePendingConflictByOperationId to locate the related logic).

src/parallel/session.ts (1)
71-82: Use `writeJsonAtomic` for consistency, durability, and non-blocking behaviour.

The inline sync atomic write lacks an `fdatasync` call before `renameSync`, so a crash between write and OS flush can produce a corrupt or empty `parallel-session.json` — the exact file used for crash recovery. `persistence.ts` was updated to use the shared `writeJsonAtomic` helper for this reason; `saveParallelSession` should do the same. The sync calls also block the event loop inside an `async` function.

♻️ Proposed fix

```diff
+import { writeJsonAtomic } from '../session/atomic-write.js';
 ...
 export async function saveParallelSession(
   cwd: string,
   state: ParallelSessionState
 ): Promise<void> {
   const filePath = path.join(cwd, SESSION_FILE);
-  const dir = path.dirname(filePath);
-
-  fs.mkdirSync(dir, { recursive: true });

   // Convert Map to array for JSON serialization
   const serializable = {
     ...state,
     taskGraph: serializeTaskGraph(state.taskGraph),
   };

-  const tempPath = `${filePath}.${process.pid}.${Date.now()}.tmp`;
-  try {
-    fs.writeFileSync(tempPath, JSON.stringify(serializable, null, 2), 'utf-8');
-    fs.renameSync(tempPath, filePath);
-  } catch (error) {
-    try {
-      fs.unlinkSync(tempPath);
-    } catch {
-      // Best effort cleanup.
-    }
-    throw error;
-  }
+  await writeJsonAtomic(filePath, serializable);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/session.ts` around lines 71 - 82, Replace the synchronous temp-file write/rename logic in saveParallelSession with the shared async writeJsonAtomic helper (import it from persistence.ts), i.e., remove tempPath/fs.writeFileSync/fs.renameSync/fs.unlinkSync calls and instead await writeJsonAtomic(...) to write the serializable object to the same target (parallel-session.json). Ensure saveParallelSession remains async, propagate errors unchanged, and rely on writeJsonAtomic for fdatasync/atomic durability and non-blocking behavior.

src/session/atomic-write.ts (1)
22: Temp-file name can collide under concurrent invocations for the same target path.

`process.pid + Date.now()` is the same for two calls that both evaluate line 22 within the same millisecond. The second `open(tempPath, 'w', mode)` would silently truncate the first's partially-written temp file. Adding a per-call random suffix (e.g., `crypto.randomBytes(4).toString('hex')`) eliminates the risk with no overhead.

♻️ Proposed fix

```diff
+import { randomBytes } from 'node:crypto';
 ...
-  const tempPath = `${filePath}.${process.pid}.${Date.now()}.tmp`;
+  const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomBytes(4).toString('hex')}.tmp`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/session/atomic-write.ts` at line 22, The temp filename construction using `${filePath}.${process.pid}.${Date.now()}.tmp` can collide when two invocations happen in the same millisecond; change the tempPath generation in atomic-write (where tempPath is used and passed to open(tempPath, 'w', mode)) to append a per-call cryptographically-random suffix (e.g., from crypto.randomBytes(4).toString('hex')) so each call produces a unique tempPath and prevents one open() from truncating another's temp file.

src/config/schema.ts (1)
85-86: Naming inconsistency across layers for the same concept.

The same user-facing option is called `targetBranch` here (config schema), `sessionBranchName` in `ParallelExecutorConfig` (`src/parallel/types.ts` line 331), and `explicitBranchName` in `MergeEngine.initializeSessionBranch`. Aligning these to a single name (or at least documenting the mapping in the JSDoc) would reduce contributor friction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/schema.ts` around lines 85 - 86, The config property name for the explicit session branch is inconsistent: the schema defines targetBranch while ParallelExecutorConfig uses sessionBranchName and MergeEngine.initializeSessionBranch expects explicitBranchName; pick one canonical name (e.g., sessionBranchName) and rename the schema property and any uses to match, or add a clear JSDoc mapping/alias on the schema field to document that targetBranch maps to ParallelExecutorConfig.sessionBranchName and MergeEngine.initializeSessionBranch.explicitBranchName; update all references (symbols: targetBranch, ParallelExecutorConfig, sessionBranchName, MergeEngine.initializeSessionBranch, explicitBranchName) so they are consistent across layers.

src/commands/run.test.ts (1)
272-277: Consider moving the `--target-branch` test to its own describe block or a shared "branch options" section.

The `--target-branch` test is nested under `--direct-merge parsing`, but it's a distinct option. Grouping it separately would improve discoverability and make it clearer that these are independent flags (especially since the PR adds validation that they're mutually exclusive).

Also, there are no edge-case tests for `--target-branch` without a value or when followed by another flag (e.g. `['--target-branch', '--headless']`). The pattern exists for similar options like `--epic` and `--parallel`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.test.ts` around lines 272 - 277, The --target-branch test is misplaced under the --direct-merge parsing group and lacks edge-case coverage; move the existing test for the '--target-branch' option into its own describe block (e.g., "target-branch parsing") alongside tests for parseRunArgs, and add edge-case tests that assert behavior when '--target-branch' is supplied without a value and when it's followed by another flag (e.g., ['--target-branch', '--headless']) to ensure proper error/validation handling and mutual-exclusion behavior with --direct-merge.

src/session/index.ts (1)
253-261: Session metadata is persisted before lock acquisition — consider reordering.

`saveSession` at line 253 writes the session file to disk before the lock is acquired at line 257. If lock acquisition fails (line 258), the method throws but leaves an orphaned session file. This isn't dangerous, but it means a subsequent `checkSession` call will find `hasSession: true` with no corresponding lock, which could confuse resume logic.

Consider acquiring the lock first (or at least cleaning up the session file on failure).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/session/index.ts` around lines 253 - 261, The session is persisted by saveSession before acquiring the lock (acquireLock), which can leave an orphaned session file if lock acquisition fails; change the flow so you acquire the lock first unless options.lockAlreadyAcquired is true, then call saveSession(session) only after a successful acquireLock(session.id) (or, if you must persist first, ensure you remove the saved session on failure by calling the cleanup/removeSession function or deleting the session file in the acquireLock error path). Update the block that currently calls saveSession followed by acquireLock to either move saveSession after acquireLock or add a rollback that deletes the saved session when acquireLock returns false.

src/parallel/parallel-executor.test.ts (1)
363-405: Pause/resume gating tests look solid, but the single `await Promise.resolve()` tick is fragile.

Line 376 uses a single microtask tick to assert `released` is still `false`. If the internal implementation of `waitWhilePaused` changes to resolve after a different number of ticks, this test could become flaky. Consider using a small polling loop or `setTimeout` with a brief delay to make the assertion more robust.

That said, since the executor is paused and the promise shouldn't resolve until `resume()` or `stop()` is called, this is likely fine in practice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/parallel-executor.test.ts` around lines 363 - 405, The test uses a fragile single microtask tick (await Promise.resolve()) to assert that (executor as any).waitWhilePaused() hasn't resolved; replace that with a more robust short wait (e.g., a small polling loop checking the released flag with a timeout or a short setTimeout-based delay) to ensure the waiter truly remains pending while executor.pause() is in effect; update both tests referencing ParallelExecutor, pause, waitWhilePaused, resume, stop and getState to use this short-delay/poll approach before calling resume/stop and asserting released remains false.

src/commands/run.tsx (2)
3085-3089: Non-sensitive example command uses `process.stdout.write` while adjacent lines use `console.log` — minor inconsistency.

The `process.stdout.write` at line 3087 is surrounded by `console.log` calls (lines 3085, 3088–3089). The rationale for using `process.stdout.write` (avoiding log interceptors capturing sensitive token content) does not apply to this example command string, which contains only a literal placeholder `<token>`. Switching it back to `console.log` makes the boundary of the security-sensitive block clearer.

♻️ Proposed tidy-up

```diff
   console.log('  Connect from another machine:');
-  process.stdout.write(`    ralph-tui remote add <alias> <this-host>:${actualPort} --token <token>\n`);
+  console.log(`    ralph-tui remote add <alias> <this-host>:${actualPort} --token <token>`);
   console.log('');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.tsx` around lines 3085 - 3089, The sample connection line uses process.stdout.write while surrounding lines use console.log; change the call to console.log so the placeholder-only example (`ralph-tui remote add <alias> <this-host>:${actualPort} --token <token>`) is logged consistently and not treated as a sensitive write—replace the process.stdout.write invocation with console.log in the run.tsx output block (the line using process.stdout.write with actualPort).
3386-3411: "You are now on branch" can be misleading when `returnToOriginalBranchErrorForGuidance` is also set.

When both the "Session Complete!" block and the "Branch Checkout Requires Attention" block fire (i.e., session branch created, execution completed, but return to original branch failed), the existing "You are now on branch: `{originalBranchForGuidance}`" message is printed first and is factually wrong — the user is still on the session branch. The subsequent "Branch Checkout Requires Attention" block corrects this, but the temporary contradiction could confuse users.

Consider guarding the "You are now on branch" line:

♻️ Suggested guard

```diff
   console.log(`  Changes are on branch: ${sessionBranchForGuidance}`);
-  console.log(`  You are now on branch: ${originalBranchForGuidance}`);
+  if (!returnToOriginalBranchErrorForGuidance) {
+    console.log(`  You are now on branch: ${originalBranchForGuidance}`);
+  }
   console.log('');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.tsx` around lines 3386 - 3411, The "You are now on branch" line inside the "Session Complete!" block can be misleading when returnToOriginalBranchErrorForGuidance is set; update the printing logic in the block that checks sessionBranchForGuidance, originalBranchForGuidance, and parallelExecutor.getState().status === 'completed' so that the message about the current branch is only printed when returnToOriginalBranchErrorForGuidance is falsy — otherwise either omit that line or replace it with an explicit note (e.g., "You are still on branch: {sessionBranchForGuidance} — checkout to {originalBranchForGuidance} failed") to reflect the true state.
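On the `parallel-executor.test.ts` nitpick above, the single-tick assertion can be hardened with a small bounded-polling helper. This is a generic sketch, not code from the PR:

```typescript
// Poll a condition until it holds or the deadline passes. Useful for
// "this promise has NOT settled yet" style assertions that would otherwise
// depend on an exact number of microtask ticks.
async function eventually(
  check: () => boolean,
  timeoutMs: number,
  intervalMs = 5
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return check();
}
```

In the pause/resume test, `await eventually(() => released, 50)` returning `false` while the executor is paused is a much stronger claim than a single `await Promise.resolve()`.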
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/run.tsx`:
- Around line 333-336: The git unmerged check using spawnSync('git',
['ls-files', '-u'], baseOptions) currently ignores non-zero exits; update the
logic around the unmerged variable so failures are handled like the
trackedStatus check: if unmerged.status !== 0 push an error (include
unmerged.stderr/exit info) to errors, and otherwise if
unmerged.stdout.trim().length > 0 push the unresolved-merge message; reference
the unmerged variable, baseOptions, spawnSync call, and the errors array when
making this change.
In `@src/parallel/index.ts`:
- Around line 863-871: The catch block resets preservedRecoveryWorktrees to []
and hides real preserved worktrees when worktreeManager.cleanupAll throws;
instead, compute and store the preserved list before calling cleanupAll: call
this.getBranchesToPreserveForRecovery() and derive the preserved worktree
names/paths from the live worktree state (e.g., via worktreeManager or the same
lookup used by getPreservedRecoveryWorktrees()) and assign that result to
this.preservedRecoveryWorktrees before invoking
this.worktreeManager.cleanupAll({ preserveBranches: branchesToPreserve }), then
keep the try/catch only around cleanupAll so cleanup failures don’t overwrite
the preservedRecoveryWorktrees field used by getPreservedRecoveryWorktrees().
In `@src/session/index.ts`:
- Around line 181-196: acquireLock currently opens the lock file with
open(lockPath, 'wx') and writes/syncs but if writeFile or sync throws the
created (partial) lock file remains; change the try/catch so that after open
succeeds any non-EEXIST error from writeFile or sync triggers an attempt to
unlink(lockPath) before rethrowing (preserve the existing EEXIST branch that
returns false), ensure handle.close() is still awaited in the finally block, and
reference the same symbols (acquireLock, handle, lockPath, open, writeFile,
sync, EEXIST, close) when making the change.
In `@src/session/lock.ts`:
- Around line 157-168: When open(lockPath, 'wx') succeeds but handle.writeFile
or handle.sync throws, the code currently only closes the handle and leaves a
corrupt lock file; update the write/sync block in the function that uses
open('wx') (the same scope where variable handle: FileHandle | null is defined,
e.g., inside tryWriteLockFile) to catch errors from
handle.writeFile/handle.sync, ensure you close the handle if open succeeded,
then unlink the lockPath (using fs/promises.unlink or unlink) to remove the
partial file, and rethrow the original error so callers see the failure; keep
readLockFile/checkLock behavior unchanged and do not swallow the original
exception.
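The two session-lock prompts above converge on one pattern: create the lock exclusively, and remove the partial file when a later write or sync fails. A self-contained sketch under those assumptions (the real `acquireLock`/`tryWriteLockFile` implementations may differ):

```typescript
import { open, unlink } from 'node:fs/promises';
import type { FileHandle } from 'node:fs/promises';

// Create the lock exclusively: 'wx' fails with EEXIST if it already exists.
// If the write or sync fails after creation, remove the partial lock file
// and rethrow the original error so callers see the failure.
async function acquireLockSketch(
  lockPath: string,
  contents: string
): Promise<boolean> {
  let handle: FileHandle;
  try {
    handle = await open(lockPath, 'wx');
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
      return false; // Lock held by another process.
    }
    throw error;
  }
  try {
    await handle.writeFile(contents, 'utf-8');
    await handle.sync();
    return true;
  } catch (error) {
    // Don't leave a corrupt lock file behind; surface the original error.
    await unlink(lockPath).catch(() => {});
    throw error;
  } finally {
    await handle.close();
  }
}
```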
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
src/commands/run.test.ts, src/commands/run.tsx, src/config/schema.ts, src/config/types.ts, src/parallel/index.ts, src/parallel/merge-engine.test.ts, src/parallel/merge-engine.ts, src/parallel/parallel-executor.test.ts, src/parallel/session.ts, src/parallel/types.ts, src/parallel/worktree-manager.ts, src/session/atomic-write.ts, src/session/index.ts, src/session/lock.ts, src/session/persistence.ts, src/session/types.ts, tests/commands/run.test.ts
Codecov Report

❌ Patch coverage is 41.38%.
❌ Your patch status has failed because the patch coverage (41.38%) is below the target coverage (50.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #335      +/-   ##
==========================================
+ Coverage   45.96%   46.00%   +0.03%
==========================================
  Files         101      102       +1
  Lines       32273    33158     +885
==========================================
+ Hits        14835    15254     +419
- Misses      17438    17904     +466
```
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/commands/run.test.ts (3)
375-399: The `printRunHelp` test doesn't assert `--target-branch` appears in the output.

The test checks for `--task-range` and `--parallel`, but the new `--target-branch` flag added to the help text is not verified.

✏️ Suggested addition

  expect(fullOutput).toContain('--parallel');
+ expect(fullOutput).toContain('--target-branch');

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.test.ts` around lines 375 - 399, The test "prints help without throwing" for printRunHelp currently asserts presence of '--task-range' and '--parallel' but misses the new '--target-branch' flag; update the test inside the describe('printRunHelp') -> test('prints help without throwing') to also assert that the captured output (variable output / fullOutput) contains '--target-branch' by adding an expect(fullOutput).toContain('--target-branch') alongside the other expects so the help text change is verified.
1-4: ABOUTME comment doesn't reflect new test coverage.

The description still says "conflict resolution helpers, and related utilities" but this file now also covers `--target-branch` parsing, parallel/sequential run summary helpers, and `WorktreeInfo`-backed mocks.

✏️ Suggested update

  /**
   * ABOUTME: Tests for the run command utilities.
-  * Covers task range filtering, conflict resolution helpers, and related utilities.
+  * Covers task range filtering, conflict resolution helpers, parallel/sequential run
+  * summary helpers, and --target-branch CLI argument parsing.
   */

As per coding guidelines, the file-level JSDoc comment prefixed with "ABOUTME: " should explain the file's purpose; keeping it in sync with the actual test scope avoids reader confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.test.ts` around lines 1 - 4, Update the file-level ABOUTME JSDoc at the top of run.test.ts to accurately reflect current test coverage: mention task range filtering, conflict resolution helpers, --target-branch parsing, parallel/sequential run summary helpers, and WorktreeInfo-backed mocks so the comment aligns with the tests exercised (locate the existing ABOUTME comment block and replace its description accordingly).
562-580: Missing `durationMs` assertion when `finishedAt` is explicitly provided.

The test supplies both `startedAt: '2026-02-23T10:00:00.000Z'` and `finishedAt: '2026-02-23T10:05:00.000Z'`, implying a known 5-minute duration (300 000 ms). There is no assertion on `durationMs`, which means the implementation bug described below (see `run.tsx` lines 503-507) goes undetected here.

✏️ Suggested addition

  const output = formatSequentialRunSummary(summary);
+ expect(summary.durationMs).toBe(300_000); // 5 minutes
  expect(output).toContain('Sequential Run Summary');

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.test.ts` around lines 562 - 580, Add an assertion that when both startedAt and finishedAt are provided the computed durationMs is correct: after creating the summary with createSequentialRunSummary (using startedAt '2026-02-23T10:00:00.000Z' and finishedAt '2026-02-23T10:05:00.000Z') assert summary.durationMs === 300000, and optionally assert the formatted output from formatSequentialRunSummary(summary) includes a human-readable duration string derived from durationMs; this ensures the duration calculation (based on startedAt/finishedAt) is tested and will catch the bug in the duration logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/run.tsx`:
- Around line 503-507: The durationMs calculation currently uses Date.now() and
thus ignores the resolved finishedAt; update the logic in the
SequentialRunSummary construction so that durationMs is computed from
params.startedAt to the resolved finishedAt (the local finishedAt variable),
e.g. parse the finishedAt string to a Date and use its getTime() when
params.startedAt is present, falling back to 0 if startedAt is missing or parsed
dates are invalid; modify the code around the finishedAt and durationMs lines
(the variables finishedAt and durationMs in the function building
SequentialRunSummary) to reference the resolved finishedAt rather than
Date.now().
In `@src/tui/components/RunApp.tsx`:
- Around line 1388-1401: The effect currently toggles showParallelSummaryOverlay
whenever parallelCompletionSummaryLines is non-empty, which can re-open the
overlay after a user dismissed it; modify the useEffect (the effect using
isParallelMode, parallelCompletionSummaryLines, and
setShowParallelSummaryOverlay) to only auto-open when the lines transition from
empty → non-empty by keeping a ref for the previous length (e.g.,
prevParallelLinesCountRef) and only calling setShowParallelSummaryOverlay(true)
if prevCount === 0 && parallelCompletionSummaryLines.length > 0; still set false
when isParallelMode is false or when the current length is 0, and update the
prev ref at the end of the effect.
- Around line 2668-2735: The overlay width logic forces a minimum of 72 even
when the terminal is narrower, causing overflow; compute availableWidth =
Math.max(0, width - 4) and if availableWidth < 72 set
parallelSummaryOverlayWidth = availableWidth, otherwise keep the existing cap
(e.g. parallelSummaryOverlayWidth = Math.min(140, availableWidth) with the 72
minimum); then compute parallelSummaryContentWidth from that overlay width
(ensure it is at least 1 and at most availableWidth - 4) so the content and
overlay never exceed the terminal width — update the calculations around
parallelSummaryOverlayWidth and parallelSummaryContentWidth in this component to
use availableWidth and the conditional min-72 behavior.
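The conditional-minimum rule described in the prompt can be captured in one helper (names follow the review; the component computes these inline): below 72 available columns the overlay takes the full available width, otherwise it is capped at 140, and the content width always stays inside both bounds.

```typescript
// Sketch of the overlay/content width clamping for the summary overlay.
function overlayWidths(terminalWidth: number): { overlay: number; content: number } {
  const availableWidth = Math.max(0, terminalWidth - 4);
  const overlay =
    availableWidth < 72 ? availableWidth : Math.min(140, availableWidth);
  const content = Math.max(
    1,
    Math.min(Math.max(0, availableWidth - 4), Math.max(1, overlay - 4))
  );
  return { overlay, content };
}
```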
---
Nitpick comments:
In `@src/commands/run.test.ts`:
- Around line 375-399: The test "prints help without throwing" for printRunHelp
currently asserts presence of '--task-range' and '--parallel' but misses the new
'--target-branch' flag; update the test inside the describe('printRunHelp') ->
test('prints help without throwing') to also assert that the captured output
(variable output / fullOutput) contains '--target-branch' by adding an
expect(fullOutput).toContain('--target-branch') alongside the other expects so
the help text change is verified.
- Around line 1-4: Update the file-level ABOUTME JSDoc at the top of run.test.ts
to accurately reflect current test coverage: mention task range filtering,
conflict resolution helpers, --target-branch parsing, parallel/sequential run
summary helpers, and WorktreeInfo-backed mocks so the comment aligns with the
tests exercised (locate the existing ABOUTME comment block and replace its
description accordingly).
- Around line 562-580: Add an assertion that when both startedAt and finishedAt
are provided the computed durationMs is correct: after creating the summary with
createSequentialRunSummary (using startedAt '2026-02-23T10:00:00.000Z' and
finishedAt '2026-02-23T10:05:00.000Z') assert summary.durationMs === 300000, and
optionally assert the formatted output from formatSequentialRunSummary(summary)
includes a human-readable duration string derived from durationMs; this ensures
the duration calculation (based on startedAt/finishedAt) is tested and will
catch the bug in the duration logic.
Actionable comments posted: 1
♻️ Duplicate comments (3)
src/tui/components/RunApp.tsx (1)
2675-2685: Width clamping now correctly handles narrow terminals.

The revised calculation (`availableWidth < 72 ? availableWidth : Math.min(140, availableWidth)`) removes the hard minimum of 72 that previously caused overflow on narrow screens. This addresses the prior concern.

🤖 Prompt for AI Agents
availableWidth < 72 ? availableWidth : Math.min(140, availableWidth)) removes the hard minimum of 72 that previously caused overflow on narrow screens. This addresses the prior concern.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/components/RunApp.tsx` around lines 2675 - 2685, Update the width-clamping logic in RunApp.tsx to avoid forcing a 72-column minimum: replace the old hard-min logic with a conditional that sets parallelSummaryOverlayWidth to availableWidth when availableWidth < 72 otherwise Math.min(140, availableWidth); ensure parallelSummaryContentWidth still computes using Math.max(1, Math.min(Math.max(0, availableWidth - 4), Math.max(1, parallelSummaryOverlayWidth - 4)))) so the content width respects the new overlay width and prevents overflow on narrow terminals (refer to the variables availableWidth, parallelSummaryOverlayWidth, and parallelSummaryContentWidth).src/commands/run.tsx (2)
606-617: Previous concern about `git ls-files -u` failure handling has been addressed.

The non-zero exit path now correctly reports the failure with exit code and stderr details, consistent with the `trackedStatus` check above.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.tsx` around lines 606 - 617, The git unmerged-check now correctly handles non-zero exit from spawnSync('git', ['ls-files', '-u'], baseOptions) by recording exit code and stderr into errors; keep this logic in the block that inspects unmerged.status and the else branch that checks unmerged.stdout, ensuring the variables unmerged, exitCode, and stderr are used exactly as shown and that the error messages match the trackedStatus check format above (include exitCode and stderr when present).
504-510: Previous concern about `durationMs` using `Date.now()` has been addressed.

The duration is now correctly computed from the resolved `finishedAt` rather than `Date.now()`, with proper `Number.isFinite` guards for both timestamps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.tsx` around lines 504 - 510, The duration calculation in run.tsx has been corrected: ensure finishedAt is resolved via finishedAt = params.finishedAt ?? new Date().toISOString(), startedAtMs is computed with new Date(params.startedAt).getTime(), finishedAtMs uses new Date(finishedAt).getTime(), and durationMs uses Number.isFinite checks for both timestamps to return Math.max(0, finishedAtMs - startedAtMs) or 0; keep this logic as implemented in the durationMs calculation to avoid falling back to Date.now() and to guard against invalid dates.
🧹 Nitpick comments (2)
src/tui/components/RunApp.tsx (1)
2706-2710: `availableWidth` inside `wrapLineForOverlay` shadows the outer `availableWidth` from line 2675.

Rename the inner variable to avoid the shadowing and make the wrapping intent clearer.
♻️ Proposed rename
- const availableWidth = Math.max(1, maxLineWidth - prefix.length);
+ const lineAvailableWidth = Math.max(1, maxLineWidth - prefix.length);
- if (remaining.length <= availableWidth) {
-   wrapped.push(`${prefix}${remaining}`.padEnd(maxLineWidth, ' '));
+ if (remaining.length <= lineAvailableWidth) {
+   wrapped.push(`${prefix}${remaining}`.padEnd(maxLineWidth, ' '));
    break;
  }
- let breakAt = remaining.lastIndexOf(' ', availableWidth);
- if (breakAt <= 0 || breakAt < Math.floor(availableWidth * 0.5)) {
-   breakAt = availableWidth;
+ let breakAt = remaining.lastIndexOf(' ', lineAvailableWidth);
+ if (breakAt <= 0 || breakAt < Math.floor(lineAvailableWidth * 0.5)) {
+   breakAt = lineAvailableWidth;
  }
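With the rename applied, the full wrapping loop reads as below. This is a self-contained reconstruction under the assumption that `wrapLineForOverlay` pads each row to `maxLineWidth`; the inner `lineAvailableWidth` no longer shadows the component-level `availableWidth`.

```typescript
// Word-wraps one line to maxLineWidth columns, preferring to break at a
// space unless that would waste more than half the row.
function wrapLineForOverlay(line: string, maxLineWidth: number, prefix = ''): string[] {
  const wrapped: string[] = [];
  let remaining = line;
  const lineAvailableWidth = Math.max(1, maxLineWidth - prefix.length);
  while (remaining.length > 0) {
    if (remaining.length <= lineAvailableWidth) {
      wrapped.push(`${prefix}${remaining}`.padEnd(maxLineWidth, ' '));
      break;
    }
    let breakAt = remaining.lastIndexOf(' ', lineAvailableWidth);
    if (breakAt <= 0 || breakAt < Math.floor(lineAvailableWidth * 0.5)) {
      breakAt = lineAvailableWidth; // hard break: no usable space found
    }
    wrapped.push(`${prefix}${remaining.slice(0, breakAt)}`.padEnd(maxLineWidth, ' '));
    remaining = remaining.slice(breakAt).trimStart();
  }
  return wrapped;
}
```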
Verify each finding against the current code and only fix it if needed. In `@src/tui/components/RunApp.tsx` around lines 2706 - 2710, In wrapLineForOverlay, the inner variable named availableWidth shadows the outer availableWidth; rename the inner variable (e.g., remainingLineWidth or wrapWidth) and update all uses within the while loop that computes prefix and slices remaining so the outer availableWidth remains unshadowed and the wrapping logic is clearer (refer to function wrapLineForOverlay and the while (remaining.length > 0) loop).src/commands/run.tsx (1)
362-371: Fallback metrics silently equate "not completed" with "failed" and assume a 1:1 task-to-merge ratio.

When `completionMetrics` is null, `totalTasksFailed` is computed as `totalTasks − totalTasksCompleted` and `totalMergesCompleted` defaults to `totalTasksCompleted`. For interrupted runs, many of those "failed" tasks may simply be pending, and the merge count may not match reality. A brief inline comment documenting these as approximate fallback values would prevent confusion when reading the summary output.

📝 Proposed clarifying comment

- const metrics = completionMetrics ?? {
+ // Fallback approximation when real metrics are unavailable (e.g. interrupted runs).
+ // "failed" here means "not completed" — some tasks may still be pending.
+ const metrics = completionMetrics ?? {
    totalTasksCompleted: executorState.totalTasksCompleted,
    totalTasksFailed: Math.max(
      0,
      executorState.totalTasks - executorState.totalTasksCompleted
    ),
    totalMergesCompleted: executorState.totalTasksCompleted,
    totalConflictsResolved: 0,
    durationMs: executorState.elapsedMs,
  };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/run.tsx` around lines 362 - 371, The fallback construction of metrics when completionMetrics is null currently sets totalTasksFailed = totalTasks − totalTasksCompleted and totalMergesCompleted = totalTasksCompleted which can be misleading for interrupted runs; add a concise inline comment above the metrics assignment (referencing completionMetrics, metrics, executorState, totalTasksFailed, totalMergesCompleted) stating these are approximate fallback values used only when detailed completionMetrics are unavailable, that “not completed” is treated as pending/unknown rather than definitive failures, and that merge counts may not reflect actual merges to avoid confusion in summary output.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/tui/components/RunApp.tsx`:
- Around line 2748-2751: The current logic for visibleParallelSummaryLines uses
slice(-N) which shows only the tail and can drop the header created by
formatParallelRunSummary on small terminals; change the selection to always
include the header lines (e.g., take the first H header lines from the start of
parallelSummaryOverlayLines and then fill the remaining capacity from the end)
so parallelSummaryMaxVisibleLines still limits total lines but preserves header
info; update the code around parallelSummaryMaxVisibleLines and
visibleParallelSummaryLines to compute a headCount (e.g., 2–3) and concat
parallelSummaryOverlayLines.slice(0, headCount) with the last
(parallelSummaryMaxVisibleLines - headCount) lines, ensuring you clamp counts to
avoid negatives.
---
Duplicate comments:
In `@src/commands/run.tsx`:
- Around line 606-617: The git unmerged-check now correctly handles non-zero
exit from spawnSync('git', ['ls-files', '-u'], baseOptions) by recording exit
code and stderr into errors; keep this logic in the block that inspects
unmerged.status and the else branch that checks unmerged.stdout, ensuring the
variables unmerged, exitCode, and stderr are used exactly as shown and that the
error messages match the trackedStatus check format above (include exitCode and
stderr when present).
- Around line 504-510: The duration calculation in run.tsx has been corrected:
ensure finishedAt is resolved via finishedAt = params.finishedAt ?? new
Date().toISOString(), startedAtMs is computed with new
Date(params.startedAt).getTime(), finishedAtMs uses new
Date(finishedAt).getTime(), and durationMs uses Number.isFinite checks for both
timestamps to return Math.max(0, finishedAtMs - startedAtMs) or 0; keep this
logic as implemented in the durationMs calculation to avoid falling back to
Date.now() and to guard against invalid dates.
In `@src/tui/components/RunApp.tsx`:
- Around line 2675-2685: Update the width-clamping logic in RunApp.tsx to avoid
forcing a 72-column minimum: replace the old hard-min logic with a conditional
that sets parallelSummaryOverlayWidth to availableWidth when availableWidth < 72
otherwise Math.min(140, availableWidth); ensure parallelSummaryContentWidth
still computes using Math.max(1, Math.min(Math.max(0, availableWidth - 4),
Math.max(1, parallelSummaryOverlayWidth - 4)))) so the content width respects
the new overlay width and prevents overflow on narrow terminals (refer to the
variables availableWidth, parallelSummaryOverlayWidth, and
parallelSummaryContentWidth).
---
Nitpick comments:
In `@src/commands/run.tsx`:
- Around line 362-371: The fallback construction of metrics when
completionMetrics is null currently sets totalTasksFailed = totalTasks −
totalTasksCompleted and totalMergesCompleted = totalTasksCompleted which can be
misleading for interrupted runs; add a concise inline comment above the metrics
assignment (referencing completionMetrics, metrics, executorState,
totalTasksFailed, totalMergesCompleted) stating these are approximate fallback
values used only when detailed completionMetrics are unavailable, that “not
completed” is treated as pending/unknown rather than definitive failures, and
that merge counts may not reflect actual merges to avoid confusion in summary
output.
In `@src/tui/components/RunApp.tsx`:
- Around line 2706-2710: In wrapLineForOverlay, the inner variable named
availableWidth shadows the outer availableWidth; rename the inner variable
(e.g., remainingLineWidth or wrapWidth) and update all uses within the while
loop that computes prefix and slices remaining so the outer availableWidth
remains unshadowed and the wrapping logic is clearer (refer to function
wrapLineForOverlay and the while (remaining.length > 0) loop).
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/parallel/index.ts (1)
170-200: ⚠️ Potential issue | 🟠 Major

Update task counters after successful manual conflict retry.

When a pending conflict is resolved via `retryConflictResolution`, the task remains counted as failed and never increments `totalTasksCompleted`, so the final status/summary can still report an interrupted run even though the merge succeeded.

🛠️ Suggested fix

  if (allResolved) {
    // Success! Remove the resolved pending entry and mark task as complete.
    this.removePendingConflictByOperationId(operation.id);
    try {
      await this.tracker.completeTask(workerResult.task.id);
    } catch {
      // Log but don't fail after successful resolution
    }
    await this.mergeProgressFile(workerResult);
    this.totalConflictsResolved += resolutions.length;
    this.totalMergesCompleted++;
+   this.totalTasksCompleted++;
+   this.totalTasksFailed = Math.max(0, this.totalTasksFailed - 1);
    this.emitNextPendingConflictIfAny();
    return true;
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parallel/index.ts` around lines 170 - 200, After a successful manual conflict retry in retryConflictResolution, update the task counters so the resolved task is no longer counted as failed: after the successful resolution/merge (e.g. after await this.mergeProgressFile(workerResult)) increment this.totalTasksCompleted and, if a failed-counter exists (e.g. this.totalTasksFailed), decrement it accordingly; do this before emitNextPendingConflictIfAny() and ensure you only change counters when the resolution/merge succeeded (and handle the case where totalTasksFailed may be undefined).
♻️ Duplicate comments (2)
src/tui/components/RunApp.tsx (2)
2675-2685: Past concern resolved — width correctly clamped to terminal bounds.

On terminals narrower than 72 columns, `parallelSummaryOverlayWidth` now falls back to `availableWidth` rather than enforcing the old 72-column minimum.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/components/RunApp.tsx` around lines 2675 - 2685, The width-clamping logic for parallelSummaryOverlayWidth and parallelSummaryContentWidth in RunApp.tsx is correct (availableWidth now used for narrow terminals), so no code change is required; instead remove the duplicate review tag/comment from the PR and mark the approval, and keep the expressions in the consts parallelSummaryOverlayWidth and parallelSummaryContentWidth as shown to preserve the corrected behavior.
1390-1408: Past concern resolved — transition-guarded auto-show is correct.

The `prevParallelSummaryLinesCountRef` approach correctly gates the auto-open to the `0 → N` transition only, so user dismissal is respected on subsequent renders.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/components/RunApp.tsx` around lines 1390 - 1408, The previous concern about auto-showing the parallel summary has been resolved; keep the existing useEffect logic as-is: retain the prevParallelSummaryLinesCountRef, the dependency array [isParallelMode, parallelCompletionSummaryLines], and the calls to setShowParallelSummaryOverlay so the overlay only auto-opens on the 0→N transition while respecting user dismissal (refer to prevParallelSummaryLinesCountRef, useEffect block, and setShowParallelSummaryOverlay in RunApp.tsx).
🧹 Nitpick comments (2)
src/tui/components/RunApp.tsx (1)
2749-2768: Add a truncation indicator between head and tail sections.

The head+tail approach correctly preserves the header, but when the two slices are concatenated without a separator, there is no visual cue that content has been omitted in the middle.
✨ Optional: add an ellipsis row between head and tail
  return [
    ...parallelSummaryOverlayLines.slice(0, headCount),
+   ' …',
    ...parallelSummaryOverlayLines.slice(-tailCount),
  ];

🤖 Prompt for AI Agents
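Pulling the head-plus-tail selection into a helper makes the ellipsis row easy to reason about. Function and variable names here are illustrative, not the component's actual identifiers; the invariant is that the header lines always survive and the omitted middle is marked visibly.

```typescript
// Keeps the first headCount lines, an ellipsis marker, and as many
// trailing lines as still fit within maxVisible rows.
function visibleSummaryLines(
  lines: string[],
  maxVisible: number,
  headCount = 2
): string[] {
  if (lines.length <= maxVisible) {
    return lines; // everything fits; no truncation needed
  }
  const head = Math.min(headCount, Math.max(0, maxVisible - 1));
  const tail = Math.max(0, maxVisible - head - 1); // reserve one row for the ellipsis
  return [...lines.slice(0, head), ' …', ...lines.slice(lines.length - tail)];
}
```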
Verify each finding against the current code and only fix it if needed. In `@src/tui/components/RunApp.tsx` around lines 2749 - 2768, visibleParallelSummaryLines currently concatenates head and tail slices with no separator; update the useMemo in RunApp (visibleParallelSummaryLines) to insert a truncation indicator (e.g., an ellipsis string or a special row object) between the head slice and tail slice whenever tailCount > 0 and the concatenated length is less than parallelSummaryOverlayLines.length so the UI shows omitted content; ensure the indicator matches the item shape used in parallelSummaryOverlayLines so rendering components can display it as a distinct row.src/session/index.ts (1)
181-212: Past partial-lock concern resolved; consider closing the handle inside the `try` block for correctness parity with `writeFileAtomic`.

If `handle.close()` in the `finally` block throws after `return true` is pending, JavaScript replaces the return value with that exception — the caller sees a failure even though the lock file was fully written and `sync()`-ed. `writeFileAtomic` avoids this by closing the handle inside `try` and setting `handle = null` before the next fallible operation.

✨ Proposed alignment with the `writeFileAtomic` pattern

  handle = await open(lockPath, 'wx');
  await handle.writeFile(JSON.stringify(lock, null, 2), 'utf-8');
  await handle.sync();
+ await handle.close();
+ handle = null;
  return true;

With `handle = null` set, the `finally` block's `if (handle)` guard skips the redundant close and the clean-up path is unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/session/index.ts` around lines 181 - 212, Move the final handle.close() into the try block (after await handle.sync()) and set handle = null immediately after a successful close so the pending return isn't overwritten by a later exception in finally; keep the existing shouldCleanupPartialLock/writeError logic and the finally block's unlink cleanup unchanged (it will skip close when handle is null). Specifically, in the block that opens/writes/syncs the lock (variables handle, shouldCleanupPartialLock, writeError), call await handle.close(); handle = null; return true; instead of returning before closing, so the behavior matches the writeFileAtomic pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/parallel/index.ts`:
- Around line 170-200: After a successful manual conflict retry in
retryConflictResolution, update the task counters so the resolved task is no
longer counted as failed: after the successful resolution/merge (e.g. after
await this.mergeProgressFile(workerResult)) increment this.totalTasksCompleted
and, if a failed-counter exists (e.g. this.totalTasksFailed), decrement it
accordingly; do this before emitNextPendingConflictIfAny() and ensure you only
change counters when the resolution/merge succeeded (and handle the case where
totalTasksFailed may be undefined).
---
Duplicate comments:
In `@src/tui/components/RunApp.tsx`:
- Around line 2675-2685: The width-clamping logic for
parallelSummaryOverlayWidth and parallelSummaryContentWidth in RunApp.tsx is
correct (availableWidth now used for narrow terminals), so no code change is
required; instead remove the duplicate review tag/comment from the PR and mark
the approval, and keep the expressions in the consts parallelSummaryOverlayWidth
and parallelSummaryContentWidth as shown to preserve the corrected behavior.
- Around line 1390-1408: The previous concern about auto-showing the parallel
summary has been resolved; keep the existing useEffect logic as-is: retain the
prevParallelSummaryLinesCountRef, the dependency array [isParallelMode,
parallelCompletionSummaryLines], and the calls to setShowParallelSummaryOverlay
so the overlay only auto-opens on the 0→N transition while respecting user
dismissal (refer to prevParallelSummaryLinesCountRef, useEffect block, and
setShowParallelSummaryOverlay in RunApp.tsx).
---
Nitpick comments:
In `@src/session/index.ts`:
- Around line 181-212: Move the final handle.close() into the try block (after
await handle.sync()) and set handle = null immediately after a successful close
so the pending return isn't overwritten by a later exception in finally; keep
the existing shouldCleanupPartialLock/writeError logic and the finally block's
unlink cleanup unchanged (it will skip close when handle is null). Specifically,
in the block that opens/writes/syncs the lock (variables handle,
shouldCleanupPartialLock, writeError), call await handle.close(); handle = null;
return true; instead of returning before closing, so the behavior matches the
writeFileAtomic pattern.
In `@src/tui/components/RunApp.tsx`:
- Around line 2749-2768: visibleParallelSummaryLines currently concatenates head
and tail slices with no separator; update the useMemo in RunApp
(visibleParallelSummaryLines) to insert a truncation indicator (e.g., an
ellipsis string or a special row object) between the head slice and tail slice
whenever tailCount > 0 and the concatenated length is less than
parallelSummaryOverlayLines.length so the UI shows omitted content; ensure the
indicator matches the item shape used in parallelSummaryOverlayLines so
rendering components can display it as a distinct row.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- src/commands/run.tsx
- src/config/schema.ts
- src/parallel/index.ts
- src/parallel/merge-engine.ts
- src/parallel/parallel-executor.test.ts
- src/parallel/session.ts
- src/parallel/worktree-manager.ts
- src/session/atomic-write.ts
- src/session/index.ts
- src/session/lock.ts
- src/session/persistence.ts
- src/tui/components/RunApp.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- src/config/schema.ts
Summary
- `--target-branch` / config and add clearer return-to-original-branch error reporting

Why this fixes #328
Validation
- bun test src/parallel/merge-engine.test.ts src/parallel/parallel-executor.test.ts src/commands/run.test.ts tests/commands/run.test.ts src/parallel/session.test.ts
- bun run typecheck && bun run build
- UBS_MAX_DIR_SIZE_MB=5000 ubs --diff

Fixes #328
Summary by CodeRabbit
New Features
Improvements
Security