Fix removed workspace resurrecting via serialization race #52035

Merged

rtfeldman merged 1 commit into main from AI-96/fix-workspace-removal-race
Mar 23, 2026
Conversation

@rtfeldman
Contributor

In remove_workspace, the removed Entity<Workspace> could still have a pending serialize_workspace throttle timer (200ms). When that timer fired, serialize_workspace_internal would write the old session_id back to the DB — undoing the removal. On next restart, the workspace would reappear.

The race window opens whenever any state change (worktree change, breakpoint change, etc.) triggers serialize_workspace within 200ms before remove_workspace is called.

Fix: Before the DB cleanup task, update the removed workspace entity to:

  1. session_id.take() — so any in-flight serialization writes session_id: None
  2. _schedule_serialize_workspace.take() — cancel the pending throttle timer
  3. _serialize_workspace_task.take() — cancel any actively running serialization task

This mirrors what remove_from_session already does (clearing session_id), but remove_workspace was missing it.

Release Notes:

  • Fixed a bug where a removed workspace could reappear on next launch due to a serialization race.

@cla-bot cla-bot bot added the cla-signed The user has signed the Contributor License Agreement label Mar 20, 2026
@zed-community-bot zed-community-bot bot added the staff Pull requests authored by a current member of Zed staff label Mar 20, 2026
@zed-codeowner-coordinator zed-codeowner-coordinator bot requested review from a team, cameron1024 and reflectronic and removed request for a team March 20, 2026 16:54
@rtfeldman rtfeldman marked this pull request as draft March 20, 2026 16:55
@rtfeldman rtfeldman marked this pull request as ready for review March 23, 2026 15:46
@rtfeldman rtfeldman merged commit 4049a4c into main Mar 23, 2026
41 checks passed
@rtfeldman rtfeldman deleted the AI-96/fix-workspace-removal-race branch March 23, 2026 15:46
@zed-codeowner-coordinator zed-codeowner-coordinator bot requested a review from a team March 23, 2026 15:46
AmaanBilwar pushed a commit to AmaanBilwar/zed that referenced this pull request Mar 23, 2026