devcontainer: Fix OpenDevContainer action panic due to double workspace entity lease#49058
Conversation
|
If you build your changes and try your reproduction steps @oliverbarnes I think this still crashes Zed. Can you confirm? |
|
@oliverbarnes this still crashes Zed. |
|
@yeskunall it's working for me. Tried again just now - opening a devcontainer either from the command palette or from the modal (toast?) widget that pops up, both work |
|
@oliverbarnes not trying to be argumentative, but it does not. See recording (apologies for the quality -- having issues recording today). Built-in.Retina.Display.2026-02-12.15.06.19.mp4 |
|
@yeskunall all good :) had the same idea and was recording over here. The difference is that I'm passing a project in the command line. Will try your scenario as well. Next time we could just hop on a call and share our screens :) Screen.Recording.2026-02-12.at.20.13.53.720p.mov |
@oliverbarnes ah, that makes sense. If you take a look at the code that launches the modal for the Dev Containers, you’ll see there is another call site that causes a panic. I was actually making a PR to fix this, when I noticed yours.
@/KyleBarton is our Dev Containers SME, which is why you’ll see Mikayla assigned it to them. That being said -- happy to. My calendar is public and open to anyone. Book any available slot :) |
|
Got it :) But so, I'm not sure how to repro this other panic... I just ran Screen.Recording.2026-02-12.at.20.24.16.720p.mov |
|
Your call - if you have a fix that handles all scenarios, by all means ship it! Happy to close this if that's the case. |
|
Happy to help here! There is still a difference in how I’m testing, vs. how you are in your latest recording. Notice in mine, I do not accept the prompt to open the project in a Dev Container, and instead directly try to open it using the action.
If you can also reproduce the crash using my steps, that’d be great! |
|
OOoh, when I try to open from the command palette after canceling the initial build it does crash for me here. Gonna take this into account and iterate, then. Stay tuned |
|
@yeskunall tested all the scenarios we discussed here, all working for me now. I wish there was a way to cover all these with tests... |
|
@oliverbarnes this still applies:
I’ll let Kyle confirm if the changes are sound, but you at least did find the call site where the panic happens.
Yup, I agree with you. I’m from the front-end (web) world where we’d cover these outside of unit tests, but not sure how you’d write e2e/integration test(s) for Zed. Haven’t even given it a single thought. |
|
Yeah, no idea how E2E tests would work here, with external dependencies on the CLI, Docker... I guess it's doable, but super heavy duty. Claude suggested adding a couple of tests at the action dispatch level, following an existing testing pattern. Nice and lightweight. They manage to reproduce the panic when run without the fix, and they pass with it, so I've gone ahead and pushed them. |
|
@oliverbarnes thanks for this - it looks right to me. @yeskunall can you just confirm that the new changes work for you as well, since you saw an earlier crash? I think there's probably additional work to make multi-workspace devcontainer selection nicer, but that should probably be in a future change. |
|
@KyleBarton I just pulled the changes and rebuilt Zed and it all looks good on my end now. Thank you both! |
@yeskunall coincidentally this came up on my feed today: https://simonwillison.net/2026/Feb/7/software-factory/ He writes about an experiment using what they call a Digital Twin Universe - a complete E2E/QA environment running mock clones of all the external services their software needs to integrate with (Okta, Slack, Jira, several Google services). All written by agents. |
Closes #49055
Heads up: this might be a naïve solution. I ran into the issue after merging latest main into #48896 and confirming that it was unrelated to that PR and was coming in from upstream.
An agent one-shotted the fix; it works and the tests pass, but I'm still wrapping my head around the changes that led to the bug. I figured the breakage was bad enough (I couldn't open devcontainers at all) to warrant submitting a possibly naïve fix.
Fix
Hoists the `find_devcontainer_configs` call out of `new_dev_container` and into the call site, where we already have a direct `&mut Workspace` reference that doesn't go through the entity map. The computed configs are passed into `new_dev_container` as an argument.
What was happening
After #48800 ("Re-add MultiWorkspace"), `with_active_or_new_workspace` nests a `Workspace` entity lease inside a `MultiWorkspace` entity lease. The `OpenDevContainer` handler was also changed from async to sync in the same PR, so `RemoteServerProjects::new_dev_container` now runs while `Workspace` is leased. Inside `new_dev_container`, a `WeakEntity<Workspace>::read_with` call tries to read `Workspace` through the entity map, finds it already leased, and panics.
Release Notes:
- Fixed a panic in the `OpenDevContainer` action.
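For anyone unfamiliar with the failure mode described above: GPUI's entity map hands out exclusive leases much like a `RefCell` hands out borrows, and reading an entity while its lease is live panics at runtime. Here is a minimal plain-Rust analogy using `RefCell` (not actual GPUI types — the `Workspace` struct and lease names below are illustrative stand-ins):

```rust
use std::cell::RefCell;

struct Workspace {
    name: String,
}

fn main() {
    // Stand-in for the entity map: one slot holding a Workspace.
    let slot = RefCell::new(Workspace { name: "ws".into() });

    // The sync OpenDevContainer handler runs while Workspace is leased,
    // analogous to holding an exclusive borrow for the handler's duration.
    let mut lease = slot.borrow_mut();
    lease.name.push_str(" (leased)");

    // Inside new_dev_container, read_with attempted a second access through
    // the map. With RefCell, calling borrow() here would panic; try_borrow()
    // demonstrates the same conflict without aborting.
    assert!(slot.try_borrow().is_err());

    // The shape of the fix: do the read where the &mut reference already
    // exists, instead of going back through the map while the lease is held.
    // Once the lease is released, reads succeed again.
    drop(lease);
    assert!(slot.try_borrow().is_ok());
    println!("ok: {}", slot.borrow().name); // prints "ok: ws (leased)"
}
```

This is only an analogy; the actual fix avoids the second map access entirely by computing the configs at the call site and passing them into `new_dev_container`.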