fix(runtimed): prevent pipe mode stream corruption by buffering outgoing frames (#613) #616
Merged
Conversation
In pipe mode, `ReceiveFrontendSyncMessage` was writing sync frames directly to `client.stream` inside the `select!` command handler. If the daemon was sending data at the same time, the `select!` would drop the pending socket read future, then the command handler would write to the stream, corrupting the framing. The daemon would then read payload bytes as a length prefix, producing bogus frame sizes (observed: 1.15 GB).

Fix: buffer outgoing pipe frames in a `VecDeque` and flush them at the top of the loop before entering `select!`. This ensures writes only happen when no read is pending on the socket. The queue is drained synchronously before the next `select!` iteration. Full peer mode (runtimed-py) is unaffected — its writes go through `sync_to_daemon()`, which owns the read/write sequence.
In pipe mode (#608), the Automerge sync path doesn't deliver output changes — the daemon's sync state tracks the relay peer, not the WASM peer, so all sync frames arrive with `changed=false`. `materializeCells` never runs after execution, and outputs never render. Re-enable the broadcast-driven output path (`appendOutput` via the `onOutput` callback). The broadcast pipeline works correctly — outputs arrive, blob manifests resolve, and the external store updates.

No duplicate risk: since sync frames have `changed=false`, `materializeCells` doesn't run after execution, so there's only one source of output updates (broadcasts). The proper fix is to align the sync states so the daemon talks directly to the WASM peer through the pipe (skip `do_initial_sync` in pipe mode). Tracked as a follow-up.
Fixes #613.
Bug
In pipe mode (#608), `ReceiveFrontendSyncMessage` wrote sync frames directly to `client.stream` inside the `select!` command handler. If the daemon was sending data simultaneously, `select!` would drop the pending socket read, then the write would corrupt the framing. The daemon reads payload bytes as a length prefix → `frame too large: 1154398000 bytes`.

Fix
Buffer outgoing pipe frames in a `VecDeque` and flush them at the top of the loop before entering `select!`. Writes only happen when no read is pending.

Full peer mode (runtimed-py) is unaffected — its writes go through `sync_to_daemon()`, which owns the read/write sequence.

Test plan
- `cargo test -p runtimed --lib` — 234 passed
- `cargo test -p runtimed --test '*'` — 15 integration tests
- `cargo test -p notebook --lib` — 126 passed