Add test for broken WAL delta after stream records abort #7787
Merged
Force-pushed from 275d10b to e4d6444
generall approved these changes on Dec 17, 2025
- Make set_replica_state async
- Add function called when active state of local replica changes
- Add snapshot for newest clocks
- Bump newest clocks snapshot on replica deactivation
- Use newest clocks snapshot during recovery
- Add enum for specifying whether to take or clear clocks snapshot
- Store clock snapshot inside clock map, removing extra file

  This greatly simplifies state handling. It also prevents any kind of desynchronization because all newest clocks are always persisted atomically.
- Immediately persist clocks after taking snapshot
- Always update snapshot, only take if missing
- Take clock snapshots through each shard flavor, including proxies
- Propagate dedicated functions for taking and clearing clocks snapshot
- Only persist clocks immediately if changed on snapshot/clear
- Simplify recovery point logic, always take clocks snapshot if exists
- Remove unwrap
Force-pushed from f225e02 to 3790ef3
agourlay approved these changes on Dec 17, 2025
agourlay (Member) left a comment:
Impressive integration test 👏
timvisee added a commit that referenced this pull request on Dec 18, 2025:
- Add test to reproduce broken WAL delta after aborting stream records
- Add staging env var to slow down stream records transfers for test
- Tweak test formatting and utilities a bit
- Add comment to test, link to PR describing bug
- Update test so it still succeeds with patched behavior
- Fix broken WAL delta after stream records abort (#7791)
- Make set_replica_state async
- Add function called when active state of local replica changes
- Add snapshot for newest clocks
- Bump newest clocks snapshot on replica deactivation
- Use newest clocks snapshot during recovery
- Add enum for specifying whether to take or clear clocks snapshot
- Store clock snapshot inside clock map, removing extra file

  This greatly simplifies state handling. It also prevents any kind of desynchronization because all newest clocks are always persisted atomically.
- Immediately persist clocks after taking snapshot
- Always update snapshot, only take if missing
- Take clock snapshots through each shard flavor, including proxies
- Propagate dedicated functions for taking and clearing clocks snapshot
- Only persist clocks immediately if changed on snapshot/clear
- Simplify recovery point logic, always take clocks snapshot if exists
- Remove unwrap
- Fix typo
- Fix doc comment
- Transfer driver is async, use Tokio sleep
- Reduce visibility
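The fix commits above revolve around one idea: keep a snapshot of the newest clocks inside the clock map, take it when the local replica is deactivated, and prefer it when building a recovery point. The sketch below is a simplified, hypothetical model of that idea; the type and method names (`ClockMap`, `take_snapshot`, `recovery_clocks`) are illustrative and not qdrant's actual API.

```rust
// Hedged sketch of the fix: a clock snapshot stored inside the clock map
// itself, so snapshot and clocks can be persisted atomically together.

#[derive(Clone, Default, PartialEq, Debug)]
struct Clocks {
    ticks: Vec<u64>, // one tick per peer (index = peer id, illustrative)
}

#[derive(Default)]
struct ClockMap {
    newest: Clocks,
    snapshot: Option<Clocks>, // taken on replica deactivation
}

impl ClockMap {
    /// Called when the local replica is deactivated (e.g. a transfer aborts).
    /// Freezes the clocks that are known to be fully applied.
    fn take_snapshot(&mut self) {
        self.snapshot = Some(self.newest.clone());
    }

    /// Called when the replica becomes active again.
    fn clear_snapshot(&mut self) {
        self.snapshot = None;
    }

    /// Recovery point: prefer the snapshot, so a clock jump caused by an
    /// aborted transfer cannot shrink the follow-up WAL delta.
    fn recovery_clocks(&self) -> &Clocks {
        self.snapshot.as_ref().unwrap_or(&self.newest)
    }
}

fn main() {
    let mut map = ClockMap::default();
    map.newest = Clocks { ticks: vec![10] };
    map.take_snapshot(); // replica deactivated at tick 10

    // Clocks keep advancing while the replica is inactive.
    map.newest = Clocks { ticks: vec![100] };

    // Recovery still resolves from tick 10, not 100.
    assert_eq!(map.recovery_clocks().ticks, vec![10]);
    println!("recover from {:?}", map.recovery_clocks().ticks);
}
```

Storing the snapshot inside the clock map (rather than in a separate file) is what lets both be persisted in one atomic write, matching the "removing extra file" commit above.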
Merged
Fixed by #7791
Aborting a stream records transfer (or another transfer type) may break subsequent WAL delta transfers.
This PR adds a test that reproduces the problematic behavior.
Specifically, any new updates arriving during a stream records (or other) transfer bump the last seen clocks. This also happens on the node receiving the transfer. If the stream records transfer is aborted, the last seen clocks may have jumped far ahead of the data actually received. A follow-up WAL delta transfer only transfers changes since the last seen clocks, and so misses every change inside that gap.
In practice this is problematic:
Imagine the initial stream records transfer only covered 10% of the points before it was aborted. The WAL delta transfer after it then misses the remaining 90% of point changes, corrupting the target replica.
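The failure mode above can be sketched in a few lines. This is a hypothetical model, not qdrant's actual types: a single `last_seen` tick stands in for the per-peer clock map, and the delta length is computed naively from it.

```rust
// Hypothetical sketch: a last-seen clock that advances during an aborted
// transfer makes the follow-up WAL delta resolve to "nothing to send".

#[derive(Default)]
struct ClockMap {
    last_seen: u64, // highest clock tick observed by this replica
}

impl ClockMap {
    fn advance(&mut self, tick: u64) {
        self.last_seen = self.last_seen.max(tick);
    }
}

fn main() {
    let mut target = ClockMap::default();
    let total_ops: u64 = 100;

    // Updates forwarded during the stream records transfer bump the clock
    // on the target, even though the transfer itself copied only ~10% of
    // the points before being aborted.
    for tick in 1..=total_ops {
        target.advance(tick);
    }

    // The follow-up WAL delta starts from `last_seen`, so it computes an
    // empty delta: the missing ~90% of point changes are never sent.
    let delta_len = total_ops.saturating_sub(target.last_seen);
    assert_eq!(delta_len, 0);
    println!("delta_len = {delta_len}");
}
```

The point of the sketch is that the clock is advanced by observing updates, not by applying the transferred data, so after an abort it overstates what the replica actually holds.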
Test:
All Submissions:
- Did you create your branch from `dev`?

New Feature Submissions:
- Have you run the `cargo +nightly fmt --all` command prior to submission?
- Have you run the `cargo clippy --all --all-features` command?

Changes to Core Features: