Restore local checkpoint tracker on promotion#25553
jasontedor merged 4 commits into elastic:master
Conversation
When a shard is promoted to primary, it's possible that it was previously a replica that had started following a new primary. When it started following that new primary, the state of its local checkpoint tracker was reset. Upon promotion, it's possible that the state of the local checkpoint tracker has not yet been restored by a successful primary-replica re-sync. To account for this, we must restore the state of the local checkpoint tracker when a replica shard is promoted to primary. To do this, we stream the operations in the translog, marking each operation in the translog as completed. We do this before we fill the gaps on the newly promoted primary, ensuring that we end up with a primary shard whose history is complete up to the largest maximum sequence number it has ever seen.
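The restore-then-fill-gaps sequence described above can be sketched with a simplified, hypothetical tracker. This is not the real Elasticsearch `LocalCheckpointTracker` or translog API, just an illustration of why replaying translog operations before filling gaps yields a complete history:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified sketch of the restore logic described above.
public class PromotionRestoreSketch {
    // Minimal stand-in for a local checkpoint tracker.
    static class Tracker {
        private final Set<Long> completed = new HashSet<>();
        private long checkpoint = -1; // -1 == no ops performed yet

        void markSeqNoAsCompleted(long seqNo) {
            completed.add(seqNo);
            // advance the checkpoint over any contiguous run of completed ops
            while (completed.contains(checkpoint + 1)) {
                checkpoint++;
            }
        }

        long getCheckpoint() {
            return checkpoint;
        }
    }

    public static void main(String[] args) {
        Tracker tracker = new Tracker();
        // simulate streaming the translog on promotion (note the gap at seq# 2)
        long[] translogSeqNos = {0, 1, 3, 4};
        for (long seqNo : translogSeqNos) {
            tracker.markSeqNoAsCompleted(seqNo);
        }
        // the checkpoint stalls at 1 because seq# 2 is missing
        System.out.println(tracker.getCheckpoint()); // prints 1
        // the newly promoted primary then fills the gap (e.g. with a no-op)
        tracker.markSeqNoAsCompleted(2);
        System.out.println(tracker.getCheckpoint()); // prints 4
    }
}
```

The point of restoring from the translog first is that gap-filling then only has to cover sequence numbers the shard truly never saw.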
Force-pushed from 4763341 to 950cede
     */
    public abstract void deactivateThrottling();

    /**
nit: this needs to be adapted to the last change
can you also comment somewhere why it's needed? either here or in another place.
The comment says "This is used to restore the state of the local checkpoint tracker on primary promotion." You think that more is needed?
yeah, I think it's good to say why it can suddenly go out of sync. That's not normal.
Okay, I pushed an explanation to the place in index shard where we use the method. 😄
    }

    @Override
    public void restoreLocalCheckpointFromTranslog() throws IOException {
nit: can we do this under lock and after ensureOpen, just to be safe?
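The guard the reviewer is asking for might look like the following. This is a hypothetical sketch with illustrative names (the real `InternalEngine` uses its own `ReleasableLock` and failure handling, not a bare `ReentrantReadWriteLock`):

```java
import java.io.IOException;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: take the engine's read lock and call ensureOpen()
// before replaying the translog, so the restore cannot race with a
// concurrent engine close. Names are illustrative, not the real API.
public class GuardedRestoreSketch {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private volatile boolean closed = false;
    private int restoreCalls = 0;

    private void ensureOpen() {
        if (closed) {
            throw new IllegalStateException("engine is closed");
        }
    }

    public void restoreLocalCheckpointFromTranslog() throws IOException {
        rwl.readLock().lock();
        try {
            ensureOpen(); // fail fast if the engine was closed concurrently
            // ... stream translog operations here, marking each seq# completed ...
            restoreCalls++;
        } finally {
            rwl.readLock().unlock();
        }
    }

    public void close() {
        closed = true;
    }

    public int getRestoreCalls() {
        return restoreCalls;
    }
}
```

Checking `ensureOpen()` only after acquiring the lock closes the window where the engine shuts down between the check and the translog replay.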
     * Returns the maximum sequence number for the shard.
     *
     * @return the maximum sequence number
     */
since this is only used for tests, shall we use seqNoStats and remove this?
    };
    final int operations = randomIntBetween(0, 1024);
    final Set<Long> expectedCompletedSeqNos = new HashSet<>();
    for (int i = 0; i < operations; i++) {
how do you feel about randomizing the order of insertions and doing some rollGeneration for fun?
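A sketch of the order-randomization idea, using a simplified stand-in for the tracker (the `rollGeneration` part is omitted here since it needs a real translog; class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Sketch of the reviewer's suggestion: feed sequence numbers to the tracker
// in a random order and verify the checkpoint still converges. The tracker
// below is a simplified stand-in for LocalCheckpointTracker, not the real class.
public class ShuffledTrackerSketch {
    static class SimpleTracker {
        private final Set<Long> completed = new HashSet<>();
        private long checkpoint = -1;

        void markSeqNoAsCompleted(long seqNo) {
            completed.add(seqNo);
            while (completed.contains(checkpoint + 1)) {
                checkpoint++;
            }
        }

        long getCheckpoint() {
            return checkpoint;
        }
    }

    public static void main(String[] args) {
        List<Long> seqNos = new ArrayList<>();
        for (long i = 0; i < 1024; i++) {
            seqNos.add(i);
        }
        Collections.shuffle(seqNos, new Random(42)); // randomize insertion order
        SimpleTracker tracker = new SimpleTracker();
        for (long seqNo : seqNos) {
            tracker.markSeqNoAsCompleted(seqNo);
        }
        // regardless of insertion order, all 1024 ops complete, so checkpoint == 1023
        System.out.println(tracker.getCheckpoint()); // prints 1023
    }
}
```

The property under test is that the final checkpoint depends only on the set of completed operations, not on the order in which they were marked.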
* master: (42 commits)
  - Harden global checkpoint tracker
  - Remove deprecated created and found from index, delete and bulk (elastic#25516)
  - fix testEnsureVersionCompatibility for 5.5.0 release
  - fix Version.v6_0_0 min compatibility version to 5.5.0
  - Add bwc indices for 5.5.0
  - Add v5_5_1 constant
  - [DOCS] revise high level client Search Scroll API docs (elastic#25599)
  - Improve REST error handling when endpoint does not support HTTP verb, add OPTIONS support (elastic#24437)
  - Avoid SecurityException in repository-S3 on DefaultS3OutputStream.flush() (elastic#25254)
  - [Tests] Add tests for CompletionSuggestionBuilder#build() (elastic#25575)
  - Enable cross-setting validation
  - [Docs] Fix typo in bootstrap-checks.asciidoc (elastic#25597)
  - Index ids in binary form. (elastic#25352)
  - bwc checkout should fetch from all remotes
  - IndexingIT should check for global checkpoints regardless of master version
  - [Tests] Add tests for PhraseSuggestionBuilder#build() (elastic#25571)
  - Remove unused class MinimalMap (elastic#25590)
  - [Docs] Document Scroll API for Java High Level REST Client (elastic#25554)
  - Disable date field mapping changing (elastic#25285)
  - Allow BWC Testing against a specific branch (elastic#25510)
  - ...
Relates #10708, relates #25355