Initialize primary term for shrunk indices #25307
Today, when an index is shrunk, the primary terms for its shards start from one. This is a problem, as the source index will already contain sequence numbers assigned across primary terms. To preserve document-level sequence number semantics, the primary terms of the target shards must start from the maximum primary term of all the shards in the source index. This commit ensures that this is the case.
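The rule described above — seed the target's primary term with the maximum across the source shards — can be sketched in plain Java. The class and method names below are hypothetical illustrations, not the actual Elasticsearch code:

```java
import java.util.stream.LongStream;

class PrimaryTermInit {
    // Hypothetical helper: the shrunken index must start its primary term
    // at the maximum primary term across all source shards, so that
    // sequence numbers already assigned under earlier terms keep their
    // document-level semantics after the shrink.
    static long initialPrimaryTerm(long[] sourceShardPrimaryTerms) {
        return LongStream.of(sourceShardPrimaryTerms).max().orElse(1L);
    }

    public static void main(String[] args) {
        // source index with five shards whose primary terms have diverged
        long[] terms = {1L, 3L, 2L, 7L, 4L};
        System.out.println(PrimaryTermInit.initialPrimaryTerm(terms)); // prints 7
    }
}
```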
bleskes left a comment
This looks great. My only ask is to add a unit test to MetaDataCreateIndexServiceTests.
    tmpImdBuilder.settings(actualIndexSettings);

    if (shrinkFromIndex != null) {
        final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);
Can you please add a comment explaining why we do this?
    ensureGreen();

    // restart random data nodes to force the primary term for some shards to increase
    for (int i = 0; i < randomIntBetween(0, 16); i++) {
Why do we need up to 16 restarts? Maybe it's faster to fail shards by getting IndexShard instances from internalCluster. Also, this for loop calls randomIntBetween(0, 16) many times; I'm not sure that's what you intended.
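The loop-bound issue the reviewer points out can be illustrated with plain java.util.Random (a sketch, not the actual test code; the test framework's randomIntBetween(0, 16) is stood in for by random.nextInt(17)):

```java
import java.util.Random;

class RandomBoundDemo {
    // Pattern from the review: the bound is re-drawn on every iteration,
    // so the number of iterations is NOT a single uniform draw from
    // [0, 16] -- each pass re-evaluates the loop condition with a fresh
    // random bound.
    static int redrawnBoundLoop(Random random) {
        int iterations = 0;
        for (int i = 0; i < random.nextInt(17); i++) {
            iterations++;
        }
        return iterations;
    }

    // Likely intended pattern: draw the bound once, before the loop.
    static int fixedBoundLoop(Random random) {
        final int bound = random.nextInt(17);
        int iterations = 0;
        for (int i = 0; i < bound; i++) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        Random random = new Random();
        System.out.println(RandomBoundDemo.fixedBoundLoop(random));
    }
}
```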
Sure, I pushed a change that does this.
@bleskes Actually this was my first approach before opting for the integration test in …

Back to you @bleskes.

fair enough
Relates #10708