
Cluster state and recovery constructs for in-place shard split#20979

Merged
shwetathareja merged 1 commit into opensearch-project:main from vikasvb90:online_shard_split
Apr 1, 2026

Conversation

@vikasvb90
Contributor

Description

Add cluster state infrastructure for in-place shard split

This PR adds the cluster state update service and supporting POJO changes needed to trigger an in-place shard split.

Changes:

Cluster state update service:

  • MetadataInPlaceShardSplitService: Submits an acked cluster state update task that updates SplitShardsMetadata on the target index and triggers a reroute. Validates that the index exists, the shard is not already splitting or already split, and virtual shards are not enabled on the index.
  • InPlaceShardSplitClusterStateUpdateRequest: Request POJO holding index name, shard ID, split-into count, and cause.
  • ClusterManagerTask: Added IN_PLACE_SHARD_SPLIT task key for cluster manager task throttling.

Routing POJO changes to support shard split lifecycle:

  • ShardRoutingState: Added SPLITTING state for parent shards undergoing an in-place split.
  • RecoverySource: Added InPlaceShardSplitRecoverySource type for child shards recovering from a parent shard on the same node.
  • UnassignedInfo: Added CHILD_SHARD_CREATED reason for child shards pending allocation after a split.
  • AllocationId: Added splitChildAllocationIds and parentAllocationId fields with version-gated serialization (V_3_6_0), factory methods (newSplit, newTargetSplit, cancelSplit, finishSplit), and updated equals/hashCode/toXContent.
  • ShardRouting: Added recoveringChildShards and parentShardId fields, new constructor accepting split fields, and query methods (splitting(), isSplitTarget(), getRecoveringChildShards(), getParentShardId()).

The REST API is not exposed in this PR. The routing allocation logic to actually assign child shards to nodes will follow in a subsequent PR.
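The lifecycle these constructs imply can be sketched as a toy state machine. Everything below (the `ShardState` enum, `beginSplit`) is hypothetical illustration, not OpenSearch code; it only mirrors the validation described above, where a shard that is already splitting or not yet started cannot begin a split:

```java
public class SplitLifecycleSketch {
    // Hypothetical states; the real ShardRoutingState has more values.
    enum ShardState { STARTED, SPLITTING, RELOCATING }

    // A parent shard may only enter SPLITTING from STARTED (assumption based on the PR's checks).
    static ShardState beginSplit(ShardState current) {
        if (current != ShardState.STARTED) {
            throw new IllegalStateException("cannot split from state " + current);
        }
        return ShardState.SPLITTING;
    }

    public static void main(String[] args) {
        if (beginSplit(ShardState.STARTED) != ShardState.SPLITTING) throw new AssertionError();
        boolean rejected = false;
        try {
            beginSplit(ShardState.RELOCATING); // relocating shards are rejected
        } catch (IllegalStateException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("split from RELOCATING should be rejected");
        System.out.println("lifecycle guard works");
    }
}
```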

Related Issues

Resolves #[Issue number to be closed when this PR is merged]

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@vikasvb90 vikasvb90 requested a review from a team as a code owner March 24, 2026 03:52
@github-actions
Contributor

github-actions bot commented Mar 24, 2026

PR Reviewer Guide 🔍

(Review updated until commit 6867e76)

Here are some key observations to aid the review process:

🧪 PR contains tests
🔒 No security concerns identified
✅ No TODO sections
🔀 Multiple PR themes

Sub-PR theme: Routing POJO changes for in-place shard split lifecycle

Relevant files:

  • server/src/main/java/org/opensearch/cluster/routing/AllocationId.java
  • server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java
  • server/src/main/java/org/opensearch/cluster/routing/ShardRoutingState.java
  • server/src/main/java/org/opensearch/cluster/routing/UnassignedInfo.java
  • server/src/main/java/org/opensearch/cluster/routing/ShardRouting.java
  • server/src/test/java/org/opensearch/cluster/routing/AllocationIdSplitTests.java
  • server/src/test/java/org/opensearch/cluster/routing/RecoverySourceSplitTests.java
  • server/src/test/java/org/opensearch/cluster/routing/RecoverySourceTests.java
  • server/src/test/java/org/opensearch/cluster/routing/ShardRoutingStateSplitTests.java
  • server/src/test/java/org/opensearch/cluster/routing/UnassignedInfoTests.java
  • test/framework/src/main/java/org/opensearch/cluster/routing/TestShardRouting.java

Sub-PR theme: Cluster state update service for in-place shard split

Relevant files:

  • server/src/main/java/org/opensearch/action/admin/indices/split/InPlaceSplitShardClusterStateUpdateRequest.java
  • server/src/main/java/org/opensearch/action/admin/indices/split/package-info.java
  • server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java
  • server/src/main/java/org/opensearch/cluster/service/ClusterManagerTask.java
  • server/src/test/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardServiceTests.java

⚡ Recommended focus areas for review

Incorrect Assert Logic

The second assert for expectedShardSize >= 0 uses || (OR) between the negated state conditions, which means it is always true and never actually validates anything meaningful. The original code used || as well, but with the addition of SPLITTING state, this logic should be carefully reviewed. It should likely use && (AND) to properly enforce that expectedShardSize >= 0 when the shard is in INITIALIZING, RELOCATING, or SPLITTING state.

assert expectedShardSize >= 0
    || state != ShardRoutingState.INITIALIZING
    || state != ShardRoutingState.RELOCATING
    || state != ShardRoutingState.SPLITTING : expectedShardSize + " state: " + state;
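The tautology is easy to demonstrate in isolation. This sketch uses a stand-in `State` enum (not the real ShardRoutingState) to show that the OR-chained form holds for every input while the AND-chained form actually guards the invariant:

```java
public class AssertTautologyDemo {
    enum State { INITIALIZING, RELOCATING, SPLITTING, STARTED }

    // OR-chained negations: a state cannot equal all three values at once,
    // so at least one "!=" is always true and the condition never fails.
    static boolean orForm(long expectedShardSize, State state) {
        return expectedShardSize >= 0
            || state != State.INITIALIZING
            || state != State.RELOCATING
            || state != State.SPLITTING;
    }

    // AND-chained negations: fails when the size is negative and the state is one of the active ones.
    static boolean andForm(long expectedShardSize, State state) {
        return expectedShardSize >= 0
            || (state != State.INITIALIZING
                && state != State.RELOCATING
                && state != State.SPLITTING);
    }

    public static void main(String[] args) {
        for (State s : State.values()) {
            if (!orForm(-1, s)) throw new AssertionError("OR form never fails, by construction");
        }
        if (andForm(-1, State.SPLITTING)) throw new AssertionError("AND form must reject active states");
        if (!andForm(-1, State.STARTED)) throw new AssertionError("AND form must pass for STARTED");
        System.out.println("tautology demonstrated");
    }
}
```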
Transient Fields Not Serialized

The recoveringChildShards and parentShardId fields are marked as transient (not serialized on the wire) and are always set to null during deserialization. The comment says they are "populated by RoutingNodes constructor", but there is no validation or documentation of when/how these fields get populated after deserialization, which could lead to subtle bugs if callers rely on them being non-null.

// These fields are transient - populated by RoutingNodes constructor, not serialized on the wire.
recoveringChildShards = null;
parentShardId = null;
Missing Shard Count Validation

The applySplitShardRequest method does not validate that splitInto is greater than 1 (splitting into 1 is a no-op) or that it is a valid multiplier relative to the current shard count. The test testApplySplitShardRequestThrowsForSplitIntoZero expects an ArithmeticException (implying a divide-by-zero somewhere downstream), but there is no explicit guard or meaningful error message for invalid splitInto values.

static ClusterState applySplitShardRequest(
    ClusterState currentState,
    InPlaceSplitShardClusterStateUpdateRequest request,
    BiFunction<ClusterState, String, ClusterState> rerouteRoutingTable
) {
    IndexMetadata curIndexMetadata = currentState.metadata().index(request.getIndex());
    if (curIndexMetadata == null) {
        throw new IllegalArgumentException("Index [" + request.getIndex() + "] not found");
    }

    if (curIndexMetadata.getNumberOfVirtualShards() != -1) {
        throw new IllegalArgumentException(
            "In-place shard split is not supported on index [" + request.getIndex() + "] with virtual shards enabled"
        );
    }

    if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
        || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
        throw new IllegalArgumentException(
            "In-place shard split requires all nodes to be on the same version, at or above " + Version.V_3_7_0
        );
    }

    int shardId = request.getShardId();
    SplitShardsMetadata splitShardsMetadata = curIndexMetadata.getSplitShardsMetadata();

    if (splitShardsMetadata.getInProgressSplitShardIds().contains(shardId)) {
        throw new IllegalArgumentException("Splitting of shard [" + shardId + "] is already in progress");
    }

    if (splitShardsMetadata.isSplitParent(shardId)) {
        throw new IllegalArgumentException("Shard [" + shardId + "] has already been split.");
    }

    ShardRouting primaryShard = currentState.routingTable()
        .shardRoutingTable(curIndexMetadata.getIndex().getName(), shardId)
        .primaryShard();
    if (primaryShard.relocating()) {
        throw new IllegalArgumentException(
            "Cannot split shard [" + shardId + "] on index [" + request.getIndex() + "] because it is currently relocating"
        );
    }
    if (primaryShard.started() == false) {
        throw new IllegalArgumentException(
            "Cannot split shard ["
                + shardId
                + "] on index ["
                + request.getIndex()
                + "] because the primary shard is not started, current state: "
                + primaryShard.state()
        );
    }

    RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());
    Metadata.Builder metadataBuilder = Metadata.builder(currentState.metadata());
    IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(curIndexMetadata);

    SplitShardsMetadata.Builder splitMetadataBuilder = new SplitShardsMetadata.Builder(splitShardsMetadata);
    splitMetadataBuilder.splitShard(shardId, request.getSplitInto());
    indexMetadataBuilder.splitShardsMetadata(splitMetadataBuilder.build());

    RoutingTable routingTable = routingTableBuilder.build();
    metadataBuilder.put(indexMetadataBuilder);

    ClusterState updatedState = ClusterState.builder(currentState).metadata(metadataBuilder).routingTable(routingTable).build();
    return rerouteRoutingTable.apply(updatedState, "shard [" + shardId + "] of index [" + request.getIndex() + "] split");
}
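A guard along these lines (hypothetical helper, styled after the existing IllegalArgumentException checks in applySplitShardRequest) would surface a meaningful error instead of a downstream ArithmeticException:

```java
public class SplitIntoGuardDemo {
    // Hypothetical validation helper; 'splitInto' semantics assumed from the PR description.
    static void validateSplitInto(int splitInto, int shardId) {
        if (splitInto <= 1) {
            throw new IllegalArgumentException(
                "Cannot split shard [" + shardId + "] into [" + splitInto + "] shards; split count must be greater than 1"
            );
        }
    }

    public static void main(String[] args) {
        validateSplitInto(2, 0); // valid: splitting into two child shards
        for (int bad : new int[] { -1, 0, 1 }) {
            try {
                validateSplitInto(bad, 0);
                throw new AssertionError("expected rejection of splitInto=" + bad);
            } catch (IllegalArgumentException expected) {
                // meaningful message instead of a divide-by-zero further downstream
            }
        }
        System.out.println("splitInto guard works");
    }
}
```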
Serialization Version Mismatch

The PR description states version-gated serialization uses V_3_6_0, but the actual code uses Version.V_3_7_0 for both read and write. This inconsistency between the description and code should be verified to ensure the correct version gate is used.

    if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
        List<String> childIds = in.readOptionalStringList();
        splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
        parentAllocationId = in.readOptionalString();
    } else {
        splitChildAllocationIds = null;
        parentAllocationId = null;
    }
}

@Override
public void writeTo(StreamOutput out) throws IOException {
    out.writeString(this.id);
    out.writeOptionalString(this.relocationId);
    if (out.getVersion().onOrAfter(Version.V_3_7_0)) {
        out.writeOptionalStringCollection(splitChildAllocationIds);
        out.writeOptionalString(parentAllocationId);
    }
}
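The version-gating pattern above can be exercised outside OpenSearch with plain java.io streams. This sketch substitutes an int for Version and DataOutputStream for StreamOutput, so the names and wire format are illustrative only; the point is that reader and writer must apply the identical gate:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionGatedFieldDemo {
    static final int V_NEW = 37; // stands in for Version.V_3_7_0

    // Writer emits the optional field only on new-enough streams.
    static byte[] write(int streamVersion, String parentAllocationId) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF("alloc-id");
        if (streamVersion >= V_NEW) {
            out.writeBoolean(parentAllocationId != null); // optional-field marker
            if (parentAllocationId != null) out.writeUTF(parentAllocationId);
        }
        return bos.toByteArray();
    }

    // Reader applies the identical gate, defaulting to null on old streams.
    static String read(int streamVersion, byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readUTF(); // the always-present id
        if (streamVersion >= V_NEW) {
            return in.readBoolean() ? in.readUTF() : null;
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        if (!"parent-1".equals(read(37, write(37, "parent-1")))) throw new AssertionError();
        if (read(36, write(36, "parent-1")) != null) throw new AssertionError();
        System.out.println("version gate symmetric");
    }
}
```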
Enum Ordinal Stability

IN_PLACE_SPLIT_SHARD was added after REMOTE_STORE in the Type enum, but the readFrom method uses Type.values()[in.readByte()] (ordinal-based deserialization). Adding a new enum value at the end is safe, but if the serialization uses ordinals, any future reordering would break backward compatibility. This is a fragile pattern worth noting.

public enum Type {
    EMPTY_STORE,
    EXISTING_STORE,
    PEER,
    SNAPSHOT,
    LOCAL_SHARDS,
    REMOTE_STORE,
    IN_PLACE_SPLIT_SHARD
}
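A self-contained round trip confirms why appending is safe for ordinal-based serialization. The enum below copies the declaration order quoted above, while `writeType`/`readType` are stand-ins for the real stream methods:

```java
public class OrdinalSerializationDemo {
    // Mirrors the RecoverySource.Type declaration order from the PR.
    enum Type { EMPTY_STORE, EXISTING_STORE, PEER, SNAPSHOT, LOCAL_SHARDS, REMOTE_STORE, IN_PLACE_SPLIT_SHARD }

    static byte writeType(Type t) { return (byte) t.ordinal(); }
    static Type readType(byte b) { return Type.values()[b]; }

    public static void main(String[] args) {
        // Appending preserves every pre-existing ordinal: REMOTE_STORE is still 5.
        if (writeType(Type.REMOTE_STORE) != 5) throw new AssertionError();
        if (writeType(Type.IN_PLACE_SPLIT_SHARD) != 6) throw new AssertionError();
        // Full round trip through the ordinal byte.
        for (Type t : Type.values()) {
            if (readType(writeType(t)) != t) throw new AssertionError("round trip failed for " + t);
        }
        System.out.println("ordinals stable");
    }
}
```

Reordering or inserting in the middle of the enum would shift these ordinals and silently misinterpret previously serialized bytes, which is why the fragility note matters.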

@github-actions
Contributor

github-actions bot commented Mar 24, 2026

PR Code Suggestions ✨

Latest suggestions up to 6867e76

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Fix overly strict version check logic

The version check uses equals to verify all nodes are on the same version, but this
is overly strict. The real requirement is that all nodes are at or above V_3_7_0.
Two nodes could be on different versions both above V_3_7_0 and the split should
still be allowed. The condition should only check that the minimum node version is
at or above V_3_7_0.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [108-109]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
Suggestion importance[1-10]: 7


Why: The current check requires all nodes to be on the exact same version AND at or above V_3_7_0. The first condition (equals) is overly strict — two nodes could be on different versions both above V_3_7_0 and the split should still be allowed. Only the minimum version check is necessary.

Medium
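The behavioral difference between the two checks can be shown with integer stand-ins for versions (hypothetical helpers; the real code compares org.opensearch.Version objects):

```java
public class VersionFloorCheckDemo {
    static final int V_3_7_0 = 370; // integer stand-in for the version floor

    static boolean before(int v, int floor) { return v < floor; }

    // Check as written in the PR: also rejects mixed clusters where every node is new enough.
    static boolean strictRejects(int minVersion, int maxVersion) {
        return minVersion != maxVersion || before(minVersion, V_3_7_0);
    }

    // Check from the suggestion: only the minimum version matters.
    static boolean relaxedRejects(int minVersion, int maxVersion) {
        return before(minVersion, V_3_7_0);
    }

    public static void main(String[] args) {
        // Mixed cluster, both nodes at or above the floor: strict rejects, relaxed allows.
        if (!strictRejects(370, 380)) throw new AssertionError();
        if (relaxedRejects(370, 380)) throw new AssertionError();
        // Uniform cluster below the floor: both reject.
        if (!strictRejects(360, 360)) throw new AssertionError();
        if (!relaxedRejects(360, 360)) throw new AssertionError();
        System.out.println("only the floor check differs on mixed clusters");
    }
}
```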
Routing table not updated with new child shards

The routingTableBuilder is created but no modifications are made to it before
calling build(). The routing table is built from the original state without any
changes, which means the new child shards are not added to the routing table. The
reroute call may handle allocation, but the routing table should reflect the split
state (e.g., adding unassigned child shards) before the reroute is triggered.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [145-154]

-RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());
 Metadata.Builder metadataBuilder = Metadata.builder(currentState.metadata());
 IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(curIndexMetadata);
 
 SplitShardsMetadata.Builder splitMetadataBuilder = new SplitShardsMetadata.Builder(splitShardsMetadata);
 splitMetadataBuilder.splitShard(shardId, request.getSplitInto());
 indexMetadataBuilder.splitShardsMetadata(splitMetadataBuilder.build());
 
-RoutingTable routingTable = routingTableBuilder.build();
 metadataBuilder.put(indexMetadataBuilder);
+// NOTE: routing table changes (adding child shards) should be applied here before reroute
+RoutingTable routingTable = currentState.routingTable(); // placeholder - child shards must be added
Suggestion importance[1-10]: 5


Why: The routingTableBuilder is created but no modifications are made before build(), meaning the routing table is unchanged. This could be a real issue if child shards need to be added to the routing table before the reroute. However, the improved_code is incomplete (uses a placeholder comment), making it unclear if this is intentional design or a bug.

Low
General
Ensure enum ordinal stability for serialization

The IN_PLACE_SPLIT_SHARD case is inserted before REMOTE_STORE in the switch
statement, but the Type enum appends IN_PLACE_SPLIT_SHARD after REMOTE_STORE. The
switch statement order doesn't affect correctness, but the readFrom method reads a
byte ordinal. Since IN_PLACE_SPLIT_SHARD has ordinal 6 (after REMOTE_STORE at 5),
the switch case ordering is fine — however, the Type enum ordinal must remain
stable. Verify that existing serialized data using REMOTE_STORE (ordinal 5) is not
broken by the new enum value appended at ordinal 6.

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [94-96]

+case REMOTE_STORE:
+    return new RemoteStoreRecoverySource(in);
 case IN_PLACE_SPLIT_SHARD:
     return InPlaceSplitShardRecoverySource.INSTANCE;
-case REMOTE_STORE:
Suggestion importance[1-10]: 3


Why: The suggestion asks to verify ordinal stability, which is already confirmed by the test in RecoverySourceTests.java and RecoverySourceSplitTests.java. The improved_code just reorders the switch cases which doesn't affect correctness, and the suggestion is more of a verification request than a real fix.

Low
Ensure symmetric null handling in serialization

The writeTo method only writes splitChildAllocationIds and parentAllocationId when
the version is at or after V_3_7_0, but readOptionalStringList is used for reading.
If writeOptionalStringCollection writes a null as a specific marker, ensure the
read/write methods are symmetric. Also, writeOptionalStringCollection accepts a
Collection but splitChildAllocationIds is a List — verify the null case is handled
identically on both sides (i.e., both write and read treat null the same way).

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [122-129]

 if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
     List<String> childIds = in.readOptionalStringList();
-    splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
+    splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(new ArrayList<>(childIds));
     parentAllocationId = in.readOptionalString();
 } else {
     splitChildAllocationIds = null;
     parentAllocationId = null;
 }
Suggestion importance[1-10]: 2


Why: The improved_code wraps the result in new ArrayList<>() before making it unmodifiable, but this is a minor defensive copy that doesn't address a real bug. The readOptionalStringList already returns a new list, so the change offers negligible benefit.

Low

Previous suggestions

Suggestions up to commit 188e237
Category | Suggestion | Impact
Possible issue
Fix incorrect version check logic

The version check logic is incorrect. When minVersion == maxVersion but both are
below V_3_7_0, the first condition is false so the || short-circuits and the check
passes incorrectly. The condition should use && (all nodes same version AND at or
above minimum) or restructure to check the minimum version independently of the
equality check.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [108-109]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)
+    || currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false) {
Suggestion importance[1-10]: 8


Why: The original condition uses || which means if minVersion == maxVersion (first condition is false), the second condition (minVersion.before(V_3_7_0)) is never evaluated, allowing clusters where all nodes are on the same version below V_3_7_0 to bypass the check. The suggested fix correctly reorders the conditions so the version floor check is evaluated independently.

Medium
Fix tautological assertion with wrong logical operator

The assertion using || with != conditions is a tautology — it is always true because
a state cannot simultaneously equal both INITIALIZING and RELOCATING. The original
code used && for the negated conditions, which correctly asserts that
expectedShardSize >= 0 when the state is any of those states. This should use && to
preserve the intended invariant.

server/src/main/java/org/opensearch/cluster/routing/ShardRouting.java [149-152]

 assert expectedShardSize >= 0
-    || state != ShardRoutingState.INITIALIZING
-    || state != ShardRoutingState.RELOCATING
-    || state != ShardRoutingState.SPLITTING : expectedShardSize + " state: " + state;
+    || (state != ShardRoutingState.INITIALIZING
+        && state != ShardRoutingState.RELOCATING
+        && state != ShardRoutingState.SPLITTING) : expectedShardSize + " state: " + state;
Suggestion importance[1-10]: 7


Why: The assertion state != INITIALIZING || state != RELOCATING || state != SPLITTING is always true (a state can't be all three simultaneously), making the assertion meaningless. Using && instead of || correctly enforces that expectedShardSize >= 0 when the state is any of those three states.

Medium
General
Normalize empty list to null on deserialization

The writeTo method only writes splitChildAllocationIds and parentAllocationId when
the version is at or after V_3_7_0, but readOptionalStringList may return an empty
list (not null) when the collection was written as empty. The
Collections.unmodifiableList wrapping of an empty list is fine, but the null-check
should also handle the case where splitChildAllocationIds was written as an empty
list to avoid inconsistency with the null sentinel used elsewhere. Consider
normalizing empty lists to null on read.

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [122-129]

 if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
     List<String> childIds = in.readOptionalStringList();
-    splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
+    splitChildAllocationIds = (childIds == null || childIds.isEmpty()) ? null : Collections.unmodifiableList(childIds);
     parentAllocationId = in.readOptionalString();
 } else {
     splitChildAllocationIds = null;
     parentAllocationId = null;
 }
Suggestion importance[1-10]: 4

__

Why: If splitChildAllocationIds is written as an empty collection, readOptionalStringList may return an empty list rather than null, causing inconsistency with the null sentinel used to indicate "no split in progress". Normalizing empty lists to null on read prevents subtle bugs in code that checks getSplitChildAllocationIds() == null.

Low
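The proposed normalization is straightforward to verify in isolation (hypothetical helper, not the actual AllocationId constructor):

```java
import java.util.Collections;
import java.util.List;

public class NullSentinelDemo {
    // Collapse both null and empty to the same "no split in progress" sentinel (null).
    static List<String> normalize(List<String> childIds) {
        return (childIds == null || childIds.isEmpty()) ? null : Collections.unmodifiableList(childIds);
    }

    public static void main(String[] args) {
        if (normalize(null) != null) throw new AssertionError();
        if (normalize(List.of()) != null) throw new AssertionError("empty list should normalize to null");
        if (!List.of("a", "b").equals(normalize(List.of("a", "b")))) throw new AssertionError();
        System.out.println("empty normalized to null");
    }
}
```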
Preserve switch case order matching enum ordinals

The IN_PLACE_SPLIT_SHARD case is inserted before REMOTE_STORE in the switch
statement, but in the Type enum IN_PLACE_SPLIT_SHARD is declared after REMOTE_STORE.
The readFrom method uses Type.values()[in.readByte()] which relies on ordinal order,
so the switch case ordering doesn't matter for correctness here, but the enum
ordinal of IN_PLACE_SPLIT_SHARD (6) must match what was written. However, inserting
IN_PLACE_SPLIT_SHARD between LOCAL_SHARDS and REMOTE_STORE in the switch while it
has ordinal 6 (after REMOTE_STORE at ordinal 5) is fine for the switch, but the
RecoverySourceTests confirms ordinal 6. Verify that any persisted cluster state
using the old REMOTE_STORE byte value (5) is not broken by the new enum ordering
where IN_PLACE_SPLIT_SHARD is appended at ordinal 6.

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [94-97]

+case REMOTE_STORE:
+    return new RemoteStoreRecoverySource(in);
 case IN_PLACE_SPLIT_SHARD:
     return InPlaceSplitShardRecoverySource.INSTANCE;
-case REMOTE_STORE:
Suggestion importance[1-10]: 2


Why: The switch statement order doesn't affect correctness since it matches on enum values, not ordinals. The readFrom method uses Type.values()[in.readByte()] to get the type before switching, so the case ordering in the switch is irrelevant. This is a cosmetic suggestion with no functional impact.

Low
Suggestions up to commit 43aefb5
Category | Suggestion | Impact
Possible issue
Fix always-true assertion with incorrect logical operators

The assert condition a || b || c || d where b, c, d are state != X will always be
true because a state cannot simultaneously be INITIALIZING, RELOCATING, and
SPLITTING — at least two of those != conditions will always be true. This makes the
assertion meaningless. The original code used && for the negated conditions, which
correctly asserts that expectedShardSize >= 0 when the state IS one of those values.

server/src/main/java/org/opensearch/cluster/routing/ShardRouting.java [149-152]

 assert expectedShardSize >= 0
-    || state != ShardRoutingState.INITIALIZING
-    || state != ShardRoutingState.RELOCATING
-    || state != ShardRoutingState.SPLITTING : expectedShardSize + " state: " + state;
+    || (state != ShardRoutingState.INITIALIZING
+        && state != ShardRoutingState.RELOCATING
+        && state != ShardRoutingState.SPLITTING) : expectedShardSize + " state: " + state;
Suggestion importance[1-10]: 8


Why: This is a real bug — the assertion state != INITIALIZING || state != RELOCATING || state != SPLITTING is always true (a value can't be all three simultaneously), making the assertion meaningless. The fix correctly uses && to ensure the assertion fires when expectedShardSize < 0 and the state is one of those active states.

Medium
Ensure symmetric serialization read/write methods

The writeTo method writes splitChildAllocationIds using
writeOptionalStringCollection, but the readFrom reads it back using
readOptionalStringList. These two methods may not be symmetric —
writeOptionalStringCollection writes a Collection while readOptionalStringList
returns a List. Verify that these are compatible; if writeOptionalStringCollection
writes a null marker differently than readOptionalStringList expects,
deserialization will be corrupted. It is safer to use a consistent pair such as
writeOptionalStringArray/readOptionalStringArray or ensure the stream format
matches.

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [119-130]

 AllocationId(StreamInput in) throws IOException {
     this.id = in.readString();
     this.relocationId = in.readOptionalString();
     if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
         List<String> childIds = in.readOptionalStringList();
         splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
         parentAllocationId = in.readOptionalString();
     } else {
         splitChildAllocationIds = null;
         parentAllocationId = null;
     }
 }
 
+@Override
+public void writeTo(StreamOutput out) throws IOException {
+    out.writeString(this.id);
+    out.writeOptionalString(this.relocationId);
+    if (out.getVersion().onOrAfter(Version.V_3_7_0)) {
+        out.writeOptionalStringList(splitChildAllocationIds);  // use writeOptionalStringList for symmetry
+        out.writeOptionalString(parentAllocationId);
+    }
+}
+
Suggestion importance[1-10]: 6


Why: The suggestion raises a valid concern about potential asymmetry between writeOptionalStringCollection and readOptionalStringList. If these methods use different wire formats, deserialization would be corrupted. However, the improved_code is essentially the same as the existing_code with only the writeTo method changed, making it hard to evaluate the full fix.

Low
General
Separate version validation into distinct checks

The combined version check is functionally correct: a mixed-version cluster fails the equals condition, and a uniform cluster below V_3_7_0 fails the before condition, so both cases are rejected. The problem is the single error message, which claims the nodes must be on the "same version, at or above V_3_7_0" even when the actual failure is only one of those two things. Splitting the check into two conditions with distinct error messages removes the ambiguity. (Whether a mixed cluster whose minimum version is at or above V_3_7_0 should also be rejected is a design decision worth confirming.)

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [108-113]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false) {
+    throw new IllegalArgumentException(
+        "In-place shard split requires all nodes to be on the same version"
+    );
+}
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+    throw new IllegalArgumentException(
+        "In-place shard split requires all nodes to be on the same version, at or above " + Version.V_3_7_0
+    );
+}
Suggestion importance[1-10]: 5


Why: The suggestion correctly identifies that the combined condition produces a single error message that doesn't distinguish between a mixed-version cluster and a cluster running an older version. Separating them improves error clarity, though the current logic is functionally correct in rejecting both cases.

Low
Add missing experimental API annotation

The IN_PLACE_SPLIT_SHARD type is appended at the end of the Type enum, so no existing ordinal changes: REMOTE_STORE keeps ordinal 5 and the new value takes ordinal 6. Since readFrom resolves the written byte via Type.values()[in.readByte()] and the switch uses named cases, deserialization of both old and new values remains correct. The remaining gap is that the @ExperimentalApi annotation is missing on InPlaceSplitShardRecoverySource even though the class javadoc declares @opensearch.experimental.

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [259-263]

-public enum Type {
-    EMPTY_STORE,
-    EXISTING_STORE,
-    PEER,
-    SNAPSHOT,
-    LOCAL_SHARDS,
-    REMOTE_STORE,
-    IN_PLACE_SPLIT_SHARD
-}
+/**
+ * Recovery of child shards from a source shard on the same node during in-place shard split.
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public static class InPlaceSplitShardRecoverySource extends RecoverySource {
Suggestion importance[1-10]: 3

__

Why: The @ExperimentalApi annotation is indeed missing from InPlaceSplitShardRecoverySource while the javadoc marks it as @opensearch.experimental. This is a valid but minor consistency issue.

Low
Suggestions up to commit f69deab
Category | Suggestion | Impact
Possible issue
Ensure read/write methods are properly paired

The readOptionalStringList method may not exist on StreamInput — the standard
OpenSearch streaming API typically uses readOptionalWriteable or reads collections
differently. If readOptionalStringList is not a valid method, this will cause a
compilation or runtime error. The corresponding writeTo uses
writeOptionalStringCollection, so the read side should use a compatible method.
Verify that readOptionalStringList is a valid StreamInput method; if not, use
in.readBoolean() ? in.readStringList() : null or the appropriate paired method.

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [119-130]

-AllocationId(StreamInput in) throws IOException {
-    this.id = in.readString();
-    this.relocationId = in.readOptionalString();
-    if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
-        List<String> childIds = in.readOptionalStringList();
-        splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
-        parentAllocationId = in.readOptionalString();
-    } else {
-        splitChildAllocationIds = null;
-        parentAllocationId = null;
-    }
+if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
+    boolean hasChildIds = in.readBoolean();
+    List<String> childIds = hasChildIds ? in.readStringList() : null;
+    splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
+    parentAllocationId = in.readOptionalString();
+} else {
+    splitChildAllocationIds = null;
+    parentAllocationId = null;
 }
Suggestion importance[1-10]: 5


Why: The concern about readOptionalStringList not being a standard StreamInput method is valid — if this method doesn't exist, it would cause a compilation error. However, writeOptionalStringCollection and readOptionalStringList may be paired methods in OpenSearch's stream API, so this may be a non-issue depending on the codebase.

Low
Routing table not updated for split state transition

The routingTableBuilder is created but no modifications are made to it before
calling build(). The routing table is built unchanged, and the reroute is expected
to handle allocation of child shards. However, if the intent is to add child shard
routing entries to the routing table (e.g., marking the parent shard as SPLITTING
and adding child shards as INITIALIZING), this needs to happen before building the
routing table. If the reroute function handles this entirely, it should be
documented; otherwise, the routing table update for the split state transition is
missing.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [145-154]

-RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());
 Metadata.Builder metadataBuilder = Metadata.builder(currentState.metadata());
 IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(curIndexMetadata);
 
 SplitShardsMetadata.Builder splitMetadataBuilder = new SplitShardsMetadata.Builder(splitShardsMetadata);
 splitMetadataBuilder.splitShard(shardId, request.getSplitInto());
 indexMetadataBuilder.splitShardsMetadata(splitMetadataBuilder.build());
 
-RoutingTable routingTable = routingTableBuilder.build();
+// TODO: Update routing table to reflect SPLITTING state for parent shard and add child shards
+RoutingTable routingTable = RoutingTable.builder(currentState.routingTable()).build();
 metadataBuilder.put(indexMetadataBuilder);
Suggestion importance[1-10]: 5

__

Why: The observation that routingTableBuilder is created but never modified before build() is valid and worth noting. If the reroute function is expected to handle all routing table changes for the split, this should be documented; otherwise, the parent shard's SPLITTING state transition and child shard initialization entries may be missing from the routing table update.

Low
General
Fix version check ordering for clarity and correctness

The condition throws when the cluster is mixed-version (minVersion != maxVersion) or when a uniform cluster is below V_3_7_0. Walking the cases: a uniform cluster at or above V_3_7_0 evaluates to false || false and proceeds; a uniform cluster below V_3_7_0 is caught by the before check; and any mixed-version cluster is caught by the first condition, even when its minimum is at or above V_3_7_0, which may be intentionally strict for rolling upgrades. The logic is therefore functionally correct as written, but checking both min and max against V_3_7_0, rather than relying on Version.equals for uniformity, would make the intent clearer and less prone to subtle bugs.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [108-109]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)
+    || currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false) {
Suggestion importance[1-10]: 2

__

Why: The existing logic is functionally correct — the suggestion reorders the conditions but produces the same behavior. The improved_code is semantically equivalent to the existing_code, so this is a minor style preference with no real correctness impact.

Low
Verify enum ordinal serialization compatibility

The Type enum is serialized by ordinal: readFrom uses Type.values()[in.readByte()]. Appending IN_PLACE_SPLIT_SHARD after REMOTE_STORE (previously the last entry, at ordinal 5) is wire-safe, since a byte value of 5 from older serialized data or mixed-version nodes still deserializes to REMOTE_STORE, and the new entry takes ordinal 6. The new testTypeEnumOrdinalStability in RecoverySourceSplitTests only asserts that IN_PLACE_SPLIT_SHARD is last; the existing RecoverySourceTests.testRecoverySourceTypeOrder already verifies that REMOTE_STORE remains at ordinal 5. No code change is needed here, since the readFrom switch already handles the new type, but verify that the inherited writeTo in InPlaceSplitShardRecoverySource writes getType().ordinal() as a byte (ordinal 6 fits in a byte).

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [122-130]

+public enum Type {
+    EMPTY_STORE,
+    EXISTING_STORE,
+    PEER,
+    SNAPSHOT,
+    LOCAL_SHARDS,
+    REMOTE_STORE,
+    IN_PLACE_SPLIT_SHARD
+}
 
-
Suggestion importance[1-10]: 2

__

Why: The existing_code and improved_code are identical, and the suggestion only asks to verify behavior rather than proposing an actual code change. The serialization concern is already addressed by the existing readFrom switch statement.

Low
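To illustrate why appending at the tail is the only safe evolution for an ordinal-serialized enum, here is a small standalone sketch; the Type enum below mirrors the shape described in the suggestion and is not the actual RecoverySource code:

```java
public class OrdinalWireDemo {
    // Mirrors the wire shape: the constant's declaration position is the wire value.
    enum Type { EMPTY_STORE, EXISTING_STORE, PEER, SNAPSHOT, LOCAL_SHARDS, REMOTE_STORE, IN_PLACE_SPLIT_SHARD }

    static byte writeType(Type t) { return (byte) t.ordinal(); }
    static Type readType(byte b)  { return Type.values()[b]; }

    public static void main(String[] args) {
        // Bytes written before the new constant existed still decode unchanged:
        System.out.println(readType((byte) 5)); // REMOTE_STORE
        // The appended constant simply claims the next ordinal:
        System.out.println(writeType(Type.IN_PLACE_SPLIT_SHARD)); // 6
        // Inserting a constant anywhere before the tail would shift every later
        // ordinal and silently re-map old serialized bytes, which is the failure
        // mode an ordinal-stability test guards against.
    }
}
```

This is why a test asserting that the new constant is last (and, ideally, that each prior constant keeps its ordinal) is the right invariant to pin down.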
Suggestions up to commit 9b63cb3
Category | Suggestion | Impact
Possible issue
Routing table is not updated during split state application

The routingTableBuilder is created but no modifications are made to it before
calling build(). The routing table update (e.g., adding child shard entries or
updating the parent shard state to SPLITTING) appears to be missing. Without
updating the routing table, the reroute call may not correctly allocate child shards
since they won't have routing entries.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [145-154]

-RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());
-Metadata.Builder metadataBuilder = Metadata.builder(currentState.metadata());
-IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(curIndexMetadata);
-
-SplitShardsMetadata.Builder splitMetadataBuilder = new SplitShardsMetadata.Builder(splitShardsMetadata);
-splitMetadataBuilder.splitShard(shardId, request.getSplitInto());
-indexMetadataBuilder.splitShardsMetadata(splitMetadataBuilder.build());
-
+// routingTableBuilder should be updated to reflect the new split state
+// e.g., update parent shard to SPLITTING state and add unassigned child shards
+// before building the routing table
 RoutingTable routingTable = routingTableBuilder.build();
 metadataBuilder.put(indexMetadataBuilder);
Suggestion importance[1-10]: 7

__

Why: The routingTableBuilder is built without any modifications, meaning child shard routing entries and the parent shard's SPLITTING state are never added to the routing table. This could prevent the reroute from correctly allocating child shards, representing a potentially significant functional gap.

Medium
Fix overly strict version check for split eligibility

The version check uses equals to verify all nodes are on the same version, but this
is overly strict. Two nodes could be on different versions both at or above V_3_7_0
and the split should still be allowed. The condition should only reject if the
minimum node version is below V_3_7_0, regardless of whether all nodes are on the
same version.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceSplitShardService.java [108-109]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_7_0)) {
Suggestion importance[1-10]: 6

__

Why: The current check rejects splits when nodes are on different versions even if all are at or above V_3_7_0. The condition should only require the minimum node version to be at or above V_3_7_0, not that all nodes are on the same version. This is a valid logic concern but may be intentional for consistency during rolling upgrades.

Low
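The gate under discussion reduces to a two-condition predicate. The sketch below uses a hypothetical minimal version type rather than OpenSearch's Version class, but reproduces the same truth table (MIN_SPLIT_VERSION stands in for the gated version constant):

```java
public class SplitVersionGate {
    // Minimal stand-in for a node version; OpenSearch's Version class is richer.
    record Ver(int major, int minor) implements Comparable<Ver> {
        public int compareTo(Ver o) {
            return major != o.major ? Integer.compare(major, o.major) : Integer.compare(minor, o.minor);
        }
        boolean before(Ver o) { return compareTo(o) < 0; }
    }

    static final Ver MIN_SPLIT_VERSION = new Ver(3, 7); // stand-in for the version floor

    // Mirrors the check under discussion: allow only uniform clusters at or above the floor.
    static boolean splitAllowed(Ver minNode, Ver maxNode) {
        return minNode.equals(maxNode) && !minNode.before(MIN_SPLIT_VERSION);
    }

    public static void main(String[] args) {
        System.out.println(splitAllowed(new Ver(3, 7), new Ver(3, 7))); // true: uniform, at floor
        System.out.println(splitAllowed(new Ver(3, 6), new Ver(3, 6))); // false: uniform, below floor
        System.out.println(splitAllowed(new Ver(3, 7), new Ver(3, 8))); // false: mixed, even though both meet the floor
    }
}
```

The third case makes the design question concrete: relaxing the predicate to only the floor check would admit mixed clusters whose minimum already meets the floor.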
Exclude transient split fields from equals and hashCode

The equals method now includes splitChildAllocationIds and parentAllocationId, but
these fields are described as "transient" in ShardRouting (not serialized on the
wire for older versions). This means two AllocationId objects that are logically
equal (same id and relocationId) but differ in split fields (e.g., one deserialized
from an old node) will not be considered equal. This could break allocation
tracking. Consider whether split fields should be excluded from equals/hashCode, or
document this behavior explicitly.

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [271-283]

-AllocationId(StreamInput in) throws IOException {
-    this.id = in.readString();
-    this.relocationId = in.readOptionalString();
-    if (in.getVersion().onOrAfter(Version.V_3_7_0)) {
-        List<String> childIds = in.readOptionalStringList();
-        splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
-        parentAllocationId = in.readOptionalString();
-    } else {
-        splitChildAllocationIds = null;
-        parentAllocationId = null;
-    }
+@Override
+public boolean equals(Object o) {
+    if (this == o) return true;
+    if (!(o instanceof AllocationId)) return false;
+    AllocationId that = (AllocationId) o;
+    return Objects.equals(id, that.id)
+        && Objects.equals(relocationId, that.relocationId);
 }
 
+@Override
+public int hashCode() {
+    return Objects.hash(id, relocationId);
+}
+
Suggestion importance[1-10]: 6

__

Why: Including splitChildAllocationIds and parentAllocationId in equals/hashCode could break allocation tracking when comparing AllocationId objects deserialized from nodes running different versions, since older nodes won't have these fields. This is a legitimate concern about cross-version compatibility.

Low
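The compatibility hazard can be shown with a hypothetical simplified allocation id whose equals deliberately excludes the split fields, as the suggestion proposes; the names and shape here are illustrative, not the actual AllocationId class:

```java
import java.util.List;
import java.util.Objects;

public class TransientEqualsDemo {
    static final class AllocId {
        final String id;
        final String relocationId;
        final List<String> splitChildIds; // null when deserialized from an older node

        AllocId(String id, String relocationId, List<String> splitChildIds) {
            this.id = id;
            this.relocationId = relocationId;
            this.splitChildIds = splitChildIds;
        }

        // Equality on identity fields only, so the same logical allocation compares
        // equal regardless of which node version serialized it.
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof AllocId)) return false;
            AllocId that = (AllocId) o;
            return Objects.equals(id, that.id) && Objects.equals(relocationId, that.relocationId);
        }

        @Override public int hashCode() { return Objects.hash(id, relocationId); }
    }

    public static void main(String[] args) {
        AllocId fromNewNode = new AllocId("a1", null, List.of("c1", "c2"));
        AllocId fromOldNode = new AllocId("a1", null, null); // split fields lost on the old wire
        System.out.println(fromNewNode.equals(fromOldNode)); // true: allocation tracking still matches
    }
}
```

If the split fields were included in equals, the two objects above would compare unequal even though they describe the same allocation, which is the cross-version tracking risk the suggestion raises.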
General
Verify enum ordinal stability for serialization safety

The Type enum is serialized by ordinal via Type.values()[in.readByte()] in readFrom. Appending IN_PLACE_SPLIT_SHARD after REMOTE_STORE (previously the last entry, at ordinal 5) is safe: existing serialized data carrying ordinal 5 still deserializes to REMOTE_STORE. The testTypeEnumOrdinalStability test in RecoverySourceSplitTests asserts that IN_PLACE_SPLIT_SHARD stays last, which is the correct invariant to maintain. No code change is needed; just ensure the readFrom switch handles the new type before the default case, which it does.

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [122-130]

+public enum Type {
+    EMPTY_STORE,
+    EXISTING_STORE,
+    PEER,
+    SNAPSHOT,
+    LOCAL_SHARDS,
+    REMOTE_STORE,
+    IN_PLACE_SPLIT_SHARD
+}
 
-
Suggestion importance[1-10]: 1

__

Why: The suggestion only asks to verify existing behavior and the improved_code is identical to the existing_code, making it a no-op suggestion with no actionable change.

Low
Suggestions up to commit 3cbc357
CategorySuggestion                                                                                                                                    Impact
Possible issue
Child shards missing from routing table before reroute

The routingTableBuilder is created but never modified before calling .build(), so
the routing table in the updated cluster state is identical to the current one.
Child shards need to be added to the routing table here (before the reroute) so the
allocation service can assign them. Without this, the reroute call has no new
unassigned shards to allocate.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceShardSplitService.java [145-154]

 RoutingTable.Builder routingTableBuilder = RoutingTable.builder(currentState.routingTable());
 Metadata.Builder metadataBuilder = Metadata.builder(currentState.metadata());
 IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(curIndexMetadata);
 
 SplitShardsMetadata.Builder splitMetadataBuilder = new SplitShardsMetadata.Builder(splitShardsMetadata);
 splitMetadataBuilder.splitShard(shardId, request.getSplitInto());
-indexMetadataBuilder.splitShardsMetadata(splitMetadataBuilder.build());
+SplitShardsMetadata updatedSplitMetadata = splitMetadataBuilder.build();
+indexMetadataBuilder.splitShardsMetadata(updatedSplitMetadata);
+
+// Add child shards as unassigned entries in the routing table so the allocator can assign them
+// (child shard routing entries should be added here based on updatedSplitMetadata)
 
 RoutingTable routingTable = routingTableBuilder.build();
 metadataBuilder.put(indexMetadataBuilder);
Suggestion importance[1-10]: 6

__

Why: The routingTableBuilder is created but never modified before .build(), meaning no child shards are added to the routing table. However, this may be intentional if the reroute/allocation service is expected to handle child shard creation based on the updated SplitShardsMetadata. The suggestion raises a valid concern but the improved code only adds a comment without actual implementation, making it speculative.

Low
Mismatched serialization read/write methods for list field

The writeTo method writes splitChildAllocationIds using
writeOptionalStringCollection, but the readFrom uses readOptionalStringList. These
two methods may not be symmetric — writeOptionalStringCollection writes a Collection
which may not guarantee list ordering or the same wire format as
readOptionalStringList expects. Use matching read/write pairs (e.g.,
writeOptionalStringList / readOptionalStringList) to ensure correct deserialization.

server/src/main/java/org/opensearch/cluster/routing/AllocationId.java [122-129]

+if (in.getVersion().onOrAfter(Version.V_3_6_0)) {
+    List<String> childIds = in.readOptionalStringList();
+    splitChildAllocationIds = childIds == null ? null : Collections.unmodifiableList(childIds);
+    parentAllocationId = in.readOptionalString();
+} else {
+    splitChildAllocationIds = null;
+    parentAllocationId = null;
+}
 
-
Suggestion importance[1-10]: 4

__

Why: The concern about writeOptionalStringCollection vs readOptionalStringList asymmetry is valid in principle, but the existing_code and improved_code are identical, meaning no actual fix is demonstrated. The suggestion asks the user to verify rather than providing a concrete correction.

Low
General
Switch case order should match enum declaration order

IN_PLACE_SHARD_SPLIT is added after REMOTE_STORE in the enum but is handled before REMOTE_STORE in the switch statement. This is functionally correct: the switch matches on enum values, while wire compatibility depends only on the ordinal order, where IN_PLACE_SHARD_SPLIT sits at ordinal 6 after REMOTE_STORE at 5 (as RecoverySourceTests confirms). Still, keeping the switch case order aligned with the enum declaration order avoids future confusion and maintenance errors.

server/src/main/java/org/opensearch/cluster/routing/RecoverySource.java [94-97]

+case REMOTE_STORE:
+    return new RemoteStoreRecoverySource(in);
 case IN_PLACE_SHARD_SPLIT:
     return InPlaceShardSplitRecoverySource.INSTANCE;
-case REMOTE_STORE:
Suggestion importance[1-10]: 3

__

Why: Reordering the switch cases to match enum declaration order improves readability and reduces maintenance confusion, but has no functional impact since the switch matches on enum values not ordinals.

Low
Clarify version check ordering for correctness

The check rejects the request when the cluster is mixed-version (minVersion != maxVersion) or when the uniform version is below V_3_6_0. A homogeneous cluster at or above V_3_6_0 passes both conditions, including one on exactly V_3_6_0, so the logic is functionally correct. Two things are worth verifying: whether rejecting mixed clusters whose minimum is already at or above V_3_6_0 is intentionally strict, and whether Version.equals is the intended uniformity test, or whether onOrAfter/before comparisons on both min and max would express the intent more clearly and consistently.

server/src/main/java/org/opensearch/cluster/metadata/MetadataInPlaceShardSplitService.java [108-109]

-if (currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false
-    || currentState.nodes().getMinNodeVersion().before(Version.V_3_6_0)) {
+if (currentState.nodes().getMinNodeVersion().before(Version.V_3_6_0)
+    || currentState.nodes().getMinNodeVersion().equals(currentState.nodes().getMaxNodeVersion()) == false) {
Suggestion importance[1-10]: 2

__

Why: The suggestion reorders the two conditions in the version check but the logic is functionally equivalent. The author acknowledges the original code is correct, making this a purely stylistic change with minimal impact.

Low

@vikasvb90 vikasvb90 self-assigned this Mar 24, 2026
@github-actions (Contributor)

❌ Gradle check result for 8600fa4: null

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from 8600fa4 to 8450d14 Compare March 24, 2026 08:57
@github-actions (Contributor)

Persistent review updated to latest commit 8450d14

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from 8450d14 to 8dbd619 Compare March 24, 2026 09:42
@github-actions (Contributor)

Persistent review updated to latest commit 8dbd619

@github-actions (Contributor)

❌ Gradle check result for 8dbd619: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions (Contributor)

Failed to generate code suggestions for PR

@github-actions (Contributor)

❌ Gradle check result for 7f2da9b: ABORTED

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions (Contributor)

Failed to generate code suggestions for PR

@github-actions (Contributor)

❌ Gradle check result for d78724a: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from d78724a to 7b2c720 Compare March 28, 2026 14:04
@github-actions (Contributor)

Failed to generate code suggestions for PR

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch 2 times, most recently from 7b2c720 to b1ffae2 Compare March 28, 2026 14:17
@github-actions (Contributor)

Failed to generate code suggestions for PR

@github-actions (Contributor)

❌ Gradle check result for b1ffae2: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from b1ffae2 to f3dfee0 Compare March 28, 2026 14:52
@github-actions (Contributor)

Failed to generate code suggestions for PR

@github-actions (Contributor)

❌ Gradle check result for 9e29086: null

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from 9e29086 to e7f348d Compare March 30, 2026 17:13
@github-actions (Contributor)

Failed to generate code suggestions for PR

@github-actions (Contributor)

✅ Gradle check result for e7f348d: SUCCESS

@github-actions (Contributor)

Persistent review updated to latest commit 9b63cb3

@github-actions (Contributor)

❌ Gradle check result for 9b63cb3: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions (Contributor)

Persistent review updated to latest commit f69deab

@github-actions (Contributor)

✅ Gradle check result for f69deab: SUCCESS

@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

Persistent review updated to latest commit 43aefb5

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch 2 times, most recently from f69deab to 188e237 Compare April 1, 2026 03:17
@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

Persistent review updated to latest commit 188e237

@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

❌ Gradle check result for 188e237: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from 188e237 to 2e95e9c Compare April 1, 2026 03:50
Signed-off-by: vikasvb90 <vikasvb@amazon.com>
@vikasvb90 vikasvb90 force-pushed the online_shard_split branch from 2e95e9c to 6867e76 Compare April 1, 2026 03:51
@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

Persistent review updated to latest commit 2e95e9c

@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

Persistent review updated to latest commit 6867e76

@github-actions (Contributor)

github-actions bot commented Apr 1, 2026

❕ Gradle check result for 6867e76: UNSTABLE

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.

@shwetathareja shwetathareja merged commit f716dfc into opensearch-project:main Apr 1, 2026
16 checks passed
bharath-techie pushed a commit to bharath-techie/OpenSearch that referenced this pull request Apr 2, 2026
aparajita31pandey pushed a commit to aparajita31pandey/OpenSearch that referenced this pull request Apr 18, 2026
…earch-project#20979)

Signed-off-by: vikasvb90 <vikasvb@amazon.com>
Signed-off-by: Aparajita Pandey <aparajita31pandey@gmail.com>