
Segment Replication - Fix NoSuchFileException errors caused when computing metadata snapshot on primary shards.#4422

Merged
dreamer-89 merged 1 commit into opensearch-project:2.x from dreamer-89:mch2_fileNotFound_2x
Sep 6, 2022

Conversation

@dreamer-89
Member

Manual backport of #4366 to 2.x

Segment Replication - Fix NoSuchFileException errors caused when computing metadata snapshot on primary shards. (opensearch-project#4366)

* Segment Replication - Fix NoSuchFileException errors caused when computing metadata snapshot on primary shards.

This change fixes errors that occur when computing metadata snapshots on primary shards from the latest in-memory SegmentInfos. The error occurs when a segments_N file referenced by the in-memory infos is deleted as part of a concurrent commit. The segment files themselves are incref'd by IndexWriter.incRefDeleter, but the commit point file (segments_N) is not. This change resolves the issue by ignoring the segments_N file when computing metadata for CopyState and sending only incref'd segment files to replicas.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Fix spotless.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Update StoreTests.testCleanupAndPreserveLatestCommitPoint to assert additional segments are deleted.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Rename snapshot to metadataMap in CheckpointInfoResponse.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Refactor segmentReplicationDiff method to compute off two maps instead of MetadataSnapshots.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Fix spotless.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Revert catchall in SegmentReplicationSourceService.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Revert log lvl change.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Fix SegmentReplicationTargetTests

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Cleanup unused logger.

Signed-off-by: Marc Handalian <handalm@amazon.com>

Signed-off-by: Marc Handalian <handalm@amazon.com>
Co-authored-by: Suraj Singh <surajrider@gmail.com>
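The core of the fix described above — dropping the segments_N commit file when building the metadata map for CopyState, since only the segment files are protected by IndexWriter.incRefDeleter — can be sketched as follows. This is a minimal illustration, not the actual OpenSearch code; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification of the fix: when computing metadata from the
// in-memory SegmentInfos, exclude the segments_N commit file, because a
// concurrent commit may delete it while the incref'd segment files survive.
public class CopyStateMetadataSketch {
    // Lucene commit point files are named "segments_N" (e.g. segments_5);
    // regular segment data files (_0.cfs, _0.cfe, ...) are not.
    static boolean isCommitFile(String fileName) {
        return fileName.startsWith("segments_");
    }

    // Map of file name -> length, standing in for the real per-file metadata.
    static Map<String, Long> metadataWithoutCommit(Map<String, Long> allFiles) {
        Map<String, Long> result = new HashMap<>();
        for (Map.Entry<String, Long> e : allFiles.entrySet()) {
            if (!isCommitFile(e.getKey())) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> files = new HashMap<>();
        files.put("_0.cfs", 1024L);
        files.put("_0.cfe", 128L);
        files.put("segments_5", 256L);
        Map<String, Long> metadata = metadataWithoutCommit(files);
        System.out.println(metadata.containsKey("segments_5")); // false
        System.out.println(metadata.size()); // 2
    }
}
```

Only the filtered map is then sent to replicas, so a replica never requests a commit file that the primary may have already deleted.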
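One of the commits above refactors segmentReplicationDiff to compute off two plain maps instead of MetadataSnapshots. A hedged sketch of what such a two-map diff looks like is below — file names keyed to checksums, classifying each primary file as missing or different on the replica. Class and field names here are illustrative, not the real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a replication diff computed from two maps
// (file name -> checksum) rather than full MetadataSnapshot objects.
public class ReplicationDiffSketch {
    static final class Diff {
        final List<String> missing = new ArrayList<>();   // on primary, absent on replica
        final List<String> different = new ArrayList<>(); // on both, but contents differ
    }

    static Diff diff(Map<String, String> primary, Map<String, String> replica) {
        Diff d = new Diff();
        for (Map.Entry<String, String> e : primary.entrySet()) {
            String replicaChecksum = replica.get(e.getKey());
            if (replicaChecksum == null) {
                d.missing.add(e.getKey());
            } else if (!replicaChecksum.equals(e.getValue())) {
                d.different.add(e.getKey());
            }
        }
        return d;
    }

    public static void main(String[] args) {
        Map<String, String> primary = Map.of("_0.cfs", "abc", "_1.cfs", "def");
        Map<String, String> replica = Map.of("_0.cfs", "abc", "_1.cfs", "xyz");
        Diff d = diff(primary, replica);
        System.out.println(d.missing);   // no file is absent on the replica
        System.out.println(d.different); // [_1.cfs]
    }
}
```

Working over plain maps keeps the diff independent of how each side's metadata was produced, which fits the fix: the primary's map is built from in-memory SegmentInfos without the segments_N file.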
@dreamer-89 dreamer-89 requested review from a team and reta as code owners September 6, 2022 17:36
@github-actions
Contributor

github-actions bot commented Sep 6, 2022

Gradle Check (Jenkins) Run Completed with:

@dreamer-89
Member Author

The last run failed with the flaky test failure below. Refiring!

REPRODUCE WITH: ./gradlew ':plugins:repository-s3:yamlRestTest' --tests "org.opensearch.repositories.s3.RepositoryS3ClientYamlTestSuiteIT.test {yaml=repository_s3/20_repository_permanent_credentials/Snapshot and Restore with repository-s3 using permanent credentials}" -Dtests.seed=2EAC0D04131BEEB7 -Dtests.security.manager=true -Dtests.jvm.argline="-XX:TieredStopAtLevel=1 -XX:ReservedCodeCacheSize=64m" -Dtests.locale=en-CA -Dtests.timezone=Africa/Libreville -Druntime.java=17 -Dtests.rest.denylist=repository_s3/30_repository_temporary_credentials/*,repository_s3/40_repository_ec2_credentials/*,repository_s3/50_repository_ecs_credentials/*,repository_s3/60_repository_eks_credentials/*

org.opensearch.repositories.s3.RepositoryS3ClientYamlTestSuiteIT > test {yaml=repository_s3/20_repository_permanent_credentials/Snapshot and Restore with repository-s3 using permanent credentials} FAILED
    java.lang.AssertionError: Failure at [repository_s3/20_repository_permanent_credentials:201]: expected [2xx] status code but api [snapshot.create] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"repository_exception","reason":"[repository_permanent] concurrent modification of the index-N file, expected current generation [17544] but it was not found in the repository","stack_trace":"RepositoryException[[repository_permanent] concurrent modification of the index-N file, expected current generation [17544] but it was not found in the repository]\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.ensureSafeGenerationExists(BlobStoreRepository.java:2199)\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.lambda$writeIndexGen$57(BlobStoreRepository.java:2080)\n\tat org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:80)\n\tat org.opensearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:126)\n\tat org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52)\n\tat org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:343)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:120)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:112)\n\tat java.****/java.util.ArrayList.forEach(ArrayList.java:1511)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:112)\n\tat org.opensearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:160)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:141)\n\tat org.opensearch.action.StepListener.innerOnResponse(StepListener.java:77)\n\tat org.opensearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:55)\n\tat 
org.opensearch.repositories.blobstore.BlobStoreRepository.lambda$writeIndexGen$55(BlobStoreRepository.java:2064)\n\tat org.opensearch.action.ActionRunnable$2.doRun(ActionRunnable.java:88)\n\tat org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:806)\n\tat org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52)\n\tat java.****/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.****/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.****/java.lang.Thread.run(Thread.java:833)\n"}],"type":"snapshot_exception","reason":"[repository_permanent:snapshot-two/KStlb4grTqG_DBCYMHglNg] failed to update snapshot in repository","caused_by":{"type":"repository_exception","reason":"[repository_permanent] concurrent modification of the index-N file, expected current generation [17544] but it was not found in the repository","stack_trace":"RepositoryException[[repository_permanent] concurrent modification of the index-N file, expected current generation [17544] but it was not found in the repository]\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.ensureSafeGenerationExists(BlobStoreRepository.java:2199)\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.lambda$writeIndexGen$57(BlobStoreRepository.java:2080)\n\tat org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:80)\n\tat org.opensearch.common.util.concurrent.ListenableFuture$1.doRun(ListenableFuture.java:126)\n\tat org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52)\n\tat org.opensearch.common.util.concurrent.OpenSearchExecutors$DirectExecutorService.execute(OpenSearchExecutors.java:343)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:120)\n\tat 
org.opensearch.common.util.concurrent.ListenableFuture.lambda$done$0(ListenableFuture.java:112)\n\tat java.****/java.util.ArrayList.forEach(ArrayList.java:1511)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:112)\n\tat org.opensearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:160)\n\tat org.opensearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:141)\n\tat org.opensearch.action.StepListener.innerOnResponse(StepListener.java:77)\n\tat org.opensearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:55)\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.lambda$writeIndexGen$55(BlobStoreRepository.java:2064)\n\tat org.opensearch.action.ActionRunnable$2.doRun(ActionRunnable.java:88)\n\tat org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:806)\n\tat org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52)\n\tat java.****/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.****/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.****/java.lang.Thread.run(Thread.java:833)\n"},"stack_trace":"SnapshotException[[repository_permanent:snapshot-two/KStlb4grTqG_DBCYMHglNg] failed to update snapshot in repository]; nested: RepositoryException[[repository_permanent] concurrent modification of the index-N file, expected current generation [17544] but it was not found in the repository];\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository.lambda$finalizeSnapshot$39(BlobStoreRepository.java:1389)\n\tat org.opensearch.action.ActionListener$1.onFailure(ActionListener.java:88)\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository$8.onResponse(BlobStoreRepository.java:2207)\n\tat org.opensearch.repositories.blobstore.BlobStoreRepository$8.onResponse(BlobStoreRepository.java:2204)\n\tat 
org.opensearch.repositories.blobstore.BlobStoreRepository$4.clusterStateProcessed(BlobStoreRepository.java:1870)\n\tat org.opensearch.cluster.service.MasterService$SafeClusterStateTaskListener.clusterStateProcessed(MasterService.java:655)\n\tat org.opensearch.cluster.service.MasterService$TaskOutputs.lambda$processedDifferentClusterState$1(MasterService.java:524)\n\tat java.****/java.util.ArrayList.forEach(ArrayList.java:1511)\n\tat org.opensearch.cluster.service.MasterService$TaskOutputs.processedDifferentClusterState(MasterService.java:524)\n\tat org.opensearch.cluster.service.MasterService.onPublicationSuccess(MasterService.java:352)\n\tat org.opensearch.cluster.service.MasterService.publish(MasterService.java:344)\n\tat org.opensearch.cluster.service.MasterService.runTasks(MasterService.java:321)\n\tat org.opensearch.cluster.service.MasterService$Batcher.run(MasterService.java:196)\n\tat org.opensearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:176)\n\tat org.opensearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:214)\n\tat org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:747)\n\tat 

@github-actions
Contributor

github-actions bot commented Sep 6, 2022

Gradle Check (Jenkins) Run Completed with:

@codecov-commenter

codecov-commenter commented Sep 6, 2022

Codecov Report

❌ Patch coverage is 88.63636% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.55%. Comparing base (b202fd1) to head (ebed94f).
⚠️ Report is 2756 commits behind head on 2.x.

Files with missing lines Patch % Lines
...rc/main/java/org/opensearch/index/store/Store.java 93.10% 0 Missing and 2 partials ⚠️
...ch/indices/replication/CheckpointInfoResponse.java 50.00% 2 Missing ⚠️
.../indices/replication/SegmentReplicationTarget.java 85.71% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##                2.x    #4422    +/-   ##
==========================================
  Coverage     70.54%   70.55%            
- Complexity    56942    57101   +159     
==========================================
  Files          4572     4584    +12     
  Lines        273816   274453   +637     
  Branches      40152    40220    +68     
==========================================
+ Hits         193170   193629   +459     
- Misses        64455    64595   +140     
- Partials      16191    16229    +38     

☔ View full report in Codecov by Sentry.

final List<ShardSegments> replicaShardSegments = segmentListMap.get(false);
// if we don't have any segments yet, proceed.
final ShardSegments primaryShardSegments = primaryShardSegmentsList.stream().findFirst().get();
logger.debug("Primary Segments: {}", primaryShardSegments.getSegments());
Contributor


Did you mean to leave this in?

Member Author


Yeah, I think this can remain.

final Map<String, Segment> latestPrimarySegments = getLatestSegments(primaryShardSegments);
final Long latestPrimaryGen = latestPrimarySegments.values().stream().findFirst().map(Segment::getGeneration).get();
for (ShardSegments shardSegments : replicaShardSegments) {
logger.debug("Replica {} Segments: {}", shardSegments.getShardRouting(), shardSegments.getSegments());
Contributor


Same as above comment.

Member Author


Same as above

return new MetadataSnapshot(segmentInfos, directory, logger);
}

/**
Contributor


Nit - could add a line explaining why we're leaving out the segments_N files

Member Author


Thanks @Poojita-Raj for the comment. This change is needed to fix the file not found exception.
PR against main #4366 contains more details around the issue and fix.

@dreamer-89 dreamer-89 merged commit 35ddbd8 into opensearch-project:2.x Sep 6, 2022
