Enabling Support for Conditional Single-Part Upload [AWS S3] #18092
x-INFiN1TY-x wants to merge 2 commits into opensearch-project:main from
Conversation
❌ Gradle check result for f1c411e: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Flaky Test: #14509
plugins/repository-s3/src/main/java/org/opensearch/repositories/s3/S3BlobContainer.java (Outdated)

e38f1fd to 3951775
Codecov Report. Attention: Patch coverage is

Coverage Diff (main vs. #18092):

| | main | #18092 | +/- |
|---|---|---|---|
| Coverage | 72.74% | 72.64% | -0.10% |
| Complexity | 67767 | 67688 | -79 |
| Files | 5497 | 5498 | +1 |
| Lines | 311815 | 311899 | +84 |
| Branches | 45261 | 45268 | +7 |
| Hits | 226822 | 226588 | -234 |
| Misses | 66504 | 66829 | +325 |
| Partials | 18489 | 18482 | -7 |

☔ View full report in Codecov by Sentry.
This PR is stalled because it has been open for 30 days with no activity. |
❌ Gradle check result for 109a7cf: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
….ConditionalWriteResponse
plugins/repository-s3/src/main/java/org/opensearch/repositories/s3/S3BlobContainer.java
plugins/repository-s3/src/test/java/org/opensearch/repositories/s3/S3BlobStoreContainerTests.java
9ef21e3 to 9a429f2
        .expectedBucketOwner(blobStore.expectedBucketOwner());

    // Apply conditional logic based on options
    if (options.isIfMatch()) {

nit: Can be extracted out to a separate function like `applyConditions` for better code readability. Let's also analyse whether this method can be promoted to the `BlobContainer` interface; that way, we can separate out the logic of applying conditions for each flavour of blob container.
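The extraction the reviewer suggests might look like the following sketch. The helper name `applyConditions` comes from the comment above, but everything else is an assumption: the real code would operate on the AWS SDK `PutObjectRequest` builder, while a plain header map stands in here so the sketch is self-contained.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: pulls the conditional-header logic out of the
// upload method into one helper, as suggested in the review comment above.
final class ConditionHelper {
    static Map<String, String> applyConditions(Map<String, String> headers,
                                               boolean isIfMatch,
                                               String etag) {
        Map<String, String> out = new LinkedHashMap<>(headers);
        if (isIfMatch) {
            out.put("If-Match", etag);       // overwrite only this exact version
        } else {
            out.put("If-None-Match", "*");   // create only if the object is absent
        }
        return out;
    }
}
```

Promoting a method with this shape to the `BlobContainer` interface would let each blob-store flavour supply its own condition-application logic, as the comment proposes.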
    * @param listener listener to handle the resulting response or error notifications
    * @throws IOException if an error occurs during upload or if validations fail
    */
    void executeSingleUploadConditionally(

Most of the logic seems to be exactly the same as `executeSingleUpload`, except for appending the conditions to the resultant S3 `PutObjectRequest`. Can we refactor some of the code across `executeSingleUpload` and `executeSingleUploadConditionally` to improve readability?
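One way the suggested refactor could be shaped is a shared request-building routine plus a customizer hook, mirroring the AWS SDK v2 `Consumer<Builder>` style. In this sketch a `Map` stands in for the SDK request builder, and every class and method name is an assumption for illustration, not the PR's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the refactor: both upload variants share buildRequest and differ
// only by the customizer applied before the request is sent.
final class SharedUpload {
    static Map<String, Object> buildRequest(String bucket, String key, long len,
                                            Consumer<Map<String, Object>> customizer) {
        Map<String, Object> request = new HashMap<>();
        request.put("bucket", bucket);
        request.put("key", key);
        request.put("contentLength", len);
        customizer.accept(request);  // conditional variant adds its headers here
        return request;
    }

    static Map<String, Object> unconditional(String bucket, String key, long len) {
        return buildRequest(bucket, key, len, r -> {});
    }

    static Map<String, Object> conditional(String bucket, String key, long len, String etag) {
        return buildRequest(bucket, key, len, r -> r.put("ifMatch", etag));
    }
}
```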
    * The main entry points are {@link ConditionalWriteOptions} for specifying conditions, and
    * {@link ConditionalWriteResponse} for receiving the result of a conditional write.
    */
    public final class ConditionalWrite {

Same as https://github.com/opensearch-project/OpenSearch/pull/18064/files#r2702249073. Let's see if that is feasible.
# Enable Conditional Single-Part Uploads via `executeSingleUploadConditionally`

Introduce a condition-aware single-part upload method to `S3BlobStore`, leveraging `ConditionalWriteOptions` for `If-Match` and `If-None-Match` semantics and returning structured `ConditionalWriteResponse` results.

### Summary of Changes

- Add `executeSingleUploadConditionally(...)` to `S3BlobStore`
- Introduce `ConditionalWriteOptions` to model `If-Match` and `If-None-Match` conditions
- Return a `ConditionalWriteResponse.success(etag)` on success
- Translate HTTP 412 (precondition failed) into `OpenSearchException("stale_primary_shard")`
- Wrap other S3/SDK errors in `IOException` and propagate via `listener.onFailure(...)`
- Enforce blob size limits (≤ 5 GB and ≤ buffer size) before making any S3 requests
- Apply custom metadata and server-side encryption (AES256 or KMS) settings consistently
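A minimal sketch of an options class with the factory semantics described above. The class name and the `If-Match`/`If-None-Match` semantics come from this PR's description; the specific accessors (`isIfMatch`, `getEtag`) and the `"*"` sentinel are assumptions for the sketch, not the PR's actual API.

```java
// Sketch only: immutable holder for a conditional-write precondition.
final class ConditionalWriteOptions {
    private final String etag;       // ETag to compare against
    private final boolean ifMatch;   // true: If-Match, false: If-None-Match

    private ConditionalWriteOptions(String etag, boolean ifMatch) {
        this.etag = etag;
        this.ifMatch = ifMatch;
    }

    // Write succeeds only if the stored object's ETag equals the given one.
    static ConditionalWriteOptions ifMatch(String etag) {
        if (etag == null || etag.isEmpty()) {
            throw new IllegalArgumentException("ETag must be non-empty for If-Match");
        }
        return new ConditionalWriteOptions(etag, true);
    }

    // Write succeeds only if no object exists yet ("*" wildcard semantics).
    static ConditionalWriteOptions ifNoneMatch() {
        return new ConditionalWriteOptions("*", false);
    }

    boolean isIfMatch() { return ifMatch; }
    String getEtag() { return etag; }
}
```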
### Core Workflow

1. **Validate blob size**: if `blobSize > 5 GB` or `blobSize > bufferSize`, throw `IllegalArgumentException`.
2. **Build and send `PutObjectRequest`**:
   - Set `bucket`, `key`, `contentLength`, `storageClass`, `acl`, and metrics.
   - Apply conditional headers from `ConditionalWriteOptions` (`If-Match` or `If-None-Match`).
   - Attach metadata and configure encryption (AES256 or KMS).
   - Upload via `putObject(...)`.
3. **Handle response**:
   - If `response.eTag()` is non-null, return `ConditionalWriteResponse.success(etag)`.
   - If `response.eTag()` is null, throw `IOException` and invoke `listener.onFailure(...)`.
4. **Error handling**:
   - HTTP 412 → listener receives `OpenSearchException("stale_primary_shard")`, then throw `IOException` to abort.
   - Other `S3Exception` / `SdkException` → wrap in `IOException`, invoke `listener.onFailure(...)`, and rethrow.

### Test Coverage Highlights

- `testExecuteSingleUploadConditionallyPreconditionFailed`: ETag mismatch (HTTP 412) → `OpenSearchException("stale_primary_shard")`.
- `testExecuteSingleUploadConditionallyS3Exception`: S3 error (e.g., HTTP 403) → `IOException`.
- `testExecuteSingleUploadConditionallySuccess`: successful upload with metadata, AES256/KMS, and correct ETag response.
- `testExecuteSingleUploadConditionallySdkException`: AWS SDK failure → `IOException`.
- `testExecuteSingleUploadConditionallyWithNullETag`: null ETag → `IOException`.
- `testExecuteSingleUploadConditionallyNullOrEmptyInput`: zero-length blob or empty ETag handling.

### Related Issues
- Implements RFC #17763
- Parent Meta: #17859
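The core workflow and error mapping described earlier can be sketched in plain Java. Here `RuntimeException` stands in for `OpenSearchException("stale_primary_shard")`, the 5 GB bound is S3's documented single-part maximum, and all class and method names are invented for illustration.

```java
import java.io.IOException;

// Sketch of the workflow: size validation up front, then mapping an HTTP
// status and ETag into a success value or the right error type.
final class ConditionalUploadFlow {
    static final long MAX_SINGLE_PART = 5L * 1024 * 1024 * 1024; // 5 GB

    // Step 1: reject blobs too large for a single-part upload.
    static void validateSize(long blobSize, long bufferSize) {
        if (blobSize > MAX_SINGLE_PART || blobSize > bufferSize) {
            throw new IllegalArgumentException("blob too large for single-part upload: " + blobSize);
        }
    }

    // Step 3/4: 412 aborts with a stale-shard error, other failures (or a
    // null ETag) become IOException, success returns the ETag.
    static String handle(int httpStatus, String etag) throws IOException {
        if (httpStatus == 412) {
            throw new RuntimeException("stale_primary_shard"); // OpenSearchException in the PR
        }
        if (httpStatus != 200 || etag == null) {
            throw new IOException("conditional upload failed, status=" + httpStatus);
        }
        return etag;
    }
}
```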
### Developer Checklist

- Add `executeSingleUploadConditionally(...)` to `S3BlobStore`
- Introduce `ConditionalWriteOptions` / `ConditionalWriteResponse`
- Validate size limits before request
- Apply conditional headers correctly
- Configure metadata and encryption settings
- Map HTTP 412 to `OpenSearchException("stale_primary_shard")` + abort
- Wrap other exceptions in `IOException`
- Expand tests for conditional and boundary scenarios
- Sign off commits per DCO
By submitting this PR, I confirm that my contribution is made under the terms of the Apache 2.0 license.