# feat: initial preview implementation of Storage over gRPC #1738
**Merged**
…dec (#1344)
* Introduce new internal API `Codec<From, To>` which provides a symmetric pair of conversion functions between two types
* Add ApiaryConversions.java to contain all the new Codecs for each of the existing apiary toPb/fromPb methods
* Update StorageImpl and associated classes to use the new codecs rather than the old model methods
* Add clirr rule to allow removing the accidentally public HmacKeyMetadata#toPb
* Add BucketInfo#asBucket to convert a BucketInfo to the syntax class Bucket with a specified instance of Storage
* Add BlobInfo#asBlob to convert a BlobInfo to the syntax class Blob with a specified instance of Storage
* Add Utils#ifNonNull to make much of the conversion code in ApiaryConversions more concise
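For orientation, a minimal sketch of what a symmetric `Codec` pair might look like; the method names `encode`/`decode` and the factory `of` are assumptions for illustration, not necessarily the internal API's exact shape.

```java
import java.util.function.Function;

/**
 * Sketch: a symmetric pair of conversion functions between two types.
 */
interface Codec<From, To> {
  To encode(From f);

  From decode(To t);

  /** Create a Codec from a pair of functions. */
  static <X, Y> Codec<X, Y> of(Function<X, Y> encoder, Function<Y, X> decoder) {
    return new Codec<X, Y>() {
      @Override
      public Y encode(X x) {
        return encoder.apply(x);
      }

      @Override
      public X decode(Y y) {
        return decoder.apply(y);
      }
    };
  }
}
```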
* Add net.jqwik:jqwik dependency for property based testing
* Add maven-surefire-plugin config to include surefire-junit-platform so jqwik tests are picked up and run during `mvn test`
* Add new profile `idea-jqwik` that will add junit-platform associated libraries on a conditional basis so that IntelliJ IDEA will detect jqwik tests as unit tests
Co-authored-by: BenWhitehead <BenWhitehead@users.noreply.github.com>
…entation Http specific in HttpStorageOptions

1. Make StorageOptions and its Builder abstract classes. According to clirr this is a breaking change; however, both classes had private constructors, affording us a grey area in which to change things.
2. Add HttpStorageOptions which contains the HTTP specific parts of StorageOptions.
3. Deprecate the public inner classes of StorageOptions in favor of their HttpStorageOptions counterparts.
4. Refactor StorageImpl to always use HttpStorageOptions.
5. Update all tests to use HttpStorageOptions.
6. Remove unnecessary downloadTo tests from Blob now that the implementation is delegated to Storage.
7. CopyWriter is still hard-coupled to HttpStorageRpc; this will need to be broken at a later point.

*Compatibility Note*: This refactoring is client API and ABI compatible; however, due to the structural changes, Serializable objects will not be loadable from previous versions. serialVersionUIDs have been updated for the new class structures.
* Add new gRPC implementation of Storage - GrpcStorageImpl
* Add new method StorageOptions#http() for an HTTP specific builder
* Refactor internal usages of StorageOptions#newBuilder() to StorageOptions#http()

Everything is a skeleton right now, but this should allow for relatively quick implementation from multiple work streams.
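A sketch of obtaining the transport-specific builder via the new entry point; the project id is a hypothetical placeholder.

```java
import com.google.cloud.storage.HttpStorageOptions;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

class HttpBuilderExample {
  public static void main(String[] args) {
    // Obtain an HTTP-specific builder via the new entry point,
    // rather than the legacy StorageOptions.newBuilder()
    HttpStorageOptions options =
        StorageOptions.http()
            .setProjectId("my-project") // hypothetical project id
            .build();
    Storage storage = options.getService();
    System.out.println(storage.getOptions().getHost());
  }
}
```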
For reasons I've not yet been able to root-cause, our dependency check builds are failing with the following error:

```
~/work/java-storage/java-storage/google-cloud-storage ~/work/java-storage/java-storage
2022-05-09 23:27:50 Generating dependency list using original pom...
2022-05-09 23:27:57 Comparing dependency lists...
2022-05-09 23:27:57 Diff found. See below:
2022-05-09 23:27:57 You can also check .diff.txt file located in google-cloud-storage.
34a35
> [INFO] com.google.re2j:re2j:jar:1.5:runtime
43a45
> [INFO] io.grpc:grpc-services:jar:1.45.1:runtime
47a50
> [INFO] io.opencensus:opencensus-proto:jar:0.2.0:runtime
49a53,54
> [INFO] org.bouncycastle:bcpkix-jdk15on:jar:1.67:runtime
> [INFO] org.bouncycastle:bcprov-jdk15on:jar:1.67:runtime
~/work/java-storage/java-storage
```

It seems that something with runtime -> runtime -> runtime dependencies isn't properly showing up in the list of dependencies. In order to force them to show up, I've explicitly added a dependency on io.grpc:grpc-xds. This explicitly forced dependency is primarily temporary, as in the not too distant future we will have a dependency on io.grpc:grpc-xds to support DirectPath.
Provides compile-time validation when attempting to concatenate two crc32c values.
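One way to get this kind of compile-time validation is to encode at the type level whether the byte length covered by a crc32c value is known, since concatenation requires the length of the appended value. The classes and methods below are hypothetical illustrations of that idea, not the library's actual types.

```java
// Hypothetical sketch: concatenation of crc32c values is only well defined
// when the length of the value being appended is known, so the type system
// can reject concatenations that omit it.
abstract class Crc32cValue {
  final int value;

  Crc32cValue(int value) {
    this.value = value;
  }
}

// A crc32c value whose covered byte length is unknown; cannot be appended.
final class Crc32cUnknownLength extends Crc32cValue {
  Crc32cUnknownLength(int value) {
    super(value);
  }
}

// A crc32c value with a known length; only these can appear on the right of concat.
final class Crc32cLengthKnown extends Crc32cValue {
  final long length;

  Crc32cLengthKnown(int value, long length) {
    super(value);
    this.length = length;
  }

  Crc32cLengthKnown concat(Crc32cLengthKnown other) {
    // combine(...) would extend this.value by other.value over other.length bytes
    return new Crc32cLengthKnown(
        combine(value, other.value, other.length), length + other.length);
  }

  private static int combine(int left, int right, long rightLength) {
    // placeholder for a real crc32c combine implementation
    throw new UnsupportedOperationException("sketch only");
  }
}
```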
…etGetOptions...) (#1385)
* Introduce StorageArbitraries to contain common arbitraries which aren't globally registered by type.

Co-authored-by: BenWhitehead <BenWhitehead@users.noreply.github.com>
* Add Codec<DateTime, OffsetDateTime> for apiary conversion
* Add Utils#RFC_3339_DATE_TIME_FORMATTER
* Add Utils#millisOffsetDateTimeCodec to provide compatibility with existing Long centric apis
* Update BucketInfo and BucketInfo.Builder to prefer java.time types instead of Long
* Update ApiaryConversions for BucketInfo to use new java.time types

Update the following fields to be OffsetDateTime instead of Long:
* BlobInfo#customTime
* BlobInfo#deleteTime
* BlobInfo#updateTime
* BlobInfo#createTime
* BlobInfo#timeStorageClassUpdated
* BlobInfo#retentionExpirationTime
* BucketInfo#retentionEffectiveTime
* IamConfiguration#uniformBucketLevelAccessLockedTime
#1384)
* Add conversion for com.google.storage.v2.ServiceAccount
* Add property test to ensure round trip decode -> encode -> decode succeeds
* Add ServiceAccountArbitraryProvider
All convertible types share a common set of properties that they need to provide. This new class ensures, in a central location, that they do. First property: a round trip through the codec should produce a value equal to the initial input.
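A minimal sketch of what such a centralized round-trip property can look like with jqwik (the library added above); the encode/decode pair here is a hypothetical stand-in for the real conversion under test.

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;
import net.jqwik.api.Arbitraries;
import net.jqwik.api.Arbitrary;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.Provide;

class CodecRoundTripProperty {

  // Hypothetical encode/decode pair standing in for a real codec under test
  private final Function<String, byte[]> encode = s -> s.getBytes(StandardCharsets.UTF_8);
  private final Function<byte[], String> decode = b -> new String(b, StandardCharsets.UTF_8);

  @Property
  boolean roundTripYieldsEqualValue(@ForAll("inputs") String initial) {
    // decode(encode(x)) must equal x for every generated input
    return decode.apply(encode.apply(initial)).equals(initial);
  }

  @Provide
  Arbitrary<String> inputs() {
    return Arbitraries.strings().ascii().ofMaxLength(64);
  }
}
```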
Fixes an issue where the upper-bounds and flattened-pom checks were detecting lower versions via pubsub test deps.
### What's been added

Fields which map to apiary DateTime RFC 3339 fields will now also have a preferred java.time.OffsetDateTime getter & setter. Fields which map to an apiary long representing a duration in milliseconds will now also have a preferred java.time.Duration getter & setter. All new methods currently have the name of the corresponding java.time type appended to the property name.

#### Refactor Long -> java.time.OffsetDateTime
* BlobInfo#createTime
* BlobInfo#customTime
* BlobInfo#deleteTime
* BlobInfo#retentionExpirationTime
* BlobInfo#timeStorageClassUpdated
* BlobInfo#updateTime
* Blob#createTime
* Blob#customTime
* Blob#deleteTime
* Blob#retentionExpirationTime
* Blob#timeStorageClassUpdated
* Blob#updateTime
* BucketInfo.CreatedBeforeDeleteRule
* BucketInfo.DeleteRule
* BucketInfo.IsLiveDeleteRule
* BucketInfo.NumNewerVersionsDeleteRule
* BucketInfo.RawDeleteRule
* BucketInfo#createTime
* BucketInfo#updateTime
* Bucket#createTime
* Bucket#updateTime
* Bucket#retentionEffectiveTime
* HmacKey.HmacKeyMetadata#createTime
* HmacKey.HmacKeyMetadata#updateTime

#### Refactor Long -> java.time.Duration
* BucketInfo#retentionPeriod
* Bucket#retentionPeriod

### Deprecation of existing methods

All existing "Long" based methods have been marked as `@Deprecated` and had their javadocs updated to point to the new preferred corresponding java.time method (an example of the new accessor pattern is sketched below). When the existing "Long" based methods are removed, the corresponding java.time method will replace them, at which point the java.time suffix methods will be deprecated for later cleanup. All existing "Long" based methods take care of performing the necessary conversion to the respective java.time type and will continue to work until their removal.

##### Why append the java.time type as a suffix?
1. Getters can't be overloaded, so we already need something to disambiguate which type should be returned.
2. For setters: we theoretically could overload since we have different method arguments; however, if a customer is already passing `null` as the method argument we've now created an ambiguity as to which method to invoke. By suffixing the setters as well, we avoid creating this possible ambiguity.
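Given the suffix convention described above, usage looks roughly like the following; the exact method names are derived from the stated pattern (property name + java.time type) rather than verified against the generated Javadoc.

```java
import com.google.cloud.storage.BucketInfo;
import java.time.Duration;
import java.time.OffsetDateTime;

class JavaTimeAccessors {
  static void inspect(BucketInfo bucket) {
    // Deprecated Long-based accessors: still functional until removal
    Long updateMillis = bucket.getUpdateTime();
    Long retentionMillis = bucket.getRetentionPeriod();

    // Preferred accessors, named property + java.time type per the suffix convention
    OffsetDateTime updated = bucket.getUpdateTimeOffsetDateTime();
    Duration retention = bucket.getRetentionPeriodDuration();

    System.out.printf(
        "updated=%s (%d ms), retention=%s (%d ms)%n",
        updated, updateMillis, retention, retentionMillis);
  }
}
```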
* Add StorageFixture to manage the test scoped lifecycle of a storage instance
* Update ITStorageTest to use new fixture
* Update ITBlobReadChannelTest to use new fixture
…ecated methods (#1415)
* Add GrpcConversions#timestampCodec to convert between java.time.OffsetDateTime and com.google.protobuf.Timestamp
* Add property test for timestampCodec
* Replace all usage of "Long" methods in GrpcConversions with their appropriate java.time alternatives
Add initial implementation of a GrpcBlobWriteChannel and GrpcBlobReadChannel

### GrpcStorageImpl

GrpcStorageImpl#reader and GrpcStorageImpl#writer have been defined to construct the respective channel. Other read/write methods on GrpcStorageImpl have been implemented in terms of `#reader` and `#writer`; more appropriate specific implementations will be added later on.

#### Wired up methods

* `GrpcStorageImpl#create(BlobInfo blobInfo, BlobTargetOption... options)`
* `GrpcStorageImpl#create(BlobInfo blobInfo, byte[] content, BlobTargetOption... options)`
* `GrpcStorageImpl#create(BlobInfo blobInfo, byte[] content, int offset, int length, BlobTargetOption... options)`
* `GrpcStorageImpl#create(BlobInfo blobInfo, InputStream content, BlobWriteOption... options)`
* `GrpcStorageImpl#createFrom(BlobInfo blobInfo, Path path, BlobWriteOption... options)`
* `GrpcStorageImpl#createFrom(BlobInfo blobInfo, Path path, int bufferSize, BlobWriteOption... options)`
* `GrpcStorageImpl#createFrom(BlobInfo blobInfo, InputStream content, int bufferSize, BlobWriteOption... options)`
* `GrpcStorageImpl#readAllBytes(String bucket, String blob, BlobSourceOption... options)`
* `GrpcStorageImpl#readAllBytes(BlobId blob, BlobSourceOption... options)`
* `GrpcStorageImpl#reader(String bucket, String blob, BlobSourceOption... options)`
* `GrpcStorageImpl#reader(BlobId blob, BlobSourceOption... options)`
* `GrpcStorageImpl#downloadTo(BlobId blob, Path path, BlobSourceOption... options)`
* `GrpcStorageImpl#downloadTo(BlobId blob, OutputStream outputStream, BlobSourceOption... options)`
* `GrpcStorageImpl#writer(BlobInfo blobInfo, BlobWriteOption... options)`

### New channel session data types

`BlobWriteChannel`'s existing implementation provides an internal handle to access the resulting `StorageObject` which is created as part of a resumable upload session. Additionally, performing a resumable upload is a multi-leg set of operations. The new `*ByteChannelSession` interfaces and implementations model this multi-leg lifecycle. A `*ByteChannelSession` allows opening the channel for either read or write, along with keeping a reference to the underlying object associated with the operation (the initial object included in the first message on read, or the final object returned upon completing a write operation).

### New channels approach

Channels are inherently buffered already. To facilitate more efficient and appropriate use of additional buffering, all channels now have Unbuffered and Buffered implementations, where the buffered channel's sole responsibility is narrowed to the intermediate buffering, otherwise delegating all operations to an underlying unbuffered channel (a sketch of this split appears after this commit description). Buffers which are allocated for use with a BufferedChannel are appropriately reused, rather than thrown away and/or resized in an ad-hoc manner. This should provide more predictable behavior for memory use. All channels now properly synchronize their read/write/flush methods to ensure proper ordering from multiple threads.

### Move over DataGeneration classes from benchmarks

Add DataGenerator, DataChain and TmpFile to test classes from benchmarks. These are various tools to dynamically generate data sets either in memory or on disk.
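A minimal sketch of the buffered-over-unbuffered split described above, assuming a fixed, reused `ByteBuffer` and delegation to an unbuffered `WritableByteChannel`; this is illustrative only, not the library's actual class.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

/** Sketch: narrows its responsibility to buffering, delegating I/O to the unbuffered channel. */
final class BufferedWritableChannelSketch implements WritableByteChannel {
  private final WritableByteChannel unbuffered;
  private final ByteBuffer buffer; // fixed size, allocated once and reused

  BufferedWritableChannelSketch(WritableByteChannel unbuffered, int bufferSize) {
    this.unbuffered = unbuffered;
    this.buffer = ByteBuffer.allocate(bufferSize);
  }

  @Override
  public synchronized int write(ByteBuffer src) throws IOException {
    int consumed = 0;
    while (src.hasRemaining()) {
      if (!buffer.hasRemaining()) {
        flush();
      }
      // copy as much of src as fits in the remaining buffer space
      int n = Math.min(buffer.remaining(), src.remaining());
      ByteBuffer slice = src.duplicate();
      slice.limit(slice.position() + n);
      buffer.put(slice);
      src.position(src.position() + n);
      consumed += n;
    }
    return consumed;
  }

  private void flush() throws IOException {
    buffer.flip();
    while (buffer.hasRemaining()) {
      unbuffered.write(buffer);
    }
    buffer.clear(); // reuse the same buffer rather than reallocating
  }

  @Override
  public synchronized boolean isOpen() {
    return unbuffered.isOpen();
  }

  @Override
  public synchronized void close() throws IOException {
    flush();
    unbuffered.close();
  }
}
```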
* chore: add initial skeleton of Retrying for Grpc
* rename RetryAlgorithmManager -> HttpRetryAlgorithmManager
* Add GrpcRetryAlgorithmManager
* Update Retrying.run(GrpcStorageOptions, ...) to take a Decoder instead
of a Function (this helps narrow things a bit to our decoders)
* Add GrpcStorageImpl#SyntaxDecoders
* For our "Syntax" types - Blob, Bucket, etc. - decoding an instance is always of the form message -> info -> syntax. This new class gives us a decoder that can be used simply for message -> syntax by binding to the enclosing storage instance.
* update "Syntax" return points to use new decoders
* feat: BucketInfo conversion for gRPC
* address feedback
* fix retentionPolicy encode
* address feedback
* lint
* use correct setter
* include proto-google-common-protos
Co-authored-by: BenWhitehead <BenWhitehead@users.noreply.github.com>
New property test against arbitrarily generated data, along with buffer size and app read size. Ensures the expected number of reads and their sizes are returned for a specific data size. Ensures the channel is closed after a full read, and throws a ClosedChannelException when a read is attempted after close.
…StorageImpl (#1439)

Add TransformPageDecorator which provides a Page implementation which knows how to transform the result from the delegate page by using a specific mapping function.

Co-authored-by: BenWhitehead <BenWhitehead@users.noreply.github.com>
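A sketch of what such a page decorator can look like against the gax `Page` interface; the class name matches the commit, but the body is an illustrative reconstruction, not the actual implementation.

```java
import com.google.api.gax.paging.Page;
import com.google.common.collect.Iterables;
import java.util.function.Function;

/** Sketch: lazily transforms the values of a delegate Page using a mapping function. */
final class TransformPageDecorator<A, B> implements Page<B> {
  private final Page<A> delegate;
  private final Function<A, B> mapper;

  TransformPageDecorator(Page<A> delegate, Function<A, B> mapper) {
    this.delegate = delegate;
    this.mapper = mapper;
  }

  @Override
  public boolean hasNextPage() {
    return delegate.hasNextPage();
  }

  @Override
  public String getNextPageToken() {
    return delegate.getNextPageToken();
  }

  @Override
  public Page<B> getNextPage() {
    Page<A> next = delegate.getNextPage();
    return next == null ? null : new TransformPageDecorator<>(next, mapper);
  }

  @Override
  public Iterable<B> iterateAll() {
    return Iterables.transform(delegate.iterateAll(), mapper::apply);
  }

  @Override
  public Iterable<B> getValues() {
    return Iterables.transform(delegate.getValues(), mapper::apply);
  }
}
```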
…d contract (#1446)

According to the official Javadocs, there is a contract that must exist between #equals(Object) [1] and #hashCode() [2] for every class. Prior to this change the contract was not upheld for the model classes, for multiple reasons:
1. Some classes did not define equals or hashCode, instead relying on the default implementations from java.lang.Object
2. Some classes were using different sets of fields between the two methods
3. Some classes were not performing deep evaluation
4. Some classes were serializing themselves to the apiary models before calling equals on the serialized instances

All model classes have been updated to ensure:
1. They define an equals and hashCode
2. The contract is upheld for these methods

Additionally, all internal usage of the deprecated DeleteRule has been removed from BucketInfo. Instead, LifecycleRules are used exclusively. The deprecated getDeleteRule and setDeleteRule have had their implementations changed to perform an immediate conversion to/from LifecycleRule as necessary. ApiaryConversions has been updated to ignore DeleteRule when encoding & decoding. A codec for converting between DeleteRule <-> LifecycleRule has been added to BackwardCompatibilityUtils. Corresponding tests for conversions between the various types of DeleteRules have been added in BackwardCompatibilityUtilsTest.

Utils.ifNonNull has been updated to test the return value of the mapping function before consuming it. Data.<Boolean>nullOf(Boolean.class) returns `true`, which can lead to unexpected behavior if a null value is provided to it indirectly.

New Codec#andThen(Codec) has been added to allow easily connecting two Codecs.

[1] https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#equals-java.lang.Object-
[2] https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#hashCode--
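For reference, the shape of an implementation that satisfies the contract — the same field set in both methods, with deep evaluation for arrays; the class here is a generic illustration, not one of the library's model classes.

```java
import java.util.Arrays;
import java.util.Objects;

final class Example {
  private final String name;
  private final byte[] payload;

  Example(String name, byte[] payload) {
    this.name = name;
    this.payload = payload;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof Example)) {
      return false;
    }
    Example that = (Example) o;
    // compare the same set of fields used by hashCode, deeply for arrays
    return Objects.equals(name, that.name) && Arrays.equals(payload, that.payload);
  }

  @Override
  public int hashCode() {
    // derived from the same fields as equals, upholding the contract
    return Objects.hash(name, Arrays.hashCode(payload));
  }
}
```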
test(deps): bump junit-platform.version to 5.9.1

Also removed a dangling bucket deletion in ITNotificationTest which was resulting in an exception at cleanup time.
…ll etags are still rolling out (#1693)
A retention period's duration is expressed in seconds, not milliseconds. Update the test to ensure the retention period duration that is processed is interpreted in the correct units.
* test: enable grpc config for ITBlobReadChannelTest
* fix: update GrpcBlobReadChannel to set limit as a number of bytes rather than end_offset
* fix: update StorageV2ProtoUtils#validation to treat limit as the number of bytes to read rather than end_offset
* test: mark com.google.cloud.storage.it.ITBlobReadChannelTest.testReadChannelFailUpdatedGeneration as json only
* fix: update GrpcBlobReadChannel to match automatic decompression behavior of json
* fix: update GrpcBlobReadChannel to match exception behavior for failed preconditions

Split and renamed testReadChannelFail into the three scenarios it was testing:
1. testReadChannel_preconditionFailureResultsInIOException_metagenerationMatch_specified
2. testReadChannel_preconditionFailureResultsInIOException_generationMatch_specified
3. testReadChannel_preconditionFailureResultsInIOException_generationMatch_extractor

Also update BlobReadChannel to remove RetryHelper$RetryHelperException from its cause chain.
Add a new builder method to allow setting whether to attempt a direct path connection. If enabled, the underlying gapic client will be configured to attempt connecting via direct path.

deps: add a direct dependency on io.grpc:grpc-googleapis

Rumour has it this dependency will be removed from gax, and we need to pull it up to our level to ensure it's there for storage direct path.
…rc32c value (#1696)

Storage#createFrom and Storage#writer allow specifying an explicit crc32c value which should be passed to GCS for validation. Update our uploads to allow these explicitly defined values to be passed in the `finish_write` message to GCS.

Update UnifiedOpts.Crc32cValue to represent its value as an int rather than the b64 string from apiary. Internally, when the value is necessary for apiary, it will be converted to the b64 string representation (a sketch of this conversion appears below).

Map gRPC status `DATA_LOSS` to HTTP 400 Bad Request to mirror the behavior of JSON when a checksum validation fails.

Add a new Hasher.constant instance which will function to specify a constant value for the cumulative value of a write. Various plumbing tweaks to allow passing this value all the way down, and also attempting to "nullSafeConcat".

Add new integration tests to try the negative case, to ensure we are getting checksum failures when we pass mismatched values and content.
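The int-to-b64 conversion mentioned above can be done with Guava; a sketch, assuming the apiary b64 form is the big-endian 4-byte crc32c value, base64 encoded (which is how GCS represents crc32c in JSON).

```java
import com.google.common.io.BaseEncoding;
import com.google.common.primitives.Ints;

final class Crc32cCodecSketch {
  /** Encode an int crc32c as the base64 string form used by the JSON/apiary model. */
  static String toB64(int crc32c) {
    // big-endian 4-byte representation, then base64
    return BaseEncoding.base64().encode(Ints.toByteArray(crc32c));
  }

  /** Decode the base64 string form back to an int. */
  static int fromB64(String b64) {
    return Ints.fromByteArray(BaseEncoding.base64().decode(b64));
  }
}
```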
* test: Enable GRPC for ITKmsTest
* test: enable gRPC config for ITAccessTest
* fix: fixup ITAccessTest#testRetentionPolicyNoLock to pass for gRPC
  * Introduce TemporaryBucket to provide an AutoCloseable bucket which will cleanup on close
* fix: fixup ITAccessTest#testUBLAWithPublicAccessPreventionOnBucket to pass for gRPC
  * Split test into two different tests to cover each of the scenarios:
    1. changingPAPDoesNotAffectUBLA
    2. changingUBLADoesNotAffectPAP
* chore(test): fixup ITAccessTest generic acl and default acl tests to only run for JSON
  * bucketAcl_requesterPays_false
  * bucketAcl_requesterPays_true
  * testBlobAcl
  * testBucketDefaultAcl
* fix: fixup ITAccessTest tests related to Bucket IAM Policy
  * testBucketPolicyV1
  * testBucketPolicyV1RequesterPays
  * testBucketPolicyV3
  * Add IamPolicyPropertyTest and corresponding provider
* chore(test): set ITAccessTest#testBucketWithBucketPolicyOnlyEnabled to json only
* chore(test): set ITAccessTest#testBucketWithUniformBucketLevelAccessEnabled to json only
* chore: fmt
* fix: fixup ITAccessTest#testEnableAndDisableBucketPolicyOnlyOnExistingBucket to work with grpc instance
* fix: fixup ITAccessTest#testEnableAndDisableUniformBucketLevelAccessOnExistingBucket to work with grpc instance
* fix: fixup ITAccessTest#testEnforcedPublicAccessPreventionOnBucket to work with grpc instance
* fix: fixup ITAccessTest#testUnspecifiedPublicAccessPreventionOnBucket to work with grpc instance
…alues ahead of merge to main (#1704)
* Update javadocs on all new @BetaApi methods to state they are preview apis and subject to breaking changes
* Update TransportCompatibility annotations to apply to more classes and methods
* Regenerate all serialVersionUID values to ensure no leaking version numbers compared with other classes
… http codes (#1713)

Move all mapping from GrpcStorageImpl and BackwardCompatibilityUtils to GrpcToHttpStatusCodeTranslation. Define new GrpcToHttpStatusCodeTranslation.StatusCodeMapping to explicitly define a mapping between io.grpc.Status.Code and our http code (a sketch of such a mapping appears below).

Update tests to be explicit and not depend on utility methods which can also be used by the code under test. Add new tests to ensure expected status codes are returned when evaluating DefaultStorageRetryStrategy.
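A sketch of an explicit per-code mapping of the kind described; the pairs shown are conventional gRPC-to-HTTP correspondences (plus the `DATA_LOSS` -> 400 mapping noted in the earlier crc32c change), and the class shape is illustrative rather than the actual `StatusCodeMapping` definition.

```java
import io.grpc.Status.Code;

/** Sketch: an explicit mapping between a gRPC status code and an HTTP status code. */
final class StatusCodeMappingSketch {
  final Code grpcCode;
  final int httpStatus;

  private StatusCodeMappingSketch(Code grpcCode, int httpStatus) {
    this.grpcCode = grpcCode;
    this.httpStatus = httpStatus;
  }

  static StatusCodeMappingSketch of(Code grpcCode, int httpStatus) {
    return new StatusCodeMappingSketch(grpcCode, httpStatus);
  }

  // Conventional correspondences, enumerated explicitly rather than derived
  static final StatusCodeMappingSketch[] MAPPINGS = {
    of(Code.NOT_FOUND, 404),
    of(Code.PERMISSION_DENIED, 403),
    of(Code.UNAUTHENTICATED, 401),
    of(Code.ALREADY_EXISTS, 409),
    of(Code.FAILED_PRECONDITION, 412), // as used for precondition failures in GCS JSON
    of(Code.RESOURCE_EXHAUSTED, 429),
    of(Code.INTERNAL, 500),
    of(Code.UNIMPLEMENTED, 501),
    of(Code.UNAVAILABLE, 503),
    of(Code.DEADLINE_EXCEEDED, 504),
    of(Code.DATA_LOSS, 400) // mirrors JSON behavior on checksum validation failure
  };
}
```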
…shCode to include match prefix and suffix (#1729)

Add new test class ITBucketLifecycleRulesTest to validate that some bucket mutations related to lifecycle rules behave as expected. Update GrpcConversions to detect an empty list and preserve a null value if the value from the server is empty.
* Define 3 new content size scenarios to be validated:
  1. data is small enough to fit within a single message of a single stream
  2. data requires multiple messages within a single stream
  3. data requires multiple messages within multiple streams
* Refactor checksum validation centric tests out of ITObjectTest into new ITObjectChecksumSupportTest
* Move ITOptionRegressionTest.Content to top level as ChecksummedTestContent and flesh it out a bit more for cross usage by ITObjectChecksumSupportTest
* fix: fix GrpcBlobReadChannel#isOpen
  * Previously a stubbed method was still being used; now it will check the lazy channel
  * Removed duplicate memoization of the channel factory
* chore: update assertion for possible RPO values via gRPC
**danielduhh** (Contributor) approved these changes on Oct 26, 2022:

> 🚢
Labels: `api: storage`, `size: xl`
## Google Cloud Storage gRPC API Preview
The first release of `google-cloud-storage` with support for a subset of the Google Cloud Storage gRPC API, which is in private preview. The most common operations have all been implemented and are available for experimentation.

Given that not all of the public api surface of `google-cloud-storage` classes is supported for gRPC, a new annotation `@TransportCompatibility` has been added to various classes, methods and fields/enum values to signal where that thing can be expected to work. As we implement more of the operations these annotations will be updated.

All new gRPC related APIs are annotated with `@BetaApi` to denote they are in preview and the possibility of breaking change is present. At this time, opting to use the gRPC transport mode means you are okay with the possibility of a breaking change happening. When the APIs are out of preview, we will remove the `@BetaApi` annotation to signal they are now considered stable and will not break outside a major version.

**NOTICE**: Using the gRPC transport is exclusive. Any operations which have not yet been implemented for gRPC will result in a runtime error. For those operations which are not yet implemented, please continue to use the existing HTTP transport.
Special thanks (in alphabetical order) to @BenWhitehead, @frankyn, @JesseLovelace and @sydney-munro for their hard work on this effort.
### Notable Improvements
For all gRPC media related operations (upload/download) we are now more resource-courteous than the corresponding HTTP counterpart. Buffers are fixed to their specified size (they can't arbitrarily grow without bound) and are allocated lazily, and only if necessary.
#### Preview support for Accessing GCS via gRPC
1. Set the environment variable `GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true`, then run your program.
2. When constructing your `StorageOptions`, mimic the following (a sketch is shown after this list).
3. The endpoint `https://storage.googleapis.com:443` will be transformed to the applicable `google-c2p-experimental:///storage.googleapis.com`.
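The original code sample did not survive extraction; below is a minimal sketch of gRPC-transport options with direct path enabled, assuming the `StorageOptions.grpc()` entry point (the gRPC counterpart to `StorageOptions.http()` above) and the `setAttemptDirectPath` builder method referenced in the Known Issues section.

```java
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

class GrpcDirectPathExample {
  public static void main(String[] args) {
    Storage storage =
        StorageOptions.grpc()
            .setAttemptDirectPath(true) // attempt a direct path connection via the gapic client
            .build()
            .getService();
    System.out.println(storage.getOptions().getHost());
  }
}
```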
#### Support for `java.time` types on model classes

* Points in time are now represented with `java.time.OffsetDateTime`, while durations are represented with `java.time.Duration`
* The pre-existing `Long` centric methods are still present, but have been deprecated in favor of their corresponding `java.time` variant
* At the next major version, the deprecated `Long` centric methods will be replaced with `java.time` and the `java.time` variant methods will be deprecated

#### `com.google.cloud.storage.Storage` now extends `java.lang.AutoCloseable`

* This allows it to be used in a try-with-resources block (a sketch follows this section)
* When using the gRPC transport, be sure to call `Storage#close()` when complete so it can clean up the gRPC middleware and resources
* When using the HTTP transport, `Storage#close()` will gracefully no-op, allowing for the same style of use regardless of transport

#### Idle stream detection

When downloading an object via gRPC, idle stream detection is now present which will restart a stream if it is determined to be idle and has remaining retry budget.
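Returning to the `AutoCloseable` change above, a sketch of the try-with-resources usage it enables, again assuming the `StorageOptions.grpc()` entry point:

```java
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

class AutoCloseableStorageExample {
  public static void main(String[] args) throws Exception {
    // Storage extends AutoCloseable, so the gRPC middleware is cleaned up on close;
    // with HTTP transport the same close() is a graceful no-op.
    try (Storage storage = StorageOptions.grpc().build().getService()) {
      storage.list().iterateAll().forEach(bucket -> System.out.println(bucket.getName()));
    }
  }
}
```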
* Update equals()/hashCode() methods to follow the expected contract
* The new gRPC transport based implementation continues to provide idempotency aware automatic retries, the same as HTTP
* Expanded test suite which should bring improved stability and reliability to both HTTP and gRPC transport implementations
* New `com.google.cloud:google-cloud-storage-bom` maven bom available to use for coordinated dependency version resolution for multiple storage artifacts

### Not yet implemented
* All ACL specific operations
* All Notification related operations
* `ReadChannel#capture()`, `RestorableState<ReadChannel>#restore()`, `WriteChannel#capture()`, `RestorableState<WriteChannel>#restore()`, `CopyWriter#capture()` and `RestorableState<CopyWriter>#capture()` are not yet implemented
* Batch and "bulk" operations which depend on batch
  * `Storage#batch()` is only supported for HTTP transport
  * The following methods, which depend on `Storage#batch()`, are currently only supported for HTTP transport:
    * `com.google.cloud.storage.Storage#get(com.google.cloud.storage.BlobId...)`
    * `com.google.cloud.storage.Storage#get(java.lang.Iterable<com.google.cloud.storage.BlobId>)`
    * `com.google.cloud.storage.Storage#update(com.google.cloud.storage.BlobInfo...)`
    * `com.google.cloud.storage.Storage#update(java.lang.Iterable<com.google.cloud.storage.BlobInfo>)`
    * `com.google.cloud.storage.Storage#delete(com.google.cloud.storage.BlobId...)`
    * `com.google.cloud.storage.Storage#delete(java.lang.Iterable<com.google.cloud.storage.BlobId>)`

### One-Time Inconveniences
* All classes under `com.google.cloud.storage` which are `Serializable` have new `serialVersionUID`s and are incompatible with any previous version. If you rely on Java serialization, please ensure you are using the same version of `google-cloud-storage` in both locations.
* The cause chains of some Exceptions have changed:
  * `StorageException` causes will use the corresponding `com.google.api.gax.rpc.ApiException` for the failure type, instead of the HTTP `com.google.api.client.googleapis.json.GoogleJsonError`
  * gRPC status codes are translated to their corresponding HTTP status codes for `StorageException`, preserving the integrity of `StorageException#getCode()`

### Not Supported
Given the nature of the gRPC transport a few things are explicitly not supported when using gRPC, and require HTTP transport. Attempting to use any of the following methods will result in a runtime error stating they are not supported for gRPC transport.
* `Storage#writer(URL)` does not work for gRPC. gRPC does not provide a means of exchanging an HTTP url for a resumable session id.
* `Storage#signUrl` is not supported for gRPC transport. Signed URLs explicitly generate HTTP urls and are only supported for the HTTP transport based implementation.
* `Storage#generateSignedPostPolicyV4` is not supported for gRPC transport. Signed URLs explicitly generate HTTP urls and are only supported for the HTTP transport based implementation.

### Known Issues
* `userProject` options results in error from gcs when used with credentials which define quota_project_id (#1736)
* `GrpcStorageOptions.setAttemptDirectPath(true)` and `GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true` (#1737)