Compression: Further specify ByteStream WriteResponse committed_size field for compressed blobs #212
Description
Context: bazelbuild/bazel#14654
There is an ambiguity with the current way that the ByteStream.Write protocol is specified for compressed blobs. The relevant part of the spec with the ambiguity is here:
remote-apis/build/bazel/remote/execution/v2/remote_execution.proto, lines 256 to 264 at 636121a:

```proto
// When attempting an upload, if another client has already completed the upload
// (which may occur in the middle of a single upload if another client uploads
// the same blob concurrently), the request will terminate immediately with
// a response whose `committed_size` is the full size of the uploaded file
// (regardless of how much data was transmitted by the client). If the client
// completes the upload but the
// [Digest][build.bazel.remote.execution.v2.Digest] does not match, an
// `INVALID_ARGUMENT` error will be returned. In either case, the client should
// not attempt to retry the upload.
```
The ambiguity: does "full size of the uploaded file" mean the compressed size or the uncompressed size? If it means the compressed size, it is not useful information to return in the response, since the compressed size can vary depending on the compression level used by the other client that uploaded the file.
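For illustration (not from the thread): the same logical blob, uploaded by two different clients, can legitimately have two different compressed sizes. A minimal stdlib `zlib` sketch of why:

```python
import random
import zlib

random.seed(42)
# Low-entropy but non-trivial payload: the same logical blob that either
# client might upload (its digest is computed over this uncompressed data).
data = bytes(random.choice(b"abcdefgh") for _ in range(64 * 1024))

fast = zlib.compress(data, level=1)  # one client compresses quickly
best = zlib.compress(data, level=9)  # another client spends more CPU

# Same blob, same digest, yet different compressed sizes; so if
# committed_size means "compressed size", it depends on which client's
# upload happened to win.
print(len(data), len(fast), len(best))
```

The digest identifying the blob is stable, but the on-the-wire compressed size is not, which is why a `committed_size` defined as "compressed size of whoever uploaded first" tells the current client very little.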
Note also that Bazel 5.0.0 effectively expects committed_size to match the number of compressed bytes it uploaded. Whether or not this is a bug, it means that servers must wait for Bazel 5.0.0 to upload all chunks before replying with a committed_size that Bazel 5.0.0 will accept, effectively preventing the "early-exit" strategy when an object already exists in the cache.
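Roughly, the client-side behavior looks like this (a hypothetical helper for illustration, not Bazel's actual code): after streaming every chunk and receiving the WriteResponse, the client compares committed_size against the compressed byte count it sent, and treats any mismatch as a failed write.

```python
def check_write_response(committed_size: int, compressed_bytes_sent: int) -> None:
    """Illustrative stand-in for the check Bazel 5.0.0 effectively performs.

    Any mismatch between the server's committed_size and the compressed
    bytes this client sent is treated as an incomplete/failed write.
    """
    if committed_size != compressed_bytes_sent:
        raise IOError(
            f"write incomplete: committed_size {committed_size} != "
            f"{compressed_bytes_sent} compressed bytes sent")

# Passes only when the server reports exactly this client's compressed count:
check_write_response(1000, 1000)
# A server that early-exits and reports, say, the uncompressed size (or
# another client's compressed size) would trip this check even though the
# blob is already in the cache:
#   check_write_response(4096, 1000)  # raises IOError
```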
From @mostynb on that thread:
> Maybe the best we can do is update the REAPI spec to advise clients to ignore committed_size for compressed writes and to rely on the error status instead in that case?

> I'm not sure how the early-exit mechanism is useful in practice actually. As you mentioned the client calls Send until it thinks it has sent all the data, and only then calls CloseAndRecv to get the WriteResponse (at least in the go bindings). At this point the client has sent all the data even if the server decided to return early. So instead of returning early the server could have discarded all the received data and just counted how much compressed data was sent and returned that number. So maybe we should instead update the REAPI spec to advise servers to do that for compressed-blobs writes instead of returning early?
This workaround of discarding all received data will work, but it is unclear how significant a performance impact it will have in practice compared to early-exit. Hoping that some folks with REAPI experience can chime in.
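The "discard and count" proposal is simple to sketch. Below is a hedged Python sketch using stand-in WriteRequest/WriteResponse types (not the real generated gRPC stubs): the server drains the whole request stream even when the blob already exists, counting the compressed bytes so that committed_size matches exactly what this client sent.

```python
from dataclasses import dataclass

@dataclass
class WriteRequest:      # stand-in for google.bytestream.WriteRequest
    data: bytes
    finish_write: bool = False

@dataclass
class WriteResponse:     # stand-in for google.bytestream.WriteResponse
    committed_size: int

def handle_compressed_write(requests, blob_already_exists: bool) -> WriteResponse:
    """Drain the whole request stream instead of returning early.

    When the blob already exists, each chunk's payload is discarded, but the
    compressed bytes are still counted so committed_size equals exactly what
    this client sent (the number Bazel 5.0.0 expects).
    """
    committed = 0
    for req in requests:
        if not blob_already_exists:
            pass  # a real server would persist req.data here
        committed += len(req.data)  # count the chunk even if discarded
        if req.finish_write:
            break
    return WriteResponse(committed_size=committed)

# Even with the blob already cached, the response matches this client's count:
chunks = [WriteRequest(b"x" * 100), WriteRequest(b"y" * 50, finish_write=True)]
resp = handle_compressed_write(iter(chunks), blob_already_exists=True)
print(resp.committed_size)  # 150
```

The trade-off the thread is weighing: this keeps clients like Bazel 5.0.0 happy, but the server still has to receive (and the client still has to send) the full compressed stream for a blob it already has, which is exactly the bandwidth early-exit was meant to save.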