oci: unpack: slurp up raw layer stream before Close() #360
Merged
cyphar merged 3 commits into opencontainers:master (Mar 30, 2021)
Conversation
Force-pushed from 0e60e6a to 1a1635d
cyphar (Member, Author):
Hmmm, the failure is real but it comes from …
cyphar (Member, Author):
We need klauspost/pgzip#40 before we can do this.
Force-pushed from 1a1635d to 6e0a77f
Member:
FWIW, this LGTM whenever all the stuff you need to land lands; feel free to merge.
cyphar (Member, Author):
Thanks, I might just merge this as-is, since the pgzip stuff isn't actually necessary now that I've implemented it by just copying from the actual file stream.
Member:
Sure, sounds good to me.
This is to ensure that we handle both good and bad cases sanely -- namely, that we are always digesting the *full* blob when Close() is called. This avoids us producing intermittent errors if blobs have extra trailing bytes (such as garbage at the end of a gzip stream), as well as hardening us against incorrect digests (though it should be noted this case cannot be triggered without disabling size verification -- the size verification we already have blocks this issue).
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
It's possible there are some trailing bytes after the gzip stream (which, truth be told, probably shouldn't be there), so in order to avoid spurious errors when unpacking layers from slightly-broken images, just slurp up that data. But we should output logging information, since this indicates that the original image builder probably has a bug of some kind.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
Force-pushed from 6e0a77f to 6353208
cyphar (Member, Author):
LGTM.
Force-pushed from 3c9687b to b7f13af
Since we are reading blobs, we should definitely be returning errors from the lookup.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
Force-pushed from b7f13af to 0976fbb
Codecov Report
@@            Coverage Diff             @@
##           master     #360      +/-   ##
==========================================
- Coverage   74.27%   73.27%   -1.01%
==========================================
  Files          59       59
  Lines        3079     3098      +19
==========================================
- Hits         2287     2270      -17
- Misses        463      498      +35
- Partials      329      330       +1
It's possible there are some trailing bytes after the gzip stream (which, truth be told, probably shouldn't be there), so in order to avoid spurious errors when unpacking layers from slightly-broken images, just slurp up that data. But we should output logging information, since this indicates that the original image builder probably has a bug of some kind.

We also add code to slurp trailing bytes in VerifiedReadCloser.Close(). This is to ensure that we handle both good and bad cases sanely -- namely, that we are always digesting the full blob when Close() is called. This avoids producing intermittent errors if blobs have extra trailing bytes (such as garbage at the end of a gzip stream), as well as hardening us against incorrect digests (though it should be noted this case cannot be triggered without disabling size verification -- the size verification we already have blocks this issue).

Fixes #358

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>