c8d/import: Don't close compressed stream twice #46418
Conversation
The compressor is already closed a few lines below and there are no error returns in between, so the defer is not needed. Calling Close twice on a writerCloserWrapper is unsafe, as it puts the same buffer back into the pool multiple times.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
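To illustrate the hazard described above, here is a minimal, self-contained sketch assuming a hypothetical pooled writer of the same shape as writerCloserWrapper (`bufPool` and `pooledWriter` are made up for illustration, not the real implementation): returning the same buffer to a sync.Pool twice lets two later callers end up sharing one buffer.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool and pooledWriter are hypothetical stand-ins for the pooled
// writer behind writerCloserWrapper; this illustrates the hazard, it is
// not the real implementation.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

type pooledWriter struct {
	buf *bytes.Buffer
}

func (w *pooledWriter) Write(p []byte) (int, error) { return w.buf.Write(p) }

// Close returns the buffer to the pool. Calling Close twice puts the
// same *bytes.Buffer into the pool twice, so two later Gets can hand
// out the same buffer to callers that believe they own it exclusively.
func (w *pooledWriter) Close() error {
	bufPool.Put(w.buf)
	return nil
}

func main() {
	w := &pooledWriter{buf: bufPool.Get().(*bytes.Buffer)}
	w.Close()
	w.Close() // the kind of double Close this PR removes

	a := bufPool.Get().(*bytes.Buffer)
	b := bufPool.Get().(*bytes.Buffer)
	fmt.Println(a == b) // typically prints true: two users now share one buffer
}
```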
```diff
 	if err != nil {
 		return "", "", errdefs.InvalidParameter(err)
 	}
-	defer compressor.Close()
```
Is there any situation where archive.CompressStream should be closed? (mostly wondering whether it should return an io.NopCloser in that case)
If it's specific to this case, perhaps we should replace this line with a comment outlining the "why".
Also wondering if something like an io.TeeReader would be worth involving here 🤔
It is closed further down.
Oh! Should've expanded the diff.
So do they have to be closed in a specific order? Because I also see that pr and pw are potentially closed multiple times ("with" or "without" error);
```go
defer pr.Close()
defer pw.Close()
// ... do things
go func() {
	// ...
	pr.CloseWithError(err)
	// ...
}()
// ... do things
compressor.Close()
pw.CloseWithError(err)
```
The reader and writer returned from io.Pipe are explicitly documented as safe to close multiple times.
The compressor must be closed first because it produces the input for the other streams.
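As a quick illustration of that guarantee (a hypothetical standalone example, not code from this PR): closing an io.PipeWriter a second time is a harmless no-op and does not overwrite the first recorded error.

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	pr, pw := io.Pipe()

	// The first close wins; the second Close is a no-op and does not
	// overwrite the recorded error (per the io.Pipe documentation).
	pw.CloseWithError(errors.New("boom"))
	pw.Close()

	buf := make([]byte, 1)
	_, err := pr.Read(buf)
	fmt.Println(err) // boom
}
```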
In that case, would the reverse work? Keep this defer, and remove the other close further down? (would that be more readable?) The defers are executed in reverse order, so in this case,
```go
defer pr.Close()
defer pw.Close()
defer compressor.Close()
```
Would be executed as:
```go
pw.CloseWithError(err)
pr.CloseWithError(err)
// defers:
compressor.Close()
pw.Close() // redundant, but ok
pr.Close() // redundant, but ok
```
No, the compressor must be closed before `compressedDigest := <-writeChan` and before pr and pw are closed.
Closing pr or pw before the compressor closes the writer that the compressor writes into, so the compressor never gets the chance to write out all of its data.
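To make the required ordering concrete, here is a simplified, self-contained sketch of the stream layout being discussed; gzip.NewWriter stands in for archive.CompressStream and a sha256 hash stands in for the real digest computation, with `writeChan` and `compressedDigest` mirroring the names quoted above (this is not the daemon's actual code).

```go
package main

import (
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

func main() {
	// The compressor writes into pw, a goroutine hashes everything that
	// comes out of pr, and writeChan delivers the resulting digest.
	pr, pw := io.Pipe()
	compressor := gzip.NewWriter(pw) // stand-in for archive.CompressStream

	writeChan := make(chan string)
	go func() {
		h := sha256.New()
		if _, err := io.Copy(h, pr); err != nil {
			pr.CloseWithError(err)
		}
		writeChan <- hex.EncodeToString(h.Sum(nil))
	}()

	// The import path would copy the uncompressed tar in here.
	_, _ = io.Copy(compressor, strings.NewReader("uncompressed layer data"))

	// Order matters:
	compressor.Close()              // 1. flush the remaining compressed bytes into pw
	pw.Close()                      // 2. signal EOF so the hashing goroutine finishes
	compressedDigest := <-writeChan // 3. only now is the digest complete

	fmt.Println(compressedDigest)
}
```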
DOH! Looking at that channel twice, and still not considering it for the order of events in my comment. 🙈 yup, makes sense.
thaJeztah left a comment
LGTM
- What I did
Fixed random stack-overflow daemon panics after an import is performed.
- How I did it
Removed the double Close.
- How to verify it
```console
$ make DOCKER_GRAPHDRIVER=overlayfs TEST_INTEGRATION_USE_SNAPSHOTTER=1 TEST_FILTER='TestImport' test-integration
```
Check that the daemon doesn't panic with a stack overflow.
- Description for the changelog
- A picture of a cute animal (not mandatory but encouraged)