chunked: fix reuse of the layers cache #2024
openshift-merge-bot[bot] merged 6 commits into containers:main
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: giuseppe. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing

LGTM
mtrmac left a comment:
Thanks for handling this!
A fairly brief skim for now…
(It would also be convenient to have the locking rules documented in detail — e.g. which fields of layersCache and layers are immutable, and which are protected by a lock (and by which one).
Especially the lock hierarchy of layersCache.mutex vs. the locks of store/layerStore seems fairly non-obvious to me, given how far the code locations are from each other and how dissimilar the situation seems.
But all of that is blocked on having settled on design in the first place.)
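(For illustration only: one common way to document such rules in Go is to annotate every field with the lock that guards it, and to state the allowed lock ordering next to the mutex itself. The struct below is a hypothetical sketch, not the actual containers/storage code; all names are stand-ins.)

```go
package main

import (
	"fmt"
	"sync"
)

// layersCache sketch: every field states which lock (if any) guards it.
type layersCache struct {
	// mu guards refs and layers. Lock ordering: store locks may be
	// taken before mu; mu must never be held while acquiring a store
	// lock, or load() can deadlock.
	mu     sync.Mutex
	refs   int      // guarded by mu
	layers []string // guarded by mu

	// store is set at construction and never reassigned: immutable,
	// readable without holding mu.
	store string
}

// addRef bumps the reference count under mu and returns the new value.
func (c *layersCache) addRef() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.refs++
	return c.refs
}

func main() {
	c := &layersCache{store: "overlay"}
	fmt.Println(c.addRef()) // first reference
}
```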
```diff
 		return cache
 	}
-	cache := &layersCache{
+	cache = &layersCache{
```
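The one-character fix above addresses a classic Go pitfall: inside the getter, `cache := &layersCache{…}` declares a new local variable that shadows the package-level singleton, so the global is never set and a fresh cache is built on every call; `cache = …` assigns the global instead. A standalone illustration of the two behaviors (simplified, hypothetical names — not the real containers/storage code):

```go
package main

import "fmt"

type layersCache struct{ id int }

var cache *layersCache // package-level singleton

var nextID int

// getCacheBuggy reproduces the bug: ":=" shadows the global,
// so the singleton is never updated and every call rebuilds.
func getCacheBuggy() *layersCache {
	if cache != nil {
		return cache
	}
	nextID++
	cache := &layersCache{id: nextID} // shadows the global
	return cache
}

// getCacheFixed assigns the global with "=", so it is reused.
func getCacheFixed() *layersCache {
	if cache != nil {
		return cache
	}
	nextID++
	cache = &layersCache{id: nextID} // updates the global
	return cache
}

func main() {
	a, b := getCacheBuggy(), getCacheBuggy()
	fmt.Println(a.id == b.id) // false: a new cache on each call
	c, d := getCacheFixed(), getCacheFixed()
	fmt.Println(c.id == d.id) // true: singleton reused
}
```

`go vet` does not flag this shadowing, which is why the reviewer suggests a more distinctive global name below.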
(I wonder about making this kind of mistake harder to do … naming the global cacheSingleton, globalCache, or something, might make it less likely to conflict with a local. OTOH that’s very likely an overreaction.)
If this is merged on or before August 12, 2024, please cherry-pick this to the release-1.55 branch.
This is just my opinion, but basically this is attempting to optimize zstd:chunked somewhat, and it introduces some open questions around concurrency. It doesn't make sense to rush it, and if it is merged, it's the type of change that should only be cherry-picked if there is a known reason to do so.
Force-pushed from c324874 to 2b77871
I think the new version is a reasonable solution to the deadlock issues. I've added a patch to split
There are some other nice side effects:
Force-pushed from ee8e7f4 to 6dc4f68
mtrmac left a comment:
I like the approach but I don’t think it works with the current API.
(Start with
Force-pushed from 918081c to ed6b4c1
Force-pushed from ed6b4c1 to 58b5e88
@mtrmac thanks for the review. Addressed the comments in the latest version.
Force-pushed from 455b8dc to 7fceea5
Force-pushed from d1ea3b4 to 33e8277
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
It is not clear if it is needed, so simplify it.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The global singleton was never updated, causing the cache to always be recreated for each layer.

It is not possible to keep the layersCache mutex for the entire load(), since load() calls into some store APIs, causing a deadlock: findDigestInternal() is already called while some store locks are held.

Another benefit is that now only one goroutine can run load(), preventing multiple calls to load() from happening in parallel and doing the same work.

Closes: containers#2023

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
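The locking change described in the commit messages above can be sketched as follows: a dedicated load mutex ensures only one goroutine runs load() at a time, the store API calls happen without holding the cache's data mutex (avoiding the deadlock with code paths like findDigestInternal() that run under store locks), and the data mutex is only taken briefly to publish the result. This is a hedged sketch under assumed names — fetchLayersFromStore() and both mutexes are stand-ins, not the real containers/storage API:

```go
package main

import (
	"fmt"
	"sync"
)

// layersCache sketch: loadMu serializes load(); mu guards layers and is
// only held for short, store-free critical sections.
type layersCache struct {
	mu     sync.Mutex // guards layers; never held across store calls
	loadMu sync.Mutex // ensures a single goroutine runs load() at a time
	layers []string
}

// fetchLayersFromStore stands in for the store APIs that acquire their
// own locks; calling it while holding mu could deadlock.
func fetchLayersFromStore() []string {
	return []string{"layer-a", "layer-b"}
}

func (c *layersCache) load() {
	c.loadMu.Lock()
	defer c.loadMu.Unlock()

	// Store calls happen with no cache data lock held.
	layers := fetchLayersFromStore()

	// Publish the result under the short-lived data lock.
	c.mu.Lock()
	c.layers = layers
	c.mu.Unlock()
}

func main() {
	c := &layersCache{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.load()
		}()
	}
	wg.Wait()
	fmt.Println(len(c.layers)) // layers published, no deadlock
}
```

The two-mutex split is what lets concurrent callers serialize the expensive load without ever holding a cache lock while entering the store.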
Force-pushed from 33e8277 to a315bbf