Share disk usage computation results between concurrent invocations #42715
thaJeztah merged 2 commits into moby:master
Conversation
82c1da4 to 9b26c2b (force-push)
Seems like we can use the x/sync/singleflight package here.
03a6cb9 to 57b7519 (force-push)
Thanks, I didn't know this package existed! Does it maybe make sense to add a wrapper for singleflight, which would add easy-to-use context support (basically just wrapping the select block)? What do you think? @cpuguy83 @thaJeztah
thaJeztah
left a comment
I do find the previous approach simpler and easier to read, mostly due to the lack of context handling in singleflight.
Yeah, the handling is slightly more complex. That said, I do like that we won't have to maintain yet-another-package.
The returned channel is never closed and would leak a goroutine if we did not ensure a read on the caller side, which means we have to start a goroutine on ctx.Done().
That's a bit ugly, yes. @cpuguy83 any other suggestions for that?
A possible minor advantage is that we can reuse the "group" for e.g. improving prune UX.
Ah, just left a comment about that. Was your idea to reuse the same groups, or to use the same approach?
Does it maybe make sense to add a wrapper for singleflight, which would add easy-to-use context support? (basically just wrapping the select block)
Always a bit cautious with abstracting away things too much. You mean something like;
package singleflight

import (
	"context"

	"golang.org/x/sync/singleflight"
)

func Do(ctx context.Context, sf *singleflight.Group, key string, fn func() (interface{}, error)) (interface{}, error) {
	ch := sf.DoChan(key, fn)
	select {
	case <-ctx.Done():
		go func() { <-ch }()
		return nil, ctx.Err()
	case res := <-ch:
		if res.Err != nil {
			return nil, res.Err
		}
		return res.Val, nil
	}
}
volume/service/service.go (Outdated)

	ds                ds
	pruneRunning      int32
	eventLogger       VolumeEventLogger
	singleflightGroup singleflight.Group
perhaps just singleFlight (slightly shorter, and reduces reformatting/reflowing other lines)
Or (if we will implement the same for pruning) perhaps make the singleFlight name more descriptive of what it's for, e.g. usage (which would then be called as usage.DoChan()); then we can use prune for the other one (prune.DoChan()).
Same for the other singleFlightGroups
	startupDone       chan struct{}
	singleflightGroup singleflight.Group

	pruneRunning int32
Not for this PR, but should we use the same approach for pruning, and use a singleFlightGroup for that as well?
9b31dc1 to 700a0c4 (force-push)
Given the library API, it seems that the expected usage is to reuse the group; I have a hard time understanding why so much functionality would be present for key support otherwise. Personally, reusing anything here does not seem like the right approach, but if that's the "idiomatic" usage, then I suppose we should follow it.
Yes, exactly that, but it doesn't seem like it deserves its own package. It's something I would like to put in a …
I don't see how we'd leak anything, since the channel is buffered.
Yes, you're right, we don't actually need the goroutine.
700a0c4 to 3ba9168 (force-push)
For reviewers: use the …
thaJeztah
left a comment
changes look good, but can you squash/combine the first and last commit?
3ba9168 to 8e01003 (force-push)
Signed-off-by: Roman Volosatovs <roman.volosatovs@docker.com>
Signed-off-by: Roman Volosatovs <roman.volosatovs@docker.com>
8e01003 to 135cec5 (force-push)
Hmm @StefanScherer could this be because the machines were updated? (if so, do we need #42527 to be merged?)
Merged #42527. Let me trigger CI again to see if it picks up that change.
Yup! All green again now.
- What I did
Share disk usage computation results between concurrent invocations instead of throwing an error.
- How I did it
Use x/sync/singleflight.Group, which ensures the computation is performed by at most one goroutine at a time, with the result propagated to all goroutines calling the method concurrently.
- How to verify it
E.g.
Or:
Such invocations no longer error; they simply return the result once it has been computed by one of the goroutines.
- Description for the changelog