storage: cancel liveness update context on Stopper cancellation #27888
craig[bot] merged 2 commits into cockroachdb:master
Conversation
Fixes cockroachdb#27878. The test was flaky because a NodeLiveness update was getting stuck when killing a majority of nodes in a Range. The fix was to tie NodeLiveness' update context to its stopper so that liveness updates are canceled when their node begins to shut down. This was a real issue that I suspect would occasionally make nodes hang on shutdown when their liveness range was becoming unavailable. There are probably other issues in the same class of bugs, but stressing `TestLogGrowthWhenRefreshingPendingCommands` isn't showing anything else. In doing so, the change needed to extend `stopper.WithCancel` into `stopper.WithCancelOnQuiesce` and `stopper.WithCancelOnStop`. Release note: None
Force-pushed from 6db2fd4 to 553db12
tbg left a comment:
The code change is probably fine, but on a meta level, isn't (*Stopper).WithCancel a recipe for leaking memory? It seems dangerous to have that around. Sure, we could call it only in code paths that run at most O(1) times, which is hopefully what we do today, but this looks dangerous.
Before this change, `WithCancelOnQuiesce` and `WithCancelOnStop` were dangerous to use because they would never clean up memory. This meant that any use of the methods that happened more than a constant number of times would slowly leak memory. This was an issue in `client.Txn.rollback`. This change fixes the methods so that it's possible for callers to clean up after themselves. Added a warning to `Stopper.AddCloser` because this had a similar issue. Release note: None
Yeah, you're right, this is pretty dangerous. So is
benesch left a comment:
Reviewed 7 of 7 files at r1, 7 of 7 files at r2.
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale)
TFTR! bors r+
27888: storage: cancel liveness update context on Stopper cancellation r=nvanbenschoten a=nvanbenschoten
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Build succeeded