
storage: cancel liveness update context on Stopper cancellation#27888

Merged
craig[bot] merged 2 commits into cockroachdb:master from
nvb:nvanbenschoten/cancelLiveness
Jul 24, 2018

Conversation

@nvb (Contributor) commented Jul 24, 2018

Fixes #27878.

The test was flaky because a NodeLiveness update was getting stuck when
killing a majority of nodes in a Range. The fix was to tie NodeLiveness'
update context to its stopper so that liveness updates are canceled when
their node begins to shut down.

This was a real issue that I suspect would occasionally make nodes hang on
shutdown when their liveness range was becoming unavailable. There are
probably other issues in the same class of bugs, but stressing
TestLogGrowthWhenRefreshingPendingCommands isn't showing anything else.

Along the way, the change splits stopper.WithCancel into
stopper.WithCancelOnQuiesce and stopper.WithCancelOnStop.

Release note: None

@nvb nvb requested review from a team and tbg July 24, 2018 16:33
@cockroach-teamcity (Member)

This change is Reviewable

@nvb nvb force-pushed the nvanbenschoten/cancelLiveness branch from 6db2fd4 to 553db12 Compare July 24, 2018 16:37
@tbg (Member) left a comment


The code change is probably fine, but on a meta level, isn't (*Stopper).WithCancel a recipe for leaking memory? It seems dangerous to have that around. Sure, we could call it only in code paths that run at most O(1) times, which is hopefully what we do today, but this looks dangerous.

Before this change, `WithCancelOnQuiesce` and `WithCancelOnStop` were
dangerous to use because they would never clean up memory. This meant
that any use of the methods that happened more than a constant number
of times would slowly leak memory. This was an issue in `client.Txn.rollback`.
This change fixes the methods so that it's possible for callers to clean
up after themselves.

Added a warning to `Stopper.AddCloser` because this had a similar
issue.

Release note: None
@nvb (Contributor, Author) commented Jul 24, 2018

Yeah you're right, this is pretty dangerous. So is Stopper.AddCloser. I added a second commit that fixes this for the WithCancel(ctx) methods by returning a cancel function that cancels and cleans up resources. Stopper.AddCloser is a little different and doesn't seem quite as easy to misinterpret so I just added a big warning to that instead. PTAL.

@benesch (Contributor) left a comment


:lgtm:

Reviewed 7 of 7 files at r1, 7 of 7 files at r2.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale)

@nvb (Contributor, Author) commented Jul 24, 2018

TFTR!

bors r+

craig bot pushed a commit that referenced this pull request Jul 24, 2018
27888: storage: cancel liveness update context on Stopper cancellation r=nvanbenschoten a=nvanbenschoten


Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
@craig (Contributor) craig bot commented Jul 24, 2018

Build succeeded

@craig craig bot merged commit d746369 into cockroachdb:master Jul 24, 2018
@nvb nvb deleted the nvanbenschoten/cancelLiveness branch July 26, 2018 15:39


Development

Successfully merging this pull request may close these issues.

storage: TestLogGrowthWhenRefreshingPendingCommands failed under stress

4 participants