fix: avoid data race when setting memdb footprint hook #621

Merged: sticnarf merged 3 commits into tikv:master from ekexium:footprint-race on Nov 24, 2022

Conversation

@ekexium (Contributor) commented Nov 23, 2022

No description provided.

@disksing (Collaborator):

/cc @sticnarf

@sticnarf (Collaborator):

How does it avoid the race? Will the read at `db.allocator.memChangeHook == nil` race with the write `db.allocator.memChangeHook = innerHook`?

Signed-off-by: ekexium <eke@fastmail.com>
@ekexium (Contributor, Author) commented Nov 23, 2022

> Will the read at `db.allocator.memChangeHook == nil` race with `db.allocator.memChangeHook = innerHook`?

Well, I have no idea how it works :(. But the race test passes. Maybe use CAS instead?

Signed-off-by: ekexium <eke@fastmail.com>
```diff
-	db.allocator.memChangeHook = innerHook
-	db.vlog.memChangeHook = innerHook
+	atomic.CompareAndSwapPointer((*unsafe.Pointer)(unsafe.Pointer(&db.allocator.memChangeHook)), nil, unsafe.Pointer(&innerHook))
+	atomic.CompareAndSwapPointer((*unsafe.Pointer)(unsafe.Pointer(&db.vlog.memChangeHook)), nil, unsafe.Pointer(&innerHook))
```
Collaborator (review comment):

Should we skip the second CAS if the first fails? Then, we won't assign different hooks for allocator and vlog.

Contributor Author (reply):

Sure, we can, but I think it's unnecessary. By design they are always set together.

Signed-off-by: ekexium <eke@fastmail.com>
@sticnarf sticnarf merged commit 92f0a82 into tikv:master Nov 24, 2022