concurrency: use lock modes to find conflicts during lock table scans#104620
craig[bot] merged 1 commit into cockroachdb:master from
Conversation
9282c07 to
080c20b
Compare
nvb
left a comment
Lots of comments, but this is really coming out quite nicely!
Reviewed 3 of 3 files at r3, 2 of 2 files at r4, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @arulajmani)
pkg/kv/kvserver/concurrency/lock_table.go line 273 at r4 (raw file):
// between separate concurrency.Manager instances.
//
// finalizedTxnCacheMu provides mutual exclusion to cache accesses. It must be
TODO: we had discussed a desire to get rid of this. It's a performance concern and also implies a subtle need for a consistent finalized txn cache snapshot across scanAndMaybeEnqueue.
pkg/kv/kvserver/concurrency/lock_table.go line 792 at r4 (raw file):
}
conflicts := func() bool {
	l.mu.Lock()
Closures are a bit of a crutch to avoid creating a clean, explicit abstraction around an operation. That can certainly be ok in some cases, but this feels too central to the lock table to not be given a name and an explicit set of inputs and outputs. That's especially true because we're using this to establish a critical section under l.mu. In this sense, I do think the current structure of tryActiveWait is getting it right.
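The shape being advocated here can be sketched as follows. This is a minimal Go sketch with hypothetical, heavily simplified stand-ins for lockState and the guard; the names echo the review discussion, but the real CockroachDB types carry far more state.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical, simplified stand-ins for the review's types.
type lockTableGuard struct{ key string }

type lockState struct {
	mu    sync.Mutex
	empty bool
}

// scanAndMaybeEnqueue gives the operation a name and an explicit set of
// inputs and outputs, and owns its own critical section under l.mu,
// instead of establishing the critical section inside an inline closure
// at the call site.
func (l *lockState) scanAndMaybeEnqueue(g *lockTableGuard) (wait bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.empty {
		return false // nothing to conflict with
	}
	// ... conflict detection and enqueueing would go here ...
	return true
}

func main() {
	l := &lockState{empty: true}
	fmt.Println(l.scanAndMaybeEnqueue(&lockTableGuard{key: "a"})) // false
}
```

The caller simply calls the method; the mutex never leaks into the calling scope.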
pkg/kv/kvserver/concurrency/lock_table.go line 795 at r4 (raw file):
defer l.mu.Unlock()
if l.isEmptyLock() {
	locksToGC = append(locksToGC, l)
We had talked about getting rid of this.
pkg/kv/kvserver/concurrency/lock_table.go line 798 at r4 (raw file):
	return false
}
conflicts := l.scanAndMaybeEnqueue(g, notify)
nit: what is the "scan" part of this name referring to? Haven't we already scanned through the lock table and found the lock?
pkg/kv/kvserver/concurrency/lock_table.go line 799 at r4 (raw file):
}
conflicts := l.scanAndMaybeEnqueue(g, notify)
// TODO(arul): It's weird that we're checking for a non-empty lock
If we get rid of the previous append to locksToGC and then address this TODO, does that let us push the locking into scanAndMaybeEnqueue and then get rid of this closure? Should we address the TODO now? Or maybe fall back to the transitionedToFree bool return value until we do address the TODO?
pkg/kv/kvserver/concurrency/lock_table.go line 1630 at r4 (raw file):
// conflict resolution with the supplied request. It may[1] enqueue the request
// in the receiver's wait queues. The return value indicates if a conflict was
// found or not. If a conflict is found, the caller is expected to suspend its
I'm not sure whether conflicts is an improvement over wait. Is conflicts well-defined? Is it clear that an enqueuer that does enqueue and immediately claims the lock is not considered to "conflict" with it? Similarly, is it clear to a caller how it is expected to behave in response to the conflicts bool?
wait also parallels lockTableGuard.ShouldWait, so it's clearer how it ties together.
pkg/kv/kvserver/concurrency/lock_table.go line 1636 at r4 (raw file):
// 3 separate cases:
// 1. Transactional requests of the locking nature are always enqueued.
// 2.Transactional requests that are non-locking are only enqueued if there is a
nit: missing space
pkg/kv/kvserver/concurrency/lock_table.go line 1690 at r4 (raw file):
// If that is the case, we designate this request to be such.
if l.distinguishedWaiter == nil {
	l.distinguishedWaiter = g
Does this allow a waitSelf waiter to also become the distinguishedWaiter?
pkg/kv/kvserver/concurrency/lock_table.go line 1731 at r4 (raw file):
if finalizedTxn != nil {
	// Clean up the lock before proceeding.
	// TODO(arul): Instead of this special casing for unreplicated locks, we
This TODO is actually suggesting something different than I had thought. I thought the idea was to immediately call clearLockHolder and lockIsFree to clean up the lock holder, but then to also enqueue on the lock even if it became free. That way we don't need to have the awkward transitionedToFree transition, but we also don't lose the claim.
Are there benefits to one approach over the other?
Also, I mentioned this above, but I'll ask here as well. This case seems quite awkward and it's kind of getting in your way. Would things be easier if we just addressed this TODO now (either as a preceding commit or PR) and then made the rest of this change?
pkg/kv/kvserver/concurrency/lock_table.go line 1794 at r4 (raw file):
//
// REQUIRES: l.mu to be locked.
func (l *lockState) alreadyHoldsLockAndIsAllowedToProceed(g *lockTableGuardImpl) bool {
We had talked about adjusting the wait-queue ordering so that this case acquires a claim on the lock, right? That's a change you'd want to make in a future PR? Doing so SGTM.
pkg/kv/kvserver/concurrency/lock_table.go line 1829 at r4 (raw file):
	return false // the lock isn't held; no conflict to speak of
}
assert(!g.isSameTxn(lockHolderTxn), "lock already held by the request's transaction")
Could you add a comment explaining that this has already been checked?
pkg/kv/kvserver/concurrency/lock_table.go line 1862 at r4 (raw file):
// correctness issue in doing so.
//
// TODO(arul): I'm not entirely sure I understand why we have the
I'm also not sure. Looking forward to seeing what you find.
pkg/kv/kvserver/concurrency/lock_table.go line 1874 at r4 (raw file):
// for conflicts.
var reqMode lock.Mode
switch g.curStrength() {
Should we add a g.getLockMode() method that we can use here and during the iteration in shouldRequestActivelyWait?
pkg/kv/kvserver/concurrency/lock_table.go line 1891 at r4 (raw file):
//
// REQUIRES: l.mu to be locked.
func (l *lockState) maybeEnqueueRequest(g *lockTableGuardImpl) error {
We discussed changing the signature of this function to return (ok, lengthExceeded bool).
pkg/kv/kvserver/concurrency/lock_table.go line 1892 at r4 (raw file):
// REQUIRES: l.mu to be locked.
func (l *lockState) maybeEnqueueRequest(g *lockTableGuardImpl) error {
	switch g.curStrength() {
Should this be if g.curStrength() == lock.None { ... } else { ... } for the same reason as the discussion in #103267 (review)? The choice of which queue to use should be locking strength agnostic, right?
pkg/kv/kvserver/concurrency/lock_table.go line 1912 at r4 (raw file):
assert(g.curStrength() == lock.None, "unexpected locking strength; expected read")
if !l.conflictsWithLockHolder(g) {
	return // no conflit, no need to enqueue
"conflict"
pkg/kv/kvserver/concurrency/lock_table.go line 1917 at r4 (raw file):
}

// enqueueNonLockingReader enqueues the supplied non-locking request in the
It feels like this function could be inlined to avoid the repeated assertion, but I don't feel strongly.
pkg/kv/kvserver/concurrency/lock_table.go line 1999 at r4 (raw file):
}
if g.curStrength() == lock.None {
If we do return an "enqueued" bool from maybeEnqueueRequest, we'll want to move this check above conflictsWithLockHolder and have it return true. // Non-locking read requests always actively wait.
pkg/kv/kvserver/concurrency/lock_table.go line 2005 at r4 (raw file):
}
// Iterate through the head of the queue until we find the request. As we
s/Iterate through the head of the queue/Start at the front of the queue and iterate backwards/
pkg/kv/kvserver/concurrency/lock_table.go line 2008 at r4 (raw file):
of the requests
Of the other requests?
pkg/kv/kvserver/concurrency/lock_table.go line 2033 at r4 (raw file):
	}
}
panic("lock table bug")
nit: mind using this as a comment to explain why it's a bug? Something like panic("lock table bug: did not find enqueued request")
pkg/kv/kvserver/concurrency/lock_table.go line 2047 at r4 (raw file):
// REQUIRES: l.mu to be locked.
func (l *lockState) maybeClaimBeforeProceeding(g *lockTableGuardImpl) {
	if g.curStrength() == lock.None {
This check feels out of place here. Can we scope this method to locking requests? I believe a few of the comments above outline a path to get there.
pkg/kv/kvserver/concurrency/lock_table.go line 2073 at r4 (raw file):
if qqg.guard == g {
	if g.txn == nil {
		// NB: Before calling into removeWriter, we override the active
This all leads me to think that removeWriter is not what you want to be calling here, and should instead just call l.queuedWriters.Remove(e). removeWriter has a few assumptions that it is being called on behalf of another request.
Also, separately, why does removeWriter only check if g == l.distinguishedWaiter if qg.active? Is that assuming that the first condition implies the second? Do we need to tie the conditions together?
Previous to this patch, tryActiveWait would clear a lock's holder if it belonged to a finalized transaction and was only held with unreplicated durability. It would also nudge some requests in the lock's wait queue to proceed. This state transition inside of tryActiveWait for other requests was subtle. Arguably, the function which decides whether to wait on a lock or not should not be responsible for performing such state transitions for other requests.

This patch slightly improves this situation. We no longer return a transitionedToFree boolean and expect the caller to take some action based on it. Instead, we accumulate unreplicated locks that belong to finalized transactions in the request guard, as it sequences. The request then assumes the responsibility for clearing such locks and nudging waiters (if possible).

This last part is still not great. However, we need it for practical purposes. In the ideal world, we would address the TODO in TransactionIsFinalized, and perform this state transition there. But until then, this patch moves the needle slightly. More importantly, it allows us to remove some special casing when tryActiveWait is removed and replaced, over in cockroachdb#104620. In particular, locking requests that come across unreplicated locks that belong to finalized transactions and have empty wait queues will be able to acquire claims on such locks before proceeding. This can be seen in the test diff for `clear_finalized_txn_locks`.

Release note: None
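The accumulate-then-resolve pattern that this commit message describes can be sketched as follows. All types here are hypothetical simplifications for illustration (txnStatus, lockInfo, guard are not the real CockroachDB types): during its scan, the request's guard remembers unreplicated locks held by finalized transactions instead of mutating lock state inline, and the request resolves them afterwards.

```go
package main

import "fmt"

// Hypothetical, simplified types for illustration only.
type txnStatus int

const (
	pending txnStatus = iota
	committed
	aborted
)

type lockInfo struct {
	key        string
	holder     txnStatus
	replicated bool
}

type guard struct {
	toResolve []lockInfo // finalized-txn locks to clean up after the scan
}

// scanKey reports whether the request must wait on this lock. Locks held
// only with unreplicated durability by finalized transactions don't block
// the scan; they are accumulated for later resolution instead.
func (g *guard) scanKey(l lockInfo) (wait bool) {
	if l.holder != pending && !l.replicated {
		g.toResolve = append(g.toResolve, l)
		return false
	}
	return l.holder == pending // simplification: any live holder conflicts
}

func main() {
	g := &guard{}
	g.scanKey(lockInfo{key: "a", holder: committed, replicated: false})
	g.scanKey(lockInfo{key: "b", holder: pending, replicated: true})
	fmt.Println(len(g.toResolve)) // 1
}
```

The key property is that scanKey never mutates shared lock state on behalf of other waiters; the single caller owns the cleanup.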
080c20b to
88d936f
Compare
arulajmani
left a comment
Thanks for the detailed reviews on this and accompanying PRs. RFAL.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/kv/kvserver/concurrency/lock_table.go line 273 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
TODO: we had discussed a desire to get rid of this. It's a performance concern and also implies a subtle need for a consistent finalized txn cache snapshot across scanAndMaybeEnqueue.
Got rid of it by pushing the use of this cache into just one place.
pkg/kv/kvserver/concurrency/lock_table.go line 792 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Closures are a bit of a clutch to avoid creating a clean, explicit abstraction around an operation. That can certainly be ok in some cases, but this feels too central to the lock table to not be given a name and an explicit set of inputs and outputs. That's especially true because we're using this to establish a critical section under
l.mu. In this sense, I do think the current structure of tryActiveWait is getting it right.
We no longer need this now that we've landed #104782 and we no longer have locksToGC as a thing.
pkg/kv/kvserver/concurrency/lock_table.go line 795 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
We had talked about getting rid of this.
Done.
pkg/kv/kvserver/concurrency/lock_table.go line 798 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
nit: what is the "scan" part of this name referring to? Haven't we already scanned through the lock table and found the lock?
Yeah, but I was imagining there could be more than one lock on this key (once we have shared locks). So we would still be scanning through more than one lock in this method.
pkg/kv/kvserver/concurrency/lock_table.go line 799 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
If we get rid of the previous append to locksToGC and then address this TODO, does that let us push the locking into scanAndMaybeEnqueue and then get rid of this closure? Should we address the TODO now? Or maybe fall back to the transitionedToFree bool return value until we do address the TODO?
Yup, like you noted, the closure goes away post #104782.
pkg/kv/kvserver/concurrency/lock_table.go line 1630 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
I'm not sure whether conflicts is an improvement over wait. Is conflicts well-defined? Is it clear that an enqueuer that does enqueue and immediately claims the lock is not considered to "conflict" with it? Similarly, is it clear to a caller how it is expected to behave in response to the conflicts bool?
wait also parallels lockTableGuard.ShouldWait, so it's clearer how it ties together.
Ack, switched to wait instead and fixed up the commentary here.
pkg/kv/kvserver/concurrency/lock_table.go line 1690 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Does this allow a waitSelf waiter to also become the distinguishedWaiter?
Fixed
pkg/kv/kvserver/concurrency/lock_table.go line 1731 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
This TODO is actually suggesting something different than I had thought. I thought the idea was to immediately call clearLockHolder and lockIsFree to clean up the lock holder, but then to also enqueue on the lock even if it became free. That way we don't need to have the awkward transitionedToFree transition, but we also don't lose the claim.
Are there benefits to one approach over the other?
Also, I mentioned this above, but I'll ask here as well. This case seems quite awkward and it's kind of getting in your way. Would things be easier if we just addressed this TODO now (either as a preceding commit or PR) and then made the rest of this change?
We spoke about this offline and landed #104782.
pkg/kv/kvserver/concurrency/lock_table.go line 1794 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
We had talked about adjusting the wait-queue ordering so that this case acquires a claim on the lock, right? That's a change you'd want to make in a future PR? Doing so SGTM.
Yeah
pkg/kv/kvserver/concurrency/lock_table.go line 1874 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Should we add a g.getLockMode() method that we can use here and during the iteration in shouldRequestActivelyWait?
Done; I named it curLockMode for the same mutability reasons we called it g.curStrength().
pkg/kv/kvserver/concurrency/lock_table.go line 1891 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
We discussed changing the signature of this function to return (ok, lengthExceeded bool).
I implemented our original idea first, but after seeing how it looked, I ended up getting rid of this function entirely and moving the switch[1] into scanAndMaybeEnqueue. IMO, this is a bit easier to understand, but let me know if you disagree and I can switch back to the old thing.
[1] rewrote the switch as an if-else given the comment below.
pkg/kv/kvserver/concurrency/lock_table.go line 1892 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Should this be if g.curStrength() == lock.None { ... } else { ... } for the same reason as the discussion in #103267 (review)? The choice of which queue to use should be locking strength agnostic, right?
Done.
pkg/kv/kvserver/concurrency/lock_table.go line 1917 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
It feels like this function could be inlined to avoid the repeated assertion, but I don't feel strongly.
Yeah, it's a one-liner with unnecessary assertions given the call path -- inlined it.
pkg/kv/kvserver/concurrency/lock_table.go line 1999 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
If we do return an "enqueued" bool from maybeEnqueueRequest, we'll want to move this check above conflictsWithLockHolder and have it return true.
// Non-locking read requests always actively wait.
Another alternative here would be to never even call into this function for non-locking reads (and assert this here). Instead, we could handle non-locking reads at the caller (the structure of which dictates why non-locking reads have to always actively wait).
I don't have a strong preference either way. I keep going back and forth between wanting to treat non-locking reads as a completely separate thing in scanAndMaybeEnqueue (right up top, before any non-locking requests' handling) and the current structure. For now, I've implemented your suggestion.
pkg/kv/kvserver/concurrency/lock_table.go line 2008 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
of the requests
Of the other requests?
Yeah; fixed.
pkg/kv/kvserver/concurrency/lock_table.go line 2047 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
This check feels out of place here. Can we scope this method to locking requests? I believe a few of the comments above outline a path to get there.
Done; added an assertion.
pkg/kv/kvserver/concurrency/lock_table.go line 2073 at r4 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
This all leads me to think that removeWriter is not what you want to be calling here, and should instead just call l.queuedWriters.Remove(e). removeWriter has a few assumptions that it is being called on behalf of another request.
Also, separately, why does removeWriter only check if g == l.distinguishedWaiter if qg.active? Is that assuming that the first condition implies the second? Do we need to tie the conditions together?
Like we talked about offline last week, I've added an assertion here.
pkg/kv/kvserver/concurrency/lock_table.go line 1699 at r5 (raw file):
if g.curStrength() == lock.None {
	conflicts := l.maybeEnqueueNonLockingReadRequest(g)
@nvanbenschoten like I mentioned below, an alternative is to handle the conflicts case here as well. We could pull out the data structure changes in the shouldRequestActivelyWait block into its own function, call into that, and return early.
That allows us to completely handle non-locking readers, and say everything below only applies to locking requests. It's another round of reviews, but I think that's a bit cleaner. I can make the change if you feel similarly -- no strong feelings from me, either way.
88d936f to
30c3873
Compare
nvb
left a comment
This feels like it's getting close. At this point, I think it's just a bit of polish.
Reviewed 1 of 1 files at r6, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @arulajmani)
pkg/kv/kvserver/concurrency/lock_table.go line 1699 at r5 (raw file):
Previously, arulajmani (Arul Ajmani) wrote…
@nvanbenschoten like I mentioned below, an alternative is to handle the conflicts case here as well. We could pull out the data structure changes in the shouldRequestActivelyWait block into its own function, call into that, and return early.
That allows us to completely handle non-locking readers, and say everything below only applies to locking requests. It's another round of reviews, but I think that's a bit cleaner. I can make the change if you feel similarly -- no strong feelings from me, either way.
I think I like what you're suggesting. It means that we handle the non-locking request logic separately from the locking logic and don't try to mix them. The more we iterate on this, the more we're finding that it's easiest to just split these cases early on. So the change SGTM.
EDIT: if we address the two comments following this one, we may also be able to handle the waitQueueMaxLengthExceeded case with this same function. I'm imagining that it would be a function with a signature like func (g *lockTableGuardImpl) startWaitWithWaitingState(state waitingState, notify bool). Note that the g.key could be pulled from the waitingState to avoid the additional parameter.
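The signature proposed in this comment might look roughly like the following. This is a hypothetical, trimmed-down sketch (waitKind, the fields on waitingState and the guard are simplified stand-ins, not the real implementation): one method centralizes the guard-side bookkeeping for starting to wait, reusable for both the ordinary waitFor case and waitQueueMaxLengthExceeded.

```go
package main

import "fmt"

// Hypothetical, trimmed-down versions of the real types.
type waitKind int

const (
	waitFor waitKind = iota
	waitQueueMaxLengthExceeded
)

type waitingState struct {
	kind waitKind
	key  string // the key being waited on lives in the state itself
}

type lockTableGuardImpl struct {
	startWait bool
	state     waitingState
	notified  bool // stand-in for signaling the request's channel
}

// startWaitWithWaitingState mirrors the proposed signature: the key comes
// from the waitingState, so no extra parameter is needed.
func (g *lockTableGuardImpl) startWaitWithWaitingState(state waitingState, notify bool) {
	g.startWait = true
	g.state = state
	if notify {
		g.notified = true
	}
}

func main() {
	g := &lockTableGuardImpl{}
	g.startWaitWithWaitingState(waitingState{kind: waitQueueMaxLengthExceeded, key: "k"}, true)
	fmt.Println(g.startWait, g.notified) // true true
}
```

Both wait flavors then differ only in the waitingState they construct, not in how the guard transitions into waiting.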
pkg/kv/kvserver/concurrency/lock_table.go line 1682 at r6 (raw file):
// on them. However, they must be resolved before the request can evaluate. The
// guard's state is modified to indicate if there are locks that need
// resolution.
Do you want to update this comment to reflect the updated condition now that #104784 has landed?
pkg/kv/kvserver/concurrency/lock_table.go line 1730 at r6 (raw file):
g.mu.startWait = true
g.mu.curLockWaitStart = g.lt.clock.PhysicalTime()
// This may be the first request to actively start waiting in the receiver's
This logic feels distinct from the logic to update the lockTableGuardImpl. It's closer to the manipulation of the lockWaitQueue state when enqueueing requests, so I wonder if it's in the wrong place.
Did you consider pulling out a small helper like func (l *lockState) maybeMakeDistinguishedWaiter(g *lockTableGuardImpl) and then calling this from maybeEnqueueNonLockingReadRequest and enqueueLockingRequest?
pkg/kv/kvserver/concurrency/lock_table.go line 1743 at r6 (raw file):
ws := l.constructWaitingState(g)
g.maybeUpdateWaitingStateLocked(ws, notify)
g.mu.locks[l] = struct{}{}
We are already doing this in enqueueLockingRequest. Is it just missing from maybeEnqueueNonLockingReadRequest?
pkg/kv/kvserver/concurrency/lock_table.go line 1950 at r6 (raw file):
//
// REQUIRES: l.mu to be locked.
// REQUIRES: g.mu to be locked.
Is g.mu locked when calling this? Am I missing that? FWIW, the locking around lockTableGuardImpl is funky, but it doesn't seem like g.mu.locks needs to be mutated under lock, given that it's internal to the lock table.
pkg/kv/kvserver/concurrency/lock_table.go line 1977 at r6 (raw file):
// would be more fair, but more complicated, and we expect that the
// common case is that this waiter will be at the end of the queue.
return true
nit: return true /* maxQueueLengthExceeded */
pkg/kv/kvserver/concurrency/lock_table.go line 1998 at r6 (raw file):
}
g.mu.locks[l] = struct{}{}
return false
nit: return false /* maxQueueLengthExceeded */
4e67923 to
a56fa4d
Compare
arulajmani
left a comment
RFAL.
First commit from #105562
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/kv/kvserver/concurrency/lock_table.go line 1699 at r5 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
I think I like what you're suggesting. It means that we handle the non-locking request logic separately from the locking logic and don't try to mix them. The more we iterate on this, the more we're finding that it's easiest to just split these cases early on. So the change SGTM.
EDIT: if we address the two comments following this one, we may also be able to handle the waitQueueMaxLengthExceeded case with this same function. I'm imagining that it would be a function with a signature like func (g *lockTableGuardImpl) startWaitWithWaitingState(state waitingState, notify bool). Note that the g.key could be pulled from the waitingState to avoid the additional parameter.
Made the change. I quite like how it turned out, especially with the suggestion to call startWaitingWithWaitingState in the waitQueueMaxLengthExceeded case as well.
pkg/kv/kvserver/concurrency/lock_table.go line 1682 at r6 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Do you want to update this comment to reflect the updated condition now that #104784 has landed?
Good call; done.
pkg/kv/kvserver/concurrency/lock_table.go line 1730 at r6 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
This logic feels distinct from the logic to update the lockTableGuardImpl. It's closer to the manipulation of the lockWaitQueue state when enqueueing requests, so I wonder if it's in the wrong place.
Did you consider pulling out a small helper like func (l *lockState) maybeMakeDistinguishedWaiter(g *lockTableGuardImpl) and then calling this from maybeEnqueueNonLockingReadRequest and enqueueLockingRequest?
That's a good idea, done. We also need to clear out this distinction in maybeClaimBeforeProceeding. Interestingly, making these changes uncovered a minor bug where an inactive waiter could be a distinguished waiter (fixed separately in #105562).
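The helper and the bug it uncovered can be sketched as follows, with hypothetical simplified types (the real guard and lockState are far richer): only an actively waiting request may become the distinguished waiter, which is exactly the invariant the inactive-waiter bug violated.

```go
package main

import "fmt"

// Hypothetical, simplified types for illustration.
type guard struct {
	name   string
	active bool // actively waiting (vs. an inactive claimant)
}

type lockState struct {
	distinguishedWaiter *guard
}

// maybeMakeDistinguishedWaiter designates g as the distinguished waiter if
// there isn't one already. Guarding on g.active encodes the invariant that
// an inactive waiter must never be distinguished.
func (l *lockState) maybeMakeDistinguishedWaiter(g *guard) {
	if l.distinguishedWaiter == nil && g.active {
		l.distinguishedWaiter = g
	}
}

func main() {
	l := &lockState{}
	l.maybeMakeDistinguishedWaiter(&guard{name: "inactive", active: false})
	l.maybeMakeDistinguishedWaiter(&guard{name: "active", active: true})
	fmt.Println(l.distinguishedWaiter.name) // active
}
```

Centralizing the check in one helper means every enqueue path enforces the invariant, rather than each call site re-deriving it.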
pkg/kv/kvserver/concurrency/lock_table.go line 1743 at r6 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
We are already doing this in enqueueLockingRequest. Is it just missing from maybeEnqueueNonLockingReadRequest?
Yeah; moved this to maybeEnqueueNonLockingReadRequest -- that's the right place for this.
pkg/kv/kvserver/concurrency/lock_table.go line 1950 at r6 (raw file):
It wasn't, thanks for catching. Likely a result of moving some code around, there's been a lot of churn here 😅
but it doesn't seem like g.mu.locks needs to be mutated under lock, given that it's internal to the lock table.
I don't fully follow -- I thought we need this because g.mu.locks can be mutated by both the request itself and other requests being sequenced through the lock table. It's not directly related to this patch, but could you say more about why we don't need locking here?
a56fa4d to
0af0ffb
Compare
nvb
left a comment
Reviewed 1 of 3 files at r7, 1 of 2 files at r8, 1 of 1 files at r9, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @arulajmani)
pkg/kv/kvserver/concurrency/lock_table.go line 1950 at r6 (raw file):
I thought we need this because g.mu.locks can be mutated by both the request itself and other requests being sequenced through the lock table.
Yeah, I think you're right about this. Ignore me.
pkg/kv/kvserver/concurrency/lock_table.go line 767 at r9 (raw file):
}

// startWaitingWithWaitingState modifies state on the request's guard to let it
nit: consider moving this method closer to maybeUpdateWaitingStateLocked and updateWaitingStateLocked to keep related methods grouped together.
pkg/kv/kvserver/concurrency/lock_table.go line 2134 at r9 (raw file):
	l.queuedWriters.Remove(e)
}
qqg.active = false // claim the lock
nit: should we put this in an else branch?
This patch majorly refactors the lock table scan codepath, all in the name of shared locks. At its core is a desire to use lock modes to perform conflict resolution between an incoming request and locks held on one particular key. In doing so, we rip out tryActiveWait.

At a high level (for a particular key), a request's journey looks like the following:
- It first checks if the transaction already holds a lock at an equal or higher lock strength (read: it's good enough for its use). If this is the case, it can proceed without any bookkeeping.
- It then checks if any finalized transactions hold locks on the key. Such locks do not conflict, but need to be resolved before the transaction can evaluate. They're thus accumulated for later.
- The request then enqueues itself in the appropriate wait queue.
- It then determines if it needs to actively wait at this lock because of a conflict. If that's the case, the lock table scan short circuits.
- Otherwise, the request lays a claim (if it can) before proceeding with its scan.

Closes cockroachdb#102210

Release note: None
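The core idea of the commit -- conflict resolution via lock modes -- can be illustrated with a toy strength-by-strength compatibility check. This is a deliberate simplification with hypothetical names (the real lock.Conflicts also weighs timestamps and durability for non-locking reads and intents):

```go
package main

import "fmt"

// Hypothetical, simplified lock strengths.
type Strength int

const (
	None Strength = iota // non-locking read
	Shared
	Exclusive
	Intent
)

// conflicts reports whether a request at strength req conflicts with a
// lock held at strength held. Simplified: timestamps are ignored.
func conflicts(held, req Strength) bool {
	if held == None {
		return false // nothing is held, so nothing to conflict with
	}
	if req == None {
		// Non-locking reads conflict only with writers (and, in reality,
		// only at overlapping timestamps).
		return held == Exclusive || held == Intent
	}
	// Between two locking strengths, only Shared-Shared is compatible.
	return !(held == Shared && req == Shared)
}

func main() {
	fmt.Println(conflicts(Shared, Shared))    // false
	fmt.Println(conflicts(Shared, None))      // false
	fmt.Println(conflicts(Exclusive, None))   // true
	fmt.Println(conflicts(Shared, Exclusive)) // true
}
```

A table-driven check like this is what lets the scan handle multiple locks per key (once shared locks exist) with one uniform rule instead of per-case branching.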
0af0ffb to
a2aa8b7
Compare
arulajmani
left a comment
TFTRs!
bors r=nvanbenschoten
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @nvanbenschoten)
Build succeeded
Previous attempt over at: #104261
First commit from: #104537
Second commit from: #104600
This patch majorly refactors the lock table scan codepath, all in the
name of shared locks. At its core is a desire to use lock modes to
perform conflict resolution between an incoming request and locks held
on one particular key. In doing so, we rip out tryActiveWait.
At a high level (for a particular key), a request's journey looks like
the following:
- It first checks if the transaction already holds a lock at an equal or
  higher lock strength (read: it's good enough for its use). If this is
  the case, it can proceed without any bookkeeping.
- It then checks if any finalized transactions hold locks on the key.
  Such locks do not conflict, but need to be resolved before the
  transaction can evaluate. They're thus accumulated for later.
- The request then enqueues itself in the appropriate wait queue.
- It then determines if it needs to actively wait at this lock because
  of a conflict. If that's the case, the lock table scan short circuits.
- Otherwise, the request lays a claim (if it can) before proceeding with
  its scan.
Closes #102210
Release note: None