concurrency_manager: add slow_latch_observability test #73916
craig[bot] merged 1 commit into cockroachdb:master
Conversation
nvb
left a comment
Reviewed 1 of 1 files at r1, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @tbg)
pkg/kv/kvserver/concurrency/testdata/concurrency_manager/slow_latch_observability, line 55 at r1 (raw file):
[-] finish readbf: finishing request
[3] sequence pute: scanning lock table for conflicting locks
[3] sequence pute: sequencing complete, returned guard
Let's add a `reset` command to the end of this, which will verify that we aren't leaking any requests. I think it will show that we are, so we'll first need a `finish req=pute`.
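For context, a sketch of what the tail of the testdata file might look like with that suggestion applied, following the datadriven command/`----`/output convention used elsewhere in this directory (the trace output shown is illustrative, not copied from the real file):

```
finish req=pute
----
[-] finish pute: finishing request

reset
----
```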
tbg
left a comment
TFTR!
bors r=nvanbenschoten
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @nvanbenschoten)
pkg/kv/kvserver/concurrency/testdata/concurrency_manager/slow_latch_observability, line 55 at r1 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Let's add a `reset` command to the end of this, which will verify that we aren't leaking any requests. I think it will show that we are, so we'll first need a `finish req=pute`.
Done.
Build succeeded
This came out of
#65099 (comment).
This adds an explicit test focusing on latch observability (i.e. if
we're waiting for a latch, can we learn what we're waiting for).
It demonstrates that we're doing "ok" but could be doing better in
the case in which the wait queue has a length of strictly greater
than one. In that case, we will trace the waiter immediately in
front of us, but not the heads of the queue. In the case in which
a slow request evaluation is causing subsequent requests to be
delayed as well, this is not helpful as the head(s) of the queues
need to be logged instead.
Release note: None