kv: log on excessive latch hold duration #114609
Status: Closed
Labels: A-kv-observability, A-kv-transactions (relating to MVCC and the transactional model), C-enhancement (solution expected to add code/behavior and preserve backward compatibility; pg compat issues are an exception), T-kv (KV Team)
We have logging for slow latch acquisitions due to conflicts, but not for when a request itself holds a latch for an excessive amount of time. This kind of logging would help catch bugs and surface unexpected slowness in the system that, under contention, could cascade to other requests.
Some notes:
- This belongs in `spanlatch.Manager`. Add a cluster setting in a `spanlatch/settings.go` file: `kv.concurrency.long_latch_hold_duration` (or something better). Give it a default value of 3s.
- Add an `acquireTime time.Time` field to `spanlatch.Guard`. Set it in `Manager.Acquire` and `Manager.WaitUntilAcquired` after `wait` succeeds.
- In `Manager.Release`, after releasing latches: if `acquireTime` is set and it was acquired more than `kv.concurrency.long_latch_hold_duration` ago, log a warning.
- Use `log.Every(1 * time.Second)` to avoid log spam.

Jira issue: CRDB-33593
Epic CRDB-34227