admission,goschedstats: add metrics for non work-conserving CPU behavior #96511
Open
sumeerbhola wants to merge 1 commit into cockroachdb:master
Conversation
Force-pushed from 37a5a72 to 738829c
abarganier reviewed on Feb 7, 2023
New metrics in general LGTM, just one small nit re: some additional clarification in the HELP text.
(not commenting on specific ways we're recording/using the metrics).
Reviewable status:
complete! 0 of 0 LGTMs obtained (waiting on @irfansharif and @sumeerbhola)
pkg/util/admission/granter.go line 652 at r1 (raw file):
}
// NB: Both the following metrics do not look at the SQL queues, since if
// the KV queue is empty the only throttling on the SQL queues is via grant
nit: If neither of these metrics looks at SQL queues, I think it'd be nice to include that information in the help text for both metrics.
Code quote:
// NB: Both the following metrics do not look at the SQL queues, since if
// the KV queue is empty the only throttling on the SQL queues is via grant
// chaining, which should only cause delays if there is no CPU available.

Force-pushed from 5ab18e7 to 3167e0a
We have encountered scenarios with a large number of goroutines, which often causes an increase in runnable goroutines while mean CPU utilization stays low (sometimes as low as 25%). Since the runnable goroutine count is non-zero, CPU utilization must be 100% at very short time scales of a few milliseconds. Because admission control (AC) samples the runnable goroutine count every 1ms, in order to react at such short time scales, we do see some drop in the slot count in some of these scenarios, and see queueing in the AC queues. The concern raised by such queueing is whether AC is making the situation worse in its attempt to shift some queueing from the goroutine scheduler into the AC queue. Note that since admission.kv_slot_adjuster.overload_threshold is set to 32, AC does allow for significant queueing in the goroutine scheduler too, in an attempt to be work conserving. But it is still possible that the slot adjustment logic is too slow to react and does not allow enough concurrency to keep the CPUs busy.

This PR adds two metrics to measure this behavior. These are still subject to sampling errors, but they are tied to the 1ms sampling of CPULoad. admission.granter.cpu_non_work_conserving_duration.kv is incremented by (sampling duration * number of idle Ps) if there are requests waiting in the AC KV (CPU) queue. Since we have observed idle Ps even when there are runnable goroutines (which is not the fault of AC), there is another metric, admission.granter.cpu_non_work_conserving_due_to_admission_duration.kv, which discounts the number of idle Ps by the number of runnable goroutines.

These metrics give a sense of how much CPU capacity we are wasting per second. For example, if the first metric has a value of 0.5s/s and we have 10 CPUs, so 10s/s of capacity, we are wasting 5% of the CPU. If the second metric is 0.3s/s, then 3% of that CPU wastage can be attributed to AC queueing not behaving well. That is, one may expect CPU utilization to increase by 3% if AC is switched off. These metrics don't tell us the real latency impact of turning off AC, but the expectation is that if the reduction in CPU utilization due to AC, divided by the observed CPU utilization (with AC on), is very small, the latency benefit of turning off AC will also be small.

Epic: none
Fixes: cockroachdb#96495
Force-pushed from 3167e0a to 26e29ce