release-22.1: sql,tracing: per-stmt probabilistic trace sampling#89774
Merged
irfansharif merged 1 commit into cockroachdb:release-22.1 on Oct 14, 2022
Conversation
Thanks for opening a backport. Please check the backport criteria before merging:

- If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied instead.
- Add a brief release justification to the body of your PR to justify this backport.
This commit introduces a backportable alternative to cockroachdb#82750 and cockroachdb#83020, to help solve cockroachdb#82896. It gives a palatable alternative to `sql.trace.stmt.enable_threshold`. See those PRs/issues for verbose commentary, but roughly we can do the following: pick a statement fingerprint, declare a sampling probability that controls when verbose tracing is enabled for it, and a latency threshold above which a trace is logged.

With a given statement rate (say 1000/s) and a given percentile we're trying to capture (say p99.9), we have N stmt/s in the 99.9th percentile (1/s in our example). We should be able to set a sampling probability P such that, with high likelihood (>95%), we capture at least one trace over the next S seconds. The sampling probability lets you control the overhead you're introducing for those statements in aggregate; dialing it up lets you lower S. You might want to do such a thing for infrequently executed statements.

This commit does the simplest thing: ask for all input through shoddy cluster settings and just log traces to our usual files. Below we outline an example of how to use these settings to catch outlier executions for writes to the history table in TPC-C. When using it, we did not see an overall increase in p99s as a result of the sampling. It also took only about 10s to capture this data, showing clearly that the latency was due to latch waits.

```
> SELECT encode(fingerprint_id, 'hex'),
         (statistics -> 'statistics' ->> 'cnt')::INT AS count,
         metadata ->> 'query' AS query
    FROM system.statement_statistics
   ORDER BY count DESC LIMIT 10;

      encode       | count | query
-------------------+-------+----------------------------------------------------------------------------------------------------------------------
  ...
  4e4214880f87d799 | 2680  | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__)
```

```
> SET CLUSTER SETTING trace.fingerprint = '4e4214880f87d799';  -- fingerprint
> SET CLUSTER SETTING trace.fingerprint.probability = 0.01;    -- 1% sampling probability
> SET CLUSTER SETTING trace.fingerprint.threshold = '90ms';    -- latency threshold
```

```
 0.521ms  0.005ms  event:kv/kvserver/concurrency/concurrency_manager.go:260 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] acquiring latches
 0.532ms  0.011ms  event:kv/kvserver/spanlatch/manager.go:532 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] waiting to acquire read latch /Table/109/1/105/3/519/0@0,0, held by write latch /Table/109/1/105/3/519/0@1654849354.483984000,0
99.838ms 99.306ms  event:kv/kvserver/concurrency/concurrency_manager.go:301 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] scanning lock table for conflicting locks
99.851ms  0.013ms  event:kv/kvserver/replica_read.go:251 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] executing read-only batch
```

Compare this to `sql.trace.stmt.enable_threshold`, which enables verbose tracing for all statements with 100% probability. It introduces far too much overhead to be used in reasonable production clusters, and the overhead often masks the very problems we're looking to capture.

Release note (general change): We introduced three cluster settings to collect trace data for outlier executions with low overhead. This is only going to be available in 22.1; in 22.2 and beyond we have other mechanisms to collect outlier traces. Traces come in handy when investigating latency spikes, and these settings are intended to supplant most uses of `sql.trace.stmt.enable_threshold`. That setting enables verbose tracing for all statements with 100% probability, which can cause a lot of overhead in production clusters, as well as a lot of logging pressure.
Instead we introduce the following:

- `trace.fingerprint`
- `trace.fingerprint.probability`
- `trace.fingerprint.threshold`

Put together (all have to be set), these enable tracing only for the statement with the given hex-encoded fingerprint, do so probabilistically (with whatever probability `trace.fingerprint.probability` is set to), and log the trace only if the latency threshold is exceeded (configured using `trace.fingerprint.threshold`). To obtain a hex-encoded fingerprint, consider looking at the contents of `system.statement_statistics`. For example:

```
> SELECT encode(fingerprint_id, 'hex'),
         (statistics -> 'statistics' ->> 'cnt')::INT AS count,
         metadata ->> 'query' AS query
    FROM system.statement_statistics
   ORDER BY count DESC LIMIT 10;

      encode       | count | query
-------------------+-------+----------------------------------------------------------------------------------------------------------------------
  ...
  4e4214880f87d799 | 2680  | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__)
```
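The combined gating these three settings imply can be sketched roughly as follows. This is a hedged Python sketch with hypothetical names (`maybe_trace`, `run_stmt`), not the actual executor code: trace only the matching fingerprint, only for a sampled fraction of executions, and emit the trace only when the statement ran longer than the threshold.

```python
import random
import time

def maybe_trace(stmt_fingerprint, run_stmt,
                target_fingerprint, probability, threshold_s):
    """Sketch of the gating: fingerprint match AND a coin flip with the
    configured probability enable verbose tracing; the trace is logged
    only if the statement exceeded the latency threshold."""
    sampled = (stmt_fingerprint == target_fingerprint
               and random.random() < probability)
    start = time.monotonic()
    result, trace = run_stmt(verbose_tracing=sampled)
    elapsed = time.monotonic() - start
    if sampled and elapsed > threshold_s:
        print(f"outlier trace ({elapsed * 1000:.3f}ms): {trace}")
    return result
```

Note that the overhead of verbose tracing is paid only by the sampled executions of the one targeted fingerprint, which is what keeps aggregate p99s flat in the TPC-C example above.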
(Bump.)
aayushshah15 approved these changes (Oct 14, 2022)
Release justification: adds helpful instrumentation possibilities for latency investigations.
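As a back-of-the-envelope aid for picking `trace.fingerprint.probability`, the capture-time arithmetic from the description can be sketched as follows. `seconds_to_capture` is a hypothetical helper (not part of CockroachDB), and it assumes outlier executions arrive independently at a steady rate:

```python
import math

def seconds_to_capture(stmt_rate, percentile, sample_prob, confidence=0.95):
    """Seconds of sampling needed to log at least one outlier trace with
    the given confidence. Outliers/sec N = stmt_rate * (1 - percentile);
    each outlier is traced with probability P = sample_prob, so the
    chance of >= 1 capture over S seconds is 1 - (1 - P)**(N * S).
    Solving for S gives ln(1 - confidence) / (N * ln(1 - P))."""
    outliers_per_sec = stmt_rate * (1 - percentile)
    return math.log(1 - confidence) / (outliers_per_sec * math.log(1 - sample_prob))

# 1000 stmt/s, chasing the p99.9 tail (~1 outlier/s), 1% sampling:
print(f"{seconds_to_capture(1000, 0.999, 0.01):.0f}s")  # prints "298s"
```

Under these assumptions, a 1% probability needs roughly five minutes of sampling for a 95% chance of one capture; dialing it up to 10% (more aggregate overhead) brings that under 30 seconds, which matches the description's trade-off between overhead and S.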