jobs: retry jobs with exponential backoff #66889

craig[bot] merged 2 commits into cockroachdb:master
Conversation
Force-pushed from d405c37 to 3747e06
Force-pushed from 315bcea to 6111368
ajwerner
left a comment
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @sajjadrizvi)
pkg/jobs/adopt.go, line 51 at r3 (raw file):
SET claim_session_id = $1, claim_instance_id = $2 WHERE ((claim_session_id IS NULL) AND (status IN ` + claimableStatusTupleString + `))
Why these extra parens?
pkg/jobs/adopt.go, line 86 at r3 (raw file):
// Select only those jobs that have their next retry time before the current time.
// 2^62 is the max positive number.
// $5 is the base in exponential backoff calculation, and $6 is the max retry delay.
I think your placeholders are off now. Also, $6 isn't the "base" the way we usually talk about bases in exponentials but rather the "starting value".
pkg/jobs/config.go, line 112 at r3 (raw file):
retryBaseSetting = settings.RegisterDurationSetting(
	retryBaseSettingKey,
	"the base multiplier to calculate the exponential-backoff delay "+
nit: starting value. You could also supply a base if you wanted (it should be positive). Right now it's 2.
pkg/migration/migrations/retry_jobs_with_exponential_backoff.go, line 24 at r3 (raw file):
ctx context.Context, _ clusterversion.ClusterVersion, d migration.TenantDeps,
) error {
	const query = `
The problem with this approach is that it might return before the schema change is done. Imagine that we start this migration and issue this transaction. It will create the job to do the schema change. Now, while the schema change is running, we crash. When we come back around, the schema change will still be running but the below commands will no-op and we'll return happily. This is not a good place to be. We need to make sure that the table really is changed.
There are a bunch of ways to do this. One way is to check to make sure that the jobs table does not have any running jobs and if it does, to wait for them. You could do that at the top of the loop. There are two basic approaches you could take to that task:
- Use a `descs.Txn` to read the jobs table and check for the new columns or for a pending job
- Use SQL to do the same thing by using `crdb_internal.pb_to_json`

I don't have a strong preference.
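The wait-for-completion loop being suggested can be sketched generically as follows. This is illustrative Go, not the actual migration code: `waitForMigration` and the `done` predicate are hypothetical stand-ins for a check via `descs.Txn` or `crdb_internal.pb_to_json` that the new columns exist and no schema-change job is still running.

```go
package main

import (
	"fmt"
	"time"
)

// waitForMigration polls until done() reports that the schema change has
// completed, so the migration cannot return while the change is still in
// flight. done is a stand-in for the real column/job check.
func waitForMigration(done func() (bool, error), interval time.Duration, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		ok, err := done()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("schema change not complete after %d attempts", maxAttempts)
}

func main() {
	remaining := 3 // simulate a schema-change job that finishes after 3 polls
	err := waitForMigration(func() (bool, error) {
		remaining--
		return remaining <= 0, nil
	}, time.Millisecond, 10)
	fmt.Println("err:", err)
}
```

The key property is that the check runs at the top of each iteration, so a crash-and-restart lands back in the loop rather than returning before the table has actually changed.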
pkg/migration/migrations/retry_jobs_with_exponential_backoff.go, line 26 at r3 (raw file):
const query = `
BEGIN;
ALTER TABLE IF EXISTS system.jobs
The table will definitely exist so nix the first IF EXISTS.
pkg/sql/catalog/systemschema/system.go, line 897 at r3 (raw file):
StoreColumnNames:   []string{"claim_instance_id", "num_runs", "last_run"},
StoreColumnIDs:     []descpb.ColumnID{9, 10, 11},
KeySuffixColumnIDs: []descpb.ColumnID{1}, // TODO(sajjad): To discuss: What should be the values here?
The values should be all of the columns of the primary key which are not in the KeyColumnIDs. In this case, 1.
Force-pushed from 0472f20 to 1745705
KeySuffixColumnIDs: []descpb.ColumnID{1},
Version:            descpb.StrictIndexColumnIDGuaranteesVersion,
},
// TODO(sajjad): To discuss: Why are we creating another index instead of storing num_runs and last_runs in jobs_status_created_idx?
We wanted to create another index for two reasons. The main one is to make it a partial index. The other reason is to allow us to order it on `claim_session_id`.
Speaking of which, this is missing the predicate.
const addColsQuery = `ALTER TABLE system.jobs
  ADD COLUMN IF NOT EXISTS num_runs INT8 FAMILY claim,
  ADD COLUMN IF NOT EXISTS last_run TIMESTAMP FAMILY claim;`

const addIndexQuery = `
CREATE INDEX IF NOT EXISTS jobs_status_claim
I'm not so sure about these IF NOT EXISTS
const addIndexQuery = `
CREATE INDEX IF NOT EXISTS jobs_status_claim
Be careful about aligning the naming on the index.
sajjadrizvi
left a comment
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner)
pkg/jobs/adopt.go, line 51 at r3 (raw file):
Previously, ajwerner wrote…
Why these extra parens?
This is because when we intercept the query in the StatementFilter testing knob, the statement has these extra parens even if we don't add them. As we compare the received statement with this const in TestRegistrySettingUpdate, the statements don't match if we don't add these extra parens.
pkg/jobs/adopt.go, line 86 at r3 (raw file):
Previously, ajwerner wrote…
I think your placeholders are off now. Also, `$6` isn't the "base" the way we usually talk about bases in exponentials but rather the "starting value".
Done.
pkg/migration/migrations/retry_jobs_with_exponential_backoff.go, line 24 at r3 (raw file):
Previously, ajwerner wrote…
The problem with this approach is that it might return before the schema change is done. Imagine that we start this migration and issue this transaction. It will create the job to do the schema change. Now, while the schema change is running, we crash. When we come back around, the schema change will still be running but the below commands will no-op and we'll return happily. This is not a good place to be. We need to make sure that the table really is changed.
There are a bunch of ways to do this. One way is to check to make sure that the jobs table does not have any running jobs and if it does, to wait for them. You could do that at the top of the loop. There are two basic approaches you could take to that task:
- Use a `descs.Txn` to read the jobs table and check for the new columns or for a pending job
- Use SQL to do the same thing by using `crdb_internal.pb_to_json`

I don't have a strong preference.
As we sketched out together, we have taken the first approach.
pkg/migration/migrations/retry_jobs_with_exponential_backoff.go, line 40 at r5 (raw file):
Previously, ajwerner wrote…
Be careful about aligning the naming on the index.
Do you mean the ordering of the columns based on the schema in system.go? I didn't order them correctly.
I added IF NOT EXISTS to cater for transient failures that may happen after the index has been created but before completing the migration.
pkg/sql/catalog/systemschema/system.go, line 961 at r5 (raw file):
Previously, ajwerner wrote…
Speaking of which, this is missing the predicate.
Thanks for pointing that out. I have added the predicate.
}

func hasBackoffIndex(jobsTable catalog.TableDescriptor) bool {
	const (
func equalMessage(a, b proto.Message) error {
	aBytes, err := protoutil.Marshal(a)
	if err != nil {
		return err
	}
	bBytes, err := protoutil.Marshal(b)
	if err != nil {
		return err
	}
	if bytes.Equal(aBytes, bBytes) {
		return nil
	}
	return errors.Errorf("mismatch for messages of types %T&%T: %s", a, b,
		strings.Join(pretty.Diff(a, b), "\n"))
}
Force-pushed from 3e0a028 to d65f99c
Force-pushed from bd1b55c to 29bb38f
ajwerner
left a comment
I appreciate the stamina working through all the revisions. The change turned out well because of it.
Reviewed 1 of 7 files at r14, 1 of 31 files at r20, 8 of 9 files at r21, 5 of 5 files at r22, 4 of 4 files at r23.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @nihalpednekar and @sajjadrizvi)
pkg/jobs/executor_impl_test.go, line 32 at r23 (raw file):
argsFn := func(args *base.TestServerArgs) {
	args.Knobs.JobsTestingKnobs = NewTestingKnobsWithIntervals(
		time.Millisecond, time.Millisecond, time.Millisecond, time.Millisecond,
use the NewTestingKnobsWithShortIntervals here?
pkg/jobs/jobs_test.go, line 2888 at r23 (raw file):
jobConstructorCleanup := jobs.ResetConstructors()
args := base.TestServerArgs{Knobs: base.TestingKnobs{
	JobsTestingKnobs: jobs.NewTestingKnobsWithIntervals(
		time.Millisecond, time.Millisecond, time.Millisecond, time.Millisecond)},
use the NewTestingKnobsWithShortIntervals here?
sajjadrizvi
left a comment
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @ajwerner and @nihalpednekar)
pkg/jobs/executor_impl_test.go, line 32 at r23 (raw file):
Previously, ajwerner wrote…
use the `NewTestingKnobsWithShortIntervals` here?
Done.
Force-pushed from f8526f2 to 98c4b6a
This commit adds a mechanism to retry jobs with exponentially increasing delays. This is achieved through two new columns in the system.jobs table, last_run and num_runs. In addition, this commit adds cluster settings to control exponential-backoff parameters, initial delay and max delay, with corresponding settings `jobs.registry.retry.initial_delay` and `jobs.registry.retry.max_delay`. Finally, this commit adds a new partial index in the jobs table that improves the performance of periodic queries run by the registry in each node.

Release note (general change): The behavior for retrying jobs that fail due to a retriable error or due to job coordinator failure is now delayed using exponential backoff. Before this change, jobs that failed in a retryable manner would be resumed immediately on a different coordinator. This change reduces the impact of recurrently failing jobs on the cluster. It adds two new cluster settings that control this behavior: "jobs.registry.retry.initial_delay" and "jobs.registry.retry.max_delay", which respectively control the initial delay and the maximum delay between resumptions.

Fixes cockroachdb#44594
Fixes cockroachdb#65080
This commit adds a test to verify that relevant queries in the jobs registry use the new partial index.

Release note: None
Force-pushed from 98c4b6a to fa98ca1
TFTR. bors r=ajwerner

This PR was included in a batch that was canceled; it will be automatically retried.
66889: jobs: retry jobs with exponential backoff r=ajwerner a=sajjadrizvi

This commit adds a mechanism to retry jobs with exponentially increasing delays. This is achieved through two new columns in the system.jobs table, last_run and num_runs. In addition, this commit adds cluster settings to control exponential-backoff parameters, initial delay and max delay, with corresponding settings `jobs.registry.retry.initial_delay` and `jobs.registry.retry.max_delay`. Finally, this commit adds a new partial index in the jobs table that improves the performance of periodic queries run by the registry in each node.

Release note (general change): The behavior for retrying jobs that fail due to a retriable error or due to job coordinator failure is now delayed using exponential backoff. Before this change, jobs that failed in a retryable manner would be resumed immediately on a different coordinator. This change reduces the impact of recurrently failing jobs on the cluster. It adds two new cluster settings that control this behavior: "jobs.registry.retry.initial_delay" and "jobs.registry.retry.max_delay", which respectively control the initial delay and the maximum delay between resumptions.

Fixes #44594
Fixes #65080

68212: colexec: add optimized versions of aggregate window functions r=DrewKimball a=DrewKimball

**colexecwindow: add sliding window functionality to window framer**

This commit adds a method `slidingWindowIntervals` to `windowFramer` operators that returns a set of `toAdd` intervals and a set of `toRemove` intervals, which indicate the rows that should be added to the current aggregation and those that should be removed, respectively. This will be used to implement the sliding window optimization for aggregate window functions such as `sum`.

**colexecwindow: implement sliding window aggregator**

This commit supplies a new operator, `slidingWindowAggregator`, which is used for any window aggregate functions that implement the `slidingWindowAggregateFunc` interface. Rather than aggregating over the entire window frame for each row, the `slidingWindowAggregator` operator aggregates over the rows that are in the current window frame but were not in the previous, and removes from the aggregation the rows that were in the previous window frame but not the current. This allows window aggregate functions to be evaluated in linear rather than quadratic time.

**colexec: implement sliding window optimization for sum window function**

This commit modifies the `sum` aggregate window function to implement the `slidingWindowAggregateFunc` interface, which allows it to be used in a sliding window context. This yields linear rather than quadratic scaling in the worst case, and allows the vectorized engine to meet or exceed parity with the row engine for `sum` window functions.

**colexec: implement sliding window optimization for count window function**

This commit modifies the count aggregate operator to implement the `slidingWindowAggregateFunc` interface so that it can be used with the sliding window optimization.

**colexec: implement sliding window optimization for average window function**

This commit modifies the `average` aggregate operator to implement the `slidingWindowAggregateFunc` interface so that it can be used with the sliding window optimization.

**colexec: optimize count_rows window function**

This commit implements an optimized version of `count_rows` that calculates the size of the window frame as soon as the window frame is calculated. This means that most of the overhead for `count_rows` now comes from calculating the window frame, which is worst-case linear time (previously, the step to retrieve the size of the frame was quadratic, though with a small constant).

**colexec: optimize min and max window functions with default exclusion**

This commit modifies the `min` and `max` aggregate window functions to implement the `slidingWindowAggregateFunc` interface, which allows them to be used in a sliding window context. However, this is only usable when the window frame never shrinks, e.g. it always contains all rows from the previous frame. This commit also provides implementations of `min` and `max` for use when the window frame can shrink. The indices of the "next best" minimum or maximum values are stored in a priority queue that is updated for each row. Using the priority queue allows the `min` and `max` operators to avoid fully aggregating over the window frame even when the previous best value goes out of scope. Note that this implementation currently does not handle the case of a non-default exclusion clause, in which case we must fall back to the quadratic approach.

Fixes: #37039

Release note (performance improvement): The vectorized engine can now use the sliding-window approach to execute common aggregate functions as window functions. This allows aggregate window functions to be evaluated in linear rather than quadratic time. Currently, sum, count, average, min, and max are executed using this approach.

68433: sql: implemented placement restricted syntax for domiciling r=pawalt a=pawalt

This PR combines the existing restricted placement zone config logic with the stubbed syntax to create an end-to-end PLACEMENT RESTRICTED implementation. Note that the cluster setting for domiciling and telemetry will be added in a later PR.

Release note: None

68818: changefeedccl: mark avro format as no longer experimental r=[miretskiy,spiffyeng] a=HonoreDB

The avro format for changefeeds now supports all column types and has been in production use for several releases. We'll now allow format=avro rather than format=experimental_avro. The old string will remain supported because job payloads can persist across upgrades and downgrades.

Release note (enterprise change): changefeed avro format no longer marked experimental

Co-authored-by: Sajjad Rizvi <sajjad@cockroachlabs.com>
Co-authored-by: Drew Kimball <drewk@cockroachlabs.com>
Co-authored-by: Peyton Walters <peyton.walters@cockroachlabs.com>
Co-authored-by: Aaron Zinger <zinger@cockroachlabs.com>
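The sliding-window idea described for 68212 can be illustrated with a toy example. This is illustrative Go, not the actual colexecwindow code: `slidingSum` and its frame representation are invented for this sketch, and it assumes frame starts and ends never move backward.

```go
package main

import "fmt"

// slidingSum evaluates sum() as a window function over frames given as
// [start, end) index pairs. Instead of re-summing each frame, it keeps a
// running total, adding the rows that enter the frame and subtracting
// the rows that leave it, so each row is touched O(1) times overall.
func slidingSum(vals []int64, frames [][2]int) []int64 {
	out := make([]int64, len(frames))
	var sum int64
	prevStart, prevEnd := 0, 0
	for i, f := range frames {
		start, end := f[0], f[1]
		for j := prevEnd; j < end; j++ { // rows added on the right
			sum += vals[j]
		}
		for j := prevStart; j < start; j++ { // rows removed on the left
			sum -= vals[j]
		}
		out[i] = sum
		prevStart, prevEnd = start, end
	}
	return out
}

func main() {
	vals := []int64{1, 2, 3, 4, 5}
	// Frames for "ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING".
	frames := [][2]int{{0, 2}, {0, 3}, {1, 4}, {2, 5}, {3, 5}}
	fmt.Println(slidingSum(vals, frames)) // [3 6 9 12 9]
}
```

This is the same linear-vs-quadratic trade the PR describes: the naive approach re-aggregates every frame, while the sliding version only processes the `toAdd` and `toRemove` deltas between consecutive frames.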
Build failed (retrying...)

Build succeeded
This patch made

This test was made slow by cockroachdb#66889. Release note: None

Thanks for the heads up! #69103
68595: ui,admission: observability improvements for admission control r=sumeerbhola a=sumeerbhola
- Trace statements for latency incurred in admission queues.
- Certain admission control metrics are now included in the
overload dashboard. Specifically,
- Resource bottlenecks can be identified using the
"KV Admission Slots" and "KV Admission IO Tokens Exhausted
Duration Per Second" graphs.
- The rate at which admission control is admitting requests
is in the "Admission Work Rate" graphs and the corresponding
latency rate (for all requests) is in
"Admission Latency Rate". Dividing the latter by the former
gives the mean admission latency.
- The 75th percentile latency for those requests that actually
waited for admission is in the
"Admission Latency: 75th percentile" graph.
When admission control is off most of these graphs will be
empty or zero, and the total KV admission slots will be 1.
Informs #65955
Release note (ui change): admission control metrics are added to
Overload dashboard.
69103: sql: make TestFailureToMarkCanceledReversalLeadsToCanceledStatus faster r=sajjadrizvi a=ajwerner
This test was made slow by #66889.
Release note: None
Co-authored-by: sumeerbhola <sumeer@cockroachlabs.com>
Co-authored-by: Andrew Werner <awerner32@gmail.com>
Thanks! Maybe check that no other tests were slowed similarly by comparing package run times; I haven't done that.