jobs: write payload and progress to system.job_info #97218
craig[bot] merged 2 commits into cockroachdb:master
Conversation
c07f014 to 21401f2
friendly ping @dt! Should we wait for branch cut before introducing these version gates, considering the read side of this change might take longer and be more involved?
TestingKnobs: base.TestingKnobs{
    JobsTestingKnobs: jobs.NewTestingKnobsWithShortIntervals(),
    UpgradeManager: &upgradebase.TestingKnobs{
        DontUseJobs: true,
@fqazi this test mocks the cluster versions the server steps through. Because of this, it doesn't run the migration that introduces the system.job_info table. Since that migration doesn't run, any migration that starts a job will fail because a write to the jobs info table will fail. Sanity checking that not using jobs for this test is fine?
13,AlterTableAddForeignKey/alter_table_add_2_foreign_keys
15,AlterTableAddForeignKey/alter_table_add_3_foreign_keys
11,AlterTableAddForeignKey/alter_table_add_foreign_key_with_3_columns
14,AlterRole/alter_role_with_1_option
@fqazi can I also bug you to lgtm these changes? I looked at one of the jaeger json files and the additional roundtrips are the delete and write to the jobs info table.
fqazi left a comment
Reviewed 7 of 7 files at r3, 9 of 9 files at r4, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @ajwerner, @dt, and @miretskiy)
pkg/ccl/kvccl/kvtenantccl/tenant_upgrade_test.go line 258 at r4 (raw file):
Previously, adityamaru (Aditya Maru) wrote…
@fqazi this test mocks the cluster versions the server steps through. Because of this, it doesn't run the migration that introduces the system.job_info table. Since that migration doesn't run, any migration that starts a job will fail because a write to the jobs info table will fail. Sanity checking that not using jobs for this test is fine?
I think that should be okay, assuming there isn't any coverage we are losing on the jobs side for unhappy paths with upgrades.
TFTR! bors r=dt,fqazi
Build succeeded: |
jobs: introduces an InfoStorage wrapper
This change introduces an InfoStorage wrapper that
is initialized in the context of a Job and optionally a txn.
This wrapper can be used to read/write rows in
the system.job_info table that correspond to the
Job.
Additionally, the write path of InfoStorage checks that
the caller updating the row is part of the currently running
instance of the job. We do this by checking that the job's sqlliveness
session is the same as the one persisted in the job record.
Release note: None
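To make the session check described above concrete, here is a minimal in-memory sketch of the write-path guard. All names here (SessionID, JobRecord, the map standing in for system.job_info) are illustrative simplifications, not CockroachDB's actual types; the real InfoStorage operates inside a transaction against the system table.

```go
package main

import (
	"errors"
	"fmt"
)

// SessionID stands in for a sqlliveness session identifier
// (a hypothetical simplification of sqlliveness.SessionID).
type SessionID string

// JobRecord is a simplified stand-in for a row in system.jobs,
// carrying the claim session persisted in the job record.
type JobRecord struct {
	ID      int64
	Session SessionID
}

// InfoStorage wraps access to system.job_info rows for one job.
// Here an in-memory map stands in for the table; the real wrapper
// would also carry an optional txn.
type InfoStorage struct {
	job     *JobRecord
	session SessionID // session of the instance attempting the write
	table   map[string][]byte
}

// Write stores an info row, but only if the caller still holds the
// job's claim, i.e. its session matches the one in the job record.
func (i *InfoStorage) Write(key string, value []byte) error {
	if i.session != i.job.Session {
		return errors.New("job claim session mismatch: not the running instance")
	}
	i.table[fmt.Sprintf("%d/%s", i.job.ID, key)] = value
	return nil
}

// Read returns a previously written info row for this job, if any.
func (i *InfoStorage) Read(key string) ([]byte, bool) {
	v, ok := i.table[fmt.Sprintf("%d/%s", i.job.ID, key)]
	return v, ok
}

func main() {
	job := &JobRecord{ID: 42, Session: "session-a"}
	store := map[string][]byte{}

	// The instance that holds the claim can write.
	owner := &InfoStorage{job: job, session: "session-a", table: store}
	if err := owner.Write("payload", []byte("payload-bytes")); err != nil {
		panic(err)
	}

	// A stale instance whose session no longer matches is rejected.
	stale := &InfoStorage{job: job, session: "session-b", table: store}
	err := stale.Write("payload", []byte("stale-bytes"))
	fmt.Println("stale write rejected:", err != nil)

	v, _ := owner.Read("payload")
	fmt.Println("stored value:", string(v))
}
```

The point of the guard is that an old node that lost its claim (e.g. after a lease transfer) cannot clobber info rows written by the job's current coordinator.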
jobs: write payload and progress to system.job_info
Newly created jobs will write their Payload and Progress
to the system.job_info table, in addition to the job record
that is already created in the system.jobs table. Running jobs
that update their Payload or Progress in system.jobs, during their
execution, will also write this updated state to the job_info table.
Double-writing the Payload and Progress of jobs is the first step
to migrating existing job state to the job_info table.
Follow-up changes will:
1. Add a migration to backfill job_info with the Payload and Progress
of all job records in the system.jobs table.
2. Modify read methods in the registry to pull the Payload and Progress
from the job_info table instead of the jobs table.
Release note: None
Epic: CRDB-24874
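The double-write pattern in this commit can be sketched as below. This is a toy in-memory model, not the real SQL writes: the info key names ("legacy_payload", "legacy_progress") and the store type are assumptions for illustration only.

```go
package main

import "fmt"

// Payload and Progress are simplified stand-ins for the job's
// protobuf state.
type Payload struct{ Description string }
type Progress struct{ Fraction float64 }

// store is an in-memory stand-in for the two system tables.
type store struct {
	jobsPayload  map[int64]Payload  // payload column in system.jobs
	jobsProgress map[int64]Progress // progress column in system.jobs
	jobInfo      map[string]string  // system.job_info, keyed "<jobID>/<key>"
}

// Illustrative info keys; not the exact names used by CockroachDB.
const (
	payloadKey  = "legacy_payload"
	progressKey = "legacy_progress"
)

// setPayload double-writes: it updates the existing system.jobs
// column and mirrors the same state into system.job_info.
func (s *store) setPayload(jobID int64, p Payload) {
	s.jobsPayload[jobID] = p
	s.jobInfo[fmt.Sprintf("%d/%s", jobID, payloadKey)] = p.Description
}

// setProgress does the same for progress updates during execution.
func (s *store) setProgress(jobID int64, pr Progress) {
	s.jobsProgress[jobID] = pr
	s.jobInfo[fmt.Sprintf("%d/%s", jobID, progressKey)] = fmt.Sprintf("%.2f", pr.Fraction)
}

func main() {
	s := &store{
		jobsPayload:  map[int64]Payload{},
		jobsProgress: map[int64]Progress{},
		jobInfo:      map[string]string{},
	}
	// Every update lands in both places, so readers can be migrated
	// to job_info later without losing state.
	s.setPayload(7, Payload{Description: "IMPORT"})
	s.setProgress(7, Progress{Fraction: 0.5})
	fmt.Println(s.jobsPayload[7].Description, s.jobInfo["7/legacy_payload"])
	fmt.Println(s.jobsProgress[7].Fraction, s.jobInfo["7/legacy_progress"])
}
```

Keeping both locations in sync is what makes the later steps safe: the backfill migration only has to cover jobs created before this change, and the read path can switch over once both are in place.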