importer: AddSSTable sent after import cancellation #91418
Description
Describe the problem
We've observed AddSSTable requests that were applied after an IMPORT job's OnFailOrCancel callback attempted to clear the data added by the import.
As a result, the table contains data from the cancelled IMPORT.
We believe this happens because the import processor on the remote node is still running. While the import processor will eventually see a context cancellation, occasionally that context cancellation isn't seen until after another node has already adopted the IMPORT job and run the OnFailOrCancel hook.
We've identified a few possible causes of this:
- We have at least one goroutine that we do not wait for. Despite the comment in that code, in the case of cancellation we've observed that goroutine outliving the processor (`cockroach/pkg/sql/importer/import_processor.go`, lines 179 to 185 in 8a36b94):

  ```go
  // We don't have to worry about this go routine leaking because next we loop over progCh
  // which is closed only after the go routine returns.
  go func() {
      defer close(idp.progCh)
      idp.summary, idp.importErr = runImport(ctx, idp.flowCtx, &idp.spec, idp.progCh, idp.seqChunkProvider)
  }()
  ```
- Since jobs: Clear out claim info when pausing #89014, `OnFailOrCancel` is eligible for execution on another node immediately after the Resumer's context has been cancelled. We've observed via logs that `OnFailOrCancel` runs before the Resumer has exited.
- Even with (1) and (2) fixed, we've observed that `dsp.Run` returns before processors have exited. Typically, a processor will shut down or observe the cancelled context before it is able to make a successful AddSSTable request, but occasionally it manages to send one anyway:

  ```go
  dsp.Run(ctx, planCtx, nil, p, recv, &evalCtxCopy, nil /* finishedSetupFn */)
  ```
Possible Solutions
The following are possible solutions we've discussed in the past for this problem:
- Review our distSQL flow and ensure we are using the correct contexts and implementing the correct callbacks, so that shutdown is as orderly as possible.
- Add new code in the job coordinator to explicitly wait for an affirmative shutdown from all processors. This would certainly help on the happy path, but it wouldn't cover all cases, since the node responsible for doing the waiting may itself fail before the wait completes.
- Add a safety timeout before issuing any DeleteRange requests.
- Periodically broadcast a timestamp to all processors that they must use when writing AddSSTables (rather than allowing them to use the batch timestamps). The node responsible for cancellation would then know the last timestamp at which nodes could possibly have been writing.
- For `IMPORT INTO` on empty tables, as a special case, we could write into a different index and then swap over to that index on success.
- A new KV feature, a "span admin lock", which would lock a span for admin operations. Any AddSSTable request that arrived with a wrong or old lock token would be rejected.
To Reproduce
The failure can be reproduced with the unit test in #91407 when run under stress for a few minutes.
Jira issue: CRDB-21252