ddl: Fix unstable DROP TABLE/FLASHBACK TABLE/RECOVER TABLE #8422

ti-chi-bot[bot] merged 16 commits into pingcap:master
CalvinNeo left a comment:

Can this PR fix #1664? It seems that problem 1 is #1664 (comment) and problem 2 is #1664 (comment). If so, please also close #1664.
> ## drop table arrive tiflash before ddl and insert, and do recover, check the data is not lost
> ## because we want to test we actually drop the table, so please not use the same name for this table

This case is moved to fullstack-test2/ddl/flashback/recover_table.test
[APPROVAL NOTIFIER] This PR is APPROVED. This pull request has been approved by: hongyunyan, Lloyd-Pottiger.
In response to a cherrypick label: new pull request created to branch
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
What problem does this PR solve?
Issue Number: close #8395, close #1664, close #3777
Problem Summary:
Problem 1:

There is a chance that a raft snapshot or raft command of a table arrives after the table has been dropped. TiFlash ignores the earlier `DROP TABLE` because the storage instance has not been created yet. The storage instance is then created as "non tombstone" when the late raft snapshot or raft command arrives, and it will never be physically dropped until TiFlash restarts.
Problem 2:

The second time `syncTableSchema` calls `trySyncTableSchema`, after the table-id-mapping is up to date, it should use `mvcc get` to ensure it can use the table schema of the tombstone table to create the Storage instance or decode new raft logs; otherwise the data of some newly added columns will not be decoded. Previously `updateTiFlashReplica` updated the table info with the latest columns by accident, so TiFlash only passed the related tests sometimes.
What is changed and how it works?

This is a PR following #8421.
The logic changes are in these commits: https://github.com/pingcap/tiflash/pull/8422/files/d88f6b6f4ae4c8026b969cd9c5ae50924b179529..1fe991997055f755aea89cd2e2bdb8ab26a848bf
- The first time `syncTableSchema` calls `trySyncTableSchema`, it will not use `mvcc get`, so that we can detect whether we need to update the table-id-mapping.
- The second time `syncTableSchema` calls `trySyncTableSchema`, after the table-id-mapping is up to date, it will use `mvcc get` to ensure it can use the table schema of the tombstone table to create the Storage instance or decode new raft logs.
- `SchemaGetter::getTableInfoImpl` does not check the existence of `db_key`, so that we can still get the table info after the database is dropped (getting ready for `FLASHBACK DATABASE ... TO ...`).
- When the client-c `mvcc get` shows the table is tombstone, we will create the storage instance with a tombstone timestamp, so that it can be physically dropped after the GC time.
- `applySetTiFlashReplica` should only update the tiflash replica info instead of replacing the whole table info; otherwise some later DDLs (changing partitions, etc.) will not be executed.
- Refactor the `recover table` code using early returns.