ddl: fix ExistsTableRow and add tests for skip reorg checks#57778
ti-chi-bot[bot] merged 7 commits into pingcap:master from
Conversation
Hi @tangenta. Thanks for your PR. PRs from untrusted users cannot be marked as trusted. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
pkg/ddl/reorg.go
Outdated
err = result.Next(ctx.ddlJobCtx, chk)
func existsTableRow(ctx *ReorgContext, store kv.Storage, tbl table.PhysicalTable) (bool, error) {
	found := false
	err := iterateSnapshotKeys(ctx, store, kv.PriorityLow, tbl.RecordPrefix(), math.MaxUint64, nil, nil,
A safer way is to fetch and use a ts from pd like pdCli.GetTS(ctx). math.MaxUint64 cannot provide snapshot semantics.
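For reference, a PD-allocated ts as suggested here comes back as a physical/logical pair. Below is a minimal sketch of how such a pair composes into a single uint64, mirroring the 18-logical-bit layout used by client-go's oracle.ComposeTS; the names composeTS and extractPhysical are illustrative, not TiDB's actual code:

```go
package main

import "fmt"

// logicalBits is the number of low bits reserved for the logical counter
// in a TSO timestamp; the physical part is a millisecond wall-clock time.
const logicalBits = 18

// composeTS combines the physical/logical pair (as returned by a call
// like pdCli.GetTS(ctx)) into a single timestamp.
func composeTS(physical, logical int64) uint64 {
	return uint64((physical << logicalBits) + logical)
}

// extractPhysical recovers the millisecond wall-clock part of a TSO.
func extractPhysical(ts uint64) int64 { return int64(ts >> logicalBits) }

func main() {
	// In real code the pair would come from a PD client, not constants.
	physical, logical := int64(1700000000000), int64(42)
	ts := composeTS(physical, logical)
	fmt.Println(ts, extractPhysical(ts) == physical)
}
```

Unlike math.MaxUint64, a ts obtained this way is a real point on PD's timeline, so it carries snapshot semantics.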
I think we don't need snapshot semantics here. This function determines if we can use the fast path for data reorganization:
- if it returns false but the real answer is true, we simply won't enter the fast path
- if it returns true but the real answer is false, we enter the fast path, and the subsequent insertions are handled by the dual-write mechanism (txn mode) or the temp-index-merge mechanism (ingest mode). This is covered by the test https://github.com/pingcap/tidb/pull/57778/files#diff-2998ac87ef4adb20bfc6bf8ce58ace7015800e57f330bacad08cba4b90350300R908
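The fallback argument above can be sketched as a toy model (not TiDB's actual code; buildIndex and the row sets are invented for illustration): rows that arrive after a stale emptiness check are dual-written into a temporary structure and merged into the final index, so a wrong fast-path decision loses no rows.

```go
package main

import "fmt"

// buildIndex is a toy model of the temp-index-merge safety argument:
// backfill from a (possibly stale) snapshot, record concurrent inserts
// separately, then merge so every row ends up indexed.
func buildIndex(snapshotRows, concurrentInserts []int) map[int]bool {
	index := map[int]bool{}
	for _, r := range snapshotRows { // backfill from the snapshot
		index[r] = true
	}
	temp := map[int]bool{}
	for _, r := range concurrentInserts { // dual-written during reorg
		temp[r] = true
	}
	for r := range temp { // merge the temp index into the final index
		index[r] = true
	}
	return index
}

func main() {
	idx := buildIndex([]int{1, 2}, []int{3})
	fmt.Println(len(idx)) // all three rows are indexed
}
```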
Another safety consideration: in the transaction layer and TiKV, the use of math.MaxUint64 is only expected for primary-key point-get autocommit reads. Other scenarios should not use math.MaxUint64 for reads. It's better to stick to PD-allocated timestamps instead.
I think we don't need snapshot semantics here.
Did you mean that you don't need a consistent snapshot, so you use a weaker, looser way of reading?
Reading with MaxUint64 is much more special than you, and many other developers, might think, at least since async commit was implemented. It may be better to simply weigh whether the overhead of getting a new ts from PD is really unacceptable to you.
reading with MaxUint64 seems to be much more special
can you give more detail? or some doc link?
the use of math.MaxUint64 is only expected for primary key point-get autocommit reads. Other scenarios should not use math.MaxUint64 for reads
In my understanding, a snapshot read is not a pure "read" at TiKV: it updates the max TS, which has side effects on async commit. #57769 https://cn.pingcap.com/blog/async-commit-principle/
In my understanding, snapshot reading is not a pure "reading" at TiKV. It has side effects on async commit. #57769
that's too bad: a read operation corrupts the entire cluster
can you give more detail? or some doc link?
There isn't any document dedicated to this as far as I know. There is a lot of special handling for reading with MaxUint64: for example, TiKV skips updating the max_ts for it, and there is special lock-ignoring logic when checking read-write conflicts, etc. It has also caused difficulty for us in development and optimization work.
snapshot reading is not a pure "reading" at TiKV.
The max_ts in async commit / 1PC is an approach to let writing transactions avoid affecting other reading operations. It tracks the latest timestamp allocated by PD that has been used on this TiKV node. It's quite doubtful to me whether this is a reason to say that the reading is not "pure".
that's too bad
The design is based on the fact that reads and writes are strongly ordered by PD's globally unique and monotonic TSO allocation, and that writing cannot change a snapshot that has already been read; otherwise the snapshot is no longer repeatable-readable, and then it's no longer a snapshot. You may think this makes async commit transactions commit at an incorrect future version so that their results are not readable, and that things would be better if async commit did not exist. But that's the choice async commit transactions make to avoid breaking your snapshot. If the transaction were committed in 2PC mode, which is unrelated to the max_ts mechanism, there would still be another form of corruption: your snapshot would be changed and would no longer be repeatable-readable. If you believe something bad is happening here, I think the bad part is that the strong ordering that is supposed to be provided by TSO has been broken.
By "bad" I mean that a read-only operation has side effects, and it corrupts the entire cluster.
A read-only operation shouldn't have such effects as a common design practice, AFAIK.
Codecov Report. Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #57778 +/- ##
================================================
+ Coverage 72.8370% 74.8316% +1.9945%
================================================
Files 1677 1722 +45
Lines 464191 474115 +9924
================================================
+ Hits 338103 354788 +16685
+ Misses 105210 97131 -8079
- Partials 20878 22196 +1318
D3Hunter
left a comment
rest lgtm except existing comments
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: crazycs520, D3Hunter, MyonKeminta. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
@tangenta: Cannot trigger testing until a trusted user reviews the PR and leaves an
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
In response to a cherrypick label: new pull request created to branch |
* skip TestIndexJoin31494
* fix
* global sort: add boundaries to split keys when generating plan (pingcap#58323) (pingcap#58356)
* statistics: get right max table id when to init stats (pingcap#58280) (pingcap#58298)
* executor: Fix the parse problematic slow log panic issue due to empty …
* statstics: trigger evict by the timer (pingcap#58027) (pingcap#58268)
* br: make table existence check unified on different br client (pingcap#58211) (pingcap#58262)
* log backup: use global checkpoint ts as source of truth (pingcap#58135) (pingcap#58265)
* executor: skip execution when build query for VIEW in I_S (pingcap#58203) (pingcap#58236)
* statistics: copy stats when to update it for avoiding data race (pingcap#5810…
* domain,infoschema: make infoschema activity block GC safepoint advanci…
* planner: handle panic when loading bindings at startup (pingcap#58017) (pingcap#58035)
* statistics: right deal with error for reading stats from storage (pingcap#58…
* statistics: lite init used wrong value to build table stats ver (pingcap#5802…
* lightning, ddl: set TS to engineMeta after ResetEngineSkipAllocTS (pingcap#5…
* *: avoid unlock of unlocked mutex panic on TableDeltaMap (pingcap#57799) (pingcap#57997)
* ddl: handle context done after sending DDL jobs (pingcap#57945) (pingcap#57989)
* *: activate txn for query on infoschema tables (pingcap#57937) (pingcap#57951)
* lightning: add PK to internal tables (pingcap#57480) (pingcap#57932)
* statistics: correct behavior of non-lite InitStats and stats sync load…
* statistics: avoid stats meta full load after table analysis (pingcap#57756) (pingcap#57911)
* dumpling: use I_S to get table list for TiDB and add database to WHERE…
* br: fix insert gc failed due to slow schema reload (pingcap#57742) (pingcap#57907)
* statistics: do not record historical stats meta if the table is locked…
* metrics: remove the filled colors (pingcap#57838) (pingcap#57866)
* planner: use TableInfo.DBID to locate schema (pingcap#57785) (pingcap#57870)
* *: support cancel query like 'select * from information_schema.tables'…
* session: make `TxnInfo()` return even if process info is empty (pingcap#57044) (pingcap#57161)
* ddl: Fixed partitioning a non-partitioned table with placement rules (…
* *: Reorg partition fix delete ranges and handling non-clustered tables…
* executor: fix query infoschema.tables table_schema/table_name with fil…
* ddl: check context done in isReorgRunnable function (pingcap#57813) (pingcap#57820)
* ddl: fix ExistsTableRow and add tests for skip reorg checks (pingcap#57778) (pingcap#57801)
* *: Fix for TRUNCATE PARTITION and Global Index (pingcap#57724)
* br: prompt k8s.io/api version (pingcap#57791) (pingcap#57802)
* statistics: fix some problem related to stats async load (pingcap#57723) (pingcap#57775)
* expression: fix wrong calculation order of `radians` (pingcap#57672) (pingcap#57688)
* statistics: rightly deal with timout when to send sync load (pingcap#57712) (pingcap#57751)
* ddl: `tidb_scatter_region` variable supports setting value in both upp…
* planner: fix that vector index output empty result when pk is non-int …
* ddl: dynamically adjusting the max write speed of reorganization job (…
* executor: fix hang in hash agg when exceeding memory limit leads to pa…
* statistics: use infoschema api to get table info (pingcap#57574) (pingcap#57614)
* planner: Use realtimeRowCount when all topN collected (pingcap#56848) (pingcap#57689)
* statistics: handle deleted tables correctly in the PQ (pingcap#57649) (pingcap#57674)
* backup: reset timeout on store level (pingcap#55526) (pingcap#57667)
* planner/core: fix a wrong privilege check for CTE & UPDATE statement (…
What problem does this PR solve?
Issue Number: close #57769
Problem Summary:
What changed and how does it work?
Use the transaction key-value interface instead to check if a table is empty.
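A minimal sketch of that check over a toy sorted key space (the real code goes through the kv snapshot iterator, as in the existsTableRow diff above; the function here is a simplified stand-in): seek to the table's record prefix and test whether the first reachable key still carries that prefix.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// existsTableRow is a toy version of the emptiness check: in a sorted
// key space, a table is non-empty iff the first key at or after its
// record prefix still starts with that prefix.
func existsTableRow(keys []string, recordPrefix string) bool {
	sort.Strings(keys)
	// "Seek": find the first key >= recordPrefix, like an iterator seek.
	i := sort.SearchStrings(keys, recordPrefix)
	return i < len(keys) && strings.HasPrefix(keys[i], recordPrefix)
}

func main() {
	// Illustrative keys: table t1 has two rows, table t2 has an index entry.
	keys := []string{"t1_r1", "t1_r2", "t2_i3"}
	fmt.Println(existsTableRow(keys, "t1_r")) // true
	fmt.Println(existsTableRow(keys, "t3_r")) // false
}
```

The real implementation only needs the first matching key, so it can stop the iteration immediately once one is found.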
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.