
Storage: Improve small partition table read performance by limit concurrency (#10489)#10499

Merged
ti-chi-bot[bot] merged 3 commits into pingcap:release-8.5 from ti-chi-bot:cherry-pick-10489-to-release-8.5
Oct 23, 2025

Conversation

@ti-chi-bot
Member

This is an automated cherry-pick of #10489

What problem does this PR solve?

Issue Number: close #10487

Problem Summary:

If a PartitionTableScan involves many partitions but each partition has only 1 segment, then DeltaMergeStore::read will generate num_partitions * num_streams UnorderedSourceOp. Because each partition has only 1 segment, its SegmentReadTaskPool will only have 1 concurrency for reading data from disk.

```cpp
size_t final_num_stream
    = enable_read_thread ? std::max(1, num_streams) : std::max(1, std::min(num_streams, tasks.size()));
auto read_mode = getReadMode(db_context, is_fast_scan, keep_order, executor);
const auto & final_columns_to_read
    = executor && executor->extra_cast ? *executor->columns_after_cast : columns_to_read;
auto read_task_pool = std::make_shared<SegmentReadTaskPool>(
    extra_table_id_index,
    final_columns_to_read,
    executor,
    start_ts,
    expected_block_size,
    read_mode,
    std::move(tasks),
    after_segment_read,
    log_tracing_id,
    enable_read_thread,
    final_num_stream,
    dm_context->scan_context->keyspace_id,
    dm_context->scan_context->resource_group_name);
dm_context->scan_context->read_mode = read_mode;
if (enable_read_thread)
{
    // One UnorderedSourceOp per stream, even if the pool holds only 1 segment task.
    for (size_t i = 0; i < final_num_stream; ++i)
    {
        group_builder.addConcurrency(std::make_unique<UnorderedSourceOp>(
            exec_context,
            read_task_pool,
            final_columns_to_read,
            extra_table_id_index,
            log_tracing_id,
            runtime_filter_list,
            rf_max_wait_time_ms));
    }
}
```

To avoid OOM issues when running queries on a large PartitionTableScan (#8507), MultiplexInputStream and ConcatBuilderPool process the streams/source ops of partition tables one by one.

```cpp
void add(PipelineExecGroupBuilder & group_builder)
{
    RUNTIME_CHECK(group_builder.groupCnt() == 1);
    for (size_t i = 0; i < group_builder.concurrency(); ++i)
    {
        // Deal this partition's builders round-robin into the pool,
        // continuing from where the previous partition stopped.
        pool[pre_index++].push_back(std::move(group_builder.getCurBuilder(i)));
        if (pre_index == pool.size())
            pre_index = 0;
    }
}
```

So there is only 1 concurrency in the storage layer scanning data from 1 partition, and all compute threads wait for the blocks read from that partition. Only after the current partition finishes reading do the compute threads pull from the streams/source ops of the next partition. As a result, PartitionTableScan performance degrades as the number of partitions grows, which is not expected.

What is changed and how it works?

* Limit the number of source ops to `number of segment tasks * 4` in `DeltaMergeStore::read`. This reduces concurrency overhead and lets a PartitionTableScan over small partitions that contain only 1~2 segments schedule more segment read tasks in parallel.
  - For a large partition, the storage layer still generates `num_streams` `UnorderedSourceOp`; the behavior is the same as before.
  - For a small partition, the storage layer only generates `number of segment tasks * 4` `UnorderedSourceOp`, and `ConcatBuilderPool` reorganizes the source ops so that data is read from multiple partitions in parallel.
* Introduce `DMReadOptions` to reduce the complexity of passing has_multiple_partitions from the compute layer down to the storage layer.
* Log active_segment_limit, peak_active_segments, block_slot_limit, and peak_blocks_in_queue when a `SegmentReadTaskPool` is finished.

The main logic change is this piece of code: https://github.com/pingcap/tiflash/pull/10489/files#diff-22b900e9e8020dc835612316f3bf151cae28b621bfac64d6a22b377d062f4b7eR1419-R1435

The source ops generated by each partition are added to a ConcatBuilderPool, which wraps them into ConcatSourceOp. Consider num_streams == 8 and 3 partitions.

On master, each partition generates final_num_stream UnorderedSourceOp, so ConcatBuilderPool builds 8 ConcatSourceOp as follows:

  • ConcatSourceOp on pool[0]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[1]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[2]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[3]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[4]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[5]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[6]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[7]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp

After this PR, ConcatBuilderPool builds 8 ConcatSourceOp as follows:

  • ConcatSourceOp on pool[0]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[1]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[2]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[3]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[4]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[5]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[6]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[7]: part-2-UnorderedSourceOp

So after this PR, the compute layer reads part-1 and part-2 in parallel (and part-3 after part-1 finishes).

Manual test

Manual test of small partition table scan performance, as described in #10487 (comment):

-- master
-- we can observe a performance regression as the number of partitions increases,
-- and scanning the same number of rows on a partition table is slower than on a non-partition table
TiDB root@10.2.12.81:test> select "p0-0",count(*) from reports_part partition(p0);
                        -> select "p0-1",count(*) from reports_part partition(p0,p1);
                        -> select "p0-2",count(*) from reports_part partition(p0,p1,p2);
                        -> select "p0-3",count(*) from reports_part partition(p0,p1,p2,p3);
                        -> select "p0-4",count(*) from reports_part partition(p0,p1,p2,p3,p4);
                        -> select "p0-5",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5);
                        -> select "p0-6",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6);
                        -> select "p0-7",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7);
                        -> select "p0-8",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8);
                        -> select "p0-9",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9);
                        -> select "p0-10",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                        -> select "p0-11",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11);
                        -> select "p0-12",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12);
                        -> select "non-part",count(*) from reports;
+------+----------+
| p0-0 | count(*) |
+------+----------+
| p0-0 | 0        |
+------+----------+
1 row in set
Time: 0.025s
+------+----------+
| p0-1 | count(*) |
+------+----------+
| p0-1 | 524288   |
+------+----------+
1 row in set
Time: 0.014s
+------+----------+
| p0-2 | count(*) |
+------+----------+
| p0-2 | 1048576  |
+------+----------+
1 row in set
Time: 0.017s
+------+----------+
| p0-3 | count(*) |
+------+----------+
| p0-3 | 1572864  |
+------+----------+
1 row in set
Time: 0.019s
+------+----------+
| p0-4 | count(*) |
+------+----------+
| p0-4 | 2097152  |
+------+----------+
1 row in set
Time: 0.023s
+------+----------+
| p0-5 | count(*) |
+------+----------+
| p0-5 | 2621440  |
+------+----------+
1 row in set
Time: 0.023s
+------+----------+
| p0-6 | count(*) |
+------+----------+
| p0-6 | 3145728  |
+------+----------+
1 row in set
Time: 0.024s
+------+----------+
| p0-7 | count(*) |
+------+----------+
| p0-7 | 3670016  |
+------+----------+
1 row in set
Time: 0.027s
+------+----------+
| p0-8 | count(*) |
+------+----------+
| p0-8 | 4194304  |
+------+----------+
1 row in set
Time: 0.029s
+------+----------+
| p0-9 | count(*) |
+------+----------+
| p0-9 | 4718592  |
+------+----------+
1 row in set
Time: 0.031s
+-------+----------+
| p0-10 | count(*) |
+-------+----------+
| p0-10 | 5242880  |
+-------+----------+
1 row in set
Time: 0.036s
+-------+----------+
| p0-11 | count(*) |
+-------+----------+
| p0-11 | 5767168  |
+-------+----------+
1 row in set
Time: 0.036s
+-------+----------+
| p0-12 | count(*) |
+-------+----------+
| p0-12 | 6291456  |
+-------+----------+
1 row in set
Time: 0.040s
+----------+----------+
| non-part | count(*) |
+----------+----------+
| non-part | 6291456  |
+----------+----------+
1 row in set
Time: 0.017s
-- after the fix
-- there is no performance regression as the number of partitions increases
TiDB root@10.2.12.81:test> select "p0-0",count(*) from reports_part partition(p0);
                        -> select "p0-1",count(*) from reports_part partition(p0,p1);
                        -> select "p0-2",count(*) from reports_part partition(p0,p1,p2);
                        -> select "p0-3",count(*) from reports_part partition(p0,p1,p2,p3);
                        -> select "p0-4",count(*) from reports_part partition(p0,p1,p2,p3,p4);
                        -> select "p0-5",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5);
                        -> select "p0-6",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6);
                        -> select "p0-7",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7);
                        -> select "p0-8",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8);
                        -> select "p0-9",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9);
                        -> select "p0-10",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                        -> select "p0-11",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11);
                        -> select "p0-12",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12);
                        -> select "non-part",count(*) from reports;
+------+----------+
| p0-0 | count(*) |
+------+----------+
| p0-0 | 0        |
+------+----------+
1 row in set
Time: 0.018s
+------+----------+
| p0-1 | count(*) |
+------+----------+
| p0-1 | 524288   |
+------+----------+
1 row in set
Time: 0.011s
+------+----------+
| p0-2 | count(*) |
+------+----------+
| p0-2 | 1048576  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-3 | count(*) |
+------+----------+
| p0-3 | 1572864  |
+------+----------+
1 row in set
Time: 0.012s
+------+----------+
| p0-4 | count(*) |
+------+----------+
| p0-4 | 2097152  |
+------+----------+
1 row in set
Time: 0.012s
+------+----------+
| p0-5 | count(*) |
+------+----------+
| p0-5 | 2621440  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-6 | count(*) |
+------+----------+
| p0-6 | 3145728  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-7 | count(*) |
+------+----------+
| p0-7 | 3670016  |
+------+----------+
1 row in set
Time: 0.014s
+------+----------+
| p0-8 | count(*) |
+------+----------+
| p0-8 | 4194304  |
+------+----------+
1 row in set
Time: 0.015s
+------+----------+
| p0-9 | count(*) |
+------+----------+
| p0-9 | 4718592  |
+------+----------+
1 row in set
Time: 0.014s
+-------+----------+
| p0-10 | count(*) |
+-------+----------+
| p0-10 | 5242880  |
+-------+----------+
1 row in set
Time: 0.017s
+-------+----------+
| p0-11 | count(*) |
+-------+----------+
| p0-11 | 5767168  |
+-------+----------+
1 row in set
Time: 0.014s
+-------+----------+
| p0-12 | count(*) |
+-------+----------+
| p0-12 | 6291456  |
+-------+----------+
1 row in set
Time: 0.017s
+----------+----------+
| non-part | count(*) |
+----------+----------+
| non-part | 6291456  |
+----------+----------+
1 row in set
Time: 0.016s

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
    See the manual test described above
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Fix the issue that table scan performance on a small partition table is not optimal

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot ti-chi-bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. type/cherry-pick-for-release-8.5 This PR is cherry-picked to release-8.5 from a source PR. labels Oct 23, 2025
@ti-chi-bot
Member Author

@JaySon-Huang This PR has conflicts, I have hold it.
Please resolve them or ask others to resolve them, then comment /unhold to remove the hold label.

@ti-chi-bot
Contributor

ti-chi-bot bot commented Oct 23, 2025

@ti-chi-bot: If you want to know how to resolve it, please read the guide in the TiDB Dev Guide.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

Signed-off-by: JaySon-Huang <tshent@qq.com>
@JaySon-Huang
Contributor

/unhold

@ti-chi-bot ti-chi-bot bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 23, 2025
@ti-chi-bot ti-chi-bot bot added needs-1-more-lgtm Indicates a PR needs 1 more LGTM. approved labels Oct 23, 2025
Signed-off-by: JaySon-Huang <tshent@qq.com>
@ti-chi-bot
Contributor

ti-chi-bot bot commented Oct 23, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JaySon-Huang, Lloyd-Pottiger

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:
  • OWNERS [JaySon-Huang,Lloyd-Pottiger]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added lgtm and removed needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Oct 23, 2025
@ti-chi-bot
Contributor

ti-chi-bot bot commented Oct 23, 2025

[LGTM Timeline notifier]

Timeline:

  • 2025-10-23 06:40:32.377824236 +0000 UTC m=+940338.455076796: ☑️ agreed by Lloyd-Pottiger.
  • 2025-10-23 07:24:44.76906833 +0000 UTC m=+942990.846320890: ☑️ agreed by JaySon-Huang.

@ti-chi-bot ti-chi-bot bot added cherry-pick-approved Cherry pick PR approved by release team. and removed do-not-merge/cherry-pick-not-approved labels Oct 23, 2025
@ti-chi-bot ti-chi-bot bot merged commit 6a8b2c6 into pingcap:release-8.5 Oct 23, 2025
4 checks passed
@ti-chi-bot ti-chi-bot bot deleted the cherry-pick-10489-to-release-8.5 branch October 23, 2025 14:59