
Storage: Improve small partition table read performance by limit concurrency#10489

Merged
ti-chi-bot[bot] merged 7 commits into pingcap:master from JaySon-Huang:hack_opt_small_part_read on Oct 23, 2025

Conversation

@JaySon-Huang JaySon-Huang commented Oct 21, 2025

What problem does this PR solve?

Issue Number: close #10487

Problem Summary:

If a PartitionTableScan involves many partitions but each partition has only 1 segment, then DeltaMergeStore::read generates num_partitions * num_streams UnorderedSourceOp instances. Because each partition has only 1 segment, its SegmentReadTaskPool has a read concurrency of only 1 for reading data from disk.

size_t final_num_stream
    = enable_read_thread ? std::max(1, num_streams) : std::max(1, std::min(num_streams, tasks.size()));
auto read_mode = getReadMode(db_context, is_fast_scan, keep_order, executor);
const auto & final_columns_to_read
    = executor && executor->extra_cast ? *executor->columns_after_cast : columns_to_read;
auto read_task_pool = std::make_shared<SegmentReadTaskPool>(
    extra_table_id_index,
    final_columns_to_read,
    executor,
    start_ts,
    expected_block_size,
    read_mode,
    std::move(tasks),
    after_segment_read,
    log_tracing_id,
    enable_read_thread,
    final_num_stream,
    dm_context->scan_context->keyspace_id,
    dm_context->scan_context->resource_group_name);
dm_context->scan_context->read_mode = read_mode;
if (enable_read_thread)
{
    for (size_t i = 0; i < final_num_stream; ++i)
    {
        group_builder.addConcurrency(std::make_unique<UnorderedSourceOp>(
            exec_context,
            read_task_pool,
            final_columns_to_read,
            extra_table_id_index,
            log_tracing_id,
            runtime_filter_list,
            rf_max_wait_time_ms));
    }

To avoid OOM issues when running queries with a large PartitionTableScan (#8507), MultiplexInputStream and ConcatBuilderPool process the streams/source ops of partition tables one by one.

void add(PipelineExecGroupBuilder & group_builder)
{
    RUNTIME_CHECK(group_builder.groupCnt() == 1);
    for (size_t i = 0; i < group_builder.concurrency(); ++i)
    {
        pool[pre_index++].push_back(std::move(group_builder.getCurBuilder(i)));
        if (pre_index == pool.size())
            pre_index = 0;
    }
}

So there is only 1 concurrency in the storage layer for scanning data from a single partition, and all compute threads wait for the blocks read from that partition. Only after the current partition finishes reading do the compute threads pull from the streams/source ops of the next partition. As a result, PartitionTableScan performance degrades as the number of partitions increases, which is not expected.

What is changed and how it works?

* Limit the number of source ops to the number of segment tasks * 4 in `DeltaMergeStore::read`, to reduce concurrency overhead and let a PartitionTableScan over small partitions (each containing only 1~2 segments) schedule more segment read tasks in parallel.
  - For large partitions, the storage layer still generates `num_streams` `UnorderedSourceOp`s; the behavior is the same as before.
  - For small partitions, the storage layer generates only (number of segment tasks * 4) `UnorderedSourceOp`s, and `ConcatBuilderPool` reorganizes the source ops so that data is read from multiple partitions in parallel.
* Introduce `DMReadOptions` to reduce the complexity of passing has_multiple_partitions from the compute layer to the storage layer.
* Log active_segment_limit, peak_active_segments, block_slot_limit, peak_blocks_in_queue when a `SegmentReadTaskPool` is finished.

The main logic change is this piece of code: https://github.com/pingcap/tiflash/pull/10489/files#diff-22b900e9e8020dc835612316f3bf151cae28b621bfac64d6a22b377d062f4b7eR1419-R1435

The source ops generated by each partition are added to a ConcatBuilderPool and wrapped into ConcatSourceOps. Consider num_streams == 8 and a scan over three partitions, each with 1 segment.

On master, each partition generates final_num_stream (= num_streams = 8) UnorderedSourceOps, and ConcatBuilderPool builds 8 ConcatSourceOps as follows:

  • ConcatSourceOp on pool[0]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[1]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[2]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[3]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[4]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[5]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[6]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[7]: part-1-UnorderedSourceOp, part-2-UnorderedSourceOp, part-3-UnorderedSourceOp

After this PR, each small partition with 1 segment generates only min(8, 1 * 4) = 4 UnorderedSourceOps, and ConcatBuilderPool builds 8 ConcatSourceOps as follows:

  • ConcatSourceOp on pool[0]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[1]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[2]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[3]: part-1-UnorderedSourceOp, part-3-UnorderedSourceOp
  • ConcatSourceOp on pool[4]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[5]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[6]: part-2-UnorderedSourceOp
  • ConcatSourceOp on pool[7]: part-2-UnorderedSourceOp

So after this PR, the compute layer reads part-1 and part-2 in parallel.

Manual test

Manual test of small partition table scan performance, as described in #10487 (comment).

-- master
-- we can observe a performance regression as the number of partitions increases
-- and scanning the same number of rows on the partition table is slower than on the non-partition table
TiDB root@10.2.12.81:test> select "p0-0",count(*) from reports_part partition(p0);
                        -> select "p0-1",count(*) from reports_part partition(p0,p1);
                        -> select "p0-2",count(*) from reports_part partition(p0,p1,p2);
                        -> select "p0-3",count(*) from reports_part partition(p0,p1,p2,p3);
                        -> select "p0-4",count(*) from reports_part partition(p0,p1,p2,p3,p4);
                        -> select "p0-5",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5);
                        -> select "p0-6",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6);
                        -> select "p0-7",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7);
                        -> select "p0-8",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8);
                        -> select "p0-9",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9);
                        -> select "p0-10",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                        -> select "p0-11",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11);
                        -> select "p0-12",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12);
                        -> select "non-part",count(*) from reports;
+------+----------+
| p0-0 | count(*) |
+------+----------+
| p0-0 | 0        |
+------+----------+
1 row in set
Time: 0.025s
+------+----------+
| p0-1 | count(*) |
+------+----------+
| p0-1 | 524288   |
+------+----------+
1 row in set
Time: 0.014s
+------+----------+
| p0-2 | count(*) |
+------+----------+
| p0-2 | 1048576  |
+------+----------+
1 row in set
Time: 0.017s
+------+----------+
| p0-3 | count(*) |
+------+----------+
| p0-3 | 1572864  |
+------+----------+
1 row in set
Time: 0.019s
+------+----------+
| p0-4 | count(*) |
+------+----------+
| p0-4 | 2097152  |
+------+----------+
1 row in set
Time: 0.023s
+------+----------+
| p0-5 | count(*) |
+------+----------+
| p0-5 | 2621440  |
+------+----------+
1 row in set
Time: 0.023s
+------+----------+
| p0-6 | count(*) |
+------+----------+
| p0-6 | 3145728  |
+------+----------+
1 row in set
Time: 0.024s
+------+----------+
| p0-7 | count(*) |
+------+----------+
| p0-7 | 3670016  |
+------+----------+
1 row in set
Time: 0.027s
+------+----------+
| p0-8 | count(*) |
+------+----------+
| p0-8 | 4194304  |
+------+----------+
1 row in set
Time: 0.029s
+------+----------+
| p0-9 | count(*) |
+------+----------+
| p0-9 | 4718592  |
+------+----------+
1 row in set
Time: 0.031s
+-------+----------+
| p0-10 | count(*) |
+-------+----------+
| p0-10 | 5242880  |
+-------+----------+
1 row in set
Time: 0.036s
+-------+----------+
| p0-11 | count(*) |
+-------+----------+
| p0-11 | 5767168  |
+-------+----------+
1 row in set
Time: 0.036s
+-------+----------+
| p0-12 | count(*) |
+-------+----------+
| p0-12 | 6291456  |
+-------+----------+
1 row in set
Time: 0.040s
+----------+----------+
| non-part | count(*) |
+----------+----------+
| non-part | 6291456  |
+----------+----------+
1 row in set
Time: 0.017s
-- after the fix
-- there is no performance regression as the number of partitions increases
TiDB root@10.2.12.81:test> select "p0-0",count(*) from reports_part partition(p0);
                        -> select "p0-1",count(*) from reports_part partition(p0,p1);
                        -> select "p0-2",count(*) from reports_part partition(p0,p1,p2);
                        -> select "p0-3",count(*) from reports_part partition(p0,p1,p2,p3);
                        -> select "p0-4",count(*) from reports_part partition(p0,p1,p2,p3,p4);
                        -> select "p0-5",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5);
                        -> select "p0-6",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6);
                        -> select "p0-7",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7);
                        -> select "p0-8",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8);
                        -> select "p0-9",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9);
                        -> select "p0-10",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                        -> select "p0-11",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11);
                        -> select "p0-12",count(*) from reports_part partition(p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12);
                        -> select "non-part",count(*) from reports;
+------+----------+
| p0-0 | count(*) |
+------+----------+
| p0-0 | 0        |
+------+----------+
1 row in set
Time: 0.018s
+------+----------+
| p0-1 | count(*) |
+------+----------+
| p0-1 | 524288   |
+------+----------+
1 row in set
Time: 0.011s
+------+----------+
| p0-2 | count(*) |
+------+----------+
| p0-2 | 1048576  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-3 | count(*) |
+------+----------+
| p0-3 | 1572864  |
+------+----------+
1 row in set
Time: 0.012s
+------+----------+
| p0-4 | count(*) |
+------+----------+
| p0-4 | 2097152  |
+------+----------+
1 row in set
Time: 0.012s
+------+----------+
| p0-5 | count(*) |
+------+----------+
| p0-5 | 2621440  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-6 | count(*) |
+------+----------+
| p0-6 | 3145728  |
+------+----------+
1 row in set
Time: 0.013s
+------+----------+
| p0-7 | count(*) |
+------+----------+
| p0-7 | 3670016  |
+------+----------+
1 row in set
Time: 0.014s
+------+----------+
| p0-8 | count(*) |
+------+----------+
| p0-8 | 4194304  |
+------+----------+
1 row in set
Time: 0.015s
+------+----------+
| p0-9 | count(*) |
+------+----------+
| p0-9 | 4718592  |
+------+----------+
1 row in set
Time: 0.014s
+-------+----------+
| p0-10 | count(*) |
+-------+----------+
| p0-10 | 5242880  |
+-------+----------+
1 row in set
Time: 0.017s
+-------+----------+
| p0-11 | count(*) |
+-------+----------+
| p0-11 | 5767168  |
+-------+----------+
1 row in set
Time: 0.014s
+-------+----------+
| p0-12 | count(*) |
+-------+----------+
| p0-12 | 6291456  |
+-------+----------+
1 row in set
Time: 0.017s
+----------+----------+
| non-part | count(*) |
+----------+----------+
| non-part | 6291456  |
+----------+----------+
1 row in set
Time: 0.016s

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
    See the manual test described above
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Fix the issue that table scan performance on small partition tables is not optimal

@ti-chi-bot bot added the do-not-merge/needs-linked-issue, release-note-none, size/XS, size/M, size/XL, do-not-merge/needs-triage-completed, and size/XXL labels, and removed the size/XS, size/M, do-not-merge/needs-linked-issue, do-not-merge/needs-triage-completed, and size/XL labels on Oct 21, 2025
@JaySon-Huang JaySon-Huang changed the title Improve small table/partition read performance by limit concurrency Storage: Improve small table/partition read performance by limit concurrency Oct 21, 2025
@JaySon-Huang

/test pull-unit-test

@JaySon-Huang JaySon-Huang changed the title Storage: Improve small table/partition read performance by limit concurrency Storage: Improve small partition table read performance by limit concurrency Oct 22, 2025
@ti-chi-bot bot added the release-note label and removed the release-note-none label on Oct 22, 2025
Signed-off-by: JaySon-Huang <tshent@qq.com>
…t_limit

Signed-off-by: JaySon-Huang <tshent@qq.com>
…ltaMergeStore by DMReadOptions

Signed-off-by: JaySon-Huang <tshent@qq.com>
Signed-off-by: JaySon-Huang <tshent@qq.com>
Signed-off-by: JaySon-Huang <tshent@qq.com>
@JaySon-Huang JaySon-Huang force-pushed the hack_opt_small_part_read branch from be47d27 to 664581b on October 22, 2025 06:01
Signed-off-by: JaySon-Huang <tshent@qq.com>
@JaySon-Huang

also /cc @windtalker @gengliqi

This reverts commit 664581b.

Revert "Refine logging"

This reverts commit ffd6348.
@ti-chi-bot bot added the needs-1-more-lgtm and approved labels on Oct 22, 2025
@ti-chi-bot bot added the lgtm label and removed the needs-1-more-lgtm label on Oct 23, 2025

ti-chi-bot bot commented Oct 23, 2025

[LGTM Timeline notifier]

Timeline:

  • 2025-10-22 14:35:09.693208355 +0000 UTC m=+882415.770460915: ☑️ agreed by JinheLin.
  • 2025-10-23 02:30:27.731935798 +0000 UTC m=+925333.809188357: ☑️ agreed by Lloyd-Pottiger.


@windtalker windtalker left a comment


lgtm


ti-chi-bot bot commented Oct 23, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JinheLin, Lloyd-Pottiger, windtalker

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:
  • OWNERS [JinheLin,Lloyd-Pottiger,windtalker]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Comment on lines +1419 to +1435
size_t final_num_stream = 0;
if (enable_read_thread)
{
    // For a limited tasks size under `enable_read_thread`, too many source ops actually
    // make the table scan speed unable to match the compute layer speed and lead to more
    // concurrency overhead. So we limit final_num_stream to tasks.size() * 4 when the
    // read thread is enabled under multiple partitions.
    if (read_opts.has_multiple_partitions)
        final_num_stream = std::min(num_streams, tasks.size() * 4);
    else
        final_num_stream = num_streams;
    final_num_stream = std::max(1, final_num_stream);
}
else
{
    final_num_stream = std::max(1, std::min(num_streams, tasks.size()));
}

@JaySon-Huang JaySon-Huang Oct 23, 2025


The main logical change of this PR is this piece of code; the rest is passing the required params from the compute layer to the storage layer.

@JaySon-Huang

/test pull-unit-test

@ti-chi-bot bot added the needs-cherry-pick-release-8.5 label on Oct 23, 2025
@ti-chi-bot ti-chi-bot bot merged commit 6fb1744 into pingcap:master Oct 23, 2025
7 checks passed
ti-chi-bot pushed a commit to ti-chi-bot/tiflash that referenced this pull request Oct 23, 2025
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot

In response to a cherrypick label: new pull request created to branch release-8.5: #10499.
But this PR has conflicts, please resolve them!

@JaySon-Huang

/cherry-pick release-nextgen-20251011

@ti-chi-bot

@JaySon-Huang: new pull request created to branch release-nextgen-20251011: #10500.

Details

In response to this:

/cherry-pick release-nextgen-20251011

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

ti-chi-bot pushed a commit to ti-chi-bot/tiflash that referenced this pull request Oct 23, 2025
…urrency (pingcap#10489)

close pingcap#10487


Signed-off-by: JaySon-Huang <tshent@qq.com>
ti-chi-bot bot pushed a commit that referenced this pull request Oct 23, 2025
…urrency (#10489) (#10500)

close #10487


Signed-off-by: JaySon-Huang <tshent@qq.com>

Co-authored-by: JaySon <tshent@qq.com>
ti-chi-bot bot pushed a commit that referenced this pull request Oct 23, 2025
…urrency (#10489) (#10499)

close #10487


Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
Signed-off-by: JaySon-Huang <tshent@qq.com>

Co-authored-by: JaySon <tshent@qq.com>
Co-authored-by: JaySon-Huang <tshent@qq.com>

Labels

approved, lgtm, needs-cherry-pick-release-8.5, release-note, size/XXL

Successfully merging this pull request may close these issues: TiFlash query performance is not expected under small partition table

5 participants