Storage: Improve small partition table read performance by limiting concurrency #10489
Conversation
```cpp
size_t final_num_stream = 0;
if (enable_read_thread)
{
    // Under `enable_read_thread` with a limited number of tasks, too many
    // source ops mean the table scan speed cannot match the compute layer
    // speed and bring more concurrency overhead. So we limit
    // final_num_stream to tasks.size() * 4 when the read thread is enabled
    // and the scan covers multiple partitions.
    if (read_opts.has_multiple_partitions)
        final_num_stream = std::min(num_streams, tasks.size() * 4);
    else
        final_num_stream = num_streams;
    final_num_stream = std::max<size_t>(1, final_num_stream);
}
else
{
    final_num_stream = std::max<size_t>(1, std::min(num_streams, tasks.size()));
}
```
The main logical change of this PR is this piece of code; the rest is passing the required params from the compute layer to the storage layer.
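For illustration, here is a minimal, self-contained sketch of that decision, using a hypothetical stand-in for the PR's `DMReadOptions` and with `num_tasks` standing in for `tasks.size()` (the real `DeltaMergeStore::read` signature differs):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>

// Hypothetical stand-in for the DMReadOptions introduced by this PR; the real
// struct carries more fields from the compute layer to the storage layer.
struct DMReadOptions
{
    bool has_multiple_partitions = false;
};

// Mirrors the stream-count decision reviewed above.
size_t decideFinalNumStream(size_t num_streams, size_t num_tasks, bool enable_read_thread, const DMReadOptions & read_opts)
{
    size_t final_num_stream = 0;
    if (enable_read_thread)
    {
        // Cap the source ops at 4x the segment tasks under multiple partitions,
        // so a small partition does not spawn streams that mostly sit idle.
        if (read_opts.has_multiple_partitions)
            final_num_stream = std::min(num_streams, num_tasks * 4);
        else
            final_num_stream = num_streams;
        final_num_stream = std::max<size_t>(1, final_num_stream);
    }
    else
    {
        final_num_stream = std::max<size_t>(1, std::min(num_streams, num_tasks));
    }
    return final_num_stream;
}

int main()
{
    DMReadOptions multi;
    multi.has_multiple_partitions = true;
    // A small partition (1 segment task) under a PartitionTableScan gets 4 source ops...
    std::cout << decideFinalNumStream(/*num_streams=*/8, /*num_tasks=*/1, /*enable_read_thread=*/true, multi) << '\n'; // 4
    // ...while a plain table scan keeps all 8, as before.
    std::cout << decideFinalNumStream(8, 1, true, DMReadOptions{}) << '\n'; // 8
}
```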
/cherry-pick release-nextgen-20251011
Storage: Improve small partition table read performance by limiting concurrency (pingcap#10489)

close pingcap#10487

* Limit the number of source ops to (number of segment tasks) * 4 in `DeltaMergeStore::read`, in order to reduce concurrency overhead and let a PartitionTableScan whose small partitions contain only 1~2 segments schedule more segment read tasks in parallel.
  - For a large partition, the storage layer still generates `num_streams` * `UnorderedSourceOp`; the behavior is the same as before.
  - For a small partition, the storage layer only generates (segment tasks * 4) * `UnorderedSourceOp`, and `ConcatBuilderPool` reorganizes the source ops so that data from multiple partitions is read in parallel.
* Introduce `DMReadOptions` to reduce the complexity of passing has_multiple_partitions from the compute layer to the storage layer.
* Report active_segment_limit, peak_active_segments, block_slot_limit, and peak_blocks_in_queue when a `SegmentReadTaskPool` finishes.

Signed-off-by: JaySon-Huang <tshent@qq.com>
What problem does this PR solve?
Issue Number: close #10487
Problem Summary:
If a PartitionTableScan involves many partitions but each partition has only 1 segment, then `DeltaMergeStore::read` will generate num_partitions * num_streams * `UnorderedSourceOp`. Because there is only 1 segment, each `SegmentReadTaskPool` has only 1 unit of concurrency for reading data from disk. (See tiflash/dbms/src/Storages/DeltaMerge/DeltaMergeStore.cpp, lines 1421 to 1454 at 80e1927.) A toy model of this is sketched below.
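As a toy model (not the actual `SegmentReadTaskPool`), a pool that can hand out at most one task per segment caps disk-read concurrency at 1 for a single-segment partition, no matter how many source ops poll it:

```cpp
#include <iostream>
#include <optional>
#include <vector>

struct SegmentTask
{
    int segment_id;
};

// Toy pool: at most one outstanding task per segment, so a partition with a
// single segment yields at most one concurrently running read task.
struct ToySegmentReadTaskPool
{
    std::vector<SegmentTask> tasks;

    std::optional<SegmentTask> schedule()
    {
        if (tasks.empty())
            return std::nullopt;
        SegmentTask t = tasks.back();
        tasks.pop_back();
        return t;
    }
};

int main()
{
    ToySegmentReadTaskPool pool{{SegmentTask{1}}}; // a partition with 1 segment
    int running = 0;
    for (int source_op = 0; source_op < 8; ++source_op) // num_streams == 8 source ops polling
        if (pool.schedule())
            ++running;
    std::cout << "concurrently running read tasks: " << running << '\n'; // prints 1
}
```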
To avoid OOM issues when running queries with a large PartitionTableScan (#8507), `MultiplexInputStream` and `ConcatBuilderPool` process the streams/source ops of partition tables one by one. (See tiflash/dbms/src/Operators/ConcatSourceOp.h, lines 190 to 199 at 80e1927.)
So in the storage layer there is only 1 unit of concurrency scanning data from a partition, and all compute threads wait for the blocks read from that partition. Only after the current partition finishes reading do the compute threads pull from the streams/source ops of the next partition. As a result, PartitionTableScan performance degrades as the number of partitions grows, which is not expected. The toy concat source below makes this one-by-one behavior concrete.
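A minimal sketch, not the actual `ConcatSourceOp` implementation: a concat source that only pulls from the next child after the current one is exhausted.

```cpp
#include <deque>
#include <iostream>
#include <optional>
#include <string>

struct ToySource
{
    std::string partition;
    int blocks_left;

    std::optional<std::string> read()
    {
        if (blocks_left == 0)
            return std::nullopt;
        --blocks_left;
        return partition;
    }
};

// Pulls children strictly in order: part-2 is not touched until part-1 is drained.
struct ToyConcatSource
{
    std::deque<ToySource> children;

    std::optional<std::string> read()
    {
        while (!children.empty())
        {
            if (auto block = children.front().read())
                return block;
            children.pop_front(); // current partition exhausted, move to the next
        }
        return std::nullopt;
    }
};

int main()
{
    ToyConcatSource concat{{ToySource{"part-1", 2}, ToySource{"part-2", 2}}};
    while (auto block = concat.read())
        std::cout << *block << '\n'; // part-1, part-1, part-2, part-2 -- strictly serial
}
```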
What is changed and how it works?
The main logic change is this piece of code: https://github.com/pingcap/tiflash/pull/10489/files#diff-22b900e9e8020dc835612316f3bf151cae28b621bfac64d6a22b377d062f4b7eR1419-R1435
The source ops generated by each partition are added to a `ConcatBuilderPool`, which wraps them into `ConcatSourceOp`. Consider num_streams == 8. On master, each partition generates final_num_stream (== 8) `UnorderedSourceOp`, and `ConcatBuilderPool` builds 8 `ConcatSourceOp` whose children are chained so that part-1's source ops come before part-2's, which means every `ConcatSourceOp` must drain part-1 before touching part-2. After this PR, `ConcatBuilderPool` still builds 8 `ConcatSourceOp`, but a small partition contributes only (segment tasks * 4) source ops, so different `ConcatSourceOp` start on different partitions. So after this PR the compute layer reads part-1 and part-2 in parallel, as the sketch below illustrates.
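This sketch assumes, for illustration, that source ops are assigned to the `ConcatSourceOp` round-robin; the actual `ConcatBuilderPool` policy may differ. With two small partitions contributing 4 source ops each, the 8 `ConcatSourceOp` no longer all start on part-1:

```cpp
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const size_t num_streams = 8;
    std::vector<std::vector<std::string>> concat_ops(num_streams);

    size_t next = 0; // round-robin cursor over the concat ops (assumed policy)
    auto add_partition = [&](const std::string & part, size_t num_sources) {
        for (size_t i = 0; i < num_sources; ++i)
            concat_ops[next++ % num_streams].push_back(part);
    };

    // After this PR, each small partition (1 segment task) contributes 1 * 4 sources.
    add_partition("part-1", 4);
    add_partition("part-2", 4);

    // Concat ops 0-3 start with part-1 and ops 4-7 start with part-2, so the
    // compute layer drains both partitions concurrently. On master each
    // partition would contribute 8 sources and every concat op would start
    // with part-1.
    for (size_t i = 0; i < num_streams; ++i)
        std::cout << "ConcatSourceOp#" << i << " starts with " << concat_ops[i].front() << '\n';
}
```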
Manual test
Manual test of small partition table scan performance, as described in #10487 (comment).
Check List
Tests
See the manual test described above.
Side effects
Documentation
Release note