Storage: Improve small partition table read performance by limiting concurrency (#10489) #10499
Conversation
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@JaySon-Huang This PR has conflicts, so I have put it on hold.
@ti-chi-bot: If you want to know how to resolve it, please read the guide in TiDB Dev Guide. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
Signed-off-by: JaySon-Huang <tshent@qq.com>
/unhold
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: JaySon-Huang, Lloyd-Pottiger. The full list of commands accepted by this bot can be found here. The pull request process is described here. Approvers can indicate their approval by writing `/approve` in a comment.
This is an automated cherry-pick of #10489
What problem does this PR solve?
Issue Number: close #10487
Problem Summary:
If a PartitionTableScan involves many partitions but each partition only has 1 segment, then `DeltaMergeStore::read` will generate `num_partitions * num_streams` `UnorderedSourceOp`s. Because each partition has only 1 segment, its `SegmentReadTaskPool` will only have 1 concurrency for reading data from disk.

tiflash/dbms/src/Storages/DeltaMerge/DeltaMergeStore.cpp, Lines 1421 to 1454 in 80e1927
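To make the shape of the problem concrete, here is a minimal sketch (not the real TiFlash code; `buildSourceOps` and the struct fields are hypothetical stand-ins for the operators built in `DeltaMergeStore::read`):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the real UnorderedSourceOp operator.
struct UnorderedSourceOp
{
    std::size_t partition_id;
    std::size_t stream_id;
};

// For every partition, the read path creates `num_streams` source ops that
// all pull from that partition's SegmentReadTaskPool. The pool's read
// concurrency is bounded by its number of segment read tasks, so a
// 1-segment partition keeps at most 1 disk-reading thread busy no matter
// how many source ops sit on top of it.
std::vector<UnorderedSourceOp> buildSourceOps(std::size_t num_partitions, std::size_t num_streams)
{
    std::vector<UnorderedSourceOp> ops;
    ops.reserve(num_partitions * num_streams);
    for (std::size_t p = 0; p < num_partitions; ++p)
        for (std::size_t s = 0; s < num_streams; ++s)
            ops.push_back(UnorderedSourceOp{p, s});
    return ops; // num_partitions * num_streams ops in total
}
```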
In order to avoid OOM issues when running queries on a large PartitionTableScan (#8507), `MultiplexInputStream` and `ConcatBuilderPool` process the streams/source ops of partition tables one by one.

tiflash/dbms/src/Operators/ConcatSourceOp.h, Lines 190 to 199 in 80e1927
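The one-by-one behavior boils down to concatenation. A rough sketch, with `Block` and `Source` as hypothetical stand-ins for the real operator interfaces:

```cpp
#include <cstddef>
#include <memory>
#include <optional>
#include <vector>

struct Block {}; // stand-in for a chunk of rows

struct Source
{
    virtual std::optional<Block> read() = 0; // nullopt means exhausted
    virtual ~Source() = default;
};

// Concatenates its children: child 0 is drained completely before child 1
// is touched. If each child is a per-partition source op, partition N+1 is
// not read until partition N has finished.
struct ConcatSource : Source
{
    std::vector<std::unique_ptr<Source>> children;
    std::size_t cur = 0;

    std::optional<Block> read() override
    {
        while (cur < children.size())
        {
            if (auto block = children[cur]->read())
                return block;
            ++cur; // current partition exhausted, move on to the next
        }
        return std::nullopt;
    }
};
```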
So there is only 1 concurrency in the storage layer scanning data from 1 partition, and all compute threads wait for the blocks read from that partition. Only after the current partition finishes reading do the compute threads pull from the streams/source ops of the next partition. As a result, PartitionTableScan performance degrades as the number of partitions grows, which is not expected.
What is changed and how it works?
The main logic change is this piece of code: https://github.com/pingcap/tiflash/pull/10489/files#diff-22b900e9e8020dc835612316f3bf151cae28b621bfac64d6a22b377d062f4b7eR1419-R1435
The source ops generated by each partition are added to `ConcatBuilderPool`, which wraps them into `ConcatSourceOp`s. Consider `num_stream == 8`. In master, each partition generates `final_num_stream` `UnorderedSourceOp`s, and `ConcatBuilderPool` generates 8 `ConcatSourceOp`s, each of which concatenates one source op from every partition in partition order, so all 8 start by pulling from the first partition. After this PR, each small partition contributes fewer source ops, and `ConcatBuilderPool` still generates 8 `ConcatSourceOp`s, but spreads the source ops of different partitions across them, so different `ConcatSourceOp`s start on different partitions (see the sketch below). So after this PR, the compute layer reads part-1 and part-2 in parallel.
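Here is a hedged sketch of that grouping change; the round-robin distribution and the helper names below are my reading of the diff, not the literal implementation:

```cpp
#include <cstddef>
#include <vector>

using SourceOpId = std::size_t;
using ConcatGroup = std::vector<SourceOpId>; // ops one ConcatSourceOp concatenates

// Master: every partition contributes num_streams source ops, and group i
// holds the i-th op of each partition, partition by partition. All groups
// therefore start on partition 0, whose read concurrency is 1, so all but
// one of them just wait.
std::vector<ConcatGroup> groupBefore(std::size_t num_partitions, std::size_t num_streams)
{
    std::vector<ConcatGroup> groups(num_streams);
    for (std::size_t p = 0; p < num_partitions; ++p)
        for (std::size_t s = 0; s < num_streams; ++s)
            groups[s].push_back(p * num_streams + s);
    return groups;
}

// After this PR: a 1-segment partition contributes a single source op, and
// the ops are spread round-robin, so group 0 starts on part-1, group 1 on
// part-2, and so on: up to num_streams partitions are scanned in parallel.
std::vector<ConcatGroup> groupAfter(std::size_t num_partitions, std::size_t num_streams)
{
    std::vector<ConcatGroup> groups(num_streams);
    for (std::size_t p = 0; p < num_partitions; ++p)
        groups[p % num_streams].push_back(p); // one op per 1-segment partition
    return groups;
}
```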
Manual test
Manual test of small partition table scan performance as described in #10487 (comment).
Check List
Tests
See the manual test described above.
Side effects
Documentation
Release note