Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers#92250

Merged
azat merged 2 commits intoClickHouse:masterfrom
azat:min_columns_to_activate_adaptive_write_buffer
Dec 18, 2025
Conversation

@azat
Member

@azat azat commented Dec 16, 2025

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes into CHANGELOG.md):

Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers. Add support for adaptive write buffers on encrypted disks.

Each column requires at least a 1 MiB buffer for writing, so if you have 1,000 columns you need at least 1 GiB of memory to write a part (on INSERT or merge).
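The arithmetic above can be sketched directly (illustrative only, not ClickHouse code; the function name is hypothetical):

```python
# Memory needed to write one part when every column stream pre-allocates
# a fixed-size write buffer, as described in the PR.
MIB = 1024 * 1024

def fixed_buffer_memory(num_columns: int, buffer_size: int = MIB) -> int:
    """Total write-buffer memory for a part with `num_columns` columns."""
    return num_columns * buffer_size

# 1,000 columns at 1 MiB each -> 1,000 MiB, roughly 1 GiB just for buffers.
print(fixed_buffer_memory(1000) // MIB)  # 1000
```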

Introduce a new MergeTree setting, min_columns_to_activate_adaptive_write_buffer, that controls when to use adaptive write buffers: a technique in which each column's write buffer starts at 16 KiB and grows as needed instead of being pre-allocated at full size.

For metric_log, memory usage decreased 30x when INSERTing/merging (horizontally) wide parts (note that vertical merge may be significantly slower when there are lots of columns).

Also note that the default value (500) automatically enables this optimization for system.metric_log.
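A minimal sketch of the adaptive-buffer idea described above (a hypothetical model, not the actual ClickHouse implementation; function and constant names are invented): each column's buffer starts at 16 KiB and doubles as data arrives, capped at 1 MiB, so columns that receive little data never pay the full 1 MiB up front.

```python
INITIAL_SIZE = 16 * 1024    # adaptive buffer starts at 16 KiB
MAX_SIZE = 1024 * 1024      # and never grows past 1 MiB

def adaptive_buffer_size(bytes_written: int,
                         initial: int = INITIAL_SIZE,
                         maximum: int = MAX_SIZE) -> int:
    """Buffer size after doubling from `initial` until it covers
    `bytes_written`, clamped to `maximum`."""
    size = initial
    while size < bytes_written and size < maximum:
        size *= 2
    return size

# A mostly-empty column (common in wide tables like system.metric_log)
# stays at 16 KiB, while a heavily written column grows to the 1 MiB cap.
print(adaptive_buffer_size(4_000))      # 16384
print(adaptive_buffer_size(5_000_000))  # 1048576
```

With 1,000 columns where most columns are small, total buffer memory approaches 1,000 x 16 KiB = ~16 MiB instead of ~1 GiB, which is the order of the reported reduction.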

@clickhouse-gh
Contributor

clickhouse-gh bot commented Dec 16, 2025

Workflow [PR], commit [c64ea66]

Summary:

  • AST fuzzer (amd_debug): failure. Logical error: '(isConst() || isSparse() || isReplicated()) ? getDataType() == rhs.getDataType() : typeid(*this) == typeid(rhs)' (STID: 2508-32b3). FAIL, cidb
  • AST fuzzer (amd_ubsan): failure. Logical error: 'Pipeline stuck. Current state: (STID: 3833-2f20). FAIL, cidb
  • BuzzHouse (amd_debug): failure. Logical error: 'Inconsistent AST formatting: the query: (STID: 1941-1bfa). FAIL, cidb, issue

@clickhouse-gh clickhouse-gh bot added the pr-performance Pull request with some performance improvements label Dec 16, 2025
@Avogar Avogar self-assigned this Dec 16, 2025
@azat azat changed the title Reduce INSERT/merges memory usage for very wide tables by enabling adaptive write buffers Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers Dec 16, 2025
@azat
Member Author

azat commented Dec 16, 2025

@azat azat force-pushed the min_columns_to_activate_adaptive_write_buffer branch 2 times, most recently from c215488 to 82fca78 Compare December 16, 2025 21:43
azat added 2 commits December 17, 2025 20:18
…aptive write buffers

@azat azat force-pushed the min_columns_to_activate_adaptive_write_buffer branch from 82fca78 to c64ea66 Compare December 17, 2025 21:07
@azat azat enabled auto-merge December 17, 2025 22:52
@azat
Member Author

azat commented Dec 18, 2025

@Avogar please take a look one more time. I've touched the encrypted disk code; it should be pretty straightforward, but I could use one more pair of eyes.

@azat azat requested a review from Avogar December 18, 2025 08:07
Member

@Avogar Avogar left a comment


Looks good

@azat azat added this pull request to the merge queue Dec 18, 2025
Merged via the queue into ClickHouse:master with commit 4f2f026 Dec 18, 2025
127 of 131 checks passed
@azat azat deleted the min_columns_to_activate_adaptive_write_buffer branch December 18, 2025 13:52
@robot-ch-test-poll1 robot-ch-test-poll1 added the pr-synced-to-cloud The PR is synced to the cloud repo label Dec 18, 2025
@telperions telperions mentioned this pull request Dec 23, 2025
zvonand pushed a commit to Altinity/ClickHouse that referenced this pull request Dec 29, 2025
…_adaptive_write_buffer

Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers
zvonand pushed a commit to Altinity/ClickHouse that referenced this pull request Jan 15, 2026
…_adaptive_write_buffer

Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers
zvonand pushed a commit to Altinity/ClickHouse that referenced this pull request Jan 27, 2026
…_adaptive_write_buffer

Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers
zvonand added a commit to Altinity/ClickHouse that referenced this pull request Feb 3, 2026
25.8.15 Backport of ClickHouse#92250 - Reduce INSERT/merges memory usage with wide parts for very wide tables by enabling adaptive write buffers

Labels

pr-performance Pull request with some performance improvements pr-synced-to-cloud The PR is synced to the cloud repo

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants