
Server memory usage increases gradually after upgrading from 24.7 to 24.10. #71906

@dudtj0904

Description


I am hitting MEMORY_LIMIT_EXCEEDED errors. The issue persists after upgrading ClickHouse from 24.7 to 24.10.

Code: 241. DB::Exception: Memory limit (total) exceeded: would use 201.33 GiB (attempt to allocate chunk of 5171232 bytes), current RSS 6.25 GiB, maximum: 201.33 GiB. OvercommitTracker decision: Query was selected to stop by OvercommitTracker. (MEMORY_LIMIT_EXCEEDED) (version 24.10.1.2812 (official build))

(image: memory usage graph)
The graph shows Prometheus' clickhouse_memory_tracking metric. The value resets when the ClickHouse server is restarted, then gradually rises again, so I am restarting the server periodically.
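To cross-check the Prometheus graph from inside ClickHouse, I also look at the server-side memory metrics with a query like this (a sketch; the exact set of metric names can vary between versions):

```sql
-- Inspect memory-related server metrics from inside ClickHouse.
SELECT
    metric,
    formatReadableSize(value) AS size
FROM system.asynchronous_metrics
WHERE metric LIKE '%Memory%'
ORDER BY value DESC
```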

I use the Kafka table engine to consume messages and materialized views to write the data into MergeTree tables.
There are about 200 such Kafka engine tables on this server.
It is inferred that memory leaks occur in the process of inserting data.
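For reference, each of the ~200 pipelines looks roughly like the following sketch (all table, column, broker, and topic names here are made-up placeholders, not my real schema):

```sql
-- Hypothetical example of one pipeline (names are placeholders).

-- 1. Kafka engine table that consumes messages from a topic.
CREATE TABLE events_queue
(
    ts DateTime,
    payload String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'events',
         kafka_group_name = 'clickhouse_events',
         kafka_format = 'JSONEachRow';

-- 2. Target MergeTree table that stores the data.
CREATE TABLE events
(
    ts DateTime,
    payload String
)
ENGINE = MergeTree
ORDER BY ts;

-- 3. Materialized view that moves rows from the queue into storage.
CREATE MATERIALIZED VIEW events_mv TO events AS
SELECT ts, payload
FROM events_queue;
```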

SELECT
    name,
    value
FROM system.metrics
WHERE name ILIKE '%background%'

    ┌─name────────────────────────────────────────┬─value─┐
 1. │ BackgroundMergesAndMutationsPoolTask        │     1 │
 2. │ BackgroundMergesAndMutationsPoolSize        │    64 │
 3. │ BackgroundFetchesPoolTask                   │     0 │
 4. │ BackgroundFetchesPoolSize                   │    64 │
 5. │ BackgroundCommonPoolTask                    │     0 │
 6. │ BackgroundCommonPoolSize                    │    16 │
 7. │ BackgroundMovePoolTask                      │     0 │
 8. │ BackgroundMovePoolSize                      │    16 │
 9. │ BackgroundSchedulePoolTask                  │     1 │
10. │ BackgroundSchedulePoolSize                  │   512 │
11. │ BackgroundBufferFlushSchedulePoolTask       │     0 │
12. │ BackgroundBufferFlushSchedulePoolSize       │    16 │
13. │ BackgroundDistributedSchedulePoolTask       │     0 │
14. │ BackgroundDistributedSchedulePoolSize       │   512 │
15. │ BackgroundMessageBrokerSchedulePoolTask     │   922 │
16. │ BackgroundMessageBrokerSchedulePoolSize     │  2000 │
17. │ TablesLoaderBackgroundThreads               │     0 │
18. │ TablesLoaderBackgroundThreadsActive         │     0 │
19. │ TablesLoaderBackgroundThreadsScheduled      │     0 │
20. │ MergeTreeBackgroundExecutorThreads          │    64 │
21. │ MergeTreeBackgroundExecutorThreadsActive    │    64 │
22. │ MergeTreeBackgroundExecutorThreadsScheduled │    64 │
23. │ KafkaBackgroundReads                        │   922 │
    └─────────────────────────────────────────────┴───────┘

The server's background pool state is shown above; the pool sizes appear sufficient.

May I know the cause?

Labels: memory (when memory usage is higher than expected)