Conversation
Workflow [PR], commit [3a2240a] Summary: ❌
Dear @serxa, this PR hasn't been updated for a while. Will you continue working on it? If not, please close it. Otherwise, ignore this message. |
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
I did not look through all the scheduler code (maybe I will get back to it), but I've already spent quite some time on this PR (the first draft did not contain 7K LOC 😂) - and in general it looks good to me:
- the interaction with `PipelineExecutor` is simple - the interface changes look clear, split into time-shared/space-shared
- I have a few concerns about the new memory implementation, mostly around locking (you will find them below), please take a look
Also I think:
- we should enable workloads on CI to catch bugs
- enable it for perf tests to measure the overhead (though maybe they will not be able to catch the difference)
Co-authored-by: Azat Khuzhin <a3at.mail@gmail.com>
…ickHouse into workload-memory-scheduling
…ions

Previously `MemoryReservation::increaseApproved` and `decreaseApproved` called `syncWithScheduler`, which re-entered `AllocationQueue` during hierarchy traversal. This caused lock order inversions and required a `recursive_mutex` on `AllocationQueue`.

Key changes:
- Remove `syncWithScheduler` from `MemoryReservation` and `syncSize` from `TestAllocation`. Callbacks now just update state and notify via `cv.notify_all`.
- Add `IAllocationQueue::removeAllocation` to handle allocation removal on the scheduler thread (cancels pending increase, prepares decrease to zero). Both `MemoryReservation` and `TestAllocation` destructors use this instead of `decreaseAllocation`.
- Add serialization in `syncWithMemoryTracker`: block all threads while an increase is pending, ensuring at most one in flight at a time.
- Decouple decrease from removal: `decreaseAllocation` never removes an allocation (it may stay alive at zero). Only `removeAllocation` sets `removing_allocation=true`. Running sets at all hierarchy levels use `allocations` count instead of `allocated` amount.
- Change `AllocationQueue::mutex` from `recursive_mutex` to `std::mutex`.
- Remove `isInSchedulerOrStopped` guards from `increaseAllocation` and `decreaseAllocation` (no longer needed without re-entry).
- Fix `FairAllocation` and `PrecedenceAllocation` destructors to properly detach children (was `= default`, caused `!parent` assertion).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix typo "reqeuested" -> "requested" in `ResourceAllocation.h`
- Fix brace style in `ITimeSharedNode.h`, `WorkloadSettings.cpp`, `gtest_workload_resource_manager.cpp` (Allman style)
- Return false for non-memory units in `WorkloadSettings::hasAllocationLimit`
- Catch `MEMORY_RESERVATION_KILLED` specifically in space-shared eviction test
- Replace `std::ostringstream` with `fmt::format` in `ISpaceSharedNode::Update::toString`
- Poll `system.processes` instead of `sleep` in integration test
- Fix `demand_increment` metric leak in `allocationFailed`: store the enqueued demand amount in a dedicated field so the exact value is subtracted on failure, instead of recomputing from potentially stale `actual_size - allocated_size`

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… workload-memory-scheduling
- Change `ISchedulerNode::getTypeName` return type from `const String &` to `std::string_view`, eliminating static String objects in all 14 overrides across time-shared and space-shared nodes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```cpp
        return;
    }
    actual_size = 0;
}
```
MemoryReservation::~MemoryReservation sets actual_size = 0 under mutex, unlocks, and only then calls queue.removeAllocation. In that unlocked window, another query thread can enter syncWithMemoryTracker, recompute new_actual_size from MemoryTracker, and write a non-zero actual_size again.
This creates a race in destruction/teardown and can enqueue a stale increase/decrease request while removal is in progress, which may corrupt reservation accounting.
Please guard syncWithMemoryTracker from running after destruction starts (for example by setting a being_destroyed flag under mutex in the destructor and returning early in syncWithMemoryTracker), or otherwise keep teardown and actual_size transitions serialized until removeAllocation is fully committed.
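A minimal, self-contained sketch of the suggested guard (the names `being_destroyed`, `syncWithMemoryTracker`, and `startTeardown` mirror the comment above but are assumptions, not the actual `MemoryReservation` code):

```cpp
#include <mutex>

// Simplified model of the suggested fix: once teardown starts, no later
// syncWithMemoryTracker call may resurrect a non-zero actual_size.
struct Reservation
{
    std::mutex mutex;
    bool being_destroyed = false;
    long actual_size = 0;

    // Called from query threads between processor executions.
    // Returns false if the reservation is already being torn down.
    bool syncWithMemoryTracker(long new_actual_size)
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (being_destroyed)
            return false;
        actual_size = new_actual_size;
        return true;
    }

    // First step of the destructor: mark teardown under the mutex, so the
    // unlocked window before removeAllocation cannot be re-entered.
    void startTeardown()
    {
        std::lock_guard<std::mutex> lock(mutex);
        being_destroyed = true;
        actual_size = 0;
    }
};
```

With this, `queue.removeAllocation` can run in the unlocked window without racing a concurrent sync that would write a stale non-zero `actual_size`.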
…ption In `onCancelOrConnectionLoss` and `onException`, `releaseWorkloadResources` was called before `resetPipeline`. This destroys the `MemoryReservation` while pipeline threads still hold raw pointers to it (stored in `PipelineExecutor::WorkloadResources`) and may call `syncWithMemoryTracker` between processor executions, leading to use-after-free. The normal finish path (`onFinish`) is not affected because the pipeline has already completed execution by that point. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```cpp
allocation.allocationFailed(std::make_exception_ptr(
    Exception(ErrorCodes::INVALID_SCHEDULER_NODE,
        "Queue for pending allocation is about to be destructed")));
```
AllocationQueue::purgeQueue fails only pending_allocations, then clears removing_allocations without notifying the allocation objects.
A MemoryReservation destructor that already called queue.removeAllocation waits on cv.wait(... removed || fail_reason ...), and when its node is dropped from removing_allocations here, neither removed nor fail_reason is guaranteed to be set. This can hang teardown indefinitely under queue purge/detach races.
Please fail (or complete) all entries in removing_allocations before clearing the container, similarly to pending_allocations, so waiting destructors are always released.
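A sketch of the suggested change (a simplified model with assumed names, not the actual `AllocationQueue` code): fail entries in `removing_allocations` the same way as `pending_allocations` before clearing, so waiting destructors observe a `fail_reason` and wake up:

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified model: an allocation's destructor waits until either `removed`
// or `fail_reason` is set; purgeQueue must guarantee one of the two.
struct Allocation
{
    bool removed = false;
    std::string fail_reason;
    void allocationFailed(const std::string & reason) { fail_reason = reason; }
};

struct AllocationQueue
{
    std::vector<std::shared_ptr<Allocation>> pending_allocations;
    std::vector<std::shared_ptr<Allocation>> removing_allocations;

    void purgeQueue()
    {
        for (const auto & a : pending_allocations)
            a->allocationFailed("Queue for pending allocation is about to be destructed");
        pending_allocations.clear();

        // The suggested fix: release waiters on removal entries too,
        // instead of silently dropping them from the container.
        for (const auto & a : removing_allocations)
            a->allocationFailed("Queue destructed while allocation removal was in progress");
        removing_allocations.clear();
    }
};
```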
Exercises the `onCancelOrConnectionLoss` / `onException` code path by starting queries that allocate memory (triggering `syncWithMemoryTracker` in pipeline threads) and killing them mid-execution. Repeated 10 times to increase the chance of catching teardown ordering issues under TSan. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Introduce a memory reservation feature for workloads. More details: https://clickhouse.com/docs/operations/workload-scheduling
Documentation entry for user-facing changes
Details
To enable memory reservations for workloads, create a MEMORY RESERVATION resource and set at least one limit for the total memory reserved using workload settings:
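The concrete DDL is not included in this excerpt. A minimal sketch, assuming the syntax follows the existing workload scheduling DDL (the `MEMORY RESERVATION` resource kind and the `max_memory` setting name are assumptions):

```sql
-- Hedged sketch; exact syntax may differ from the final implementation.
CREATE RESOURCE memory (MEMORY RESERVATION);
CREATE WORKLOAD all SETTINGS max_memory = 17179869184;  -- 16 GiB total reserved memory
CREATE WORKLOAD production IN all;
```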
ClickHouse tracks memory allocations of all queries and background activities. The number of allocated bytes is aggregated through the scheduling hierarchy up to the root. Every query has an associated allocation in the leaf workload it belongs to. If a query has the `reserve_memory` setting greater than zero, the allocation is created in a pending state. A pending allocation reserves the requested amount of memory in the workload hierarchy. If there is not enough memory available, the allocation remains pending until enough memory is freed or other allocations are evicted (killed). When an allocation is admitted, it becomes running. A running allocation can increase or decrease its size dynamically according to the memory consumption of the query. The allocation life-cycle can be depicted with the following state diagram:

```mermaid
stateDiagram-v2
    [*] --> Pending: init [reserve_memory > 0]
    [*] --> Running: init [reserve_memory == 0]
    Pending --> Running: admit
    state Running {
        %% Region 1: increase flow
        NotIncreasing --> Increasing: request
        Increasing --> NotIncreasing: approve
        --
        %% Region 2: decrease flow
        NotDecreasing --> Decreasing: request
        Decreasing --> NotDecreasing: approve
    }
    Running --> Killed: evict
    Running --> Released: finish
```

Pending allocations of a leaf workload are admitted in FIFO order. When multiple workloads have pending allocations, they are admitted according to precedence and weight settings. Higher precedence workloads are served first. Sibling workloads with the same precedence share memory according to weights in a max-min fair manner: the workload with the lower normalized memory usage (current usage plus requested increase, divided by weight) is served first. The reverse logic applies during eviction: when memory needs to be freed, workloads with lower precedence and higher normalized memory usage are evicted first.
Note that time-shared resources use priority, while space-shared resources use precedence. They are independent settings and can be set to different values. Higher priority implies non-destructive preemption (delays or throttling), while higher precedence may imply destructive eviction (the query stops with an error). A workload could have a high priority for CPU scheduling but the same precedence as other workloads for memory reservation, to avoid evicting them and losing work they have already done.
Every workload with a `max_memory` limit ensures that the total memory allocated in its subtree does not exceed the limit. If a pending or increasing allocation would exceed the limit, an eviction procedure is initiated to free memory. The eviction procedure selects a victim to be killed. The least common ancestor workload of the killer and the victim prevents eviction in certain situations. If eviction is prevented or does not free enough memory, the new allocation is blocked until enough memory is freed. These rules allow queueing of excess queries based on memory pressure and provide a convenient way to avoid MEMORY_LIMIT_EXCEEDED errors.
NOTE: Workload limits are independent from other ways to limit memory consumption, such as the `max_memory_usage` query setting. They can be used together to achieve better control over memory consumption. It is also possible to set independent memory limits per user (not per workload); this is less flexible and does not provide features like memory reservation and queueing of pending queries. See Memory overcommit.

The workload setting `max_waiting_queries` limits the number of pending allocations for the workload. When the limit is reached, the server returns a `SERVER_OVERLOADED` error.

Memory reservation scheduling is not supported for merges and mutations yet.
Only queries with the `reserve_memory` setting greater than zero are subject to blocking while waiting for a memory reservation. However, queries with zero `reserve_memory` are also accounted for in their workload's memory footprint, and they can be evicted if necessary to free memory for other pending or increasing allocations. Queries without proper workload markup are not subject to memory reservation scheduling and cannot be evicted by the scheduler.

To provide a non-elastic memory reservation for a query, set both the `reserve_memory` and `max_memory_usage` query settings to the same value. In this case, the query will reserve a fixed amount of memory and will not be able to increase its allocation dynamically.

Let's consider an example configuration:
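The configuration itself is omitted from this excerpt; the following is a hedged reconstruction from the description below. The resource syntax and setting names (`max_memory`, `weight`, `precedence`) are assumptions and may differ from the final implementation:

```sql
-- Hypothetical reconstruction; not the confirmed syntax.
CREATE RESOURCE memory (MEMORY RESERVATION);
CREATE WORKLOAD all SETTINGS max_memory = 10737418240;  -- 10 GiB total
CREATE WORKLOAD system IN all SETTINGS weight = 1;      -- guaranteed ~1 GiB (10%)
CREATE WORKLOAD user IN all SETTINGS weight = 9;        -- guaranteed ~9 GiB (90%)
CREATE WORKLOAD production IN user SETTINGS weight = 3, precedence = 1;
CREATE WORKLOAD staging IN user SETTINGS weight = 1, precedence = 1;
CREATE WORKLOAD testing IN user SETTINGS precedence = 2;  -- evicted first under pressure
```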
In this example, the total memory reserved by all queries and background activities cannot exceed 10 GiB. The system workload has a guarantee of at least 1 GiB (10% of 10 GiB), while the user workload has a guarantee of at least 9 GiB (90% of 10 GiB). Inside the user workload, the production and staging workloads share memory according to weights (3 to 1) with an equal precedence of 1. The testing workload has precedence 2, which is lower than that of production and staging. Therefore, the testing workload can only use memory that is not used by production and staging.
If memory pressure arises, testing workload allocations will be evicted first. Then, if more memory needs to be freed, staging workload allocations will be evicted before production workload allocations if they exceed their guarantees. Note that pending queries in production and staging can evict running allocations in testing workload to free memory, but they cannot evict each other because they have the same precedence. In case of memory pressure, they will wait in queues, which allows the system to avoid MEMORY_LIMIT_EXCEEDED errors due to too many concurrently executing queries.
Note that the system workload has precedence 0 (the default), which is higher than that of the production, staging and testing workloads, but they are not sibling workloads. The least common ancestor is the workload all, both children of which have equal precedence. So a pending allocation in the system workload cannot evict any of them, and vice versa. This ensures that system activities cannot easily be evicted.