colflow: release disk resources in hash router in all cases #81491
craig[bot] merged 1 commit into cockroachdb:master from
Conversation
f8f49b0 to 071298f
Previously, it was possible for the disk-backed spilling queue used by the hash router outputs to not be closed when the hash router exited. Namely, this could occur if the router output was not fully exhausted (i.e. it could still produce more batches, but the consumer of the router output was satisfied and called `DrainMeta`). In such a scenario, `routerOutput.closeLocked` was never called because a zero-length batch was never given to `addBatch`, nor was the output canceled due to an error. The flow cleanup also didn't save us because the router outputs are not added into the `ToClose` slice.

The bug is now fixed by closing the router output in `DrainMeta`. This behavior is acceptable because the caller is not interested in any more data, and closing the output can be done multiple times (it is a no-op on all calls except the first one). There is no regression test since it's quite tricky to come up with one given that the behavior of router outputs is non-deterministic, and I don't think it's worth introducing special knobs inside of `DrainMeta` / `Next` for this.

The impact of not closing the spilling queue is that it might leak a file descriptor until the node restarts. Although the temporary directory is deleted on flow cleanup, the bug would also result in a disk-space leak that is likewise "fixed" only by a node restart.

Release note: None
michae2
left a comment
Maybe this is the wrong place to ask, but why does the hash router have a disk-backed spilling queue? Naively, it seems like a router should have some kind of backpressure on inputs rather than spilling to disk...
Is it because rows destined for different receivers are mixed together in the batch? So we're trying to prevent a single slow receiver from stopping all sending?
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @cucaroach)
Yes, this is the reason. Imagine we have an input to the hash router that can quickly produce batches, and, say, two outputs with one of the consumers being extremely slow. If each output gets half of the rows in each batch, then eventually the buffer for the slow consumer will fill up. At that point we have to make a choice: either block the input from producing more batches altogether (which means that the fast consumer is also blocked) until the slow consumer catches up with the buffer, or continue buffering more rows on disk for the slow consumer (meaning that the fast consumer is not impacted). We choose the latter option since it seems like the faster approach overall (imagine that the fast consumer satisfies a LIMIT clause, so we won't have to wait for the slow consumer to consume the buffer at all). Also, these buffers are relatively large (namely about

TFTR! bors r+
Makes sense, thank you!

Build succeeded:
Fixes: #81490.
Release note: None