
Check exclusive queue owner before deleting a queue (backport #15276)#15286

Merged
michaelklishin merged 3 commits into v4.2.x from mergify/bp/v4.2.x/pr-15276
Jan 16, 2026

Conversation


@mergify mergify bot commented Jan 16, 2026

[Why]
For a long time, there has been a race condition when deleting exclusive queues: if a connection was re-established and a queue with the same name was declared, we could delete the new queue.

For example, with many MQTT consumers, if we performed a rolling restart of the cluster and the clients reconnected without any delay, after the restart, we sometimes had the expected number of connections but a lower number of queues, even though there should be a queue for each consumer.

[How]
Check that the exclusive_owner has the value we expect when requesting deletion. If the value is different, the queue is effectively a different one (same name, but a different connection), so we should not delete it.
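The guard described above can be sketched in a few lines. This is an illustrative model, not RabbitMQ's actual Erlang implementation; the names `QueueRegistry` and `delete_if_owner` are hypothetical.

```python
# Illustrative sketch of the owner-check-before-delete pattern.
# A registry maps queue names to the connection that exclusively owns them;
# deletion succeeds only if the caller's connection still matches.

class QueueRegistry:
    def __init__(self):
        self._queues = {}  # queue name -> owning connection id

    def declare(self, name, owner):
        # Re-declaring under a new connection replaces the owner,
        # modelling a client that reconnected and recreated its queue.
        self._queues[name] = owner

    def delete_if_owner(self, name, expected_owner):
        # Only delete when the recorded exclusive owner matches the
        # connection on whose behalf the deletion was requested.
        if self._queues.get(name) == expected_owner:
            del self._queues[name]
            return True
        return False  # a different connection owns this queue now

registry = QueueRegistry()
registry.declare("mqtt-subscription-client1", owner="conn-1")

# The client reconnects before the stale cleanup for conn-1 runs:
registry.declare("mqtt-subscription-client1", owner="conn-2")

# The stale deletion must NOT remove the new connection's queue:
deleted = registry.delete_if_owner("mqtt-subscription-client1", "conn-1")
print(deleted)  # False: the queue survives for conn-2
```

Without the `expected_owner` comparison, the stale cleanup would silently delete the freshly declared queue, which is exactly the missing-queues symptom described in the [Why] section.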

[Testing]
Here's an example of how to test before/after:

  1. With MQTT QoS0 queue type:
make start-cluster RABBITMQ_ENABLED_PLUGINS="rabbitmq_management,rabbitmq_mqtt"
omq mqtt --uri mqtt://localhost:1883,mqtt://localhost:1884,mqtt://localhost:1885 -x 100 -y 100 -r 1
make restart-cluster RABBITMQ_ENABLED_PLUGINS="rabbitmq_management,rabbitmq_mqtt"
rabbitmqctl -n rabbit-1 list_queues | rg -c mqtt
  2. With classic queues:
make start-cluster RABBITMQ_ENABLED_PLUGINS="rabbitmq_management,rabbitmq_mqtt"
omq mqtt --uri mqtt://localhost:1883,mqtt://localhost:1884,mqtt://localhost:1885 -x 100 -y 100 -r 1 --mqtt-publisher-qos 1 --mqtt-consumer-qos 1
make restart-cluster RABBITMQ_ENABLED_PLUGINS="rabbitmq_management,rabbitmq_mqtt"
rabbitmqctl -n rabbit-1 list_queues | rg -c mqtt

In both cases, before this PR you will almost certainly see that once the nodes are restarted, the number of published messages no longer matches the number of consumed messages, and list_queues returns fewer than 100 queues. With this PR, the number of queues and the message flow should meet expectations.


This is an automatic backport of pull request #15276 done by Mergify.

mkuratczyk and others added 3 commits January 16, 2026 19:46
(cherry picked from commit 31ba23a)
(cherry picked from commit 8418f61)
(cherry picked from commit 49ab811)
@michaelklishin michaelklishin added this to the 4.2.3 milestone Jan 16, 2026
@michaelklishin michaelklishin merged commit c5daeb7 into v4.2.x Jan 16, 2026
291 checks passed
@michaelklishin michaelklishin deleted the mergify/bp/v4.2.x/pr-15276 branch January 16, 2026 20:50
michaelklishin added a commit that referenced this pull request Jan 17, 2026
Check exclusive queue owner before deleting a queue (backport #15276) (backport #15286)
