Federation: parallel shutdown; disconnect links before stopping (#15271)
Merged: michaelklishin merged 10 commits into main (Jan 16, 2026)

Conversation
Without it, the new keys (or rather, their defaults) will spill into the `config_schema_SUITE`s of other plugins.
michaelklishin added a commit that referenced this pull request on Jan 15, 2026
We implement the `revive/0` part for symmetry. As with the revive command in general, it serves as a last resort available for rollback. Usually nodes put into maintenance mode are shortly stopped for upgrading or reconfiguration.
Force-pushed from 9f491c0 to 283aa0e
Previously, the following three supervisors used the wrong `shutdown` and the wrong `type`:

* rabbit_exchange_federation_sup
* rabbit_federation_sup
* rabbit_queue_federation_sup

For `shutdown`, Erlang/OTP recommends: "If the child process is another supervisor, the shutdown time must be set to infinity to give the subtree ample time to shut down. Setting the shutdown time to anything other than infinity for a child of type supervisor can cause a race condition where the child in question unlinks its own children, but fails to terminate them before it is killed."

For `type`, Erlang/OTP recommends: "type specifies if the child process is a supervisor or a worker. The type key is optional. If it is not specified, it defaults to worker."

This commit fixes the wrong child spec by using a timeout of `infinity` and type `supervisor`.

(cherry picked from commit cfcf6cf)
## What?

Federation links started in the federation plugins are put under the `rabbit` app supervision tree (unfortunately). This commit ensures that the entire federation supervision hierarchies (including all federation links) are stopped **before** stopping app `rabbit` when stopping RabbitMQ.

## Why?

Previously, we've seen cases where hundreds of federation links are stopped during the shutdown procedure in app `rabbit`, leading to federation link restarts happening in parallel to vhosts being stopped. In one case, the shutdown of app `rabbit` even got stuck (although there is no evidence that federation was the problem). Either way, the cleaner approach is to gracefully stop all federation links, i.e. the entire supervision hierarchy under `rabbit_exchange_federation_sup` and `rabbit_queue_federation_sup`, when stopping the federation apps, i.e. **before** proceeding to stop app `rabbit`.

## How?

The boot step cleanup steps for the federation plugins are skipped when stopping RabbitMQ. Hence, this commit ensures that the supervisors are stopped in the `stop/1` application callback. This commit does something similar to #14054 but uses a simpler approach.

(cherry picked from commit 8bffa58)
This is needed now that the core interacts with a part of the supervision tree owned by this plugin for more efficient shutdown.
ansd (Member) reviewed on Jan 16, 2026 and left a comment:
```shell
make run-broker PLUGINS="rabbitmq_exchange_federation"
./sbin/rabbitmqctl set_parameter federation-upstream origin '{"uri":"amqp://localhost:5672"}'
./sbin/rabbitmqctl set_policy exchange-federation "^amq.direct" '{"federation-upstream-set":"all"}' --priority 10 --apply-to exchanges
./sbin/rabbitmq-upgrade drain
```

This will print the following warning every 5 seconds:

```
2026-01-16 11:05:04.797918+01:00 [warning] <0.1080.0> Federation exchange 'amq.direct' in vhost '/' did not connect to exchange 'amq.direct' in vhost '/' on amqp://localhost:5672. Reason: {error,econnrefused}
```
With 5k federation links, this will result in about 1k warnings being logged per second while RabbitMQ is in maintenance mode.
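The rate follows directly from the numbers in the comment: each link logs one such warning every 5 seconds, so 5k links produce about 1k log lines per second. A trivial sanity check (Python, purely illustrative):

```python
# Each federation link retries (and warns) once every 5 seconds.
LINKS = 5_000
WARNING_INTERVAL_S = 5

warnings_per_second = LINKS / WARNING_INTERVAL_S
print(warnings_per_second)  # 1000.0
```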
ansd reviewed on Jan 16, 2026: deps/rabbitmq_exchange_federation/src/rabbit_federation_exchange_link.erl (outdated; resolved)
ansd reviewed on Jan 16, 2026: deps/rabbitmq_exchange_federation/src/rabbit_federation_exchange_link.erl (outdated; resolved)
ansd reviewed on Jan 16, 2026

ansd reviewed on Jan 16, 2026: deps/rabbitmq_exchange_federation/src/rabbit_federation_exchange_link.erl (outdated; resolved)
mergify bot pushed a commit that referenced this pull request on Jan 16, 2026 (cherry picked from commit a7a2cac)
mergify bot pushed a commit that referenced this pull request on Jan 16, 2026 (cherry picked from commit 283aa0e)
mergify bot pushed a commit that referenced this pull request on Jan 16, 2026 (cherry picked from commit 1721af8)
mergify bot pushed a commit that referenced this pull request on Jan 16, 2026 (cherry picked from commit 19bb842)
mergify bot pushed a commit that referenced this pull request on Jan 16, 2026 (cherry picked from commit 59e9f7a)
michaelklishin added a commit that referenced this pull request on Jan 16, 2026: Federation: disconnect links before stopping, in parallel (backport #15271)
michaelklishin added a commit that referenced this pull request on Jan 16, 2026: This is a backport of #15271 to v4.1.x with its single federation plugin repository structure.
michaelklishin added several commits that referenced this pull request on Feb 24, 2026

Technical design pair: @ansd.

## Proposed Changes

This PR parallelizes the most expensive part of federation link shutdown, closing AMQP 0-9-1 connections to the upstream, by notifying links in the `prep_stop` application callback. This yields very significant efficiency gains with hundreds or thousands of links, all without changing the supervision tree structure.
### Why Not Use `simple_one_for_one`?

Indeed, the `simple_one_for_one` OTP supervisor restart strategy would shut down all child processes concurrently for us. But that would require changing the child identity (key) to an Erlang PID, which would require intrusive and painful-to-test changes (such as an ETS table that maps PIDs to the current identities and the other way around).
### Throttling to Avoid Overwhelming the Upstream

To avoid overwhelming the upstream schema data store (which could be a 7-to-9-node cluster on 3.x with Mnesia), we limit the degree of parallelism and add batching with configurable throttling delays to the process.

By default we close up to 128 links per batch, with a 50 ms delay between batches, and a 180 second hard cap (timeout) for the entire link termination operation. The entire link shutdown process is therefore capped at 180 seconds by default, and should not meaningfully exceed that time period even on nodes with many thousands of links.
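The batching scheme described above can be sketched in language-neutral form (Python here for readability; the actual implementation lives in the Erlang federation plugins). The function names and the `close` callback are illustrative, not the plugin's real API; only the default knobs (128 links per batch, 50 ms delay, 180 s hard cap) come from this PR:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

# Defaults taken from the PR description; everything else is illustrative.
BATCH_SIZE = 128      # links closed concurrently per batch
BATCH_DELAY_S = 0.05  # 50 ms throttling delay between batches
TOTAL_CAP_S = 180.0   # hard cap for the entire termination operation

def shutdown_links(links, close, batch_size=BATCH_SIZE,
                   delay_s=BATCH_DELAY_S, cap_s=TOTAL_CAP_S):
    """Close `links` in throttled parallel batches.

    Returns the number of links whose closure was attempted before
    the overall time cap was reached.
    """
    deadline = time.monotonic() + cap_s
    attempted = 0
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        for start in range(0, len(links), batch_size):
            if time.monotonic() >= deadline:
                break  # hard cap reached: stop initiating closures
            batch = links[start:start + batch_size]
            # Close the whole batch in parallel, bounded by the remaining budget.
            futures = [pool.submit(close, link) for link in batch]
            wait(futures, timeout=max(0.0, deadline - time.monotonic()))
            attempted += len(batch)
            time.sleep(delay_s)  # throttle before starting the next batch
    return attempted

if __name__ == "__main__":
    closed = []
    n = shutdown_links(list(range(300)), closed.append)
    print(n)  # 300: three batches of up to 128 links each
```

The hard cap bounds the whole operation rather than each link, which is what keeps shutdown time roughly flat even with many thousands of links.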
### Data Safety Considerations

Federation uses publisher confirms by default, and most users never change that, therefore aggressive connection closures are safe and acceptable.

In addition, the user can set `resource-cleanup-mode` to `never` to make sure that the upstream resources (e.g. internal queues used by exchange federation) are never deleted by the links running in the downstream cluster.
### Show Me The Benchmark Data

#### Microbenchmarks (Supervisor Child Process Termination)

Below are some microbenchmarks that measure everything beyond the actual AMQP 0-9-1 connection termination part, on an 8-core aarch64 CPU from 2022:
#### Worst Case Scenario Calculations

If we consider the worst case scenario where every link connection hits its timeout, 1K links would take about 83 minutes to shut down with the sequential (status quo) version and 5.6 seconds (see below) with these changes.
#### Real World Federation Links with Outgoing Connections

With a throttling delay of 0, the time it takes to shut down N links to a remote upstream cluster looks like this:
### Maintenance Mode Integration

Maintenance mode integration of these changes needs to be done with care: since maintenance mode stops all client connection listeners, we run the risk of stopping the listeners before this part of the federation shutdown has a chance to do its job as designed. For that reason, we have to special-case the federation plugins in the core and first trigger their termination, then stop the listeners.

When the node is revived (that is, when maintenance mode is rolled back), all links are restarted.
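The required ordering can be illustrated with a tiny sketch (Python for illustration only; the function names are hypothetical, not RabbitMQ's actual maintenance-mode API):

```python
# Hypothetical sketch of the drain/revive ordering described above.
events = []

def stop_federation_links():
    # Special-cased to run before listeners are stopped, so the
    # parallel link shutdown can do its job as designed.
    events.append("federation_links_stopped")

def stop_client_listeners():
    events.append("listeners_stopped")

def drain():
    stop_federation_links()  # federation termination is triggered first
    stop_client_listeners()  # only then are the listeners stopped

def revive():
    events.append("links_restarted")  # rollback: all links come back

drain()
revive()
print(events)
# ['federation_links_stopped', 'listeners_stopped', 'links_restarted']
```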