
rabbit_db: Fix handling of async m2k puts #15480

Merged
dumbbell merged 1 commit into main from optimize-rabbit_db_m2k_converter
Feb 20, 2026
Conversation

@dumbbell
Collaborator

Why

Several `rabbit_db_*_m2k_converter` modules use async puts to Khepri to parallelize writes and make them faster. This is handled by the `rabbit_db_m2k_converter:with_correlation_id/2` function.

Ra will send back a Ra event message for each async write to reply with the result of the write. The problem is that the code behind `with_correlation_id/2` has a logic issue: it only consumes 1 Ra event every 64 async writes. It will consume all of them at the end of the overall conversion, but meanwhile, messages will accumulate. If there are a lot of Mnesia records to copy, this can fill the process mailbox and slow it down significantly.

How

The logic is changed to regularly consume all messages in the mailbox with a timeout of 0. Only at the end are remaining messages consumed with an infinity timeout, because we know how many messages we are expecting (the actual timeout is handled by the write itself).
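The two-phase drain described above can be sketched as follows. This is a hypothetical illustration, not the actual `rabbit_db_m2k_converter` code: the module name, function names, and the exact shape of the pending counter are assumptions; only the `{ra_event, _, {applied, Correlations}}` message shape follows Ra's documented reply format for pipelined commands.

```erlang
%% Sketch of the fixed receive pattern (hypothetical module; the real
%% implementation in rabbit_db_m2k_converter differs in detail).
-module(m2k_drain_sketch).
-export([drain_nonblocking/1, drain_remaining/1]).

%% Called regularly after submitting async puts: consume every Ra event
%% already sitting in the mailbox, but never block (timeout 0). Each
%% applied event may acknowledge several correlated writes at once.
drain_nonblocking(Pending) ->
    receive
        {ra_event, _Leader, {applied, Correlations}} ->
            drain_nonblocking(Pending - length(Correlations))
    after 0 ->
        Pending
    end.

%% Called once at the very end: we know exactly how many replies are
%% still outstanding, so block until all of them have arrived. The
%% actual timeout is enforced by the writes themselves, hence no
%% `after` clause here.
drain_remaining(0) ->
    ok;
drain_remaining(Pending) when Pending > 0 ->
    receive
        {ra_event, _Leader, {applied, Correlations}} ->
            drain_remaining(Pending - length(Correlations))
    end.
```

Because the non-blocking drain runs after every batch of puts, the mailbox stays small throughout the copy instead of accumulating tens of thousands of Ra events that are only consumed at the end.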

@dumbbell dumbbell requested a review from mkuratczyk February 16, 2026 13:36
@dumbbell dumbbell self-assigned this Feb 16, 2026
@dumbbell dumbbell changed the title rabbit_db: Optimize handling of async m2k puts rabbit_db: Fix handling of async m2k puts Feb 17, 2026
@dumbbell dumbbell force-pushed the optimize-rabbit_db_m2k_converter branch from 84423db to 48df4d4 Compare February 17, 2026 22:27
Contributor

@mkuratczyk mkuratczyk left a comment


thanks! I see a significant speed up

@dumbbell dumbbell marked this pull request as ready for review February 20, 2026 10:06
[Why]
Several `rabbit_db_*_m2k_converter` modules use async puts to Khepri to
parallelize writes and make them faster. This is handled by the
`rabbit_db_m2k_converter:with_correlation_id/2` function.

Ra will send back a Ra event message for each async write to reply with
the result of the write. The problem is that the code behind
`with_correlation_id/2` has a logic issue: it only consumes 1 Ra event
every 64 async writes. It will consume all of them at the end of the
overall conversion, but meanwhile, messages will accumulate. If there
are a lot of Mnesia records to copy, this can fill the process mailbox
and slow it down significantly.

[How]
The logic is changed to regularly consume all messages in the mailbox
with a timeout of 0. Only at the end are remaining messages consumed
with an infinity timeout, because we know how many messages we are
expecting (the actual timeout is handled by the write itself).

While here, delete the debug log messages logged for each async put.
With a significant number of puts, this noticeably slows down the
process.
@dumbbell dumbbell force-pushed the optimize-rabbit_db_m2k_converter branch from 48df4d4 to c6415de Compare February 20, 2026 10:13
@dumbbell dumbbell merged commit 1e3097a into main Feb 20, 2026
576 of 577 checks passed
@dumbbell dumbbell deleted the optimize-rabbit_db_m2k_converter branch February 20, 2026 10:43
dumbbell added a commit that referenced this pull request Feb 20, 2026
rabbit_db: Fix handling of async m2k puts (backport #15480)
@dumbbell dumbbell added this to the 4.3.0 milestone Feb 20, 2026