
fix: use Dispatchers.IO for EventHandler coroutines#516

Merged
torlando-tech merged 1 commit into main from fix/message-coroutine-dispatcher
Feb 21, 2026

Conversation

@torlando-tech
Owner

Summary

  • EventHandler's scope.launch calls inherited Dispatchers.Default from the service scope, which is limited to max(2, numCPU) threads
  • During heavy operations (e.g., large file propagation), all Default threads could become saturated, so message processing coroutines would queue indefinitely without executing; messages were received by Python but never persisted or rendered
  • Switches all three scope.launch calls in EventHandler to Dispatchers.IO (which scales up to 64 threads by default), the appropriate dispatcher for blocking database I/O and Python wrapper calls
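The change described above can be sketched as follows. This is an illustrative reconstruction, not the actual file: the class shape, `handled` queue, and the commented-out `persistenceManager`/`wrapperManager` calls are assumptions standing in for the real database and Python work.

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentLinkedQueue

// Illustrative sketch of the fix (member names are assumed, not copied
// from EventHandler.kt).
class EventHandler(private val scope: CoroutineScope) {
    val handled = ConcurrentLinkedQueue<String>()

    fun handleMessageReceivedEvent(event: String): Job =
        // Before the fix: scope.launch { ... } inherited Dispatchers.Default
        // from the service scope. After: Dispatchers.IO is passed explicitly,
        // so blocking work cannot starve the small Default pool.
        scope.launch(Dispatchers.IO) {
            // persistenceManager.saveMessage(event)  // blocking Room I/O (assumed)
            // wrapperManager.notifyReceived(event)   // blocking Python call (assumed)
            handled.add(event)
        }
}

fun demo(): List<String> = runBlocking {
    val handler = EventHandler(this)
    (1..3).map { handler.handleMessageReceivedEvent("msg-$it") }.joinAll()
    handler.handled.toList().sorted()
}

fun main() {
    println(demo())
}
```

Passing the dispatcher as the first argument to `launch` overrides whatever dispatcher the enclosing scope carries, which is the whole fix here.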

Test plan

  • Send messages between two devices while one is performing a heavy background operation (e.g., large file propagation)
  • Verify messages are persisted and rendered on the receiving device
  • Check logcat for "Message coroutine started on DefaultDispatcher-worker-..." confirming the coroutine body executes (Dispatchers.IO shares its thread pool with Default, so IO threads also carry the DefaultDispatcher-worker name)

🤖 Generated with Claude Code

@greptile-apps
Contributor

greptile-apps bot commented Feb 21, 2026

Greptile Summary

Fixes a critical thread starvation issue by switching EventHandler coroutines from Dispatchers.Default (limited to max(2, numCPU) threads) to Dispatchers.IO (scales up to 64 threads). The change affects three scope.launch calls that perform database I/O and Python wrapper operations.

The changes are:

  • startEventHandling(): draining pending announces at startup
  • drainPendingMessages(): draining pending messages at startup
  • handleMessageReceivedEvent(): processing incoming message events

All three operations involve database persistence (persistenceManager) and Python wrapper calls (wrapperManager), making Dispatchers.IO the appropriate dispatcher choice. This prevents message processing from being blocked during heavy operations like large file propagation.
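The starvation mechanism described above can be demonstrated in isolation. The snippet below is a standalone illustration (not code from the PR): it fills every Dispatchers.Default worker with blocking work, simulating heavy file propagation, then shows that a Dispatchers.IO coroutine still gets a thread promptly.

```kotlin
import kotlinx.coroutines.*

// Saturate Dispatchers.Default with blocking tasks, then measure how quickly
// a Dispatchers.IO coroutine starts versus how long the Default pool stays busy.
fun demo(): Pair<Long, Long> = runBlocking {
    val workers = maxOf(2, Runtime.getRuntime().availableProcessors())
    val start = System.nanoTime()
    // Twice as many blocking tasks as Default threads: the pool is saturated
    // for roughly two sleep periods.
    val hogs = (1..workers * 2).map {
        launch(Dispatchers.Default) { Thread.sleep(300) }
    }
    val ioDone = CompletableDeferred<Long>()
    launch(Dispatchers.IO) { ioDone.complete(System.nanoTime() - start) }
    val ioMs = ioDone.await() / 1_000_000
    hogs.joinAll()
    val totalMs = (System.nanoTime() - start) / 1_000_000
    ioMs to totalMs
}

fun main() {
    val (ioMs, totalMs) = demo()
    println("IO coroutine started after ${ioMs}ms; Default pool busy for ${totalMs}ms")
}
```

A coroutine launched on Dispatchers.Default instead of Dispatchers.IO in this situation would wait for the entire backlog, which is exactly what happened to the message handlers.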

Confidence Score: 5/5

  • This PR is safe to merge with no risk
  • The change correctly addresses a real concurrency issue by switching from Dispatchers.Default to Dispatchers.IO for I/O-bound operations. The affected operations perform database persistence and Python wrapper calls, which are blocking I/O operations that should use the IO dispatcher. The change follows Kotlin coroutine best practices and is consistent with similar code in the codebase (e.g., MessageCollector, ConversationLinkManager, and multiple withContext(Dispatchers.IO) calls in ReticulumServiceBinder).
  • No files require special attention
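The review notes that the codebase already uses `withContext(Dispatchers.IO)` elsewhere (e.g., ReticulumServiceBinder). For reference, that companion pattern looks like the sketch below; the function name and body are illustrative, not taken from the repository.

```kotlin
import kotlinx.coroutines.*

// Hypothetical suspend function wrapping only the blocking section in the
// IO dispatcher, the pattern used by the withContext(Dispatchers.IO) call
// sites the review mentions.
suspend fun persistMessage(body: String): String =
    withContext(Dispatchers.IO) {
        // A Room DAO insert or a Python wrapper call would go here.
        "persisted:$body"
    }

fun main() = runBlocking {
    println(persistMessage("hello"))
}
```

`launch(Dispatchers.IO)` moves the whole coroutine to the IO pool, while `withContext(Dispatchers.IO)` shifts only a section of a suspend function; both keep blocking work off the Default dispatcher.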

Important Files Changed

Filename: app/src/main/java/com/lxmf/messenger/service/manager/EventHandler.kt
Overview: Changes scope.launch to use Dispatchers.IO for database/Python operations, preventing thread starvation. Also adds debug logging.

Last reviewed commit: 3809cd4

@sentry
Contributor

sentry bot commented Feb 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


… starvation

EventHandler's scope.launch inherited Dispatchers.Default from the service
scope, which has only max(2, numCPU) threads. During heavy operations like
large file propagation, all Default threads could become saturated, causing
message processing coroutines to queue indefinitely without executing.

Switch to Dispatchers.IO (scales to 64 threads) for all EventHandler
coroutines that perform database I/O via Room and Python wrapper calls.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@torlando-tech force-pushed the fix/message-coroutine-dispatcher branch from 3809cd4 to dd6d259 on February 21, 2026 05:40
@torlando-tech merged commit 5f89ec8 into main on Feb 21, 2026
13 checks passed
@torlando-tech deleted the fix/message-coroutine-dispatcher branch on February 21, 2026 06:00