
fix(discord): use per-channel message queues to restore parallel agent dispatch #31927

Merged

steipete merged 2 commits into openclaw:main from Sid-Qin:fix/31530-parallel-channel-agent-dispatch on Mar 2, 2026

Conversation

@Sid-Qin Sid-Qin (Contributor) commented Mar 2, 2026

Summary

  • Problem: Since 2026.3.1, messages sent to different Discord channel-bound agents on the same account are processed sequentially instead of in parallel: Agent B waits for Agent A's full reply before it begins processing.
  • Why it matters: Users with 8+ channel-bound agents experience severe latency — every message blocks behind unrelated agents, effectively making the system single-threaded per Discord account.
  • What changed: Replaced the single per-account messageQueue Promise chain in DiscordMessageListener with per-channel queues (Map<channelId, Promise>). Messages within the same channel remain serialized to preserve ordering, while messages to different channels proceed independently.
  • What did NOT change: No changes to message routing, session resolution, command queue, or maxConcurrent behavior. No Discord API interaction changes.

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes openclaw#31530
User-visible / Behavior Changes

  • Channel-bound agents now process messages in parallel again (matching 2026.2.26 behavior)
  • Same-channel messages remain serialized (ordering preserved)

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No
  • New/changed network calls? No
  • Command/tool execution surface changed? No
  • Data access scope changed? No

Repro + Verification

Environment

  • OS: macOS Darwin 25.3.0 (arm64)
  • Runtime: Node v22
  • Integration/channel: Discord (8+ channel-bound agents)

Steps

  1. Configure 8+ agents with individual Discord channel bindings on the same account
  2. Send messages in rapid succession to 2+ different channels
  3. Observe that agents now begin processing immediately (in parallel), not sequentially

Expected

Agent B starts processing its message while Agent A is still replying to its own.

Actual (before fix)

Agent B waits for Agent A to fully complete before beginning, even though they are on different channels.

Evidence

Root cause: DiscordMessageListener used a single messageQueue Promise chain per account. All incoming messages — regardless of channel — were serialized through this single chain. This created head-of-line blocking across unrelated channels.

Before (single queue):

private messageQueue: Promise<void> = Promise.resolve();

async handle(data, client) {
  this.messageQueue = this.messageQueue
    .catch(() => {})
    .then(() => this.handler(data, client));
  // All messages wait behind each other
}

After (per-channel queues):

private channelQueues = new Map<string, Promise<void>>();

async handle(data, client) {
  const channelId = data.channel_id;
  const prev = this.channelQueues.get(channelId) ?? Promise.resolve();
  const next = prev
    .catch(() => {})
    .then(() => this.handler(data, client));
  this.channelQueues.set(channelId, next);
  // Same channel serialized, different channels parallel
}

Tests: 4/4 pass — including a new test that verifies different channels run in parallel while same-channel messages remain serialized.
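The property that new test checks can be sketched independently of the repository. The following self-contained model is illustrative only, not the actual test file: handle, handler, and channelQueues mirror the snippets above, while Msg, the message IDs, and the timings are invented for the demonstration. It verifies that a message on channel B starts before the second message on channel A, while channel A's own messages stay ordered.

```typescript
// Illustrative model only — not the repository's test file.
type Msg = { channel_id: string; id: string };

const channelQueues = new Map<string, Promise<void>>();
const startOrder: string[] = [];

// Stand-in for the real message handler: records when it starts, then idles.
async function handler(msg: Msg): Promise<void> {
  startOrder.push(msg.id);
  await new Promise<void>((resolve) => setTimeout(resolve, 10));
}

function handle(msg: Msg): Promise<void> {
  const prev = channelQueues.get(msg.channel_id) ?? Promise.resolve();
  const next = prev.catch(() => {}).then(() => handler(msg));
  channelQueues.set(msg.channel_id, next);
  return next;
}

async function main(): Promise<void> {
  // Two messages on channel A, one on channel B, dispatched back to back.
  await Promise.all([
    handle({ channel_id: "A", id: "a1" }),
    handle({ channel_id: "A", id: "a2" }),
    handle({ channel_id: "B", id: "b1" }),
  ]);
  // Cross-channel parallel: b1 starts while a1 is still "working",
  // but a2 waits for a1 because they share a channel.
  console.log(startOrder.indexOf("b1") < startOrder.indexOf("a2")); // true
}

const done = main();
```

With the old single per-account chain, b1 would have been queued behind both a1 and a2 and the logged comparison would be false.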

Human Verification (required)

  • Verified scenarios: same-channel serialization, cross-channel parallelism, error handling, memory cleanup of completed queue entries
  • Edge cases checked: thread channels (separate channel_id), DMs (separate channel_id), rapid bursts on same channel
  • What I did not verify: Live gateway with 8+ agents (no Discord bot token available)

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No

Failure Recovery (if this breaks)

  • How to disable/revert: Revert this single commit
  • Files/config to restore: None
  • Known bad symptoms: If per-channel queues cause issues, messages may interleave within a channel (would be detected by out-of-order replies)

Risks and Mitigations

  • Risk: Higher concurrent model/IO load during traffic bursts (8+ channels processing simultaneously). Mitigation: The downstream command queue with maxConcurrent already gates actual agent execution; the change only affects the preflight/routing phase.
  • Risk: Memory accumulation from per-channel Map entries. Mitigation: Completed queue entries are automatically cleaned up when no newer messages are pending for that channel.
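The first mitigation relies on a downstream concurrency gate. As a sketch of why burst load stays bounded, that gating can be modeled with a small semaphore; Semaphore and runGated are invented names for this illustration, since the repository's actual command queue and maxConcurrent implementation are not shown in this PR.

```typescript
// Illustrative sketch: a counting semaphore bounds concurrent execution
// even when many per-channel queues dispatch work at the same time.
class Semaphore {
  private waiters: Array<() => void> = [];
  constructor(private permits: number) {}

  async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next();
    else this.permits++;
  }
}

const gate = new Semaphore(2); // e.g. maxConcurrent = 2
let running = 0;
let peak = 0;

async function runGated(task: () => Promise<void>): Promise<void> {
  await gate.acquire();
  try {
    running++;
    peak = Math.max(peak, running);
    await task();
  } finally {
    running--;
    gate.release();
  }
}

async function demo(): Promise<void> {
  // Five "channels" dispatch at once; at most two tasks execute concurrently.
  await Promise.all(
    Array.from({ length: 5 }, () =>
      runGated(() => new Promise<void>((resolve) => setTimeout(resolve, 5))),
    ),
  );
  console.log(peak); // prints 2
}

const done = demo();
```

This is why the change is low-risk: per-channel queues only parallelize the preflight/routing phase, while actual agent execution remains gated behind a bound like this one.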

SidQin-cyber added 2 commits March 3, 2026 00:48
The native streaming path (chatStream) and preview final edit path
(chat.update) send raw Markdown text without converting to Slack
mrkdwn format. This causes **bold** to appear as literal asterisks
instead of rendered bold text.

Apply markdownToSlackMrkdwn() in streaming.ts (start/append/stop) and
in dispatch.ts (preview final edit via chat.update) to match the
non-streaming delivery path behavior.

Closes openclaw#31892
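For readers unfamiliar with the two formats, a minimal stand-in for the conversion this commit applies could look like the following. markdownToSlackMrkdwnSketch is a hypothetical reduction written for this example; the repository's real markdownToSlackMrkdwn() is not shown here and handles more than bold.

```typescript
// Hypothetical minimal sketch — Slack mrkdwn renders *single asterisks* as
// bold, so Markdown's **double asterisks** must be collapsed to one pair.
function markdownToSlackMrkdwnSketch(text: string): string {
  return text.replace(/\*\*(.+?)\*\*/g, "*$1*");
}

console.log(markdownToSlackMrkdwnSketch("This is **bold** text."));
// prints "This is *bold* text."
```

Without a conversion like this on the streaming path, Slack shows the literal asterisks, which is exactly the symptom the commit message describes.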
fix(discord): use per-channel message queues to restore parallel agent dispatch

Replace the single per-account messageQueue Promise chain in
DiscordMessageListener with per-channel queues. This restores parallel
processing for channel-bound agents that regressed in 2026.3.1.

Messages within the same channel remain serialized to preserve ordering,
while messages to different channels now proceed independently. Completed
queue entries are cleaned up to prevent memory accumulation.

Closes openclaw#31530
@openclaw-barnacle openclaw-barnacle bot added channel: discord Channel integration: discord channel: slack Channel integration: slack size: S experienced-contributor labels Mar 2, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8629b05a5f


Comment on lines +155 to +159
.then(() => {
if (this.channelQueues.get(channelId) === next) {
this.channelQueues.delete(channelId);
}
})


P2: Clean up channel queue entries on rejection paths

channelQueues entries are deleted only in the success-only then branch, so if next rejects (for example, if logging/slow-log code throws), that channel key is never removed unless another message later arrives for the same channel. In long-lived processes handling many channels, this can accumulate stale queue promises and channel IDs over time; use a finally-style cleanup so the Map entry is cleared for both resolve and reject outcomes when it is still the current promise.
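A finally-based variant of the quoted cleanup could look like the sketch below. This is not the merged code: enqueue and work are invented names standing in for the listener's handle path, but channelQueues and the identity check mirror the quoted snippet.

```typescript
// Sketch of the suggested finally-style cleanup: the Map entry is removed on
// both resolve and reject, but only by the current tail of the channel's queue.
const channelQueues = new Map<string, Promise<void>>();

function enqueue(channelId: string, work: () => Promise<void>): Promise<void> {
  const prev = channelQueues.get(channelId) ?? Promise.resolve();
  const next = prev
    .catch(() => {})
    .then(work)
    .finally(() => {
      // Identity check: if a newer message already replaced this entry,
      // leave it alone — only the tail of the queue clears the slot.
      if (channelQueues.get(channelId) === next) {
        channelQueues.delete(channelId);
      }
    });
  channelQueues.set(channelId, next);
  return next;
}
```

Compared to a success-only .then cleanup, .finally also fires when work rejects, so a throwing handler no longer leaves a stale channelId key behind.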



greptile-apps bot commented Mar 2, 2026

Greptile Summary

Successfully fixes the Discord message serialization bottleneck by replacing the single per-account message queue with per-channel queues. Messages within the same channel remain serialized to preserve ordering, while messages to different channels can now process in parallel as intended. The implementation includes:

  • Proper memory cleanup using promise identity checks to avoid leaks
  • Comprehensive test coverage including a new test verifying cross-channel parallelism
  • Correct error handling with no changes to existing error paths
  • Incidental Slack formatting improvements (applying markdownToSlackMrkdwn in streaming and preview paths)

The cleanup logic correctly handles the case where new messages arrive before previous ones complete: only the final promise in each channel's queue deletes the map entry.

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk: the changes are well-tested and follow established patterns
  • The implementation is straightforward with proper cleanup logic, comprehensive test coverage verifying both serialization and parallelization behavior, correct error handling, and no race conditions or memory leaks. The changes are isolated to Discord message handling with no breaking changes to APIs or configuration.
  • No files require special attention

Last reviewed commit: 8629b05

@steipete steipete merged commit 479095b into openclaw:main Mar 2, 2026
31 checks passed
@steipete
Copy link
Contributor

steipete commented Mar 2, 2026

Landed via temp rebase onto main.

  • Gate: pnpm -s vitest run src/discord/monitor/listeners.test.ts src/slack/monitor/message-handler/dispatch.streaming.test.ts src/slack/format.test.ts
  • Land commit: 038ae94
  • Merge commit: 479095b

Thanks @Sid-Qin!



Development

Successfully merging this pull request may close these issues.

2026.3.1 regression: channel-bound agents process messages sequentially instead of in parallel