Summary
On a long-lived, Slack-heavy OpenClaw install, `sessions.list` / dashboard session loading became a gateway-wide bottleneck after upgrading to `2026.4.29`. I did not see this before `2026.4.29`, and the same behavior still reproduced immediately after updating to `2026.5.2`.
When the issue is active, session/dashboard RPCs can pin the gateway process at roughly one full CPU core and make `sessions.list` take tens of seconds. This is not only a slow Control UI problem: Slack responsiveness degrades because Slack Socket Mode / channel handling shares the same saturated gateway process.
Environment
- OpenClaw versions affected: observed after `2026.4.29`; still reproduced on `2026.5.2`
- Platform: macOS
- Install shape: long-lived local gateway
- Channels: Slack enabled and heavily used; Telegram also enabled
- Control UI/dashboard: enabled
- Session profile: many Slack-originated sessions accumulated over time
Observed Behavior
After updating from `2026.4.29` to `2026.5.2`, the plain, unpatched `2026.5.2` gateway reproduced the same dashboard/gateway starvation pattern:
- `sessions.list` calls took roughly 36–66 s
- `openclaw status --deep` could wedge or time out against the gateway even though the service was running
- the gateway process pinned roughly one CPU core while this was happening
- Slack became unreliable/degraded while the gateway was saturated
This felt like a regression that first cropped up around `2026.4.29`, not merely an old long-lived-session problem that had always existed.
Why This Seems Distinct From Existing Trackers
This may be related to #47975 and #71631, but it does not look like only a generic "completed subagent sessions are retained" issue.
The user-visible failure mode here is:
Slack-heavy install with many sessions -> `sessions.list`/dashboard work gets very expensive -> gateway event loop/CPU saturates -> Slack responsiveness and Socket Mode reliability degrade.
That makes this a gateway isolation and bounded-listing problem, not just a dashboard UX issue.
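To make the event-loop aspect concrete, here is a minimal, self-contained sketch (not OpenClaw code; all names such as `listSessions`, `enrichRow`, and the session count are made up) of how one synchronous, enrichment-heavy listing call in a single Node-style process delays every other handler in that process, including anything servicing Socket Mode events:

```ts
// Illustrative sketch only: synchronous per-row work starving other handlers
// in the same process. All names and numbers here are hypothetical.

type SessionRow = { id: string; title: string };

const SESSION_COUNT = 20_000; // assumed: many accumulated Slack sessions

// Stand-in for expensive per-row enrichment (derived titles, previews,
// transcript scans) performed while building the list response.
function enrichRow(id: number): SessionRow {
  let title = "";
  for (let i = 0; i < 5_000; i++) title = `session-${id}-${i % 7}`; // busy work
  return { id: String(id), title };
}

// Stand-in for a sessions.list-style handler that builds every row synchronously.
function listSessions(): SessionRow[] {
  const rows: SessionRow[] = [];
  for (let i = 0; i < SESSION_COUNT; i++) rows.push(enrichRow(i));
  return rows;
}

// Stand-in for a Slack Socket Mode heartbeat/event handler in the same process.
// (The interval keeps the process alive; stop it with Ctrl-C.)
let lastTick = Date.now();
setInterval(() => {
  const delay = Date.now() - lastTick - 100;
  if (delay > 50) console.log(`socket handler delayed by ~${delay}ms`);
  lastTick = Date.now();
}, 100);

// A single dashboard-style refresh stalls the interval above for seconds.
setTimeout(() => {
  console.time("listSessions");
  listSessions();
  console.timeEnd("listSessions");
}, 500);
```

The real gateway obviously differs from this toy, but the shape of the problem is the same: any seconds-long synchronous listing call translates directly into seconds of Slack unresponsiveness in the same process.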
Local Mitigation That Restored Usability
I applied a local installed-bundle hotfix that made the session-listing path cheaper. Usability improved after:
- capping `sessions.list` rows to a small default
- disabling expensive derived-title / last-message preview work by default
- skipping model catalog work inside `sessions.list`
- disabling transcript usage fallback work during session-list row construction
After that, `sessions.list` returned in a few seconds instead of tens of seconds, and gateway CPU returned to normal idle behavior.
I am not proposing that exact patch as the right fix, but it strongly suggests that the expensive enrichment/listing path alone is enough to starve the gateway on a Slack-heavy install.
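For context, the local change was roughly of this shape. This is a hand-written sketch under assumptions, not the actual patch; `buildSessionRows`, the record types, and the default limit are hypothetical stand-ins for whatever the installed bundle really uses:

```ts
// Hypothetical sketch of "cap the rows, skip enrichment by default".
// None of these names come from the OpenClaw codebase.

interface SessionRecord { id: string; channel: string; updatedAt: number }
interface SessionRow { id: string; channel: string; title?: string; preview?: string }

const DEFAULT_LIMIT = 50; // small default cap instead of returning every session

function buildSessionRows(
  all: SessionRecord[],
  opts: { limit?: number; enrich?: boolean } = {},
): SessionRow[] {
  const limit = opts.limit ?? DEFAULT_LIMIT;

  // Most recent sessions first, then a hard cap before any per-row work.
  const selected = [...all]
    .sort((a, b) => b.updatedAt - a.updatedAt)
    .slice(0, limit);

  return selected.map((s) => {
    const row: SessionRow = { id: s.id, channel: s.channel };
    if (opts.enrich) {
      // Expensive work (derived titles, last-message previews, model catalog
      // lookups, transcript-usage fallbacks) stays behind an explicit opt-in.
      row.title = `session ${s.id}`; // placeholder for the real derived title
    }
    return row;
  });
}
```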
Expected Behavior
- `sessions.list` should be bounded, paginated, cached, or otherwise cheap by default (one possible shape is sketched after this list).
- Expensive transcript/model/preview enrichment should be opt-in, async, cached, or isolated from the hot path.
- Control UI/dashboard session loading should not be able to degrade Slack Socket Mode/channel responsiveness.
- A long-lived Slack install with many sessions should not require manual pruning or local bundle patches to keep the gateway responsive.
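As a concrete illustration of the first two points, a bounded, paginated listing with opt-in, cached enrichment could look roughly like the following. This is only a hypothetical API shape; none of these parameter names, types, or limits are taken from OpenClaw:

```ts
// Hypothetical shape for a bounded, paginated sessions.list with enrichment
// kept off the hot path. Not OpenClaw's actual RPC surface.

interface ListSessionsParams {
  limit?: number;            // small default, hard server-side ceiling
  cursor?: string;           // opaque pagination cursor
  includePreviews?: boolean; // expensive enrichment is opt-in
}

interface SessionSummary { id: string; channel: string; updatedAt: number; preview?: string }
interface ListSessionsResult { sessions: SessionSummary[]; nextCursor?: string }

// Minimal store abstraction so the sketch is self-contained.
interface SessionStore {
  page(cursor: string | undefined, n: number): Promise<{ rows: SessionSummary[]; next?: string }>;
}

const MAX_LIMIT = 200;
const previewCache = new Map<string, string>(); // previews survive across calls

async function listSessions(
  store: SessionStore,
  params: ListSessionsParams = {},
): Promise<ListSessionsResult> {
  const limit = Math.min(params.limit ?? 50, MAX_LIMIT);
  const { rows, next } = await store.page(params.cursor, limit);

  if (params.includePreviews) {
    for (const row of rows) {
      // Only reuse previews that are already cached; cache misses would be
      // filled by a background job rather than synchronously per request.
      const cached = previewCache.get(row.id);
      if (cached !== undefined) row.preview = cached;
    }
  }
  return { sessions: rows, nextCursor: next };
}
```

The exact shape does not matter; the point is that the default call stays cheap in proportion to the requested page size and never triggers transcript or model-catalog work on the request path.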
Related Issues
- `sessions.list` returns quickly (#64004): related Control UI latency attribution work