Bug type
Behavior bug (incorrect output/state without crash)
Beta release blocker
No
Summary
The Control UI remains slow to update (about 40 seconds to load) after opening Sessions even when observed sessions.list requests complete in about 20ms to 70ms.
Steps to reproduce
- Open the Control UI on Windows with the affected gateway running.
- Click Sessions.
- Observe that the UI does not visibly update for a long time even when the gateway log shows a fast sessions.list response, for example 01:54:14+02:00 [ws] ⇄ res ✓ sessions.list 66ms id=986c475d…b2fb.
- During slow periods, also observe that other backend calls can take much longer, including node.list 38031ms, node.list 45672ms, and cron.runs 38237ms.
Expected behavior
After clicking Sessions, the Control UI should update promptly once sessions.list returns, consistent with the observed fast sessions.list timings in the logs.
Actual behavior
The Control UI remains slow to update after clicking Sessions even though sessions.list completes quickly. Observed examples include:
01:45:21+02:00 [ws] ⇄ res ✓ sessions.list 64ms
01:45:21+02:00 [ws] ⇄ res ✓ sessions.list 60ms
01:54:14+02:00 [ws] ⇄ res ✓ sessions.list 66ms
02:17:33+02:00 [ws] ⇄ res ✓ sessions.list 33ms
02:18:17+02:00 [ws] ⇄ res ✓ sessions.list 31ms
02:24:13+02:00 [ws] ⇄ res ✓ sessions.list 21ms
During the same broader recovery window, other calls were observed taking much longer, including:
02:18:19+02:00 [ws] ⇄ res ✓ node.list 45672ms
02:51:21+02:00 [ws] ⇄ res ✓ node.list 38031ms
02:51:21+02:00 [ws] ⇄ res ✓ cron.runs 38237ms
NOT_ENOUGH_INFO after here:
Slack socket warnings were also observed during the same period, for example:
02:45:06+02:00 [WARN] socket-mode:SlackWebSocket:31 A pong wasn't received from the server before the timeout of 5000ms!
02:35:39+02:00 [health-monitor] [slack:default] health-monitor: restarting (reason: stale-socket)
OpenClaw version
2026.4.9
Operating system
Windows Server 2022
Install method
npm global
Model
minimax/MiniMax-M2.7-highspeed
Provider / routing chain
openclaw -> minimax
Additional provider/model setup details
No response
Logs, screenshots, and evidence
Observed fast `sessions.list` responses while the UI still felt stalled:
01:45:21+02:00 [ws] ⇄ res ✓ sessions.list 64ms id=cd7b396c…f1f1
01:45:21+02:00 [ws] ⇄ res ✓ sessions.list 60ms id=95afbea8…bf33
01:54:14+02:00 [ws] ⇄ res ✓ sessions.list 66ms id=986c475d…b2fb
02:17:33+02:00 [ws] ⇄ res ✓ sessions.list 33ms id=11a1d03f…a9f5
02:18:17+02:00 [ws] ⇄ res ✓ sessions.list 31ms id=a115b16f…9997
02:24:13+02:00 [ws] ⇄ res ✓ sessions.list 21ms conn=60cc023a…47b9 id=ff52db6e…a908
02:51:37+02:00 [ws] ⇄ res ✓ sessions.list 34ms id=4a4315bc…f490
Observed slow related calls during the same broader period:
01:46:10+02:00 [ws] ⇄ res ✓ node.list 2143ms id=9c78c915…a80b
02:01:17+02:00 [ws] ⇄ res ✓ node.list 9046ms conn=60cc023a…47b9 id=8733ec6c…dd2d
02:02:16+02:00 [ws] ⇄ res ✓ node.list 18125ms conn=60cc023a…47b9 id=7966decb…415c
02:09:52+02:00 [ws] ⇄ res ✓ node.list 39895ms id=614bab67…c258
02:18:19+02:00 [ws] ⇄ res ✓ node.list 45672ms id=76e048a9…7103
02:51:21+02:00 [ws] ⇄ res ✓ node.list 38031ms id=86aa8b92…0c4e
02:51:21+02:00 [ws] ⇄ res ✓ cron.runs 38237ms id=e251ca71…2e23
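As a hypothetical triage aid (not part of any existing tooling in this report), log lines of the `[ws] ⇄ res ✓ <method> <duration>ms` shape shown above can be scanned to separate fast sessions.list responses from the much slower node.list and cron.runs calls; the regex below assumes exactly that line format.

```python
import re

# Matches gateway lines like:
# "02:51:21+02:00 [ws] ⇄ res ✓ node.list 38031ms id=86aa8b92…0c4e"
LINE = re.compile(r"res ✓ (?P<method>[\w.]+) (?P<ms>\d+)ms")

def slow_calls(log_lines, threshold_ms=1000):
    """Return (method, duration_ms) pairs exceeding the threshold."""
    hits = []
    for line in log_lines:
        m = LINE.search(line)
        if m and int(m.group("ms")) > threshold_ms:
            hits.append((m.group("method"), int(m.group("ms"))))
    return hits

sample = [
    "02:24:13+02:00 [ws] ⇄ res ✓ sessions.list 21ms id=ff52db6e…a908",
    "02:51:21+02:00 [ws] ⇄ res ✓ node.list 38031ms id=86aa8b92…0c4e",
]
print(slow_calls(sample))  # only node.list exceeds the 1000ms threshold
```

Run against the full gateway log, this kind of filter makes the pattern in this report easy to confirm: sessions.list stays in the tens of milliseconds while node.list and cron.runs cross into tens of seconds.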
Observed run timing mismatch examples:
02:51:37+02:00 [agent] embedded run prompt start: runId=dc006db2-bcca-4e0a-8747-fd30ef353cd1
02:51:55+02:00 [agent] embedded run prompt end: runId=dc006db2-bcca-4e0a-8747-fd30ef353cd1 sessionId=dc006db2-bcca-4e0a-8747-fd30ef353cd1 durationMs=18622
02:51:55+02:00 [agent] embedded run done: runId=dc006db2-bcca-4e0a-8747-fd30ef353cd1 sessionId=dc006db2-bcca-4e0a-8747-fd30ef353cd1 durationMs=413991 aborted=false
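To make the mismatch above concrete, the two durationMs values reported for the same runId can be compared directly; the numbers come straight from the log lines, and the labels are only descriptive.

```python
# Durations reported for runId=dc006db2-… in the logs above.
prompt_ms = 18622   # "embedded run prompt end ... durationMs=18622"
total_ms = 413991   # "embedded run done ... durationMs=413991"

gap_ms = total_ms - prompt_ms
print(gap_ms)                    # time not accounted for by the prompt phase
print(round(gap_ms / 60000, 1))  # roughly how many minutes that is
```

The prompt phase covers about 18.6 seconds of a run reported as taking nearly 7 minutes, which matches the user's impression of clicking something and seeing nothing happen for minutes.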
User observation recorded alongside the logs:
clicked it 5 Minutes ago, and nothing happens until now
NOT_ENOUGH_INFO after here:
Observed Slack socket warnings during the same recovery window:
02:17:33+02:00 [WARN] socket-mode:SlackWebSocket:18 A pong wasn't received from the server before the timeout of 5000ms!
02:17:33+02:00 [WARN] socket-mode:SlackWebSocket:18 A ping wasn't received from the server before the timeout of 30000ms!
02:35:39+02:00 [health-monitor] [slack:default] health-monitor: restarting (reason: stale-socket)
02:45:06+02:00 [WARN] socket-mode:SlackWebSocket:31 A pong wasn't received from the server before the timeout of 5000ms!
Impact and severity
Affected users/systems/channels: Control UI on the affected Windows system; Slack integration warnings were also observed in the same environment.
Severity: High - the system is usable again, but Control UI loading remains slow enough to interfere with administration and debugging.
Frequency: Intermittent but repeatedly observed across multiple clicks and log windows.
Consequence: Slow or delayed Control UI updates, reliance on console/log inspection instead of the UI, and reduced confidence in whether backend state has updated.
Additional information
This was observed after broader recovery work that improved overall system behavior. By the time of these logs, the system could again reply in Slack and execute work, but Control UI latency remained present.
Observed evidence shows that sessions.list itself is fast while some other calls during the same periods are much slower, especially node.list and cron.runs. Beyond that observation, root cause is NOT_ENOUGH_INFO.