Bug: Control UI becomes progressively stuck after being open for a while (2026.3.12)
Summary
After upgrading to OpenClaw 2026.3.12, the Control UI / dashboard becomes progressively sluggish and then effectively stuck after being open for a while.
This is not limited to the Channels tab.
At first the dashboard loads and works, but after some time:
- switching tabs becomes slow or appears frozen
- Channels is an obvious trigger
- eventually other tabs like Sessions and Usage also feel stuck
The gateway itself appears healthy, so this looks like a Control UI / dashboard-v2 regression or frontend/backend interaction issue, not a full gateway outage.
Version
- OpenClaw 2026.3.12
Environment
- Host OS: Linux
- Local dashboard URL: http://127.0.0.1:18789
- Also using a Tailscale-served dashboard origin
Control UI allowed origins currently include localhost/127.0.0.1 and a Tailscale-served dashboard origin.
What happens
- Open the dashboard using a fresh URL from `openclaw dashboard --no-open`
- Dashboard loads normally
- After some time, switching tabs becomes sluggish/stuck
- Overview → Channels is one obvious trigger
- After the UI has been open for a bit, Sessions, Usage, and other tabs can also become stuck/sluggish
What does NOT seem to be the problem
- Gateway is not dead
- WebSocket can connect
- `openclaw status` reports healthy
- dashboard is reachable
- restart does not permanently solve the issue
So this does not look like a basic service outage.
Relevant observations
1. `channels.status` appears slow
Log timings repeatedly show `channels.status` taking 5–7 seconds.
Examples observed:
- 7617ms
- 7121ms
- 6757ms
- 7028ms
- 7137ms
- 6891ms
- 7360ms
Sometimes it is fast (~50–100ms), but often it becomes slow.
2. Channels controller uses an 8s timeout
In the bundled Control UI code, the Channels refresh path calls `channels.status` with `timeoutMs: 8000`.
So the UI is definitely willing to wait a long time there.
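For context, a minimal sketch of what that refresh path presumably looks like — only the method name and timeout come from the bundle; `GatewayClient`, `request`, and the render step are assumptions for illustration:

```ts
// Hypothetical reconstruction of the Channels refresh path; identifiers
// other than "channels.status" and timeoutMs: 8000 are assumed.
interface GatewayClient {
  request(method: string, opts: { timeoutMs: number }): Promise<unknown>;
}

async function refreshChannels(client: GatewayClient): Promise<void> {
  // One aggregated call: if any single provider check is slow, the whole
  // refresh blocks for up to 8 seconds before anything can render.
  const status = await client.request("channels.status", { timeoutMs: 8000 });
  console.log("channels.status", status); // stand-in for the real render step
}
```

If tab navigation awaits this promise anywhere, a single 7s provider check would already explain the frozen feel on the Channels tab.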
3. Repeated webchat connect/disconnect churn
Logs also show repeated patterns like:
- `webchat connected`
- `webchat disconnected code=1001`
- reconnect
- connected again
- disconnected again
This suggests the dashboard session may be getting churny/unstable over time.
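If each reconnect cycle re-runs connection setup without tearing down the previous cycle's work, state accumulates the longer the tab stays open. A hedged sketch of that failure mode (all identifiers hypothetical, not the actual dashboard code):

```ts
// Hypothetical churn bug: every (re)connect starts a new status poller,
// but a close (code=1001) never stops the old one, so pollers pile up
// with each cycle and the tab issues ever more overlapping requests.
function connectWebchat(url: string, poll: () => void): void {
  const ws = new WebSocket(url);
  setInterval(poll, 10_000); // bug: never cleared on close
  ws.onclose = () => setTimeout(() => connectWebchat(url, poll), 1_000);
}

// A fixed version keeps the handle and clears it before reconnecting:
function connectWebchatFixed(url: string, poll: () => void): void {
  const ws = new WebSocket(url);
  const timer = setInterval(poll, 10_000);
  ws.onclose = () => {
    clearInterval(timer); // tear down before the next cycle
    setTimeout(() => connectWebchatFixed(url, poll), 1_000);
  };
}
```

This pattern would also match the progressive nature of the symptom: each disconnect/reconnect makes things slightly worse.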
4. Recent dashboard-v2 refactor in this release
2026.3.12 includes the refreshed modular Control UI / dashboard-v2 work.
Given the timing, this feels plausibly related to a recent frontend regression or a new frontend behavior that handles slow backend responses poorly.
Suspicious provider state
The slow `channels.status` may be related to one or more channel providers.
Observed in logs:
- WhatsApp
  - stale socket restart
  - restored corrupted creds from backup
- Discord
  - websocket close / resume attempts
Telegram seems less suspicious.
So one possibility is:
- a slow provider status check
- plus the new Control UI not degrading gracefully
- causing the whole UI to feel jammed after a while
Expected behavior
- Control UI should remain responsive even if one provider status check is slow
- Channels tab should not make the whole dashboard feel frozen
- slow status checks should ideally degrade gracefully or render partial results (see the sketch after this list)
- dashboard should not progressively become unusable across multiple tabs
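As a sketch of what "degrade gracefully" could mean concretely: fan out per-provider checks with individual timeouts and render whatever settles, instead of one aggregated 8s call. The provider list and the `check` callback are assumptions, not OpenClaw's real API:

```ts
// Sketch: per-provider checks with individual timeouts, so one stuck
// provider yields a "timed out" row while the rest render normally.
type ProviderStatus = { provider: string; ok: boolean; detail: string };

function withTimeout<T>(p: Promise<T>, ms: number, fallback: T): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((resolve) => setTimeout(() => resolve(fallback), ms)),
  ]);
}

async function channelsStatusPartial(
  providers: string[],
  check: (p: string) => Promise<ProviderStatus>,
): Promise<ProviderStatus[]> {
  return Promise.all(
    providers.map((p) =>
      withTimeout(check(p), 2_000, { provider: p, ok: false, detail: "timed out" })
        .catch((err) => ({ provider: p, ok: false, detail: String(err) })),
    ),
  );
}
```

With something like `channelsStatusPartial(["whatsapp", "discord", "telegram"], check)`, the tab would always settle within ~2s regardless of any single slow provider.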
Actual behavior
- dashboard becomes progressively sluggish/stuck after being open for some time
- Channels is an obvious trigger, but not the only affected tab
- eventually Sessions / Usage / other views can also feel stuck
What has already been tried
- restarted OpenClaw gateway
- used fresh tokenized dashboard URL
- fixed Control UI allowed origins
- reduced device-pairing noise / repaired local CLI identity issues
These did not eliminate the dashboard getting stuck over time.
Possible root cause ideas
This may be one or more of:
- `channels.status` blocking too long on one provider
- dashboard-v2 not handling long-running status calls gracefully
- reconnect/disconnect churn poisoning frontend state over time
- frontend polling / promise / state bug causing progressive degradation (a sketch of this failure mode follows this list)
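On the last bullet, one classic shape for such a bug — sketched here with hypothetical names, not the actual dashboard-v2 code — is a fixed-interval poll with no in-flight guard, which overlaps requests as soon as responses slow down or leaked pollers accumulate:

```ts
// Hypothetical progressive-degradation bug: setInterval fires on schedule
// even while the previous request is still pending, so 5-7s responses (or
// several leaked pollers) cause requests to overlap and queue up.
function startPollingBuggy(fetchStatus: () => Promise<unknown>): void {
  setInterval(() => void fetchStatus(), 10_000);
}

// A guarded loop waits for each response (or failure) before scheduling
// the next tick, so slow calls can never stack.
async function startPollingGuarded(fetchStatus: () => Promise<unknown>): Promise<void> {
  for (;;) {
    await fetchStatus().catch(() => undefined);
    await new Promise((resolve) => setTimeout(resolve, 10_000));
  }
}
```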
Useful debugging directions
- inspect the `channels.status` implementation for blocking behavior
- confirm whether provider checks are sequential or otherwise badly serialized (see the timing sketch after this list)
- check whether Control UI is waiting synchronously on slow channel state before rendering
- inspect dashboard-v2 websocket reconnect/state handling
- test whether a single slow provider can poison broader tab navigation responsiveness
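For the first two bullets, a small instrumentation sketch; `providerChecks` is a hypothetical map of per-provider status calls, not OpenClaw's real API:

```ts
// Time each provider's status check individually. If the per-provider
// durations sum to the observed 5-7s channels.status totals, the checks
// are effectively serialized rather than run in parallel.
async function timeProviderChecks(
  providerChecks: Record<string, () => Promise<unknown>>,
): Promise<void> {
  for (const [name, check] of Object.entries(providerChecks)) {
    const start = Date.now();
    try {
      await check(); // sequential on purpose, to mirror the suspected path
    } catch {
      // a failing provider still gets its duration logged below
    } finally {
      console.log(`${name}: ${Date.now() - start}ms`);
    }
  }
}
```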
If needed, I can also provide logs/timings showing repeated slow `channels.status` responses and frequent webchat reconnect/disconnect cycles.