Bug: Control UI becomes progressively stuck after being open for a while (2026.3.12) #45698

@JLDynamics

Description

Summary

After upgrading to OpenClaw 2026.3.12, the Control UI / dashboard becomes progressively sluggish and then effectively stuck after being open for a while.

This is not limited to the Channels tab.

At first the dashboard loads and works, but after some time:

  • switching tabs becomes slow or appears frozen
  • Channels is an obvious trigger
  • eventually other tabs like Sessions and Usage also feel stuck

The gateway itself appears healthy, so this looks like a Control UI / dashboard-v2 regression or frontend/backend interaction issue, not a full gateway outage.


Version

  • OpenClaw: 2026.3.12

Environment

  • Host OS: Linux
  • Local dashboard URL:
    • http://127.0.0.1:18789
  • Also using a Tailscale-served dashboard origin

Control UI allowed origins currently include localhost/127.0.0.1 and a Tailscale-served dashboard origin.


What happens

  1. Open dashboard using a fresh URL from:
    openclaw dashboard --no-open
  2. Dashboard loads normally
  3. After some time, switching tabs becomes sluggish/stuck
  4. Overview → Channels is one obvious trigger
  5. After the UI has been open for a bit, Sessions, Usage, and other tabs can also become stuck/sluggish

What does NOT seem to be the problem

  • Gateway is not dead
  • WebSocket can connect
  • openclaw status reports healthy
  • dashboard is reachable
  • restarting the gateway does not permanently solve the issue

So this does not look like a basic service outage.


Relevant observations

1. channels.status appears slow

Log timings repeatedly show channels.status taking 5–7 seconds.

Examples observed:

  • 7617ms
  • 7121ms
  • 6757ms
  • 7028ms
  • 7137ms
  • 6891ms
  • 7360ms

Sometimes the call is fast (~50–100 ms), but it frequently lands in the 5–7 s range shown above.

2. Channels controller uses an 8s timeout

In the bundled Control UI code, the Channels refresh path calls channels.status with timeoutMs: 8000, so the UI is prepared to block on that single call for up to 8 seconds.
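
A minimal sketch of what that refresh path presumably looks like; only the method name channels.status and the timeoutMs: 8000 value come from the bundled code, while the client interface and helper names below are my assumptions:

```ts
// Hypothetical reconstruction of the Channels refresh path (not the actual
// bundled source). Only "channels.status" and timeoutMs: 8000 are taken from
// the shipped Control UI code; the rpc client shape is an assumption.
interface GatewayRpc {
  call<T>(method: string, params: unknown, opts: { timeoutMs: number }): Promise<T>;
}

interface ChannelsStatus {
  channels: Array<{ id: string; state: string }>;
}

async function refreshChannels(rpc: GatewayRpc): Promise<ChannelsStatus> {
  // The Channels view blocks on this single call for up to 8 seconds.
  return rpc.call<ChannelsStatus>("channels.status", {}, { timeoutMs: 8000 });
}
```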

3. Repeated webchat connect/disconnect churn

Logs also show repeated patterns like:

  • webchat connected
  • webchat disconnected code=1001
  • reconnect
  • connected again
  • disconnected again

This suggests the dashboard's webchat session gets increasingly churny/unstable the longer the UI stays open.

4. Recent dashboard-v2 refactor in this release

2026.3.12 includes the refreshed modular Control UI / dashboard-v2 work.
Given the timing, this feels plausibly related to a recent frontend regression or a new frontend behavior that handles slow backend responses poorly.


Suspicious provider state

The slow channels.status may be related to one or more channel providers.

Observed in logs:

  • WhatsApp
    • stale socket restart
    • restored corrupted creds from backup
  • Discord
    • websocket close / resume attempts

Telegram seems less suspicious.

So one possibility is:

  • a slow provider status check
  • plus the new Control UI not degrading gracefully
  • causing the whole UI to feel jammed after a while

Expected behavior

  • Control UI should remain responsive even if one provider status check is slow
  • Channels tab should not make the whole dashboard feel frozen
  • slow status checks should ideally degrade gracefully or render partial results (one possible pattern is sketched after this list)
  • dashboard should not progressively become unusable across multiple tabs
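
To make the graceful-degradation expectation concrete, one possible pattern is sketched below. This is only a sketch under assumed names (fetchStatus, render, cached), not a claim about how dashboard-v2 is actually structured: render a cached snapshot after a short deadline and let the slow call finish in the background.

```ts
// Sketch of graceful degradation (assumed names, not dashboard-v2 code):
// show cached/partial data after ~1s instead of blocking the tab for the
// full 8s timeout, then update when the real response arrives.
function afterMs<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

async function refreshWithFallback<T>(
  fetchStatus: () => Promise<T>,           // e.g. the channels.status call
  render: (status: T | null, stale: boolean) => void,
  cached: T | null,
): Promise<void> {
  const pending = fetchStatus();

  // Race the real call against a 1-second deadline. A rejection before the
  // deadline is treated the same as "not ready yet".
  const first = await Promise.race<T | null>([
    pending.catch(() => null),
    afterMs<T | null>(1000, null),
  ]);

  if (first === null) {
    render(cached, true);                  // show stale data, keep the UI responsive
  } else {
    render(first, false);                  // fast path: fresh data within the deadline
    return;
  }

  try {
    render(await pending, false);          // fresh data once it finally arrives
  } catch {
    render(cached, true);                  // slow call failed: stay on stale data
  }
}
```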

Actual behavior

  • dashboard becomes progressively sluggish/stuck after being open for some time
  • Channels is an obvious trigger, but not the only affected tab
  • eventually Sessions / Usage / other views can also feel stuck

What has already been tried

  • restarted OpenClaw gateway
  • used fresh tokenized dashboard URL
  • fixed Control UI allowed origins
  • reduced device-pairing noise / repaired local CLI identity issues

These did not eliminate the dashboard getting stuck over time.


Possible root cause ideas

This may be one or more of:

  1. channels.status blocking too long on one provider
  2. dashboard-v2 not handling long-running status calls gracefully
  3. reconnect/disconnect churn poisoning frontend state over time
  4. frontend polling / promise / state bug causing progressive degradation (ideas 3 and 4 are illustrated below)
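
Ideas 3 and 4 are hard to see from logs alone, so here is a purely illustrative anti-pattern (not taken from the dashboard-v2 source) that would produce exactly this kind of progressive degradation: every reconnect starts a new polling loop and the old one is never torn down.

```ts
// Illustrative anti-pattern only (NOT claimed to exist in dashboard-v2):
// if each reconnect wires up a fresh polling loop and the previous one is
// never cancelled, in-flight channels.status calls multiply over time and
// the UI gets slower the longer it stays open.
function startStatusPolling(
  fetchStatus: () => Promise<unknown>,
  intervalMs: number,
): () => void {
  const timer = setInterval(() => {
    void fetchStatus(); // each tick can take 5-7s when the backend is slow
  }, intervalMs);
  return () => clearInterval(timer); // must be called on disconnect/teardown
}

// Buggy usage: the stop function is dropped, so after N webchat reconnects
// there are N concurrent loops all issuing channels.status requests.
function onWebchatReconnect(fetchStatus: () => Promise<unknown>): void {
  startStatusPolling(fetchStatus, 10_000); // previous loop is never cleared
}
```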

Useful debugging directions

  • inspect channels.status implementation for blocking behavior
  • confirm whether provider checks are sequential or otherwise serialized badly (see the comparison sketch after this list)
  • check whether Control UI is waiting synchronously on slow channel state before rendering
  • inspect dashboard-v2 websocket reconnect/state handling
  • test whether a single slow provider can poison broader tab navigation responsiveness
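
For the second bullet, the distinction worth checking is roughly the following. The provider situation mirrors what the logs show, but the aggregation code itself is only an assumption about how channels.status might be implemented:

```ts
// Two ways a channels.status handler could aggregate provider state.
// Everything here is an assumption about the gateway implementation; only
// the general provider behavior comes from this report.
type ProviderCheck = { provider: string; check: () => Promise<boolean> };

// Serialized: one slow provider (e.g. a WhatsApp socket restart taking ~7s)
// stretches the whole call to the sum of every provider's latency.
async function statusSequential(providers: ProviderCheck[]) {
  const results: Array<{ provider: string; ok: boolean }> = [];
  for (const p of providers) {
    results.push({ provider: p.provider, ok: await p.check() });
  }
  return results;
}

// Concurrent with per-provider isolation: the call takes about as long as
// the slowest provider, and a failing provider yields a partial result
// instead of blocking or failing the whole response.
async function statusConcurrent(providers: ProviderCheck[]) {
  const settled = await Promise.allSettled(providers.map((p) => p.check()));
  return settled.map((result, i) => ({
    provider: providers[i].provider,
    ok: result.status === "fulfilled" && result.value,
  }));
}
```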

If needed, I can also provide logs/timings showing repeated slow channels.status responses and frequent webchat reconnect/disconnect cycles.
