Description
After upgrading to OpenClaw 2026.4.14 on Windows, the chat experience regressed badly in the web/dashboard UI.
This does not look like a simple slow-model issue. The main visible failures are in input rendering and streamed reply rendering:
- Typed user input often does not appear immediately in the input/chat surface.
- Sometimes part of the typed input appears much later.
- Sometimes user input appears to be dropped entirely.
- Assistant output is no longer reliably visible as a live stream.
- In many cases the reply only becomes visible after manually refreshing the page.
- The normal "assistant is typing" animated dots often flash once and then disappear, leaving a blank area.
- Sometimes the UI briefly shows activity and then turns blank with no visible streamed content.
Expected behavior
- User input should appear immediately while typing / after send.
- Assistant output should stream live without requiring a manual refresh.
- Typing indicator should remain visible while the assistant is generating.
- No user-entered text should be lost.
Actual behavior
- Input text is rendered late, delayed, or occasionally lost entirely.
- Streamed output often fails to render live.
- Refreshing the page can reveal content that was not shown in real time.
- Typing indicator is unstable and often disappears too early.
Why this seems like a regression
- This was noticeably better on the previous version in the same environment.
- We also tested configuration changes around active-memory and model assignment, but the problem persisted.
- In particular, active-memory worked acceptably on the previous version and became much more problematic on this version.
- Because input rendering itself is affected, this appears more like a frontend/render/update regression than just model latency.
Environment
- OpenClaw version: 2026.4.14 (323493f)
- OS: Windows 10 Pro 22H2 (build 10.0.19045, 64-bit)
- Install/update path: upgraded to the official 2026.4.14 release
- Local model setup: LM Studio is enabled and used as the primary model path in this installation
Steps to reproduce
- Upgrade to and start OpenClaw 2026.4.14 on Windows.
- Open the OpenClaw chat/dashboard UI.
- Send normal short messages such as a greeting.
- Observe the input field/chat timeline while typing and immediately after send.
- Observe assistant streaming behavior during generation.
- Refresh the page and compare what becomes visible only after refresh.
What we observed repeatedly
- Sending a simple greeting can result in the typed text not appearing immediately in the conversation.
- The assistant may begin to respond internally, but the UI only flashes the typing indicator briefly and then shows blank space.
- A manual refresh can reveal content that was not rendered live.
Suspected area
Possibly a regression in one or more of:
- chat input state synchronization
- websocket/event-stream to UI reconciliation
- streamed token rendering
- active-memory related UI/update path
- optimistic message insertion / pending message replacement
Additional notes
- Please treat this as a UX-blocking regression: it makes normal conversation feel unreliable even when the backend may still be doing work.
- No secrets, tokens, or personal paths included here.