
[Feature]: Improve gateway configuration and Web UI chat responsiveness #71631

@mmy4shadow

Description


Summary

Improve perceived and actual responsiveness when using gateway configuration screens and when chatting in the Web UI. The current experience can feel slow in two high-frequency paths: editing/switching gateway configuration, and waiting for model responses in the Web UI chat.

Problem

When using OpenClaw through the Web UI, the interaction loop can feel sluggish enough that users cannot tell whether the system is applying configuration, sending a request, waiting on the model, blocked at the gateway, or simply stuck.

The most visible slow paths are:

  • gateway configuration screens reacting slowly when loading, saving, switching, or validating configuration
  • Web UI chat taking too long before the user sees a visible response or progress
  • model response latency feeling worse than expected even when the provider/model itself should be fast
  • UI controls not always giving immediate feedback after a click, which makes users click again or inspect logs

This hurts the daily experience because gateway configuration and chat are the two places users touch most often.

Desired outcome

Make the Web UI feel immediately responsive even when backend work is still running.

The target should be:

  • user actions are acknowledged immediately in the UI
  • gateway configuration operations show explicit loading/saving/testing states
  • Web UI chat shows fast request acceptance and fast first visible feedback
  • long-running model/tool work does not freeze unrelated UI surfaces
  • slow backend calls are measurable and attributable rather than feeling like a generic hang

Proposed improvements

1. Add responsiveness instrumentation

Add timing metrics around the full interaction path, not only one backend RPC:

  • click/action timestamp in the Web UI
  • request queued/sent timestamp
  • gateway RPC start/end
  • provider/model request start/end
  • first token or first visible chat update
  • final response completed
  • config save/test/load completed

This should make it possible to distinguish UI render delay, WebSocket/RPC delay, gateway processing delay, provider latency, and model generation latency.
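A minimal sketch of what this instrumentation could look like on the Web UI side. The phase names and the `InteractionTrace` class below are illustrative, not an existing OpenClaw API; a real implementation would forward the recorded marks to logs or devtools:

```typescript
// Records one timestamp per phase of a single interaction so latency can be
// attributed to UI, RPC, gateway, provider, or model generation.
// Phase names here are assumptions, not an existing OpenClaw schema.
type Phase =
  | "click"
  | "request_sent"
  | "gateway_rpc_start"
  | "gateway_rpc_end"
  | "provider_start"
  | "first_token"
  | "response_complete";

class InteractionTrace {
  private marks = new Map<Phase, number>();

  // The clock is injectable so tests can use a fake time source;
  // in the browser this would typically be performance.now().
  constructor(private now: () => number = () => Date.now()) {}

  mark(phase: Phase): void {
    // Keep the first mark for a phase so retries don't overwrite it.
    if (!this.marks.has(phase)) this.marks.set(phase, this.now());
  }

  // Elapsed milliseconds between two recorded phases, or undefined if
  // either phase was never reached.
  between(a: Phase, b: Phase): number | undefined {
    const ta = this.marks.get(a);
    const tb = this.marks.get(b);
    return ta !== undefined && tb !== undefined ? tb - ta : undefined;
  }
}
```

With something like this, "the chat feels slow" can be narrowed to a concrete segment, for example a large `between("click", "request_sent")` pointing at the UI rather than the provider.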

2. Optimize gateway configuration interactions

For gateway configuration screens:

  • avoid blocking the entire page while loading secondary data
  • cache stable config metadata where safe
  • debounce expensive validation or discovery calls
  • show per-control saving/testing state instead of making the whole UI feel frozen
  • allow config edits to remain responsive while background status checks are still running
  • surface slow validation/provider checks as explicit progress instead of silent waiting
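The debouncing point could be as simple as a trailing-edge debounce around the expensive validation call. The helper below is a generic sketch; `validateGatewayConfig` is a hypothetical name for whatever expensive check the config screen performs:

```typescript
// Trailing-edge debounce: rapid repeated calls within waitMs collapse into
// a single call with the most recent arguments.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Example: only validate the latest draft after the user pauses typing.
// validateGatewayConfig is a placeholder for the real expensive call.
const validateGatewayConfig = (draft: string) => {
  console.log(`validating: ${draft}`);
};
const debouncedValidate = debounce(validateGatewayConfig, 300);
```

This keeps keystroke-by-keystroke edits cheap while still validating the final state once the user pauses.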

3. Improve Web UI chat perceived speed

For Web UI chat:

  • acknowledge message submission immediately
  • show a clear accepted/running state before the model response is ready
  • prioritize low-latency first visible feedback over waiting for the complete response
  • ensure streaming, if available, renders incrementally without batching delays that make it look non-streaming
  • make cancellation/interruption responsive and visible
  • avoid unrelated gateway calls blocking chat rendering

4. Prevent slow RPCs from freezing unrelated UI paths

If one gateway call is slow (for example config discovery, node listing, provider status, cron run loading, or session metadata refresh), it should not block chat input or visible response rendering.

Suggested implementation direction:

  • isolate slow panels and background refreshes
  • use request cancellation or stale-response guards when users switch views quickly
  • cap or lazy-load expensive lists
  • avoid waterfall loading where possible
  • render partial UI while slower data continues loading
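One way to implement the stale-response guard: give each view refresh a sequence number and only let the latest in-flight request update state, so a slow older response cannot overwrite a newer one after the user switches views. The helper name and shape below are assumptions for illustration:

```typescript
// Stale-response guard: only the most recently started request for a given
// piece of state is allowed to apply its result. Older responses that
// resolve late are silently dropped.
function makeLatestOnly<T>(apply: (value: T) => void) {
  let seq = 0;
  return async (load: () => Promise<T>): Promise<void> => {
    const mySeq = ++seq;
    const value = await load();
    // Drop the result if a newer request started while we were waiting.
    if (mySeq === seq) apply(value);
  };
}
```

In practice this would be paired with request cancellation (e.g. aborting the underlying fetch) so abandoned requests also stop consuming resources, but the guard alone already prevents the out-of-order UI updates.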

Acceptance criteria

  • Gateway configuration pages provide immediate visual feedback after load/save/test/switch actions.
  • Web UI chat shows immediate submission acknowledgement after the user sends a message.
  • The UI exposes a visible running state before the first model token or final answer arrives.
  • Slow config/status/background calls do not block chat input or chat response rendering.
  • Streaming responses, when supported by the provider/gateway path, appear incrementally in the Web UI with minimal batching delay.
  • Slow operations are instrumented so logs/devtools can identify whether latency is in the UI, gateway, provider, or model generation path.
  • Repeated clicks caused by missing feedback are reduced by disabling or marking only the affected control while the operation is pending.
  • Mobile and desktop layouts remain stable while loading or streaming states are shown.

Suggested performance targets

These numbers are not strict requirements, but they would make the experience feel much better:

  • UI acknowledgement after click/send: under 100 ms locally
  • config save/test visible pending state: under 100 ms
  • chat message appears in transcript after send: under 100 ms
  • first visible running/accepted state: under 250 ms
  • first streamed token or first progress indicator: under 1 second when the backend has accepted the request
  • no unrelated panel/background RPC should block typing in the composer

Related issues

Impact

Improving responsiveness here would make OpenClaw feel much more reliable during normal use. Users would spend less time guessing whether the gateway is applying configuration, whether the model is working, or whether the Web UI has stalled. It would also reduce duplicate prompts, repeated clicks, and unnecessary log inspection.
