fix(ui): reduce polling intervals to prevent progressive UI sluggishness #45930
McMuff86 wants to merge 2 commits into openclaw:main from
Conversation
Previous polling intervals (Nodes: 5 s, Logs: 2 s, Debug: 3 s) caused excessive RPC traffic that led to progressive dashboard slowdown, especially with many active sessions or when the dashboard was left open in a background tab. The new intervals balance freshness with resource usage:

- Nodes: 30 s (topology rarely changes)
- Logs: 5 s (still a near-realtime tail)
- Debug: 10 s (status data is slow-moving)

Magic numbers are replaced with named constants for maintainability.

Relates to openclaw#45698
Greptile Summary

This PR reduces the polling frequencies for the three Control UI data feeds (nodes, logs, and debug) and extracts the previously inline magic numbers into named, documented constants. The changes are minimal, focused, and well-explained. Key changes:
Observation (pre-existing, not introduced by this PR):

Confidence Score: 5/5
Last reviewed commit: b3f5ae6
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 34af89795d
```ts
 * - Debug: 10 s (status data is slow-moving)
 */
const NODES_POLL_MS = 30_000;
const LOGS_POLL_MS = 5_000;
```
Restore a shorter logs polling interval
Increasing LOGS_POLL_MS to 5 s makes the Logs tab lose data on busy gateways, not just update less often. The UI still requests only 500 lines / 250 KB per poll (ui/src/ui/app.ts:431-432), and logs.tail explicitly resets to the last maxBytes chunk whenever size - cursor > maxBytes (src/gateway/server-methods/logs.ts:81-91). That means any gateway emitting more than ~250 KB over 5 seconds will skip older lines between polls and show only the newest chunk. With the previous 2 s cadence the threshold was much higher, so this is a functional regression for live tailing sessions that generate lots of logs.
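To make that failure mode concrete, here is a minimal TypeScript sketch of the reset behavior described above; the names and exact arithmetic are illustrative assumptions, not the actual logic in src/gateway/server-methods/logs.ts:

```ts
interface TailRequest {
  cursor: number;   // byte offset the client has already consumed
  maxBytes: number; // per-poll budget (~250 KB in the Control UI)
}

// If the log grew by more than maxBytes since the last poll, jump to the
// newest chunk and report how many bytes were silently skipped.
function tailChunk(fileSize: number, req: TailRequest): { start: number; skipped: number } {
  if (fileSize - req.cursor > req.maxBytes) {
    const start = fileSize - req.maxBytes;
    return { start, skipped: start - req.cursor };
  }
  return { start: req.cursor, skipped: 0 };
}

// At a 2 s cadence a 250 KB budget tolerates ~125 KB/s of log output before
// lines drop; at 5 s the same budget tolerates only ~50 KB/s.
```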
Codex review: needs changes before merge.

Summary

Reproducibility: yes, for the PR-introduced regression, by source inspection: if the active Logs tab falls more than the 250 KB / 500-line tail budget behind between cursor requests, the intervening lines are skipped.

Next step before merge

Security Review findings
Review details

Best possible solution: Ship a narrow Control UI polling cleanup that keeps the named constants and the lower-risk Nodes/Debug churn reduction, preserves active Logs live-tail semantics or makes cursor recovery lossless, and records the user-facing fix in the changelog.

Do we have a high-confidence way to reproduce the issue? Yes, for the PR-introduced regression, by source inspection: if the active Logs tab falls more than the 250 KB / 500-line tail budget behind between cursor requests, the intervening lines are skipped.

Is this the best way to solve the issue? No, not as written. The safer fix is to keep the Logs cadence at 2 s, or to implement lossless cursor catch-up before increasing it (see the sketch below), while handling the Nodes/Debug polling churn and the changelog separately.

Full review comments:
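As one possible shape for that lossless catch-up, a hedged TypeScript sketch; the function name and signature are assumptions, not code from src/gateway/server-methods/logs.ts:

```ts
// Lossless variant: when the client is behind, serve the next maxBytes
// starting at its cursor instead of jumping to the newest tail chunk.
// The backlog drains over successive polls; no lines are ever skipped,
// at the cost of the view briefly lagging the live tail during bursts.
function nextChunk(
  fileSize: number,
  cursor: number,
  maxBytes: number,
): { start: number; end: number; caughtUp: boolean } {
  const end = Math.min(cursor + maxBytes, fileSize);
  return { start: cursor, end, caughtUp: end === fileSize };
}
```

A client could poll again immediately while caughtUp is false, so catch-up time stays bounded by log volume rather than by the wall-clock interval.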
Overall correctness: patch is incorrect

Acceptance criteria:
What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against 443f7035a2e5.
Maintainer overlap triage after opening #75986: This PR is related but not duplicate-closed or superseded by #75986. The scopes are different:
I would keep this PR deferred/needs-changes rather than close it as a duplicate of #75986. The existing blockers still apply: preserve live log-tail semantics or restore the shorter logs cadence, and add the required changelog entry.
Problem
The Control UI polling intervals are very aggressive:

- Nodes: 5 s
- Logs: 2 s
- Debug: 3 s
This causes excessive RPC traffic that leads to progressive dashboard slowdown — especially with many active sessions or when the dashboard is left open in a background tab.
Relates to #45698.
Changes
Reduced polling intervals to more reasonable defaults:

- Nodes: 5 s → 30 s (topology rarely changes)
- Logs: 2 s → 5 s (still a near-realtime tail)
- Debug: 3 s → 10 s (status data is slow-moving)
Also extracted magic numbers into named constants for maintainability.
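As a sketch of that extraction, the before/after shape; only NODES_POLL_MS and its value come from the diff above, while pollNodes and the setInterval wiring are illustrative stand-ins:

```ts
// Illustrative stand-in for the real poll function in ui/src/ui/app.ts.
function pollNodes(): void {
  /* fetch node topology over RPC */
}

// Before: inline magic number.
setInterval(pollNodes, 5_000);

// After: a named, documented constant.
const NODES_POLL_MS = 30_000; // topology rarely changes
setInterval(pollNodes, NODES_POLL_MS);
```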
Impact
Testing
Verified on a local instance with 89 active sessions where the dashboard was noticeably sluggish before. After the change, CPU and network usage dropped significantly and the UI remained responsive.