Summary
The OpenClaw Gateway process consumes 100% CPU from the moment it starts and never drops. Memory usage is 724 MB (peak 900 MB). The node.list RPC call takes 21-35 seconds, and sessions.list takes 2-5 seconds. The Web UI (openclaw-control-ui) polls these endpoints every 2-3 seconds, so requests pile up and the Gateway stays permanently pinned at 100% CPU.
Restarting the Gateway does not help — CPU hits 100% immediately after startup.
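The pileup claim above can be sanity-checked with back-of-envelope arithmetic, using the figures from this report (a ~2.5 s poll interval as the midpoint of the reported 2-3 s, and the ~21 s node.list latency):

```shell
# Rough pileup estimate: if the UI polls every poll_interval_ms and each
# node.list call takes latency_ms, about latency / interval calls are
# in flight at once. Values taken from the numbers in this report.
poll_interval_ms=2500
latency_ms=21000
echo $(( latency_ms / poll_interval_ms ))   # → 8 concurrent node.list calls
```

Eight-plus overlapping calls against an already-slow endpoint is consistent with the Gateway never getting a chance to drain its queue.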
Environment
OpenClaw: 2026.4.29 (a448042)
Node: v24.15.0
OS: macOS 26.4.1 (ARM64, Apple Silicon)
Gateway PID: 5276 → 9075 (after restart)
Plugins: feishu, llm-task, memclaw
MCP servers: mempalace (Python), context-mode (npx)
Evidence
CPU from startup
$ ps -p 9075 -o %cpu,comm
99.9% node
node.list latency (right after restart)
22:16:25 node.list 21,005ms
22:16:49 node.list 4,228ms
22:17:51 node.list 5,281ms
22:18:21 node.list 14,074ms
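For anyone reproducing this, a quick way to pull the worst-case latency out of the log (the gateway.log path comes from the reproduction steps below; the line format is an assumption based on the samples above, e.g. `22:16:25 node.list 21,005ms`):

```shell
# Extract node.list latencies (ms) from gateway.log and print the maximum.
# Assumes lines shaped like "22:16:25 node.list 21,005ms".
grep 'node\.list' gateway.log \
  | awk '{gsub(/[,ms]/, "", $3); if ($3+0 > max) max = $3+0} END {print max}'
```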
sessions.list latency
22:15:41 sessions.list 3,183ms
22:16:09 sessions.list 4,840ms
22:16:12 sessions.list 2,990ms
22:16:24 sessions.list 3,011ms
Memory
Physical footprint: 724.0M
Physical footprint (peak): 900.8M
Gateway sample (CPU profile)
777 samples on main thread, all in:
uv_run → uv__io_poll → uv__stream_io → OnUvRead → JS callback
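The dump above is the kind produced by macOS's built-in `sample` tool (e.g. `sample 9075 10 -file gateway-sample.txt` for 10 seconds of call stacks). Once saved, the hot path can be confirmed by counting stack lines in the libuv read path (the output file name here is an assumption):

```shell
# Count stack lines sitting in the libuv I/O polling / stream-read path
# within a saved `sample` dump. A high count relative to total samples
# means the main thread is spinning in socket reads.
grep -cE 'uv__io_poll|uv__stream_io' gateway-sample.txt
```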
Network connections
183.240.229.82:443 — Feishu API (4 connections)
111.13.213.62:443 — XiaomiMimo API (2 connections)
localhost:18789 — WebSocket (Web UI)
6 unix sockets — MCP server pipes
Web UI reconnection pattern
21:19:34 connected b859...
21:25:56 disconnected (6 minutes)
21:26:29 connected 48b3... (33s later)
21:53:47 disconnected (27 minutes)
21:53:48 connected 1329... (1s later)
22:00:20 disconnected + immediate reconnect
Analysis
node.list is the bottleneck — 21-35 s per call when it should take <100 ms
- Not a state bloat issue — happens immediately after fresh restart
- Web UI is a victim, not the cause — requests pile up because each takes seconds
- Plugin initialization — feishu and llm-task each load 30 bundled runtime deps at startup
- Possibly related to MCP server startup — 7 MCP processes (3 Python + 4 Node) spawn during init
Comparison with existing issues
Steps to reproduce
- Start Gateway:
openclaw gateway start
- Open Web UI (openclaw-control-ui)
- Observe CPU immediately hits 100%
- Check gateway.log for node.list latency >20s
Expected behavior
Gateway should idle at <5% CPU when no active LLM requests are in flight.