Version: OpenClaw v2026.4.15
Component: extensions/active-memory, src/agents/pi-embedded-runner
Description:
active-memory's config.timeoutMs fires the AbortController at the configured value, but runRecallSubagent() does not return until ~15-20s later because the wait-for-idle-before-flush cleanup path does not respect the abort signal promptly.
Config:
plugins.entries.active-memory.config.timeoutMs: 3000
Log evidence (two models, same behavior):
Model 1 — openai-codex/gpt-5.4-mini:
active-memory: start timeoutMs=3000
embedded run failover decision: reason=timeout from=openai-codex/gpt-5.4-mini
active-memory: done status=timeout elapsedMs=20623
Model 2 — openrouter/arcee-ai/trinity-large-preview:free:
active-memory: start timeoutMs=3000
embedded run failover decision: reason=timeout from=openrouter/arcee-ai/trinity-large-preview:free
active-memory: done status=timeout elapsedMs=15562
Expected: runRecallSubagent() returns shortly after timeoutMs fires.
Actual: Returns ~15-20s later regardless of timeoutMs value or model used.
Code pointers:
- extensions/active-memory/index.ts:1720 (startedAt)
- extensions/active-memory/index.ts:1764-1768 (AbortController)
- src/agents/pi-embedded-runner/run/attempt.ts:1531-1539 (fire-and-forget abort)
- src/agents/pi-embedded-runner/wait-for-idle-before-flush.ts:10 (30s idle wait ceiling)
Hypothesis: abort() is advisory/fire-and-forget on the embedded runner. waitForIdle() resolves on an internal ~15s timeout boundary rather than immediately on abort, causing the full cleanup to take 15-20s before active-memory caller gets control back.
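If the hypothesis holds, the fix shape would be to make the idle wait race the caller's AbortSignal against the poll loop and the 30s ceiling. A minimal sketch follows; all names and signatures here are illustrative assumptions, not the actual OpenClaw internals:

```typescript
// Hypothetical sketch: an abort-aware wait-for-idle. On abort() it
// resolves immediately instead of riding out an internal timeout.
function waitForIdleBeforeFlush(
  isIdle: () => boolean,
  signal: AbortSignal,
  ceilingMs = 30_000, // the 30s idle-wait ceiling from wait-for-idle-before-flush.ts
  pollMs = 100,
): Promise<"idle" | "aborted" | "ceiling"> {
  return new Promise((resolve) => {
    let timer: ReturnType<typeof setInterval> | undefined;
    const finish = (result: "idle" | "aborted" | "ceiling") => {
      if (timer !== undefined) clearInterval(timer);
      signal.removeEventListener("abort", onAbort);
      resolve(result);
    };
    const onAbort = () => finish("aborted"); // return control promptly on abort
    if (signal.aborted) return finish("aborted");
    signal.addEventListener("abort", onAbort, { once: true });
    const startedAt = Date.now();
    timer = setInterval(() => {
      if (isIdle()) return finish("idle");
      if (Date.now() - startedAt >= ceilingMs) return finish("ceiling");
    }, pollMs);
  });
}
```

With this shape, the elapsedMs in the logs above would land near timeoutMs plus one poll interval rather than at the 15-20s cleanup boundary.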
Impact: Every active-memory recall adds 15-20s latency to user responses regardless of timeoutMs setting, making the config option effectively useless for reducing response time.
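If the runner's cleanup cannot be made abort-aware, a caller-side workaround is to bound how long the recall can block the response path and let the abandoned cleanup settle in the background. Sketch only; recallWithDeadline is a hypothetical helper, not existing OpenClaw API:

```typescript
// Hypothetical caller-side mitigation: race the subagent run against
// timeoutMs plus a small grace window. undefined means "gave up waiting";
// the abandoned run promise still settles (and cleans up) in the background.
async function recallWithDeadline<T>(
  run: Promise<T>,
  timeoutMs: number,
  graceMs = 500,
): Promise<T | undefined> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<undefined>((resolve) => {
    timer = setTimeout(() => resolve(undefined), timeoutMs + graceMs);
  });
  try {
    return await Promise.race([run, deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // avoid a dangling timer when run wins
  }
}
```

This trades a possible unhandled late result for a response path that actually honors timeoutMs.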