
[Bug]: Compaction issues: safeguard counter stuck at 0, default mode triggers TPM burst #38905

@fenglanhua

Description


Bug type

Regression (worked before, now fails)

Summary

Two related compaction issues: (1) safeguard mode actively trims context, but the Compactions counter in /status always stays at 0, making monitoring impossible; (2) default mode compaction sends a single large summarization API request that exceeds Google's 1M TPM limit, causing a failover.

Steps to reproduce

Issue 1 (safeguard counter):

Set compaction.mode: "safeguard", contextTokens: 160000, maxHistoryShare: 0.4
Use google/gemini-3-flash-preview and have a continuous conversation
Observe /status as context grows past the limit and gets trimmed
Compactions counter remains 0 despite active trimming

Issue 2 (default TPM burst):

Switch compaction.mode to "default" with the same settings
Have a conversation until context approaches the limit
Default compaction triggers a single large summarization request
The request exceeds Google's 1M TPM limit → 429 error → failover to the backup model

Expected behavior

Safeguard: Compactions counter should increment when context is trimmed, or /status should show a separate indicator for safeguard trimming events
Default: Compaction should not send a request that exceeds TPM limits. At minimum, documentation should warn that default mode can cause TPM spikes with TPM-limited providers
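The expected safeguard behavior can be sketched as follows. This is a hypothetical illustration, not OpenClaw's actual internals: the names SessionStats and trimToBudget are invented here, and the point is only that the trim path should increment the same counter /status reports.

```typescript
interface Message {
  tokens: number;
}

interface SessionStats {
  compactions: number; // what /status reports
}

// Drop oldest messages until history fits the budget, counting each
// trimming pass as a compaction so /status reflects it. If the history
// already fits, nothing is trimmed and the counter is untouched.
function trimToBudget(
  history: Message[],
  budgetTokens: number,
  stats: SessionStats,
): Message[] {
  let total = history.reduce((sum, m) => sum + m.tokens, 0);
  if (total <= budgetTokens) return history;

  const kept = [...history];
  while (total > budgetTokens && kept.length > 1) {
    total -= kept.shift()!.tokens;
  }
  stats.compactions += 1; // the increment that issue 1 reports as missing
  return kept;
}
```

With this shape, the 199k → 102k drop observed below would show up as Compactions: 1 on the next /status call.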

Actual behavior

Safeguard: Context drops from 199k to 102k between turns (trimming works), but Compactions stays at 0 throughout. Observed during continuous conversation with no manual /compact or /new:

21:18 → 100k/160k (63%) · Compactions: 0
21:22 → 199k/160k (125%) · Compactions: 0
21:23 → 102k/160k (64%) · Compactions: 0

Default: Compaction immediately triggered a large API request that exceeded 1M TPM, caused a 429 error, and forced a failover to google/gemini-3.1-pro-preview
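One way the TPM burst could be avoided, sketched under assumed names (chunkByTokenBudget is hypothetical, not an OpenClaw function): split the history into chunks whose token totals stay under the provider's per-request budget and summarize them incrementally, instead of sending the whole history in one burst.

```typescript
interface Message {
  tokens: number;
  text: string;
}

// Group messages into chunks whose combined token count never exceeds
// budgetTokens, so no single summarization request can trip a
// per-minute token limit on its own. Edge case: a single message
// larger than the budget still gets its own (oversized) chunk and
// would need separate handling, e.g. truncation.
function chunkByTokenBudget(history: Message[], budgetTokens: number): Message[][] {
  const chunks: Message[][] = [];
  let current: Message[] = [];
  let currentTokens = 0;

  for (const msg of history) {
    if (current.length > 0 && currentTokens + msg.tokens > budgetTokens) {
      chunks.push(current);
      current = [];
      currentTokens = 0;
    }
    current.push(msg);
    currentTokens += msg.tokens;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Each chunk would then be summarized in its own request (possibly with a short pause between them), keeping every request well under the 1M TPM quota.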

OpenClaw version

OpenClaw: 2026.3.2 (85377a2)

Operating system

Rocky Linux (Proxmox VM)

Install method

No response

Logs, screenshots, and evidence

"compaction": {
  "mode": "safeguard",
  "maxHistoryShare": 0.4
}
Model: google/gemini-3-flash-preview, contextTokens: 160000
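For reference, if maxHistoryShare is read as the fraction of contextTokens that history may occupy before safeguard trimming kicks in (an interpretation of the config above, not confirmed behavior), the implied budget would be:

```typescript
// Assumption: maxHistoryShare is a fraction of contextTokens reserved
// for conversation history. This is an interpretation, not documented
// OpenClaw semantics.
const contextTokens = 160_000;
const maxHistoryShare = 0.4;
const historyBudget = Math.round(contextTokens * maxHistoryShare);
```

That would give a 64k-token budget, which does not obviously match the observed trim target (~102k), so the actual formula may differ; the docs should state it either way.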

Impact and severity

safeguard mode is functionally working but unmonitorable
default mode is unusable with TPM-limited providers like Google AI Studio (1M TPM)
Users have no compaction mode that both works correctly and is monitorable

Additional information

Reverted to safeguard mode as workaround. Default mode caused immediate service disruption.

Labels

bug (Something isn't working) · regression (Behavior that previously worked and now fails)
