
Default compaction mode (safeguard) silently fails on large contexts; docs incorrectly state default is "default" #7477

@michael-b-halvorsen

Description


Fresh OpenClaw installations receive compaction.mode: "safeguard" by default (via applyCompactionDefaults in src/config/defaults.ts), but this mode silently fails when contexts reach ~180k tokens, producing "Summary unavailable due to context limits" instead of actual AI-generated summaries.

This causes conversations to lose context without any warning to the user.

Evidence

1. Code default is safeguard:

// src/config/defaults.ts, line ~460
export function applyCompactionDefaults(cfg: OpenClawConfig): OpenClawConfig {
  // ...
  return {
    ...cfg,
    agents: {
      ...cfg.agents,
      defaults: {
        ...defaults,
        compaction: {
          ...compaction,
          mode: "safeguard",  // ← This is applied to all new installations
        },
      },
    },
  };
}
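One detail worth flagging in the snippet above: with object-spread syntax, a key listed after a spread overwrites the same key from the spread. A minimal, self-contained sketch of that semantics (the names here are illustrative, not the actual OpenClaw types):

```typescript
// Illustrative shape, not OpenClaw's real config type.
interface Compaction {
  mode: string;
  reserveTokensFloor?: number;
}

// Mirrors the spread order in applyCompactionDefaults: the literal
// `mode: "safeguard"` appears after `...compaction`, so it wins over
// whatever was in the spread object.
function applyDefault(compaction: Partial<Compaction>): Compaction {
  return { ...compaction, mode: "safeguard" };
}

console.log(applyDefault({ mode: "default" }).mode); // "safeguard"
```

If the real code path runs this on every load (rather than only on first install), the spread order would mean even an explicit user setting gets overwritten; I have not verified which is the case.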

2. Documentation says otherwise:

The docs at docs.clawdbot.com state:

agents.defaults.compaction.mode selects the compaction summarization strategy. Defaults to default; set safeguard to enable chunked summarization for very long histories.

3. Session logs show failures:

{
  "type": "compaction",
  "summary": "Summary unavailable due to context limits. Older messages were truncated.",
  "deletedCount": 847
}

This appears repeatedly in session JSONL files when using the default config.
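For anyone who wants to check their own sessions, here is a hedged sketch that counts failed-compaction records in a JSONL string. The record shape is inferred from the excerpt above, not from a documented OpenClaw format:

```typescript
// Fields inferred from the session log excerpt; not a documented schema.
interface CompactionRecord {
  type: string;
  summary?: string;
  deletedCount?: number;
}

// Counts compaction records whose summary indicates the fallback
// "Summary unavailable" message rather than a real AI-generated summary.
function countFailedCompactions(jsonl: string): number {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as CompactionRecord)
    .filter(
      (r) =>
        r.type === "compaction" &&
        (r.summary ?? "").startsWith("Summary unavailable")
    ).length;
}
```

Feed it the contents of a session file, e.g. `countFailedCompactions(readFileSync(path, "utf8"))` with `readFileSync` from `node:fs`.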

4. Wizard doesn't configure this:

The setup wizard (src/commands/configure.wizard.ts) has no section for compaction settings. Users cannot opt out of safeguard mode without manually editing their config.
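Purely as a sketch of what a compaction step in the wizard could look like: a single prompt that parses the user's answer into a mode, falling back to `"default"` rather than silently enabling safeguard. None of these names exist in `configure.wizard.ts` today:

```typescript
// Hypothetical wizard types; nothing here is current OpenClaw API.
type CompactionMode = "default" | "safeguard";

interface WizardAnswer {
  compactionMode: CompactionMode;
}

// Normalizes free-form input from a wizard prompt. Unrecognized input
// falls back to "default" so users never opt into safeguard by accident.
function parseCompactionAnswer(input: string): WizardAnswer {
  const normalized = input.trim().toLowerCase();
  return {
    compactionMode: normalized === "safeguard" ? "safeguard" : "default",
  };
}
```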

Impact

  • All new users get broken compaction by default
  • Silent failure — no warning that summarization failed
  • Context loss — AI loses conversation history without proper summary
  • Users may not realize the issue for weeks/months

Expected Behavior

One or more of the following:

  1. Change the default from "safeguard" to "default" (which works reliably)
  2. Fix safeguard mode to actually produce summaries on large contexts
  3. Add a warning when safeguard mode can't summarize
  4. Add compaction settings to the setup wizard
  5. Update documentation to match actual code behavior
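Option 3 seems like the smallest change. A hedged sketch of what it could look like, detecting the fallback summary and surfacing a warning instead of failing silently (the failure signal and warn callback are illustrative, not OpenClaw APIs):

```typescript
// Illustrative shape matching the session log excerpt in this issue.
interface CompactionResult {
  summary: string;
  deletedCount: number;
}

// Returns true if the compaction produced the fallback message instead of
// a real summary, and emits a user-visible warning via the supplied callback.
function warnIfCompactionFailed(
  result: CompactionResult,
  warn: (msg: string) => void
): boolean {
  const failed = result.summary.startsWith("Summary unavailable");
  if (failed) {
    warn(
      `Compaction dropped ${result.deletedCount} messages without a summary; ` +
        `consider setting agents.defaults.compaction.mode to "default".`
    );
  }
  return failed;
}
```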

Workaround

Users can manually set in ~/.openclaw/openclaw.json:

{
  "agents": {
    "defaults": {
      "compaction": {
        "mode": "default",
        "reserveTokensFloor": 40000
      }
    }
  }
}

Increasing reserveTokensFloor from the default 20,000 to 40,000 triggers compaction earlier, giving more headroom for the summarization API call.
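Back-of-the-envelope arithmetic for why this helps, assuming compaction triggers when usage reaches the context window minus the floor (the exact trigger condition in OpenClaw may differ):

```typescript
// Assumption: compaction fires at contextWindow - reserveFloor tokens.
function compactionTriggerPoint(
  contextWindow: number,
  reserveFloor: number
): number {
  return contextWindow - reserveFloor;
}

// With a 200k-token model (Claude Opus 4.5 in my setup):
console.log(compactionTriggerPoint(200_000, 20_000)); // 180000 — right at the observed failure point
console.log(compactionTriggerPoint(200_000, 40_000)); // 160000 — leaves headroom for the summarization call
```

Under this assumption, the default floor of 20,000 leaves compaction firing at ~180k tokens, which matches where the "Summary unavailable" failures appear.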

Environment

  • OpenClaw version: 2026.1.29 (latest)
  • Model: Claude Opus 4.5 (200k context)
  • OS: Linux

Roadmap Alignment

This issue aligns with the current priority listed in CONTRIBUTING.md:

Performance: Optimizing token usage and compaction logic.

Happy to help with a PR if maintainers point me in the right direction.

Related

  • src/config/defaults.ts — applyCompactionDefaults()
  • src/agents/pi-settings.ts — DEFAULT_PI_COMPACTION_RESERVE_TOKENS_FLOOR
  • src/commands/configure.wizard.ts — no compaction section
  • Documentation at docs.clawdbot.com/gateway/configuration

I'm writing a blog post about OpenClaw memory optimization and discovered this while investigating why my bot kept "forgetting" context. I can take a crack at a PR if you'd like — the fix seems straightforward (change the default in applyCompactionDefaults or update the docs) — and I'm happy to test any fix. Let me know which direction you prefer.

Labels: bug — Something isn't working

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions