
model.input.includes() crashes for custom providers without input field #42068

@woriwka-ai

Description


Bug type

Crash (process/app exits or hangs)

Summary

LCM compaction crashes for custom OpenAI-compatible providers whose model object lacks an input field, because convertMessages() calls model.input.includes("image") without a null check.

Steps to reproduce

  1. Configure a custom provider (e.g., MiniMax) in OpenClaw with api: "openai-completions".
  2. Install lossless-claw plugin and set it as contextEngine.
  3. Send messages to trigger LCM compaction.
  4. Observe repeated errors in gateway.err.log.

Expected behavior

LCM compaction should succeed for custom OpenAI-compatible providers without throwing errors, and completeSimple() should return the compacted content instead of falling back to truncation.

Actual behavior

convertMessages() calls model.input.includes("image") while model.input is undefined, causing:

TypeError: Cannot read properties of undefined (reading 'includes')

completeSimple() catches the error and returns empty content, so LCM always falls back to truncation.
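A minimal sketch of the failure mode and the optional-chaining guard (the object shapes and the supportsImages helper are illustrative, not the actual OpenClaw/pi-ai internals):

```javascript
// A fully registered model advertises its input modalities:
const registeredModel = { id: "gpt-4o", input: ["text", "image"] };

// A minimal model object built for an unregistered custom provider
// has no `input` field at all:
const customModel = { id: "minimax-m2.5" };

function supportsImages(model) {
  // Buggy form: throws TypeError when model.input is undefined.
  // return model.input.includes("image");

  // Guarded form: undefined?.includes(...) evaluates to undefined,
  // and `?? false` normalizes that to a boolean.
  return model.input?.includes("image") ?? false;
}

console.log(supportsImages(registeredModel)); // true
console.log(supportsImages(customModel));     // false
```

With the guard, a model without an input field is simply treated as text-only instead of crashing the compaction path.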

OpenClaw version

OpenClaw v2026.3.8

Operating system

macOS

Install method

Installed via official OpenClaw installer

Model

MiniMax M2.5 (via minimax-portal, OpenAI-compatible API)

Provider / routing chain

minimax-portal → api: "openai-completions" (pi-ai)

Config file / key location

Custom provider configured in gateway config (redacted)

Additional provider/model setup details

The custom provider is not part of pi-ai's built-in model registry, so getModel() returns undefined and the caller constructs a minimal model object without an input field.
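A hypothetical sketch of how the minimal model object ends up without input (the registry shape and the getModel/resolveModel names are assumptions, not the real pi-ai API):

```javascript
// Built-in registry: only known models carry an `input` modality list.
const registry = {
  "gpt-4o": { id: "gpt-4o", input: ["text", "image"] },
};

function getModel(id) {
  return registry[id]; // undefined for unregistered custom providers
}

function resolveModel(id) {
  // Fallback constructs a minimal object; note there is no `input` field,
  // so any later model.input.includes(...) call will throw.
  return getModel(id) ?? { id, api: "openai-completions" };
}

const model = resolveModel("minimax-m2.5");
console.log("input" in model); // false
```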

Logs, screenshots, and evidence

[lcm] completeSimple error: Cannot read properties of undefined (reading 'includes')
[lcm] all extraction attempts exhausted; source=fallback

Impact and severity

Affects any custom OpenAI-compatible provider that does not define model.input.
Severity: blocks LCM compaction for these providers.
Frequency: always reproducible.
Consequence: LCM cannot compact context and falls back to truncation on every attempt.

Additional information

Proposed fix in providers/openai-completions.js (use optional chaining so an undefined model.input is treated as falsy instead of throwing):

// Line ~445
- const filteredContent = !model.input.includes("image")
+ const filteredContent = !model.input?.includes("image")

// Line ~562
- if (hasImages && model.input.includes("image")) {
+ if (hasImages && model.input?.includes("image")) {

Related feature request for lossless-claw: add "minimax-portal" and "minimax" to keyMap in resolveApiKey() to support MINIMAX_API_KEY environment variable.
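A hypothetical sketch of that lossless-claw change (the keyMap shape and the resolveApiKey signature are assumptions, not the plugin's actual code):

```javascript
// Maps provider ids to the environment variable holding their API key.
const keyMap = {
  openai: "OPENAI_API_KEY",
  // Proposed additions so MiniMax providers resolve MINIMAX_API_KEY:
  "minimax-portal": "MINIMAX_API_KEY",
  minimax: "MINIMAX_API_KEY",
};

function resolveApiKey(provider, env = process.env) {
  const envVar = keyMap[provider];
  return envVar ? env[envVar] : undefined;
}

console.log(resolveApiKey("minimax", { MINIMAX_API_KEY: "sk-test" })); // "sk-test"
console.log(resolveApiKey("unknown", {}));                             // undefined
```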

Metadata

Labels

bug (Something isn't working), bug:crash (Process/app exits unexpectedly or hangs)
