
fix(doctor): resolve false positive for local memory search when no explicit modelPath#32014

Merged
steipete merged 4 commits into openclaw:main from adhishthite:fix/doctor-local-memory-search-false-positive
Mar 2, 2026
Conversation

@adhishthite
Contributor

Summary

Fixes a false positive warning in openclaw doctor where it reports "Memory search provider is set to "local" but no local model file was found" even when local memory search is working correctly with the default auto-resolved model.

Problem

When memorySearch.provider is "local" (or "auto") and no explicit local.modelPath is configured, the runtime in createLocalEmbeddingProvider() auto-resolves to DEFAULT_LOCAL_MODEL (hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF). However, the doctor's hasLocalEmbeddings() only inspected the config value and returned false when modelPath was empty, triggering a misleading warning.

openclaw memory status --deep confirms everything is working:

Provider: local (requested: local)
Model: hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/embeddinggemma-300m-qat-Q8_0.gguf
Embeddings: ready
48/48 files indexed, 212 chunks

Fix

In hasLocalEmbeddings(), fall back to DEFAULT_LOCAL_MODEL when no explicit modelPath is set — matching the runtime behavior in createLocalEmbeddingProvider():

// Before
const modelPath = local.modelPath?.trim();

// After
const modelPath = local.modelPath?.trim() || DEFAULT_LOCAL_MODEL;
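For context, the surrounding check can be sketched as follows. This is an illustrative reconstruction, not the actual doctor-memory-search.ts source: only hasLocalEmbeddings() and DEFAULT_LOCAL_MODEL are names taken from the PR, while the config shape and the hf:-prefix shortcut are assumptions based on the description above.

```typescript
// Hedged sketch of the fixed doctor check. The LocalConfig shape and the
// hf:-prefix handling are assumptions for illustration; only the fallback
// to DEFAULT_LOCAL_MODEL is taken from the PR itself.
const DEFAULT_LOCAL_MODEL = "hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF";

interface LocalConfig {
  modelPath?: string;
}

function hasLocalEmbeddings(local: LocalConfig): boolean {
  // Fall back to the default model, mirroring createLocalEmbeddingProvider().
  const modelPath = local.modelPath?.trim() || DEFAULT_LOCAL_MODEL;
  // hf: references are auto-downloaded at runtime, so they count as present;
  // a concrete filesystem path would be stat'd here in the real check.
  return modelPath.startsWith("hf:");
}
```

With this fallback, an empty config no longer reads as "no local model": the default hf: reference resolves exactly as the runtime would resolve it.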

Tests

  • Added test: local provider with no explicit modelPath should not warn (default model fallback)
  • Added test: local provider with explicit hf: modelPath should not warn
  • Updated auto-mode test to reflect that default local model is now correctly detected
  • All 11 tests pass

Closes #31998

@openclaw-barnacle bot added the commands (Command implementations) and size: XS labels on Mar 2, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3cbe22737e

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@greptile-apps
Contributor

greptile-apps bot commented Mar 2, 2026

Greptile Summary

Fixes a false positive in openclaw doctor where it incorrectly reported missing local model files even when the default auto-resolved model was available. The fix ensures the doctor's hasLocalEmbeddings() check matches the runtime behavior in createLocalEmbeddingProvider() by falling back to DEFAULT_LOCAL_MODEL when no explicit modelPath is configured.

  • Correctly mirrors runtime logic that auto-resolves to the default HuggingFace model (hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF)
  • Added comprehensive test coverage for both no-modelPath and explicit hf: modelPath scenarios
  • Updated auto-mode test to reflect that default local model is now properly detected

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The fix is a simple one-line change that adds proper fallback logic to match existing runtime behavior. It's well-tested with three test cases covering the scenarios, and the change is isolated to the doctor health check functionality without affecting runtime behavior
  • No files require special attention

Last reviewed commit: 3cbe227


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 43ca93b385


Comment on lines +46 to +47
if (hasLocalEmbeddings(resolved.local, true)) {
return; // local model file exists (or default model will be auto-downloaded)


P2: Preserve probe-based warning for local default model path

This new early return bypasses gatewayMemoryProbe handling whenever provider is "local" and modelPath is unset, because hasLocalEmbeddings(..., true) now treats DEFAULT_LOCAL_MODEL as immediately available. In doctor.ts, this function is called with probe data, so when the gateway reports ready: false (for example, local embeddings cannot initialize due to missing node-llama-cpp or model download/setup failure), doctor now emits no memory-search warning and can report a broken local setup as healthy.


@adhishthite
Contributor Author

CI Failures — Pre-existing, Unrelated to This PR

The two test failures in this PR are not caused by our changes. Both fail on main as well:

1. isolated-agent.uses-last-non-empty-agent-text-as.test.ts (line 424)

AssertionError: expected 'anthropic' to be 'openai'

The test expects openai as the default provider after a model override round-trip, but the model resolution logic now returns anthropic. Likely a recent change to the model catalog or default provider resolution.

2. temp-path-guard.test.ts (line 239)

Expected: []
Received: ["src/gateway/server.auth.control-ui.suite.ts",
           "src/gateway/server.auth.default-token.suite.ts", 
           "src/gateway/server.auth.shared.ts"]

Three auth test files use dynamic tmpdir joins that trigger the security guardrail scanner. These files likely need to use the repo's safe tmpdir helper.
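A guardrail-friendly helper of the kind mentioned above typically looks something like this. Purely illustrative: the repo's actual safe tmpdir helper is not shown in this thread, and makeSafeTmpDir is an invented name.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Hypothetical sketch of a "safe tmpdir" helper of the kind such scanners
// usually require: one audited wrapper instead of ad-hoc os.tmpdir() joins.
// makeSafeTmpDir is an invented name; the repo's real helper may differ.
function makeSafeTmpDir(prefix: string): string {
  // mkdtempSync appends a random suffix, so the resulting path is
  // unpredictable and uniquely owned by this process.
  return fs.mkdtempSync(path.join(os.tmpdir(), `${prefix}-`));
}
```

The point is less the implementation than the indirection: a scanner can whitelist one helper far more easily than it can prove every inline `path.join(os.tmpdir(), ...)` is safe.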


Our change (doctor-memory-search.ts + tests) passes in all runners. The check job (lint + type-check) is green ✅.

Adhish and others added 4 commits March 2, 2026 18:34
fix(doctor): resolve false positive for local memory search when no explicit modelPath

When memorySearch.provider is 'local' (or 'auto') and no explicit
local.modelPath is configured, the runtime auto-resolves to
DEFAULT_LOCAL_MODEL (embeddinggemma-300m via HuggingFace). However,
the doctor's hasLocalEmbeddings() check only inspected the config
value and returned false when modelPath was empty, triggering a
misleading warning.

Fix: fall back to DEFAULT_LOCAL_MODEL in hasLocalEmbeddings(), matching
the runtime behavior in createLocalEmbeddingProvider().

Closes openclaw#31998
fix: scope DEFAULT_LOCAL_MODEL fallback to explicit provider:local only

Address review feedback: canAutoSelectLocal() in the runtime skips
local for empty/hf: model paths in auto mode. The DEFAULT_LOCAL_MODEL
fallback should only apply when provider is explicitly 'local', not
when provider is 'auto' — otherwise users with no local file and no
API keys would get a clean doctor report but no working embeddings.

Add useDefaultFallback parameter to hasLocalEmbeddings() to
distinguish the two code paths.
fix: preserve gateway probe warning for local provider with default model

When hasLocalEmbeddings returns true via DEFAULT_LOCAL_MODEL fallback,
also check the gateway memory probe if available. If the probe reports
not-ready (e.g. node-llama-cpp missing or model download failed),
emit a warning instead of silently reporting healthy.

Addresses review feedback about bypassing probe-based validation.
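The behavior the last two commits describe can be sketched as follows. This is a hedged illustration, not the real doctor code: MemoryProbe and localMemorySearchWarning are invented names, and only the fallback-scoping and probe semantics come from the commit messages above.

```typescript
// Illustrative sketch of the final behavior: the default-model fallback
// applies only when the provider is explicitly "local", and a not-ready
// gateway probe still produces a warning. All names here are hypothetical.
interface MemoryProbe {
  ready: boolean;
  error?: string;
}

function localMemorySearchWarning(
  provider: "local" | "auto",
  modelPath: string | undefined,
  probe?: MemoryProbe,
): string | undefined {
  const explicit = Boolean(modelPath?.trim());
  // The fallback is scoped to provider "local": in auto mode an empty
  // modelPath means local would not be auto-selected, so warn as before.
  if (!explicit && provider !== "local") {
    return "no local model file was found and local would not be auto-selected";
  }
  // Even when the fallback applies, trust the gateway probe when present:
  // a not-ready probe (e.g. node-llama-cpp missing, download failed) warns.
  if (!explicit && probe && !probe.ready) {
    return `local memory search is not ready: ${probe.error ?? "unknown error"}`;
  }
  return undefined; // healthy, or an explicit path handled elsewhere
}
```

This addresses the P2 review note above: the fallback alone is not treated as proof of health when probe data contradicts it.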
@steipete force-pushed the fix/doctor-local-memory-search-false-positive branch from cd7510a to b6df323 on March 2, 2026, 18:35
@steipete merged commit 63734df into openclaw:main on Mar 2, 2026
9 checks passed
@steipete
Contributor

steipete commented Mar 2, 2026

Landed via temp rebase onto main.

  • Gate: bunx vitest run src/commands/doctor-memory-search.test.ts
  • Land commit: b6df323
  • Merge commit: 63734df

Thanks @adhishthite!

execute008 pushed a commit to execute008/openclaw that referenced this pull request Mar 2, 2026
fix(doctor): resolve false positive for local memory search when no explicit modelPath (openclaw#32014)

* fix(doctor): resolve false positive for local memory search when no explicit modelPath

When memorySearch.provider is 'local' (or 'auto') and no explicit
local.modelPath is configured, the runtime auto-resolves to
DEFAULT_LOCAL_MODEL (embeddinggemma-300m via HuggingFace). However,
the doctor's hasLocalEmbeddings() check only inspected the config
value and returned false when modelPath was empty, triggering a
misleading warning.

Fix: fall back to DEFAULT_LOCAL_MODEL in hasLocalEmbeddings(), matching
the runtime behavior in createLocalEmbeddingProvider().

Closes openclaw#31998

* fix: scope DEFAULT_LOCAL_MODEL fallback to explicit provider:local only

Address review feedback: canAutoSelectLocal() in the runtime skips
local for empty/hf: model paths in auto mode. The DEFAULT_LOCAL_MODEL
fallback should only apply when provider is explicitly 'local', not
when provider is 'auto' — otherwise users with no local file and no
API keys would get a clean doctor report but no working embeddings.

Add useDefaultFallback parameter to hasLocalEmbeddings() to
distinguish the two code paths.

* fix: preserve gateway probe warning for local provider with default model

When hasLocalEmbeddings returns true via DEFAULT_LOCAL_MODEL fallback,
also check the gateway memory probe if available. If the probe reports
not-ready (e.g. node-llama-cpp missing or model download failed),
emit a warning instead of silently reporting healthy.

Addresses review feedback about bypassing probe-based validation.

* fix: add changelog attribution for doctor local fallback fix (openclaw#32014) (thanks @adhishthite)

---------

Co-authored-by: Adhish <adhishthite@Adhishs-MacBook-Pro.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
dawi369 pushed a commit to dawi369/davis that referenced this pull request on Mar 3, 2026 (same commit series as above)
OWALabuy pushed a commit to kcinzgg/openclaw that referenced this pull request on Mar 4, 2026 (same commit series as above)
zooqueen pushed a commit to hanzoai/bot that referenced this pull request on Mar 6, 2026 (same commit series as above)

Labels

commands (Command implementations), size: S

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Doctor: false positive warning for local memory search when no explicit modelPath set

2 participants