fix(doctor): resolve false positive for local memory search when no explicit modelPath (#32014)
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3cbe22737e
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
Greptile Summary: Fixes a false positive in `openclaw doctor`.
Confidence Score: 5/5
Last reviewed commit: 3cbe227
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 43ca93b385
src/commands/doctor-memory-search.ts
(Outdated)

```typescript
if (hasLocalEmbeddings(resolved.local, true)) {
  return; // local model file exists (or default model will be auto-downloaded)
}
```
Preserve probe-based warning for local default model path
This new early return bypasses gatewayMemoryProbe handling whenever provider is "local" and modelPath is unset, because hasLocalEmbeddings(..., true) now treats DEFAULT_LOCAL_MODEL as immediately available. In doctor.ts, this function is called with probe data, so when the gateway reports ready: false (for example, local embeddings cannot initialize due to missing node-llama-cpp or model download/setup failure), doctor now emits no memory-search warning and can report a broken local setup as healthy.
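The reviewer's point can be sketched as follows. This is a hypothetical illustration, not the actual doctor code: the names `localMemoryWarning` and `MemoryProbe` are invented here, and the real probe shape may differ — the idea is that the default-model path must still defer to the gateway probe instead of returning early.

```typescript
// Hypothetical sketch: keep consulting the gateway probe when the
// default-model fallback is in play, rather than returning early.
type MemoryProbe = { ready: boolean; error?: string };

function localMemoryWarning(
  modelPath: string | undefined,
  modelFileExists: (p: string) => boolean,
  probe?: MemoryProbe,
): string | undefined {
  if (modelPath && modelPath.trim().length > 0) {
    // Explicit path: warn only if the file is missing.
    return modelFileExists(modelPath)
      ? undefined
      : `local model file not found: ${modelPath}`;
  }
  // Default-model fallback: the model may still need to be downloaded, so
  // do not report healthy unconditionally — defer to the probe if present.
  if (probe && !probe.ready) {
    return `local embeddings not ready: ${probe.error ?? "unknown error"}`;
  }
  return undefined;
}
```

With this shape, a `ready: false` probe (e.g. missing node-llama-cpp) still surfaces a warning even though the default model counts as "available".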
CI Failures — Pre-existing, Unrelated to This PR

The two test failures in this PR are not caused by our changes. Both fail on 1.
fix(doctor): resolve false positive for local memory search when no explicit modelPath

When memorySearch.provider is 'local' (or 'auto') and no explicit local.modelPath is configured, the runtime auto-resolves to DEFAULT_LOCAL_MODEL (embeddinggemma-300m via HuggingFace). However, the doctor's hasLocalEmbeddings() check only inspected the config value and returned false when modelPath was empty, triggering a misleading warning.

Fix: fall back to DEFAULT_LOCAL_MODEL in hasLocalEmbeddings(), matching the runtime behavior in createLocalEmbeddingProvider().

Closes openclaw#31998
fix: scope DEFAULT_LOCAL_MODEL fallback to explicit provider:local only

Address review feedback: canAutoSelectLocal() in the runtime skips local for empty/hf: model paths in auto mode. The DEFAULT_LOCAL_MODEL fallback should only apply when provider is explicitly 'local', not when provider is 'auto' — otherwise users with no local file and no API keys would get a clean doctor report but no working embeddings. Add a useDefaultFallback parameter to hasLocalEmbeddings() to distinguish the two code paths.
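The signature change described here might look like the following minimal sketch. The config type and function body are assumptions reconstructed from the commit message, not the actual code in doctor-memory-search.ts:

```typescript
// Hypothetical sketch of the useDefaultFallback parameter; the real
// hasLocalEmbeddings and its config types may differ.
interface LocalEmbeddingsConfig {
  modelPath?: string;
}

function hasLocalEmbeddings(
  local: LocalEmbeddingsConfig | undefined,
  useDefaultFallback: boolean, // true only when provider is explicitly "local"
): boolean {
  const modelPath = local?.modelPath?.trim() ?? "";
  if (modelPath.length > 0) return true; // explicit path configured
  // In "auto" mode the runtime (canAutoSelectLocal) skips local for
  // empty/hf: paths, so the DEFAULT_LOCAL_MODEL fallback must only
  // apply on the explicit provider:"local" code path.
  return useDefaultFallback;
}
```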
fix: preserve gateway probe warning for local provider with default model

When hasLocalEmbeddings returns true via the DEFAULT_LOCAL_MODEL fallback, also check the gateway memory probe if available. If the probe reports not-ready (e.g. node-llama-cpp missing or model download failed), emit a warning instead of silently reporting healthy. Addresses review feedback about bypassing probe-based validation.
Force-pushed from cd7510a to b6df323.
Landed via temp rebase onto main.
Thanks @adhishthite!
fix(doctor): resolve false positive for local memory search when no explicit modelPath (openclaw#32014)

* fix(doctor): resolve false positive for local memory search when no explicit modelPath

  When memorySearch.provider is 'local' (or 'auto') and no explicit local.modelPath is configured, the runtime auto-resolves to DEFAULT_LOCAL_MODEL (embeddinggemma-300m via HuggingFace). However, the doctor's hasLocalEmbeddings() check only inspected the config value and returned false when modelPath was empty, triggering a misleading warning. Fix: fall back to DEFAULT_LOCAL_MODEL in hasLocalEmbeddings(), matching the runtime behavior in createLocalEmbeddingProvider(). Closes openclaw#31998

* fix: scope DEFAULT_LOCAL_MODEL fallback to explicit provider:local only

  Address review feedback: canAutoSelectLocal() in the runtime skips local for empty/hf: model paths in auto mode. The DEFAULT_LOCAL_MODEL fallback should only apply when provider is explicitly 'local', not when provider is 'auto' — otherwise users with no local file and no API keys would get a clean doctor report but no working embeddings. Add a useDefaultFallback parameter to hasLocalEmbeddings() to distinguish the two code paths.

* fix: preserve gateway probe warning for local provider with default model

  When hasLocalEmbeddings returns true via the DEFAULT_LOCAL_MODEL fallback, also check the gateway memory probe if available. If the probe reports not-ready (e.g. node-llama-cpp missing or model download failed), emit a warning instead of silently reporting healthy. Addresses review feedback about bypassing probe-based validation.

* fix: add changelog attribution for doctor local fallback fix (openclaw#32014) (thanks @adhishthite)

Co-authored-by: Adhish <adhishthite@Adhishs-MacBook-Pro.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Summary
Fixes a false positive warning in `openclaw doctor` where it reports "Memory search provider is set to "local" but no local model file was found" even when local memory search is working correctly with the default auto-resolved model.

Problem
When `memorySearch.provider` is `"local"` (or `"auto"`) and no explicit `local.modelPath` is configured, the runtime in `createLocalEmbeddingProvider()` auto-resolves to `DEFAULT_LOCAL_MODEL` (`hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF`). However, the doctor's `hasLocalEmbeddings()` only inspected the config value and returned `false` when `modelPath` was empty, triggering a misleading warning. `openclaw memory status --deep` confirms everything is working.

Fix
In `hasLocalEmbeddings()`, fall back to `DEFAULT_LOCAL_MODEL` when no explicit `modelPath` is set, matching the runtime behavior in `createLocalEmbeddingProvider()`.

Tests
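The fallback can be sketched as below, using the model reference quoted in this description. The helper name `resolveLocalModelPath` is invented for illustration; `createLocalEmbeddingProvider`'s real resolution logic may differ.

```typescript
// Minimal sketch of the default-model fallback described above.
const DEFAULT_LOCAL_MODEL = "hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF";

function resolveLocalModelPath(configured?: string): string {
  // An empty or unset modelPath auto-resolves to the default model,
  // mirroring the runtime; the doctor check should agree with this.
  const path = configured?.trim();
  return path && path.length > 0 ? path : DEFAULT_LOCAL_MODEL;
}
```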
- `hf:` modelPath should not warn

Closes #31998