
feat: add Ollama model provider support #575

Merged
lefarcen merged 8 commits into main from feat/ollama-provider
Mar 26, 2026

Conversation

@mrcfps (Contributor) commented Mar 26, 2026

What

Add Ollama as a model provider and fix model selection so saved defaults stay compatible and remain synced in the UI.

Closes #565

Why

Users need to configure Ollama models alongside existing providers, and the model picker was not reliably reflecting saved defaults or legacy default IDs.

How

  • add Ollama to provider definitions, controller services, config compilation, and desktop-compatible routes
  • add the Ollama provider icon, labels, and model management UI updates in the web dashboard
  • add regression coverage for provider service behavior, config compilation, and model selection sync

Affected areas

  • Desktop app (Electron shell)
  • Controller (backend / API)
  • Web dashboard (React UI)
  • OpenClaw runtime
  • Skills
  • Shared schemas / packages
  • Build / CI / Tooling

Checklist

  • pnpm typecheck passes
  • pnpm lint passes
  • pnpm test passes
  • pnpm generate-types run (if API routes/schemas changed)
  • No credentials or tokens in code or logs
  • No any types introduced (use unknown with narrowing)

Notes for reviewers

Please focus on the saved-default migration path and the model picker sync behavior for legacy provider IDs.

Summary by CodeRabbit

  • New Features

    • Added Ollama as a supported local AI provider with branding, icon and EN/ZH descriptions
    • UI: hide the API key field for Ollama, allow saving/enabling without a key, and add a "refresh models" button
    • Provider verification: verify Ollama endpoints and auto-fetch available models
  • Tests

    • Added tests covering Ollama provider integration, verification, model refresh, and model-selection logic

@coderabbitai Bot commented Mar 26, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a707abb6-b057-49a8-8ef1-994945919ff8

📥 Commits

Reviewing files that changed from the base of the PR and between ffbf078 and d51b9f3.

📒 Files selected for processing (2)
  • apps/web/src/pages/models.tsx
  • tests/web/models-selection.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/web/models-selection.test.ts
  • apps/web/src/pages/models.tsx

📝 Walkthrough

Adds Ollama as a BYOK provider across API schemas, backend services (upsert/verify), OpenClaw config/runtime resolution, frontend UI/types/i18n/branding, and tests; includes provider-specific handling (dummy key, model discovery) and a model-refresh flow.

Changes

  • OpenAPI & client types (apps/controller/openapi.json, apps/web/lib/api/types.gen.ts): add "ollama" to providerId enums and generated request-path union types for provider PUT/DELETE/POST verify endpoints.
  • BYOK registry & OpenClaw mapping (apps/controller/src/lib/byok-providers.ts, apps/controller/src/lib/openclaw-config-compiler.ts): include ollama in supported BYOK IDs; add default base URL http://127.0.0.1:11434 and map the provider API id to "ollama" during OpenClaw resolution.
  • Provider management & verification (apps/controller/src/services/model-provider-service.ts, tests/desktop/model-provider-service.test.ts): special-case Ollama in upsert (force a dummy apiKey when absent) and verify (call {baseUrl}/api/tags and parse model names); add persistence and verify tests.
  • OpenClaw runtime & config compile tests (apps/controller/src/runtime/openclaw-process.ts, tests/desktop/openclaw-config-compiler.test.ts): add workspace-root discovery to prefer the workspace runtime entry; test that compile includes the Ollama provider mapping and models.
  • Frontend BYOK UI, types & selection logic (apps/web/src/pages/models.tsx, tests/web/models-selection.test.ts): export isModelSelected, add Ollama provider metadata and dummy-key behavior, hide the API key input, and add a refresh-models button with related tests.
  • Branding & i18n (apps/web/src/components/provider-logo.tsx, apps/web/src/i18n/locales/en.ts, apps/web/src/i18n/locales/zh-CN.ts): add the Ollama icon mapping and English/Chinese description strings.
  • Desktop compatibility route (apps/controller/src/routes/desktop-compat-routes.ts): change default model retrieval to use configStore.runtime.defaultModelId and normalize via resolveModelId.
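The upsert special case above (forcing a dummy key for Ollama) can be sketched as follows. This is a hedged sketch: the ProviderInput type and normalizeProviderInput name are illustrative, not the PR's actual symbols, but the "ollama-local" fallback value is the one the walkthrough describes.

```typescript
// Sketch of the dummy-key special case described in the walkthrough.
// Types and function names are assumptions; "ollama-local" is the
// fallback value the PR persists when no key is supplied for Ollama.
const OLLAMA_DUMMY_API_KEY = "ollama-local";

interface ProviderInput {
  providerId: string;
  baseUrl?: string;
  apiKey?: string;
}

function normalizeProviderInput(input: ProviderInput): ProviderInput {
  // Ollama needs no authentication, so persist a placeholder key that
  // satisfies downstream code expecting every provider to have one.
  if (input.providerId === "ollama" && !input.apiKey) {
    return { ...input, apiKey: OLLAMA_DUMMY_API_KEY };
  }
  return input;
}
```

Keeping the fallback in one normalization step means both the PUT route and any desktop-compat path persist the same placeholder, which matches the "apiKey -> ollama-local if missing" note in the sequence diagram below.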

Sequence Diagram(s)

sequenceDiagram
    participant Web as Web UI
    participant Controller as Controller API
    participant Service as ModelProviderService
    participant Ollama as Local Ollama
    participant Config as NexuConfigStore

    Web->>Controller: POST /api/v1/providers/ollama/verify { baseUrl, apiKey? }
    Controller->>Service: verifyProvider("ollama", input)
    Service->>Ollama: GET {baseUrl}/api/tags (Authorization if provided)
    Ollama-->>Service: 200 [{"name":"modelA"}, {"name":"modelB"}]
    Service-->>Controller: { valid: true, models: ["modelA","modelB"] }
    Controller-->>Web: 200 { valid: true, models: [...] }

    alt User saves provider
        Web->>Controller: PUT /api/v1/providers/ollama { baseUrl, apiKey? }
        Controller->>Service: upsertProvider(...)
        Service->>Config: persist provider (apiKey -> "ollama-local" if missing)
        Config-->>Service: saved
        Service-->>Controller: 200 OK
        Controller-->>Web: 200 OK
    end
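The verification leg of the diagram can be sketched in TypeScript. This is a hedged sketch, not the PR's code: verifyOllama, buildTagsUrl, and extractModelNames are illustrative names; only the /api/tags endpoint, its { models: [{ name }] } payload shape, and the 10-second timeout come from the review discussion and Ollama's documented tags API.

```typescript
// Hedged sketch of the Ollama verification flow shown in the diagram.
const OLLAMA_DEFAULT_BASE_URL = "http://127.0.0.1:11434";

interface OllamaTagsResponse {
  models: Array<{ name: string }>;
}

// Pure helpers so URL building and parsing are testable without a server.
function buildTagsUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, "")}/api/tags`;
}

function extractModelNames(data: OllamaTagsResponse): string[] {
  return data.models.map((m) => m.name);
}

async function verifyOllama(
  baseUrl: string = OLLAMA_DEFAULT_BASE_URL,
): Promise<{ valid: boolean; models: string[]; error?: string }> {
  try {
    const response = await fetch(buildTagsUrl(baseUrl), {
      signal: AbortSignal.timeout(10_000), // mirror the 10s timeout in the PR
    });
    if (!response.ok) {
      return { valid: false, models: [], error: `HTTP ${response.status}` };
    }
    const data = (await response.json()) as OllamaTagsResponse;
    return { valid: true, models: extractModelNames(data) };
  } catch (err) {
    return {
      valid: false,
      models: [],
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```

Note that the diagram abbreviates the response as a bare array; Ollama's tags endpoint actually wraps the list in a models field, which is why the sketch parses data.models.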

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • lefarcen
  • nettee

Poem

🐰 I hop to the meadow where Ollama hums,

I sniff base URLs and tally model sums.
No key in my pouch — "ollama-local" will stay,
I fetch tags, refresh models, then bounce on my way.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 14.29%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check ✅ Passed: the title "feat: add Ollama model provider support" is clear, concise, and accurately summarizes the main change, adding Ollama as a new model provider across the codebase.
  • Description check ✅ Passed: the description covers all key sections: What (add the Ollama provider), Why (user need), How (implementation approach), and Affected areas. Checklist items confirm generated types were run and no credentials or any types were introduced, though typecheck/lint/test completion is not indicated.
  • Linked Issues check ✅ Passed: the PR implements all core requirements from issue #565: an Ollama provider option, a custom service address (default http://127.0.0.1:11434), automatic model discovery via the /api/tags endpoint, OpenAI-compatible Chat Completion API adaptation, and error handling for provider verification.
  • Out of Scope Changes check ✅ Passed: changes include a desktop-compat route modification and config compiler updates related to runtime entry-point resolution, which are peripheral to Ollama support but necessary for integration. The model selection fix and isModelSelected export are closely tied to the Ollama feature objectives.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f1606325ed

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread apps/web/src/pages/models.tsx Outdated
Comment thread apps/controller/src/routes/desktop-compat-routes.ts Outdated
@coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
apps/controller/src/services/model-provider-service.ts (1)

498-523: Consider supporting auth headers for secured Ollama instances.

The verification request doesn't include any authentication headers, but input.apiKey is available. If a user configures a secured Ollama instance (e.g., behind a reverse proxy with API key auth), the verification will fail even with valid credentials.

Consider adding optional auth support:

💡 Suggested enhancement
       if (providerId === "ollama") {
+        const headers: Record<string, string> = {};
+        if (input.apiKey && input.apiKey !== OLLAMA_DUMMY_API_KEY) {
+          headers.Authorization = `Bearer ${input.apiKey}`;
+        }
         const response = await fetch(
           buildProviderUrl(
             input.baseUrl ?? PROVIDER_BASE_URLS[providerId] ?? null,
             "/api/tags",
           ) ?? verifyUrl,
           {
+            headers: Object.keys(headers).length > 0 ? headers : undefined,
             signal: AbortSignal.timeout(10000),
           },
         );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/controller/src/services/model-provider-service.ts` around lines 498 -
523, The Ollama verification branch (providerId === "ollama" inside
validate/check routine) currently calls fetch(buildProviderUrl(...) ??
verifyUrl, ...) without any auth headers; update the request to include
authentication when input.apiKey is present—add an Authorization (e.g., Bearer)
or X-API-Key header as appropriate to the fetch options so secured Ollama
instances behind API-key auth can be validated; keep using buildProviderUrl and
preserve the AbortSignal.timeout behavior and error handling.
apps/web/src/pages/models.tsx (1)

1352-1354: Consider extracting the placeholder constant.

The hardcoded "ollama-local" string is used as a dummy API key for Ollama (which doesn't require authentication). This value propagates to the backend and database. Consider extracting it to a named constant for clarity and to avoid magic strings.

💡 Suggested refactor
+const OLLAMA_PLACEHOLDER_KEY = "ollama-local";
+
 function ByokProviderDetail({
   // ...
 }) {
   // ...
   const isOllama = providerId === "ollama";
   const hostBridge = getModelsHostInvokeBridge();
-  const effectiveApiKey = isOllama ? "ollama-local" : apiKey;
+  const effectiveApiKey = isOllama ? OLLAMA_PLACEHOLDER_KEY : apiKey;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/pages/models.tsx` around lines 1352 - 1354, Replace the magic
string "ollama-local" used when computing effectiveApiKey with a named constant
(e.g., OLLAMA_DUMMY_API_KEY or OLLAMA_LOCAL_API_KEY); update the code that sets
effectiveApiKey (the isOllama check and assignment) to use that constant and
move the constant to an appropriate scope/file (same module or shared constants)
so it’s clear this is a deliberate dummy key and can be reused wherever isOllama
or effectiveApiKey logic is referenced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/controller/src/lib/openclaw-config-compiler.ts`:
- Around line 70-71: In the switch mapping that handles provider types (the
branch matching case "ollama" in openclaw-config-compiler.ts), change the
returned API type from "ollama" to "openai-completions" so Ollama is treated as
the OpenAI-compatible completions API; update the return value in the case
"ollama" branch to "openai-completions".


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 193f302a-bc7a-41d8-b4d3-d9478325e2ba

📥 Commits

Reviewing files that changed from the base of the PR and between cd19976 and f160632.

📒 Files selected for processing (13)
  • apps/controller/openapi.json
  • apps/controller/src/lib/byok-providers.ts
  • apps/controller/src/lib/openclaw-config-compiler.ts
  • apps/controller/src/routes/desktop-compat-routes.ts
  • apps/controller/src/services/model-provider-service.ts
  • apps/web/lib/api/types.gen.ts
  • apps/web/src/components/provider-logo.tsx
  • apps/web/src/i18n/locales/en.ts
  • apps/web/src/i18n/locales/zh-CN.ts
  • apps/web/src/pages/models.tsx
  • tests/desktop/model-provider-service.test.ts
  • tests/desktop/openclaw-config-compiler.test.ts
  • tests/web/models-selection.test.ts

Comment thread apps/controller/src/lib/openclaw-config-compiler.ts
@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 703b319e62


Comment thread apps/web/src/pages/models.tsx Outdated
@coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
apps/controller/src/runtime/openclaw-process.ts (1)

22-38: Consider extracting findWorkspaceRoot() to a shared utility.

This function is duplicated verbatim in apps/controller/src/services/model-provider-service.ts (lines 181-194). Both files use identical logic to discover the monorepo root for openclaw entry path resolution.

Extracting this to a shared module (e.g., lib/workspace.ts or lib/paths.ts) would reduce duplication and ensure both call sites stay in sync.
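A minimal sketch of such a shared helper, assuming a pnpm-workspace.yaml marker at the monorepo root (the actual sentinel file the PR checks for may differ, so the marker is a parameter here):

```typescript
import { existsSync } from "node:fs";
import * as path from "node:path";

// Walk upward from startDir until a workspace marker is found or the
// filesystem root is reached. The marker file is configurable so the
// same helper works if the repo keys off a different sentinel.
export function findWorkspaceRoot(
  startDir: string,
  markerFile = "pnpm-workspace.yaml",
): string | null {
  let current = path.resolve(startDir);
  for (;;) {
    if (existsSync(path.join(current, markerFile))) {
      return current;
    }
    const parent = path.dirname(current);
    if (parent === current) {
      return null; // hit the filesystem root without finding the marker
    }
    current = parent;
  }
}
```

Both openclaw-process.ts and model-provider-service.ts could then import this one function, keeping the two call sites in sync as the comment suggests.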

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/controller/src/runtime/openclaw-process.ts` around lines 22 - 38,
Extract the duplicated findWorkspaceRoot logic into a shared utility module
(e.g., create lib/workspace.ts or lib/paths.ts) that exports a single function
named findWorkspaceRoot(startDir: string): string | null; replace the inline
implementations in openclaw-process.ts and model-provider-service.ts with
imports from that new module and call the exported findWorkspaceRoot to preserve
behavior; ensure the new module imports path and existsSync and re-exports the
function signature unchanged so both call sites (findWorkspaceRoot in
openclaw-process.ts and the duplicate in model-provider-service.ts) require no
other changes.
tests/desktop/openclaw-config-compiler.test.ts (1)

292-292: Misleading test name: "native ollama API" vs "openai-completions".

The test name suggests Ollama is compiled with its "native" API, but the assertion expects api: "openai-completions" (Line 316). While Ollama does expose an OpenAI-compatible endpoint, calling it "native" may confuse future maintainers. Consider a clearer name:

-  it("compiles ollama providers with the native ollama API", () => {
+  it("compiles ollama providers using the openai-completions API", () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/desktop/openclaw-config-compiler.test.ts` at line 292, The test name is
misleading: change the "it" description that currently reads 'compiles ollama
providers with the native ollama API' to clearly indicate the OpenAI-compatible
endpoint (e.g., 'compiles ollama providers with the OpenAI-compatible API') or
alternatively update the expectation to match true native behavior; locate the
test starting with it("compiles ollama providers with the native ollama API", ()
=> { and adjust the string to reference "openai-completions" (or make the
assertion match the native Ollama API) so the test name and the assertion api:
"openai-completions" are consistent.
apps/web/src/pages/models.tsx (2)

2177-2196: Consider extracting the verification result display.

This JSX block (lines 2177-2196) duplicates the verification feedback from lines 2120-2139. Consider extracting to a small component or helper to reduce duplication.

💡 Example extraction
function VerifyResultFeedback({
  result,
  t,
}: {
  result: { valid: boolean; models?: string[]; error?: string } | undefined;
  t: (key: string, options?: Record<string, unknown>) => string;
}) {
  if (!result) return null;
  return (
    <div
      className={cn(
        "mt-1.5 text-[10px]",
        result.valid ? "text-emerald-600" : "text-red-500",
      )}
    >
      {result.valid
        ? t("models.byok.keyValid", { count: result.models?.length ?? 0 })
        : t("models.byok.keyInvalid", {
            error: result.error ?? t("models.byok.keyInvalidUnknown"),
          })}
    </div>
  );
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/pages/models.tsx` around lines 2177 - 2196, Extract the
duplicated JSX that renders verification feedback into a small reusable
component (e.g., VerifyResultFeedback) and use it in both places where the block
currently appears; the component should accept the verify result
(verifyMutation.data or similar) and the translation function t, return null if
no result, apply the same cn styling logic ("mt-1.5 text-[10px]" + result.valid
? "text-emerald-600" : "text-red-500"), and render the same translation calls
(t("models.byok.keyValid", { count: result.models?.length ?? 0 }) or
t("models.byok.keyInvalid", { error: result.error ??
t("models.byok.keyInvalidUnknown") })) so both instances (the JSX using
verifyMutation and the other duplicated block) simply render
<VerifyResultFeedback result={verifyMutation.data} t={t} />.

1543-1573: Consider adding error feedback for refreshModelsMutation.

The mutation throws on verification failure but lacks onError/onSuccess handlers to show toast feedback to the user. Other mutations in this file (e.g., saveMutation, verifyMutation) handle errors more gracefully or show success feedback.

💡 Suggested improvement
   const refreshModelsMutation = useMutation({
     mutationFn: async () => {
       // ... existing code ...
       return models;
     },
+    onSuccess: () => {
+      toast.success(t("models.byok.refreshSuccess"));
+    },
+    onError: (error) => {
+      toast.error(
+        error instanceof Error
+          ? error.message
+          : t("models.byok.refreshFailed"),
+      );
+    },
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/pages/models.tsx` around lines 1543 - 1573, The
refreshModelsMutation currently throws on verification failure but has no
onError/onSuccess handlers to surface toast feedback; update the useMutation
call for refreshModelsMutation to add onSuccess and onError callbacks that (1)
show success toasts when models are loaded (using the returned models and e.g.,
t messages), call setVerifiedModels as needed, and invalidate queries via
queryClient.invalidateQueries (["providers"], ["models"]) after saveProvider,
and (2) show error toasts when verifyApiKey returns invalid or when saveProvider
fails (capture and log the error), ensuring you reference refreshModelsMutation,
verifyApiKey, saveProvider, setVerifiedModels, and queryClient.invalidateQueries
in the handlers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/web/src/pages/models.tsx`:
- Around line 2267-2293: The refresh button is wrongly disabled when a saved
provider key exists but apiKey is empty; update the disabled and className
condition to allow clicking when either isOllama, apiKey, or
dbProvider?.hasApiKey is true (replace the current (!isOllama && !apiKey) /
(isOllama || apiKey) checks). Also adjust refreshModelsMutation invocation/logic
(and any place computing effectiveApiKey) so that when apiKey is empty it falls
back to dbProvider?.hasApiKey / the saved key for non-Ollama providers (ensure
symbols: apiKey, dbProvider?.hasApiKey, isOllama, refreshModelsMutation,
effectiveApiKey are updated accordingly).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6c115cd2-b32f-4be2-9a95-2eccc670733e

📥 Commits

Reviewing files that changed from the base of the PR and between f160632 and 703b319.

📒 Files selected for processing (6)
  • apps/controller/src/lib/openclaw-config-compiler.ts
  • apps/controller/src/routes/desktop-compat-routes.ts
  • apps/controller/src/runtime/openclaw-process.ts
  • apps/controller/src/services/model-provider-service.ts
  • apps/web/src/pages/models.tsx
  • tests/desktop/openclaw-config-compiler.test.ts
✅ Files skipped from review due to trivial changes (1)
  • apps/controller/src/lib/openclaw-config-compiler.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/controller/src/services/model-provider-service.ts
  • apps/controller/src/routes/desktop-compat-routes.ts

Comment thread apps/web/src/pages/models.tsx
@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d51b9f3114


Comment thread apps/web/src/pages/models.tsx
@mrcfps mrcfps requested a review from lefarcen March 26, 2026 12:19
@lefarcen lefarcen merged commit 81b3c59 into main Mar 26, 2026
8 checks passed
@lefarcen lefarcen mentioned this pull request Mar 30, 2026


Development

Successfully merging this pull request may close these issues.

[Feature] Support Ollama Provider

2 participants