
Conversation

@zerob13 (Collaborator) commented Oct 15, 2025

Summary

  • expose model configuration controls for Ollama provider models, including enable toggles and capability badges
  • sync Ollama models with stored configuration/status data so custom parameters persist across refreshes

Testing

  • pnpm run lint

https://chatgpt.com/codex/tasks/task_e_68ef64391c28832c99ab056c811faf49

Summary by CodeRabbit

  • New Features
    • Richer per-model details for Ollama: type, enabled state, vision, function calling, reasoning, and search enablement.
    • Additional configurable fields: temperature, reasoning effort, verbosity, thinking budget, forced search, and search strategy.
  • Improvements
    • More accurate model typing and capability detection from metadata.
    • Enhanced sync of local Ollama models to global lists with preserved settings.
    • Better handling of pulling models and display sorting.
    • Reliable enable/disable controls with immediate status updates.

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Oct 15, 2025

Walkthrough

Refactors Ollama model handling to use per-model metadata, capability-derived typing, and batch status mapping. Updates the settings store to asynchronously merge local Ollama models into global lists with enriched fields. Adjusts the Ollama settings component to new event names, bindings, and metadata-driven rendering and enables model status updates.
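The capability-derived typing described above can be sketched as a pure helper. The capability strings ('vision', 'tools', 'thinking', 'embedding') follow the ones referenced elsewhere in this review; the helper itself is illustrative, not the PR's actual code:

```typescript
// Illustrative sketch: derive a model type and feature flags from
// Ollama capability strings such as ['vision', 'tools', 'embedding'].
type ModelType = 'chat' | 'embedding'

type DerivedCaps = {
  type: ModelType
  vision: boolean
  functionCall: boolean
  reasoning: boolean
}

const deriveCaps = (capabilities: string[]): DerivedCaps => {
  const caps = new Set(capabilities)
  return {
    // an 'embedding' capability switches the model type; everything else is chat
    type: caps.has('embedding') ? 'embedding' : 'chat',
    vision: caps.has('vision'),
    functionCall: caps.has('tools'),
    reasoning: caps.has('thinking')
  }
}
```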

Changes

  • Renderer UI: Ollama settings component (src/renderer/settings/components/OllamaProviderSettingsDetail.vue)
    Switches to per-model metadata and capability-derived attributes (type, enabled, vision, functionCall, reasoning, enableSearch). Adds providerModelMetas, resolves the model id from meta, updates the sort comparator typing, introduces handleModelEnabledChange, and migrates template events to kebab-case (@enabled-change, @config-changed, @delete-model). Imports RENDERER_MODEL_META.
  • Settings store: Ollama model sync and enrichment (src/renderer/src/stores/settings.ts)
    Rewrites the Ollama-to-global model mapping as an async flow with an existing-model map and a batch status map. Merges config, existing data, and capabilities to compute enabled, type, contextLength, maxTokens, and Ollama-specific fields (vision, functionCall, reasoning, enableSearch, temperature, reasoningEffort, verbosity, thinkingBudget, forcedSearch, searchStrategy). Updates allProviderModels and enabledModels and refreshes search assistant models.
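The three-way merge precedence described above (saved config, then previously stored value, then a capability-derived default) boils down to nullish coalescing; a minimal sketch with illustrative names:

```typescript
// Illustrative sketch of the precedence used when enriching a model:
// explicit config wins, then any previously stored value, then a derived default.
// Nullish coalescing (??) matters here: a stored `false` or `0` must not fall through.
const resolve = <T>(config: T | undefined, existing: T | undefined, fallback: T): T =>
  config ?? existing ?? fallback

// e.g. vision flag: no saved config, stored value false, capability says true;
// the stored false wins over the capability-derived default
const vision = resolve<boolean>(undefined, false, true)
```

With `||` instead of `??`, a stored `false` or `0` would silently be replaced by the fallback.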

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant UI as Settings UI (Ollama)
  participant Store as Settings Store
  participant Ollama as Ollama Local Models
  participant Status as Status Service

  rect rgba(200,230,255,0.3)
  note over UI,Store: Sync local Ollama models into global lists
  UI->>Store: refreshProviderModels()
  Store->>Ollama: listLocalModels()
  Store->>Status: getBatchModelStatus(modelIds)
  Status-->>Store: modelStatusMap
  Store->>Store: merge config + existing + capabilities -> enriched models
  Store-->>UI: update allProviderModels, enabledModels
  end
```

```mermaid
sequenceDiagram
  autonumber
  participant UI as Settings UI (Ollama)
  participant Store as Settings Store

  rect rgba(220,255,220,0.35)
  note over UI,Store: Toggle per-model enabled status
  UI->>Store: setModelStatus(modelId, enabled)
  Store-->>UI: success | error (log on failure)
  end
```
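The toggle flow in this diagram is essentially one awaited store call with logged failures; a minimal sketch with an illustrative store shape (not the actual Pinia store API):

```typescript
// Illustrative sketch of the enable/disable handler: await the store update
// and log (rather than swallow) failures, per the error-handling guidelines.
type Store = {
  updateModelStatus: (providerId: string, modelId: string, enabled: boolean) => Promise<void>
}

const handleModelEnabledChange = async (
  store: Store,
  modelId: string,
  enabled: boolean
): Promise<boolean> => {
  try {
    await store.updateModelStatus('ollama', modelId, enabled)
    return true
  } catch (error) {
    // keep the failure visible; the UI can revert or surface it
    console.error(`Failed to update model status for ${modelId}:`, error)
    return false
  }
}
```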

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

A nibble of bytes, a hop through the grass,
I map every model, no carrot to pass.
Capabilities bloom, statuses align,
Metadata whispers, “this one is fine.”
With toggles and syncs, I boop every node—
Thump-thump! The Ollama now walks the road. 🥕🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately summarizes the primary change: adding custom model configuration support for the Ollama provider, matching the objective of exposing per-model settings and syncing those configurations.
  • Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.


@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (6)
src/renderer/src/stores/settings.ts (1)

1212-1226: Strongly type modelConfig to avoid silent shape drift

modelConfig is used for fields like contextLength, maxTokens, type, vision, functionCall, reasoning, enableSearch, temperature, etc., but is untyped. Define a config type and use it here for safety.

Example:

```typescript
type OllamaModelConfig = {
  contextLength?: number
  maxTokens?: number
  type?: ModelType
  vision?: boolean
  functionCall?: boolean
  reasoning?: boolean
  enableSearch?: boolean
  temperature?: number
  reasoningEffort?: string
  verbosity?: string
  thinkingBudget?: number
  forcedSearch?: boolean
  searchStrategy?: string
}

const modelConfig = (await configP.getModelConfig(model.name, 'ollama')) as OllamaModelConfig
```

As per coding guidelines

src/renderer/settings/components/OllamaProviderSettingsDetail.vue (5)

132-144: Ensure ModelConfigItem emits new kebab-case events and supports new props

You switched to @enabled-change, @config-changed, @delete-model and added props like enable-search. Verify the child component’s emits/props are updated to match; otherwise, handlers won’t fire.

If not already done, update ModelConfigItem’s emits and prop definitions accordingly.
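As background for that check: Vue 3 pairs a camelCase emit name with a kebab-case listener in the parent template via its hyphenation rule, which can be sketched as follows (this is an illustration of the rule, not Vue's internal code):

```typescript
// Vue 3 matches a camelCase emit name to a kebab-case template listener.
// Sketch of that hyphenation rule:
const hyphenate = (name: string): string => name.replace(/\B([A-Z])/g, '-$1').toLowerCase()

// So emit('enabledChange') pairs with @enabled-change in the parent template:
hyphenate('enabledChange') // 'enabled-change'
hyphenate('configChanged') // 'config-changed'
hyphenate('deleteModel')   // 'delete-model'
```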


749-782: Improve type-safety and capability derivation in display models

Avoid any and use the shared OllamaModel type; keep capability sources typed.

Apply this diff:

```diff
-const displayLocalModels = computed(() => {
-  const metaMap = new Map<string, RENDERER_MODEL_META & { ollamaModel?: any }>(
+const displayLocalModels = computed(() => {
+  const metaMap = new Map<string, RENDERER_MODEL_META & { ollamaModel?: import('@shared/presenter').OllamaModel }>(
     providerModelMetas.value.map((meta) => [
       meta.id,
-      meta as RENDERER_MODEL_META & { ollamaModel?: any }
+      meta as RENDERER_MODEL_META & { ollamaModel?: import('@shared/presenter').OllamaModel }
     ])
   )

-  const models = localModels.value.map((model: any) => {
+  const models = localModels.value.map((model) => {
     const meta = metaMap.get(model.name)
     const capabilitySources: string[] = []
     if (Array.isArray(model?.capabilities)) {
       capabilitySources.push(...model.capabilities)
     }
-    if (meta?.ollamaModel && Array.isArray(meta.ollamaModel?.capabilities)) {
+    if (meta?.ollamaModel && Array.isArray(meta.ollamaModel?.capabilities)) {
       capabilitySources.push(...(meta.ollamaModel.capabilities as string[]))
     }
     const capabilitySet = new Set(capabilitySources)
```

As per coding guidelines


785-823: Include capabilities for placeholder pulling entries; keep typing consistent

When adding pulling models, populate capabilities from meta if available, and keep types tight.

Apply this diff:

```diff
-      models.unshift({
+      models.unshift({
         name: modelName,
         model: modelName,
         modified_at: new Date(),
         size: 0,
         digest: '',
         details: {
           format: '',
           family: '',
           families: [],
           parameter_size: '',
           quantization_level: ''
         },
         model_info: {
           context_length: meta?.contextLength ?? 0,
           embedding_length: 0
         },
-        capabilities: [],
+        capabilities: Array.isArray(meta?.ollamaModel?.capabilities)
+          ? (meta!.ollamaModel!.capabilities as string[])
+          : [],
         pulling: true,
         progress,
         meta,
         enabled: meta?.enabled ?? true,
         vision: meta?.vision ?? capabilitySet.has('vision'),
         functionCall: meta?.functionCall ?? capabilitySet.has('tools'),
         reasoning: meta?.reasoning ?? capabilitySet.has('thinking'),
         enableSearch: meta?.enableSearch ?? false,
         type: resolvedType
       })
```

874-880: Optional: optimistic UI for enable toggle

Consider optimistically updating local state to reduce flicker, then revert on failure.

Example:

```typescript
const prev = model.enabled
updateLocalState(modelName, enabled) // local optimistic update
try {
  await settingsStore.updateModelStatus(props.provider.id, modelName, enabled)
} catch (e) {
  updateLocalState(modelName, prev) // revert
  console.error(`Failed to update model status for ${modelName}:`, e)
}
```

6-7: Internationalize static labels

Labels 'API URL' and 'API Key' should use i18n keys like other strings.

Replace with something like:

```html
<Label :for="`${provider.id}-url`" class="flex-1 cursor-pointer">{{ t('settings.provider.apiUrl') }}</Label>
<Label :for="`${provider.id}-apikey`" class="flex-1 cursor-pointer">{{ t('settings.provider.apiKey') }}</Label>
```

As per coding guidelines

Also applies to: 25-26

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 508b582 and ed98072.

📒 Files selected for processing (2)
  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue (5 hunks)
  • src/renderer/src/stores/settings.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code belongs in src/renderer

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
  • src/renderer/src/stores/settings.ts
src/renderer/**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

Name Vue component files in PascalCase (e.g., ChatInput.vue)

Files:

  • src/renderer/settings/components/OllamaProviderSettingsDetail.vue
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/renderer/src/stores/settings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/pinia-best-practices.mdc)

src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}: Use modules to organize related state and actions
Implement proper state persistence for maintaining data across sessions
Use getters for computed state properties
Utilize actions for side effects and asynchronous operations
Keep the store focused on global state, not component-specific data

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**

📄 CodeRabbit inference engine (AGENTS.md)

Place Vue 3 app source under src/renderer/src (components, stores, views, i18n, lib)

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**/*.{vue,ts}

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing strings must use vue-i18n ($t/keys) rather than hardcoded literals

Files:

  • src/renderer/src/stores/settings.ts
🧬 Code graph analysis (1)
src/renderer/src/stores/settings.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • RENDERER_MODEL_META (543-557)
  • OllamaModel (1079-1101)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (2)
src/renderer/settings/components/OllamaProviderSettingsDetail.vue (2)

283-283: Type import addition is correct

Importing RENDERER_MODEL_META here is appropriate for meta-driven rendering.


310-314: Good: deriving provider model metas from the store

This keeps the UI aligned with normalized global models.

Comment on lines +1200 to +1282
```diff
+    const existingModelMap = new Map<string, RENDERER_MODEL_META & { ollamaModel?: OllamaModel }>(
+      existingOllamaModels.map((model) => [
+        model.id,
+        model as RENDERER_MODEL_META & { ollamaModel?: OllamaModel }
+      ])
+    )
+
+    const modelNames = ollamaLocalModels.value.map((model) => model.name)
+    const modelStatusMap =
+      modelNames.length > 0 ? await configP.getBatchModelStatus('ollama', modelNames) : {}
+
     // Convert local Ollama models into the global model format
-    const ollamaModelsAsGlobal = ollamaLocalModels.value.map((model) => {
-      // If a model with the same id already exists, keep its existing configuration
-      const existingModel = existingOllamaModels.find((m) => m.id === model.name)
+    const ollamaModelsAsGlobal = await Promise.all(
+      ollamaLocalModels.value.map(async (model) => {
+        const existingModel = existingModelMap.get(model.name)
+        const existingModelExtra = existingModel as
+          | (RENDERER_MODEL_META & {
+              temperature?: number
+              reasoningEffort?: string
+              verbosity?: string
+              thinkingBudget?: number
+              forcedSearch?: boolean
+              searchStrategy?: string
+            })
+          | undefined
+        const modelConfig = await configP.getModelConfig(model.name, 'ollama')

-      return {
-        id: model.name,
-        name: model.name,
-        contextLength: model.model_info.context_length || 4096, // model-defined value or default
-        maxTokens: existingModel?.maxTokens || 2048, // existing value or default
-        provider: 'ollama',
-        group: existingModel?.group || 'local',
-        enabled: true,
-        isCustom: existingModel?.isCustom || false,
-        providerId: 'ollama',
-        vision: model.capabilities.indexOf('vision') > -1,
-        functionCall: model.capabilities.indexOf('tools') > -1,
-        reasoning: model.capabilities.indexOf('thinking') > -1,
-        type: model.capabilities.indexOf('embedding') > -1 ? ModelType.Embedding : ModelType.Chat,
-        // Keep other existing configuration but refresh Ollama-specific data
-        ...(existingModel ? { ...existingModel } : {}),
-        ollamaModel: model
-      } as RENDERER_MODEL_META & { ollamaModel: OllamaModel }
-    })
+        const capabilitySources: string[] = []
+        if (Array.isArray((model as any)?.capabilities)) {
+          capabilitySources.push(...((model as any).capabilities as string[]))
+        }
+        if (
+          existingModel?.ollamaModel &&
+          Array.isArray((existingModel.ollamaModel as any)?.capabilities)
+        ) {
+          capabilitySources.push(...((existingModel.ollamaModel as any).capabilities as string[]))
+        }
+        const capabilitySet = new Set(capabilitySources)
+
+        const contextLength =
+          modelConfig?.contextLength ??
+          existingModel?.contextLength ??
+          (model as any)?.model_info?.context_length ??
+          4096
+
+        const maxTokens = modelConfig?.maxTokens ?? existingModel?.maxTokens ?? 2048
+
+        const statusFromStore = modelStatusMap[model.name]
+        const enabled = statusFromStore ?? existingModel?.enabled ?? true
+
+        const type =
+          modelConfig?.type ??
+          existingModel?.type ??
+          (capabilitySet.has('embedding') ? ModelType.Embedding : ModelType.Chat)
+
+        return {
+          ...existingModel,
+          id: model.name,
+          name: model.name,
+          contextLength,
+          maxTokens,
+          provider: 'ollama',
+          group: existingModel?.group || 'local',
+          enabled,
+          isCustom: existingModel?.isCustom || false,
+          providerId: 'ollama',
+          vision: modelConfig?.vision ?? existingModel?.vision ?? capabilitySet.has('vision'),
+          functionCall:
+            modelConfig?.functionCall ?? existingModel?.functionCall ?? capabilitySet.has('tools'),
+          reasoning:
+            modelConfig?.reasoning ?? existingModel?.reasoning ?? capabilitySet.has('thinking'),
+          enableSearch: modelConfig?.enableSearch ?? existingModel?.enableSearch ?? false,
+          temperature: modelConfig?.temperature ?? existingModelExtra?.temperature,
+          reasoningEffort: modelConfig?.reasoningEffort ?? existingModelExtra?.reasoningEffort,
+          verbosity: modelConfig?.verbosity ?? existingModelExtra?.verbosity,
+          thinkingBudget: modelConfig?.thinkingBudget ?? existingModelExtra?.thinkingBudget,
+          forcedSearch: modelConfig?.forcedSearch ?? existingModelExtra?.forcedSearch,
+          searchStrategy: modelConfig?.searchStrategy ?? existingModelExtra?.searchStrategy,
+          type,
+          ollamaModel: model
+        } as RENDERER_MODEL_META & { ollamaModel: OllamaModel }
+      })
+    )
```

⚠️ Potential issue | 🟠 Major

Prevent disabled Ollama models from surfacing; drop non-typed field; reduce IPC overhead

  • Do not add disabled provider models to enabledModels or search assistant candidates. Gate by provider.enable.
  • Remove the extraneous provider field (not in RENDERER_MODEL_META); keep providerId.
  • Consider batching config retrieval to avoid per-model IPC calls.

Apply these diffs:

  1. Remove the non-typed provider field and gate enabledModels/search assistant by provider status:

```diff
         return {
           ...existingModel,
           id: model.name,
           name: model.name,
           contextLength,
           maxTokens,
-          provider: 'ollama',
           group: existingModel?.group || 'local',
           enabled,
           isCustom: existingModel?.isCustom || false,
           providerId: 'ollama',
           ...
         } as RENDERER_MODEL_META & { ollamaModel: OllamaModel }
-    const enabledIndex = enabledModels.value.findIndex((item) => item.providerId === 'ollama')
-    const enabledOllamaModels = ollamaModelsAsGlobal.filter((model) => model.enabled)
-    if (enabledIndex !== -1) {
-      enabledModels.value[enabledIndex].models = enabledOllamaModels
-    } else if (enabledOllamaModels.length > 0) {
-      enabledModels.value.push({
-        providerId: 'ollama',
-        models: enabledOllamaModels
-      })
-    }
+    if (ollamaProvider.enable) {
+      const enabledIndex = enabledModels.value.findIndex((item) => item.providerId === 'ollama')
+      const enabledOllamaModels = ollamaModelsAsGlobal.filter((model) => model.enabled)
+      if (enabledIndex !== -1) {
+        enabledModels.value[enabledIndex].models = enabledOllamaModels
+      } else if (enabledOllamaModels.length > 0) {
+        enabledModels.value.push({
+          providerId: 'ollama',
+          models: enabledOllamaModels
+        })
+      }
+    }
-    await initOrUpdateSearchAssistantModel()
+    if (ollamaProvider.enable) {
+      await initOrUpdateSearchAssistantModel()
+    }
```

  2. Optional: batch-fetch model configs to reduce IPC calls:

```diff
-    const ollamaModelsAsGlobal = await Promise.all(
-      ollamaLocalModels.value.map(async (model) => {
-        const modelConfig = await configP.getModelConfig(model.name, 'ollama')
+    // const configs = await configP.getBatchModelConfig?.('ollama', modelNames) ?? {}
+    const ollamaModelsAsGlobal = await Promise.all(
+      ollamaLocalModels.value.map(async (model) => {
+        const modelConfig = await configP.getModelConfig(model.name, 'ollama')
         ...
```

If getBatchModelConfig doesn’t exist, consider adding it to the presenter for better performance.
As per coding guidelines

Also applies to: 1298-1312, 1310-1312
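A batch method could look like the following sketch. Both getBatchModelConfig and the ConfigPresenter shape here are hypothetical (as noted above, the batch method does not exist yet in the presenter); the fallback keeps the current per-model behavior:

```typescript
// Hypothetical batch fetch: one call returning a map keyed by model name,
// falling back to per-model calls when the batch method is unavailable.
type ModelConfig = { contextLength?: number; maxTokens?: number }
type ConfigPresenter = {
  getModelConfig: (name: string, providerId: string) => Promise<ModelConfig>
  getBatchModelConfig?: (providerId: string, names: string[]) => Promise<Record<string, ModelConfig>>
}

const fetchConfigs = async (
  configP: ConfigPresenter,
  names: string[]
): Promise<Record<string, ModelConfig>> => {
  // one round-trip when a batch API exists
  if (configP.getBatchModelConfig) return configP.getBatchModelConfig('ollama', names)
  // otherwise N parallel per-model calls, collected into the same map shape
  const entries = await Promise.all(
    names.map(async (n) => [n, await configP.getModelConfig(n, 'ollama')] as const)
  )
  return Object.fromEntries(entries)
}
```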

@zerob13 zerob13 merged commit 0302f2d into dev Oct 15, 2025
2 checks passed
@zerob13 (Collaborator, Author) commented Oct 15, 2025

fixed #1017

zerob13 added a commit that referenced this pull request Oct 22, 2025
* style(settings): format about page link handler (#1016)

* style(ollama): format model config handlers (#1018)

* fix: think content scroll issue (#1023)

* fix: remove shimmer for think content

* chore: update screen shot and fix scroll issue

* chore: update markdown renderer

* fix: import button bug and prevent backup overwriting during import (#1024)

* fix(sync): fix import button bug and prevent backup overwriting during import

* fix(sync): fix import button bug and prevent backup overwriting during import

* fix(sync): fix import button bug and prevent backup overwriting during import

* refactor(messageList): refactor message list ui components (#1026)

* feat: remove new thread button, add clean button.

* refactor(messageList): refactor message list ui components

* feat: add configurable fields for chat settings

- Introduced ConfigFieldHeader component for consistent field headers.
- Added ConfigInputField, ConfigSelectField, ConfigSliderField, and ConfigSwitchField components for various input types.
- Created types for field configurations in types.ts to standardize field definitions.
- Implemented useChatConfigFields composable to manage field configurations dynamically.
- Added useModelCapabilities and useModelTypeDetection composables for handling model-specific capabilities and requirements.
- Developed useSearchConfig and useThinkingBudget composables for managing search and budget configurations.

* feat: implement input history management in prompt input

- Added `useInputHistory` composable for managing input history and navigation.
- Implemented methods for setting, clearing, and confirming history placeholders.
- Integrated arrow key navigation for browsing through input history.

feat: enhance mention data handling in prompt input

- Created `useMentionData` composable to aggregate mention data from selected files and MCP resources.
- Implemented watchers to update mention data based on selected files, MCP resources, tools, and prompts.

feat: manage prompt input configuration with store synchronization

- Developed `usePromptInputConfig` composable for managing model configuration.
- Implemented bidirectional sync between local config and chat store.
- Added debounced watcher to reduce updates and improve performance.

feat: streamline TipTap editor operations in prompt input

- Introduced `usePromptInputEditor` composable for managing TipTap editor lifecycle and content transformation.
- Implemented methods for handling mentions, pasting content, and clearing editor content.

feat: handle file operations in prompt input

- Created `usePromptInputFiles` composable for managing file selection, paste, and drag-drop operations.
- Implemented methods for processing files, handling dropped files, and clearing selected files.

feat: manage rate limit status in prompt input

- Developed `useRateLimitStatus` composable for displaying and polling rate limit status.
- Implemented methods for handling rate limit events and computing status icons, classes, and tooltips.

* refactor(artifacts): migrate component logic to composables and update documentation

- Refactor ArtifactDialog.vue to use composables for view mode, viewport size, code editor, and export functionality
- Simplify HTMLArtifact.vue by removing drag-resize logic and using fixed viewport dimensions
- Clean up MermaidArtifact.vue styling and structure
- Update component refactoring guide to reflect new patterns and best practices
- Adjust prompt input composable to allow delayed editor initialization
- Update internationalization files for new responsive label

* fix(lint): unused variables

* fix(format): format code

* CodeRabbit Generated Unit Tests: Add renderer unit tests for components and composables

* feat: implement input history management in chat input component

- Added `useInputHistory` composable for managing input history and placeholder navigation.
- Implemented methods for setting, clearing, and confirming history placeholders.
- Integrated arrow key navigation for cycling through input history.

feat: enhance mention data handling in chat input

- Created `useMentionData` composable to manage mention data aggregation.
- Implemented watchers for selected files and MCP resources/tools/prompts to update mention data.

feat: manage prompt input configuration and synchronization

- Developed `usePromptInputConfig` composable for managing model configuration.
- Implemented bidirectional sync between local config refs and chat store.
- Added debounced watcher to reduce updates to the store.

feat: manage prompt input editor operations

- Introduced `usePromptInputEditor` composable for handling TipTap editor operations.
- Implemented content transformation, mention insertion, and paste handling.
- Added methods for handling editor updates and restoring focus.

feat: handle prompt input files management

- Created `usePromptInputFiles` composable for managing file operations in prompt input.
- Implemented file selection, paste, drag-drop, and prompt files integration.

feat: implement rate limit status management

- Developed `useRateLimitStatus` composable for managing rate limit status display and polling.
- Added methods for retrieving rate limit status icon, class, tooltip, and wait time formatting.

* feat: enhance chat input component with context length management and settings integration

* feat: update model configuration and enhance error handling in providers

* feat: add MCP tools list component and integrate with chat settings
feat: enhance artifact dialog with improved error handling and localization
fix: update Mermaid artifact rendering error handling and localization
fix: improve input settings error handling and state management
fix: update drag and drop composable to handle drag events correctly
fix: update Vitest configuration for better project structure and alias resolution

* fix(i18n): add unknownError translation

---------

Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* feat: add Poe provider integration and icon support  (#1028)

* feat: add Poe provider integration and icon support

* chore: format and lint

---------

Co-authored-by: zerob13 <zerob13@gmail.com>

* fix: make auto scroll works (#1030)

* fix: allow settings window links to open externally (#1029)

* fix(settings): allow target blank links

* fix: harden settings window link handling

* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval (#1021)

* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval

- Fixed request header for managing OAuth tokens and retrieving API tokens.
- Enhanced model definitions and added new models for better compatibility.

* fix: remove privacy related log

* fix: OAuth 2.0 for slow_down response

* fix: handle lint errors

* fix: provider fetched from publicdb

* fix(githubCopilotProvider): update request body logging format for clarity

* fix(githubCopilotProvider): improve error handling and logging in device flow

* feat(theme): fix message paragraph gap and toolcall block (#1031)

Co-authored-by: deepinsect <deepinsect@github.com>

* fix: scroll to bottom (#1034)

* fix: add debounce for renderer

* feat: add max wait for renderer

* chore(deps): upgrade markdown renderer add worker support

* chore: bump markdown version

* fix(build): use es module worker format (#1037)

* feat: remove function deleteOllamaModel (#1036)

* feat: remove function deleteOllamaModel

* fix(build): use es module worker format (#1037)

---------

Co-authored-by: duskzhen <zerob13@gmail.com>

* perf: update dependencies to use stream-monaco and bump vue-renderer-markdown version (#1038)

* feat(theme): add markdown layout style and table style (#1039)

* feat(theme): add markdown layout style and table style

* fix(lint): remove props

---------

Co-authored-by: deepinsect <deepinsect@github.com>

* feat: support effort and verbosity (#1040)

* chore: bump up version

* feat: add jiekou.ai as LLM provider (#1041)

* feat: add jiekou.ai as LLM provider

* fix: change api type to jiekou

---------

Co-authored-by: zerob13 <zerob13@gmail.com>

* chore: update provider db

---------

Co-authored-by: 韦伟 <xweimvp@gmail.com>
Co-authored-by: Happer <ericted8810us@gmail.com>
Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: cp90 <153345481+cp90-pixel@users.noreply.github.com>
Co-authored-by: Cedric <14017092+douyixuan@users.noreply.github.com>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: cnJasonZ <gbdzxalbb@qq.com>
@zerob13 zerob13 deleted the codex/add-custom-model-parameter-settings-to-ollama branch January 6, 2026 12:18