
Conversation


@yyhhyyyyyy yyhhyyyyyy commented Aug 13, 2025

  • add reasoning_effort parameter support for gpt-oss models
  • add reasoning effort UI support across all components

Summary by CodeRabbit

  • New Features

    • Added Reasoning Effort selector (minimal/low/medium/high) where supported.
    • Added Verbosity selector (low/medium/high) for GPT‑5 models.
    • Model defaults now auto-load into UI; reasoning effort and verbosity persist across new threads and title view.
    • Reasoning Effort sent to compatible providers when configured.
  • Chores

    • Updated GPT‑OSS 20B/120B defaults: lower temperature, reduced token limits, standardized 128k context, enabled reasoning with medium effort.
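Concretely, the updated defaults likely take a shape along these lines — a sketch only: the field names follow this summary and the review comments below, but the interface name and the exact temperature/token numbers are placeholders, not the repo's real values:

```typescript
// Hypothetical shape of a GPT-OSS entry in modelDefaultSettings.ts.
// Field names mirror this PR summary; numeric values are assumptions.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

interface ModelDefaultConfig {
  temperature: number
  maxTokens: number
  contextLength: number
  reasoning: boolean
  reasoningEffort?: ReasoningEffort
}

const gptOss120bDefaults: ModelDefaultConfig = {
  temperature: 0.6,      // lowered per this PR (exact value assumed)
  maxTokens: 4096,       // reduced token limit (exact value assumed)
  contextLength: 131072, // standardized 128k context
  reasoning: true,       // reasoning enabled with medium effort
  reasoningEffort: 'medium'
}
```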


coderabbitai bot commented Aug 13, 2025

Walkthrough

Adds reasoningEffort and verbosity settings across UI and config. Updates GPT-OSS model defaults/provider settings (temperatures, token/context limits, reasoning flags). Wires reasoningEffort into Ollama chat payload as reasoning_effort. Extends ChatConfig, NewThread, TitleView, and ModelConfigDialog to surface and persist these options.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Model defaults and provider settings**<br>`src/main/presenter/configPresenter/modelDefaultSettings.ts`, `src/main/presenter/configPresenter/providerModelSettings.ts` | Adjusted GPT-OSS 20B/120B temperatures, maxTokens, and contextLength values; enabled reasoning and added `reasoningEffort: 'medium'` for those models. |
| **Ollama provider payload**<br>`src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts` | When `modelConfig.reasoningEffort` is present, include `options.reasoning_effort` in the chat request payload (keeps existing `num_predict` behavior). |
| **Renderer UI: ChatConfig**<br>`src/renderer/src/components/ChatConfig.vue` | Added `reasoningEffort` and `verbosity` props with corresponding `update:` emits; introduced a Select UI and conditional rendering via `supportsReasoningEffort` and `isGPT5Model`. |
| **Renderer integration: new thread & title**<br>`src/renderer/src/components/NewThread.vue`, `src/renderer/src/components/TitleView.vue` | Added local state, v-model bindings, watchers, and initialization from model defaults for `reasoningEffort` and `verbosity`; included the fields in the thread-creation payload and chatStore sync. |
| **Settings dialog**<br>`src/renderer/src/components/settings/ModelConfigDialog.vue` | Replaced the GPT-5-only gate with feature detection (`supportsReasoningEffort`) to conditionally show the reasoning-effort UI; verbosity gating unchanged. |
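The prop and emit contract the table describes for ChatConfig.vue can be sketched as standalone types — in the component these would be `defineProps`/`defineEmits` declarations; the unions below match the ones quoted later in this review, and the runtime guard is an illustrative helper, not code from the PR:

```typescript
// Unions match the props/emits quoted in the review comments below.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
type Verbosity = 'low' | 'medium' | 'high'

interface ChatConfigProps {
  reasoningEffort?: ReasoningEffort
  verbosity?: Verbosity
}

type ChatConfigEmits = {
  'update:reasoningEffort': [value: ReasoningEffort]
  'update:verbosity': [value: Verbosity]
}

// Hypothetical runtime guard for narrowing a plain string (e.g. a Select
// value) to the union before emitting.
const REASONING_EFFORTS = ['minimal', 'low', 'medium', 'high']
const isReasoningEffort = (v: string): v is ReasoningEffort =>
  REASONING_EFFORTS.includes(v)
```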

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant User
  participant ChatConfig
  participant TitleView
  participant Store as Chat Store
  participant Presenter as Config Presenter
  participant Provider as OllamaProvider
  participant API as Ollama API

  User->>ChatConfig: change reasoningEffort / verbosity
  ChatConfig-->>TitleView: emit update events
  TitleView->>Store: persist chatConfig (includes new fields)
  TitleView->>Presenter: getModelDefaultConfig(modelId, providerId)
  Presenter-->>TitleView: return defaults (reasoningEffort, verbosity)
  User->>TitleView: create thread / send message
  TitleView->>Provider: handleChatCompletion(config incl. reasoningEffort)
  Provider->>API: POST chat { options.reasoning_effort, num_predict, ... }
  API-->>Provider: response
  Provider-->>TitleView: deliver message/result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • zerob13

Poem

I twitch my whiskers, tweak a setting bright,
Reasoning effort set just right.
Verbosity hums, tokens fall in line,
Context grows where thoughts entwine.
A rabbit's nudge — small change, big light. 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7a3a67c and 9ce4d76.

📒 Files selected for processing (2)
  • src/renderer/src/components/TitleView.vue (4 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/renderer/src/components/settings/ModelConfigDialog.vue
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/TitleView.vue
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (5)
src/renderer/src/components/TitleView.vue (5)

59-60: LGTM: Properly wired reasoning effort and verbosity to ChatConfig

The template correctly passes the new reasoningEffort and verbosity props to ChatConfig and handles their update events appropriately.

Also applies to: 68-69


111-136: LGTM: Well-structured model config loading logic

The loadModelConfig function properly:

  • Validates modelId and providerId before making API calls
  • Only updates undefined values to preserve user choices
  • Resets values to undefined when model doesn't support the feature
  • Includes proper error handling
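The merge rule praised above can be reduced to a pure function for illustration — a sketch under the stated behavior, not the component's actual code (the real `loadModelConfig` also fetches via the presenter and handles errors):

```typescript
// Defaults fill in only when the user has not chosen a value, and an
// unsupported feature resets to undefined.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

const applyDefault = (
  current: ReasoningEffort | undefined,
  fromDefaults: ReasoningEffort | undefined
): ReasoningEffort | undefined => {
  // Model does not support the feature: reset the local value.
  if (fromDefaults === undefined) return undefined
  // Preserve an explicit user choice; otherwise take the model default.
  return current ?? fromDefaults
}
```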

220-226: LGTM: Clean update handlers with proper typing

The update functions are well-typed with appropriate union types for the reasoning effort and verbosity parameters.


281-303: LGTM: Proper bidirectional state synchronization

The watcher correctly:

  • Updates local state when the store changes
  • Handles optional fields by checking for undefined
  • Triggers model config loading when model/provider changes
  • Uses deep watching for comprehensive change detection
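The synchronization rules above, reduced to a pure function for illustration — in TitleView.vue this logic runs inside a deep Vue watcher on `chatStore.chatConfig`; the state shape and names here are simplified assumptions:

```typescript
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

interface SyncState {
  modelId?: string
  providerId?: string
  reasoningEffort?: ReasoningEffort
}

// Returns the next local state plus whether a model/provider change should
// trigger loadModelConfig().
const syncFromStore = (
  local: SyncState,
  store: SyncState
): { next: SyncState; shouldReload: boolean } => {
  const next: SyncState = { ...local, modelId: store.modelId, providerId: store.providerId }
  // Optional field: only adopt a defined store value, preserving local state.
  if (store.reasoningEffort !== undefined) next.reasoningEffort = store.reasoningEffort
  const shouldReload =
    store.modelId !== local.modelId || store.providerId !== local.providerId
  return { next, shouldReload }
}
```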

370-372: LGTM: Delayed initialization pattern

The setTimeout approach is appropriate here to ensure the component is fully mounted before loading model config, preventing potential race conditions.

✨ Finishing Touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/gpt-oss-reasoning-effort-support

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (6)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1)

495-497: Conditional wiring of reasoning_effort looks good; verify Ollama type support

Passing reasoning_effort through options only when present is correct. However, the official Ollama Options type may not declare reasoning_effort. If strict TS fails here, augment the Ollama types or narrow-cast options to avoid type errors.

If you hit TS errors, prefer declaration merging over broad any-casts:

Type augmentation (new file suggestion, e.g., src/types/ollama.d.ts):

```typescript
declare module 'ollama' {
  interface Options {
    reasoning_effort?: 'minimal' | 'low' | 'medium' | 'high'
  }
}
```
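For reference, the conditional wiring itself can be sketched as a small options builder — names mirror this review (`num_predict`, `reasoning_effort`, `modelConfig.reasoningEffort`), while the function itself is illustrative rather than the provider's real method:

```typescript
// Include options.reasoning_effort only when the model config provides it,
// keeping the existing num_predict behavior.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

interface ModelConfig {
  maxTokens: number
  reasoningEffort?: ReasoningEffort
}

const buildOllamaOptions = (modelConfig: ModelConfig): Record<string, unknown> => {
  const options: Record<string, unknown> = {
    num_predict: modelConfig.maxTokens // existing behavior is preserved
  }
  if (modelConfig.reasoningEffort) {
    options.reasoning_effort = modelConfig.reasoningEffort
  }
  return options
}
```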
src/renderer/src/components/ChatConfig.vue (1)

92-96: Gating UI by presence of reasoningEffort is pragmatic; consider capability-based gating later

Using presence of the prop to gate visibility works. For future robustness, consider gating off a capability flag (e.g., supportsReasoningEffort) from model config rather than current value if you ever allow “unset” states.

src/renderer/src/components/TitleView.vue (2)

285-297: Race-safety: guard async loadModelConfig against rapid model/provider flips

The watcher awaits loadModelConfig after detecting model/provider changes. If the user switches models quickly, late responses can overwrite newer state. Consider a simple request token to ignore stale responses.

Example approach (outside this range):

```typescript
let lastLoadToken = 0
const loadModelConfig = async () => {
  const token = ++lastLoadToken
  // ... fetch
  if (token !== lastLoadToken) return // ignore stale
  // ... commit state
}
```

366-369: Replace setTimeout with nextTick to avoid magic delays

A fixed 100ms delay is brittle. Use nextTick to run after initial render, or call loadModelConfig immediately since you already watch model/provider changes.

Apply:

```diff
-  setTimeout(async () => {
-    await loadModelConfig()
-  }, 100)
+  await nextTick()
+  await loadModelConfig()
```

Also add nextTick to imports:

```diff
-import { onMounted, onUnmounted, ref, watch, computed } from 'vue'
+import { onMounted, onUnmounted, ref, watch, computed, nextTick } from 'vue'
```
src/renderer/src/components/settings/ModelConfigDialog.vue (2)

136-166: Use English for comments in Vue templates

Template comments should follow the guideline to use English.

Apply:

```diff
-          <!-- 推理努力程度 -->
+          <!-- Reasoning effort -->
```

497-500: Feature detection via 'in' may cause UI jank; tighten the gate

Because this component initializes config with a default that always includes reasoningEffort, supportsReasoningEffort will be true briefly for unsupported models until loadConfig completes. Prefer a more robust gate (e.g., metadata/provider capability or hasOwnProperty on the loaded config).

Option A (quick improvement):

```diff
-const supportsReasoningEffort = computed(() => {
-  return 'reasoningEffort' in config.value
-})
+const supportsReasoningEffort = computed(() => {
+  return Object.prototype.hasOwnProperty.call(config.value, 'reasoningEffort')
+})
```

Option B (recommended if available): query capability from settingsStore/configPresenter using modelId/providerId and derive supportsReasoningEffort from model metadata rather than current config shape.
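A hedged sketch of Option B: the capability comes from what the model's defaults declare, not from the current config object's shape. The defaults object below is what `getModelDefaultConfig` (named elsewhere in this review) would resolve to; the Vue `computed`/async wiring and the exact presenter API are omitted and may differ:

```typescript
// Gate the UI on model metadata: a dialog that seeds its local config with a
// reasoningEffort placeholder no longer misreports support for models whose
// defaults do not declare the field.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

interface ModelDefaults {
  reasoningEffort?: ReasoningEffort
  verbosity?: 'low' | 'medium' | 'high'
}

const supportsReasoningEffort = (defaults: ModelDefaults): boolean =>
  defaults.reasoningEffort !== undefined
```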

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7a58eef and 7a3a67c.

📒 Files selected for processing (7)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (1 hunks)
  • src/main/presenter/configPresenter/providerModelSettings.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1 hunks)
  • src/renderer/src/components/ChatConfig.vue (5 hunks)
  • src/renderer/src/components/NewThread.vue (4 hunks)
  • src/renderer/src/components/TitleView.vue (4 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/renderer/src/components/ChatConfig.vue
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always use try-catch to handle possible errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Centralize configuration in configPresenter/

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/components/ChatConfig.vue
🧠 Learnings (1)
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (15)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2)

674-682: Add reasoningEffort default for gpt-oss-120b: aligned with PR objective

  • temperature/context/maxTokens updates are consistent
  • reasoning enabled with a sane default reasoningEffort: 'medium'

No further issues.


686-694: Add reasoningEffort default for gpt-oss-20b: aligned and consistent with 120B

Good consistency across both GPT-OSS variants. This enables the UI to surface reasoning effort without extra gating logic.

src/main/presenter/configPresenter/providerModelSettings.ts (2)

380-388: Provider override for gpt-oss:20b includes reasoningEffort and aligns with defaults

  • temperature/context/maxTokens match the new defaults
  • reasoning: true and reasoningEffort: 'medium' included

This ensures getProviderSpecificModelConfig returns reasoningEffort to providers.


392-400: Provider override for gpt-oss:120b mirrors the 20B setup correctly

Consistent with model defaults and guarantees the reasoningEffort flag reaches the provider layer.

src/renderer/src/components/ChatConfig.vue (5)

12-18: Select UI imports added correctly

Imports match the usage in the template; no issues.


31-33: New props: reasoningEffort and verbosity added with precise typing

Props are optional and use tight unions, which is ideal for UI controls.


42-44: Emits for reasoningEffort and verbosity correctly typed

Event names align with v-model usage in parents (Vue templates will kebab-case normalize).


295-341: Reasoning Effort UI: solid implementation with i18n and typed value passthrough

  • Uses i18n keys
  • Emits correctly cast unions
  • Keeps UI minimal and consistent

Looks good.


342-382: Verbosity UI (GPT-5 only): correct conditional and event wiring

The gating with isGPT5Model and presence of verbosity prop prevents accidental exposure to non-GPT-5 models. No issues.

src/renderer/src/components/NewThread.vue (3)

93-95: v-model bindings for reasoningEffort/verbosity are correct

Bindings match child emits (Vue normalizes camelCase emits to kebab-case listeners in DOM templates).


162-164: Local state types for reasoningEffort/verbosity are precise

Union types reflect available options and allow undefined initial state. Good.


182-184: Initialize from model defaults: good propagation path

Pulling config.reasoningEffort/config.verbosity from configPresenter ensures ChatConfig renders options appropriately.

src/renderer/src/components/TitleView.vue (3)

68-69: Emits handlers added appropriately

Handlers for update:reasoning-effort and update:verbosity are wired and scoped to local setters. Good.


216-223: Setters for reasoningEffort/verbosity are fine

Typed union signatures are clear and keep assignments constrained. No issues.


59-60: Props & Emits Alignment Verified

ChatConfig.vue correctly defines the following, so the kebab-case bindings in TitleView.vue will map as intended:

  • Props:
    • reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
    • verbosity?: 'low' | 'medium' | 'high'
  • Emits:
    • update:reasoningEffort → [value: 'minimal' | 'low' | 'medium' | 'high']
    • update:verbosity → [value: 'low' | 'medium' | 'high']

No further changes required.

Comment on lines +413 to 417
```ts
reasoningEffort: reasoningEffort.value,
verbosity: verbosity.value,
enabledMcpTools: chatStore.chatConfig.enabledMcpTools
})
} as any)
console.log('threadId', threadId, activeModel.value)
```

🛠️ Refactor suggestion

Avoid any-casting the createThread payload; extend thread settings instead

Casting to any masks type drift and risks lost persistence/validation downstream. Prefer updating the shared Thread settings/type to include reasoningEffort and verbosity so they’re first-class citizens across the app.

Example follow-ups:

  • Extend the thread creation DTO and settings model in @shared to include:
    • reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
    • verbosity?: 'low' | 'medium' | 'high'
  • Plumb these through chatStore persistence and presentation.

Short-term (if types aren’t ready), constrain the cast to a narrower subset to avoid polluting unrelated properties:

```diff
-  } as any)
+  } as unknown as {
+    providerId: string
+    modelId: string
+    systemPrompt: string
+    temperature: number
+    contextLength: number
+    maxTokens: number
+    artifacts: 0 | 1
+    enabledMcpTools: string[]
+    reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
+    verbosity?: 'low' | 'medium' | 'high'
+  })
```

Comment on lines +107 to +109
```ts
const reasoningEffort = ref((chatStore.chatConfig as any).reasoningEffort)
const verbosity = ref((chatStore.chatConfig as any).verbosity)
```

🛠️ Refactor suggestion

Avoid any; add proper types for reasoningEffort and verbosity

Using any on chatStore.chatConfig weakens type-safety and can hide bugs. Type these refs explicitly and read from a typed Partial instead.

Apply:

-const reasoningEffort = ref((chatStore.chatConfig as any).reasoningEffort)
-const verbosity = ref((chatStore.chatConfig as any).verbosity)
+type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
+type Verbosity = 'low' | 'medium' | 'high'
+const chatCfg = chatStore.chatConfig as Partial<{
+  reasoningEffort: ReasoningEffort
+  verbosity: Verbosity
+}>
+const reasoningEffort = ref<ReasoningEffort | undefined>(chatCfg.reasoningEffort)
+const verbosity = ref<Verbosity | undefined>(chatCfg.verbosity)
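Where a default is wanted rather than `undefined`, a small typed accessor avoids scattering casts; the helper name is hypothetical, and the `'medium'` fallback mirrors the gpt-oss defaults introduced in this PR:

```typescript
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

interface OptionalChatConfig {
  reasoningEffort?: ReasoningEffort
}

// Hypothetical helper: read the effort with a default instead of casting to any.
function readReasoningEffort(cfg: OptionalChatConfig): ReasoningEffort {
  return cfg.reasoningEffort ?? 'medium'
}

console.log(readReasoningEffort({})) // 'medium'
console.log(readReasoningEffort({ reasoningEffort: 'high' })) // 'high'
```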

Comment on lines 110 to 132
// 获取模型配置来初始化默认值
const loadModelConfig = async () => {
const modelId = chatStore.chatConfig.modelId
const providerId = chatStore.chatConfig.providerId
if (modelId && providerId) {
try {
const config = await configPresenter.getModelDefaultConfig(modelId, providerId)
if (config.reasoningEffort !== undefined) {
reasoningEffort.value = config.reasoningEffort
} else {
reasoningEffort.value = undefined
}
if (config.verbosity !== undefined) {
verbosity.value = config.verbosity
} else {
verbosity.value = undefined
}
} catch (error) {
console.error('Failed to load model config:', error)
}
}
}
💡 Verification agent

🧩 Analysis chain

Use English for comments; also ensure getModelDefaultConfig signature is used consistently

  • Comment is currently in Chinese; guidelines require English for logs/comments.
  • You now call getModelDefaultConfig(modelId, providerId) here. Elsewhere in this file (Line 359) it’s still called with a single argument, which can break if the signature changed.

Update the comment and verify all call sites:

-// 获取模型配置来初始化默认值
+// Load model config to initialize default values

Check all usages:


🏁 Script executed:

#!/bin/bash
# Find all usages of getModelDefaultConfig and show surrounding lines
rg -n -A 3 -B 1 'getModelDefaultConfig\s*\('

Length of output: 2556


🏁 Script executed:

#!/bin/bash
# Inspect getModelConfig implementation to see how it handles missing providerId
rg -n -A5 -B2 'getModelConfig' src/main/presenter/configPresenter/index.ts

Length of output: 1039


Translate comment to English and retain existing calls

  • In src/renderer/src/components/TitleView.vue (at the top of loadModelConfig), replace:
    // 获取模型配置来初始化默认值
    with
    // Load model config to initialize default values
  • No edits are required for other getModelDefaultConfig calls (e.g. line 359) since its signature still accepts an optional providerId.
🤖 Prompt for AI Agents
In src/renderer/src/components/TitleView.vue around lines 110 to 132, replace
the Chinese comment "// 获取模型配置来初始化默认值" at the top of loadModelConfig with the
English comment "// Load model config to initialize default values" and make no
other code changes; leave all getModelDefaultConfig calls (including other
occurrences) as-is since their signatures remain unchanged.

Comment on lines +230 to 272
  [
    temperature,
    contextLength,
    maxTokens,
    systemPrompt,
    artifacts,
    thinkingBudget,
    reasoningEffort,
    verbosity
  ],
  ([
    newTemp,
    newContext,
    newMaxTokens,
    newSystemPrompt,
    newArtifacts,
    newThinkingBudget,
    newReasoningEffort,
    newVerbosity
  ]) => {
    if (
      newTemp !== chatStore.chatConfig.temperature ||
      newContext !== chatStore.chatConfig.contextLength ||
      newMaxTokens !== chatStore.chatConfig.maxTokens ||
      newSystemPrompt !== chatStore.chatConfig.systemPrompt ||
      newArtifacts !== chatStore.chatConfig.artifacts ||
-     newThinkingBudget !== chatStore.chatConfig.thinkingBudget
+     newThinkingBudget !== chatStore.chatConfig.thinkingBudget ||
+     newReasoningEffort !== (chatStore.chatConfig as any).reasoningEffort ||
+     newVerbosity !== (chatStore.chatConfig as any).verbosity
    ) {
      chatStore.updateChatConfig({
        temperature: newTemp,
        contextLength: newContext,
        maxTokens: newMaxTokens,
        systemPrompt: newSystemPrompt,
        artifacts: newArtifacts,
-       thinkingBudget: newThinkingBudget
-     })
+       thinkingBudget: newThinkingBudget,
+       reasoningEffort: newReasoningEffort,
+       verbosity: newVerbosity
+     } as any)
    }
  }
)
🛠️ Refactor suggestion

Don’t push undefined fields into chatConfig; build a partial update object

Updating the store with undefined for optional fields can introduce keys with undefined values and break “feature detection” (presence-based gating). Build a patch object that only includes defined optional fields.

Apply:

   ]) => {
     if (
       newTemp !== chatStore.chatConfig.temperature ||
       newContext !== chatStore.chatConfig.contextLength ||
       newMaxTokens !== chatStore.chatConfig.maxTokens ||
       newSystemPrompt !== chatStore.chatConfig.systemPrompt ||
       newArtifacts !== chatStore.chatConfig.artifacts ||
       newThinkingBudget !== chatStore.chatConfig.thinkingBudget ||
       newReasoningEffort !== (chatStore.chatConfig as any).reasoningEffort ||
       newVerbosity !== (chatStore.chatConfig as any).verbosity
     ) {
-      chatStore.updateChatConfig({
-        temperature: newTemp,
-        contextLength: newContext,
-        maxTokens: newMaxTokens,
-        systemPrompt: newSystemPrompt,
-        artifacts: newArtifacts,
-        thinkingBudget: newThinkingBudget,
-        reasoningEffort: newReasoningEffort,
-        verbosity: newVerbosity
-      } as any)
+      const patch: Record<string, unknown> = {
+        temperature: newTemp,
+        contextLength: newContext,
+        maxTokens: newMaxTokens,
+        systemPrompt: newSystemPrompt,
+        artifacts: newArtifacts
+      }
+      if (newThinkingBudget !== undefined) patch.thinkingBudget = newThinkingBudget
+      if (newReasoningEffort !== undefined) patch.reasoningEffort = newReasoningEffort
+      if (newVerbosity !== undefined) patch.verbosity = newVerbosity
+      chatStore.updateChatConfig(patch as any)
     }
   }
 )
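To make the failure mode concrete: spreading an `undefined` optional into the config still creates the key, so `in`-style feature detection misfires, whereas building the patch conditionally keeps the key absent. A minimal standalone demonstration (not the component's actual code):

```typescript
// Hypothetical patch builder: optional fields are added only when defined.
function buildPatch(reasoningEffort?: string): Record<string, unknown> {
  const patch: Record<string, unknown> = { temperature: 0.7 }
  if (reasoningEffort !== undefined) patch.reasoningEffort = reasoningEffort
  return patch
}

// Assigning undefined still creates the key on the object:
const naive = { temperature: 0.7, reasoningEffort: undefined }
console.log('reasoningEffort' in naive)              // true — presence-based gating misfires
console.log('reasoningEffort' in buildPatch())       // false — key stays absent
console.log('reasoningEffort' in buildPatch('high')) // true, and with a real value
```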
🤖 Prompt for AI Agents
In src/renderer/src/components/TitleView.vue around lines 230 to 272, the
watcher currently calls chatStore.updateChatConfig with all fields even when
some are undefined; instead construct a partial patch object that always
includes required fields (temperature, contextLength, maxTokens, systemPrompt)
and only adds optional fields (artifacts, thinkingBudget, reasoningEffort,
verbosity) when their new values are !== undefined, then pass that patch to
updateChatConfig so undefined keys are never written into chatConfig.

@zerob13 zerob13 merged commit 3fffea3 into dev Aug 13, 2025
6 checks passed
zerob13 added a commit that referenced this pull request Aug 13, 2025
* fix: add AlertDialogDescription to resolve accessibility warning (#706)

* fix: resolve focus flicker when creating new windows with Ctrl+Shift+N (#707)

* feat: enhance window management by implementing main window ID handling (#709)

* docs: update zhipu developer doc website link (#715)

Co-authored-by: gongchao <chao.gong@aminer.cn>

* refactor: better translate (#716)

* chore: en-us i18n

* chore(i18n): polish ja-JP translations across UI; keep chat.input.placeholder unchanged

* chore(i18n): polish fr-FR translations; keep chat.input.placeholder unchanged

* chore(i18n): refine fr-FR MCP & Settings copy; idiomatic, concise, brand-consistent

* chore(i18n): polish ru-RU translations across UI; keep chat.input.placeholder unchanged

* chore(i18n): polish fa-IR translations across UI; keep chat.input.placeholder unchanged

* chore: fix format

* chore: fix i18n

* chore: lock rolldown-vite version

* feat: add GPT-5 series model support (#717)

* ci(vite): Bundle the main file into a single file to speed up loading. (#718)

* fix(math): parser by upgrade vue-renderer-markdown (#722)

* chore: bump deps (#721)

* chore: bump deps

* fix: rolldown-vite 7.1.0 and duckdb bundle issue

* chore: back to vite

* chore: update electron

* chore: update versions

* fix(math): parser by upgrade vue-renderer-markdown (#722)

* chore: bump deps

---------

Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>

* fix: add scrollable support to PopoverContent to prevent overflow (#720)

* feat: implement floating chat window system with performance optimization (#724)

* feat: add mcp sync and modelscope provider #615 (#723)

* wip: add modelscope provider

* feat: add mcp sync to modelscope

* fix: add scrollable support to PopoverContent to prevent overflow (#720)

* feat: implement floating chat window system with performance optimization (#724)

* chore: i18n and format

* feat: better style

* fix: mcp tool display

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>

* fix: move_files newPath parse issue (#725)

* fix: move_files newPath parameter calculation rule

* fix: move_files must check whether dest is a directory or a file before moving

* feat: add Claude Opus 4.1 to anthropic default model list (#726)

* feat: Add mcprouter's MCP marketplace api support (#727)

* wip: add mcp market

* feat: mcp market install

* wip: mcp install status sync

* feat: mcp server config mask

* chore: remove working doc

* chore: add translate

* feat: add ESC key to close floating chat window (#728)

* feat: add floating button position persistence with boundary validation (#729)

* feat: add floating button position persistence with boundary validation

* feat: refactor floating button to use electron-window-state

* chore: bump to 0.3.0

* feat: add reasoning_effort parameter support for gpt-oss models (#731)

* feat: add reasoning_effort parameter support for gpt-oss models

- add reasoning effort UI support across all components

* fix: preserve user reasoning effort settings and improve display logic

* fix: artifacts code not streaming (#732)

* fix: artifact react load failed

* chore: remove log

* fix: artifacts code not stream

* fix: format

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: tomsun28 <tomsun28@outlook.com>
Co-authored-by: gongchao <chao.gong@aminer.cn>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: wanna <wanna.w@binarywalk.com>
zerob13 added a commit that referenced this pull request Aug 13, 2025
* fix: add AlertDialogDescription to resolve accessibility warning (#706)

* fix: resolve focus flicker when creating new windows with Ctrl+Shift+N (#707)

* feat: enhance window management by implementing main window ID handling (#709)

* docs: update zhipu developer doc website link (#715)

Co-authored-by: gongchao <chao.gong@aminer.cn>

* refactor: better translate (#716)

* chore: en-us i18n

* chore(i18n): polish ja-JP translations across UI; keep chat.input.placeholder unchanged

* chore(i18n): polish fr-FR translations; keep chat.input.placeholder unchanged

* chore(i18n): refine fr-FR MCP & Settings copy; idiomatic, concise, brand-consistent

* chore(i18n): polish ru-RU translations across UI; keep chat.input.placeholder unchanged

* chore(i18n): polish fa-IR translations across UI; keep chat.input.placeholder unchanged

* chore: fix format

* chore: fix i18n

* chore: lock rolldown-vite version

* feat: add GPT-5 series model support (#717)

* ci(vite): Bundle the main file into a single file to speed up loading. (#718)

* fix(math): parser by upgrade vue-renderer-markdown (#722)

* chore: bump deps (#721)

* chore: bump deps

* fix: rolldown-vite 7.1.0 and duckdb bundle issue

* chore: back to vite

* chore: update electron

* chore: update versions

* fix(math): parser by upgrade vue-renderer-markdown (#722)

* chore: bump deps

---------

Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>

* fix: add scrollable support to PopoverContent to prevent overflow (#720)

* feat: implement floating chat window system with performance optimization (#724)

* feat: add mcp sync and modelscope provider #615 (#723)

* wip: add modelscope provider

* feat: add mcp sync to modelscope

* fix: add scrollable support to PopoverContent to prevent overflow (#720)

* feat: implement floating chat window system with performance optimization (#724)

* chore: i18n and format

* feat: better style

* fix: mcp tool display

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>

* fix: move_files newPath parse issue (#725)

* fix: move_files newPath parameter calculation rule

* fix: move_files must check whether dest is a directory or a file before moving

* feat: add Claude Opus 4.1 to anthropic default model list (#726)

* feat: Add mcprouter's MCP marketplace api support (#727)

* wip: add mcp market

* feat: mcp market install

* wip: mcp install status sync

* feat: mcp server config mask

* chore: remove working doc

* chore: add translate

* feat: add ESC key to close floating chat window (#728)

* feat: add floating button position persistence with boundary validation (#729)

* feat: add floating button position persistence with boundary validation

* feat: refactor floating button to use electron-window-state

* chore: bump to 0.3.0

* feat: add reasoning_effort parameter support for gpt-oss models (#731)

* feat: add reasoning_effort parameter support for gpt-oss models

- add reasoning effort UI support across all components

* fix: preserve user reasoning effort settings and improve display logic

* fix: artifacts code not streaming (#732)

* fix: artifact react load failed

* chore: remove log

* fix: artifacts code not stream

* fix: format

* feat: disable automatic model enabling for better UX (#734)

* feat: sync provider sorting from settings to model selection (#736)

* feat: sync provider sorting from settings to model selection

* feat: refactor ModelSelect to use computed providers for better reactivity

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: tomsun28 <tomsun28@outlook.com>
Co-authored-by: gongchao <chao.gong@aminer.cn>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: wanna <wanna.w@binarywalk.com>
@zerob13 zerob13 deleted the feat/gpt-oss-reasoning-effort-support branch November 23, 2025 13:15