feat: add jiekou.ai as LLM provider #1041
Conversation
Walkthrough
Adds JieKou.AI as a new LLM provider: registers its config entry, implements a JiekouProvider (extending OpenAICompatibleProvider) that remaps model groups, wires provider instantiation in the factory, and registers a UI icon asset.
Sequence Diagram(s)
sequenceDiagram
participant UI as Renderer UI
participant Presenter as Main Presenter
participant Factory as LLMProviderPresenter
participant Jiekou as JiekouProvider
participant API as JieKou/OpenAI-compatible API
UI->>Presenter: request provider list / models
Presenter->>Factory: createProviderInstance(provider)
alt provider.id == "jiekou" or apiType == "jiekou"
Factory->>Jiekou: new JiekouProvider(provider, config)
Jiekou->>API: fetch models (OpenAI-compatible endpoint)
API-->>Jiekou: model list
Jiekou-->>Factory: models (group set to "JieKou.AI")
else other providers
Factory->>Factory: instantiate other provider
end
Factory-->>Presenter: provider instance / models
Presenter-->>UI: return provider info and models
Note over Jiekou: model metadata remapped to group "JieKou.AI"
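For context on the remapping step noted in the diagram above: JieKou.AI is consumed through an OpenAI-compatible models endpoint, so the raw list the provider receives should look roughly like a standard GET /v1/models payload. The model IDs below are invented for illustration; only the remap to group "JieKou.AI" comes from this PR.

```ts
// Hypothetical OpenAI-compatible /v1/models response (model IDs are made up)
const rawModelList = {
  object: 'list',
  data: [
    { id: 'some-chat-model', object: 'model', owned_by: 'jiekou' },
    { id: 'some-reasoning-model', object: 'model', owned_by: 'jiekou' }
  ]
}

// What the remapping described in the note above would produce for the UI
const remapped = rawModelList.data.map((m) => ({ id: m.id, group: 'JieKou.AI' }))
console.log(remapped) // [{ id: 'some-chat-model', group: 'JieKou.AI' }, ...]
```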
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms (1). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
src/renderer/src/assets/llm-icons/jiekou-color.svg is excluded by !**/*.svg
src/renderer/src/assets/llm-icons/jiekou-text.svg is excluded by !**/*.svg
src/renderer/src/assets/llm-icons/jiekou.svg is excluded by !**/*.svg
📒 Files selected for processing (4)
src/main/presenter/configPresenter/providers.ts (1 hunks)
src/main/presenter/llmProviderPresenter/index.ts (3 hunks)
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts (1 hunks)
src/renderer/src/components/icons/ModelIcon.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (25)
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
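As a concrete illustration of the logging fields listed in the guideline above, here is a minimal sketch. The wrapper name, error code, and context fields are hypothetical examples, not part of this PR or of deepchat's actual logger.

```ts
// Minimal sketch of a structured error log with the fields listed above.
// Wrapper name, error code, and context are hypothetical.
async function withErrorLogging<T>(
  operation: () => Promise<T>,
  context: Record<string, unknown>
): Promise<T> {
  try {
    return await operation()
  } catch (error) {
    console.error(
      JSON.stringify({
        timestamp: new Date().toISOString(),
        level: 'ERROR',
        code: 'PROVIDER_REQUEST_FAILED', // hypothetical error code
        message: error instanceof Error ? error.message : String(error),
        stack: error instanceof Error ? error.stack : undefined,
        context // e.g. { providerId: 'jiekou', operation: 'fetchModels' }
      })
    )
    throw error // re-throw so callers can retry or degrade gracefully
  }
}
```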
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
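To make the coreStream contract above concrete, here is a minimal sketch. The event type names and fields are assumptions drawn from the bullet list, not the repository's actual stream event type definitions.

```ts
// Sketch only: event names and fields are assumptions based on the guideline above.
type StandardStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; content: string }
  | { type: 'tool_call_start'; id: string; name: string }
  | { type: 'tool_call_chunk'; id: string; argumentsChunk: string }
  | { type: 'tool_call_end'; id: string }
  | { type: 'usage'; promptTokens: number; completionTokens: number }
  | { type: 'image_data'; mimeType: string; data: string }
  | { type: 'error'; message: string }
  | { type: 'stop'; stopReason: 'complete' | 'tool_use' | 'error' }

// One streaming request per conversation round; no multi-round tool loop here.
async function* coreStreamSketch(): AsyncGenerator<StandardStreamEvent> {
  // ...open the provider's streaming API request and parse each chunk...
  yield { type: 'text', content: 'partial answer chunk' }
  yield { type: 'usage', promptTokens: 120, completionTokens: 48 }
  yield { type: 'stop', stopReason: 'complete' }
}
```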
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Place main-process code in
src/main
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Use PascalCase for TypeScript types and classes
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}
📄 CodeRabbit inference engine (AGENTS.md)
Prettier style: single quotes, no semicolons, print width 100; run pnpm run format
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants
Files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
src/renderer/src/**/*
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Place renderer-process code in
src/renderer
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.vue
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
Use scoped styles to prevent CSS conflicts between components
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Use Pinia for frontend state management (do not introduce alternative state libraries)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/{src,shell,floating}/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/components/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Organize UI components by feature within src/renderer/src/
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**
📄 CodeRabbit inference engine (AGENTS.md)
Place Vue 3 app source under src/renderer/src (components, stores, views, i18n, lib)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts}
📄 CodeRabbit inference engine (AGENTS.md)
All user-facing strings must use vue-i18n ($t/keys) rather than hardcoded literals
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.vue
📄 CodeRabbit inference engine (AGENTS.md)
Name Vue component files in PascalCase (e.g., ChatInput.vue)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/index.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Files:
src/main/presenter/llmProviderPresenter/index.ts
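A rough sketch of the loop control described above: only needContinueConversation, toolCallCount, coreStream, STREAM_EVENTS, eventBus, and McpPresenter are named in the guideline; every other name here (the function, the tool-call cap, the event payloads) is invented for illustration and does not match the real index.ts.

```ts
// Illustration only: the real index.ts signatures and event payloads differ.
type LoopEvent = { type: string; [key: string]: unknown }

async function runAgentLoopSketch(
  coreStream: (messages: unknown[]) => AsyncGenerator<LoopEvent>,
  executeToolViaMcp: (toolCall: LoopEvent) => Promise<unknown>,
  emit: (channel: string, payload: unknown) => void,
  messages: unknown[]
): Promise<void> {
  let needContinueConversation = true
  let toolCallCount = 0
  const MAX_TOOL_CALLS = 25 // hypothetical safety cap

  while (needContinueConversation && toolCallCount < MAX_TOOL_CALLS) {
    needContinueConversation = false
    for await (const event of coreStream(messages)) {
      if (event.type === 'text') emit('STREAM_EVENTS.RESPONSE', event) // buffer/forward text
      if (event.type === 'tool_call_end') {
        toolCallCount++
        const result = await executeToolViaMcp(event) // tool execution via McpPresenter
        messages.push(result) // format the tool result for the next LLM call
        needContinueConversation = true // request another round
      }
    }
  }
  emit('STREAM_EVENTS.END', { toolCallCount })
}
```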
src/main/presenter/configPresenter/providers.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Files:
src/main/presenter/configPresenter/providers.ts
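For reference, a hedged sketch of what the JieKou.AI entry added to providers.ts might look like. Only the id 'jiekou' and the existence of an apiType field are grounded in this review (which notes the config initially used apiType: 'openai', with a later commit "fix: change api type to jiekou"); every other field name and value is an assumption.

```ts
// Hypothetical shape only; field names other than id and apiType are guesses.
const jiekouProviderEntry = {
  id: 'jiekou',
  name: 'JieKou.AI', // assumed display name
  apiType: 'jiekou', // the follow-up commit "fix: change api type to jiekou" suggests this value
  baseUrl: 'https://api.jiekou.ai/v1', // hypothetical endpoint
  enable: false // hypothetical default
}
```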
🧠 Learnings (6)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files
Applied to files:
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Iconify/Vue for icon implementation.
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-10-14T08:02:59.495Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: AGENTS.md:0-0
Timestamp: 2025-10-14T08:02:59.495Z
Learning: Applies to src/main/presenter/LLMProvider/**/*.ts : Implement the two-layer LLM provider (Agent Loop + Provider) under src/main/presenter/LLMProvider
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/providers.ts
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts (1)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
OpenAICompatibleProvider (70-1508)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts (1)
JiekouProvider (4-16)
🔇 Additional comments (5)
src/main/presenter/llmProviderPresenter/providers/jiekouProvider.ts (1)
1-16: LGTM! Clean OpenAI-compatible provider implementation. The implementation correctly extends OpenAICompatibleProvider and overrides fetchOpenAIModels to group all models under "JieKou.AI". The approach is minimal and appropriate: it delegates to the parent for API calls and error handling while only customizing the presentation layer.
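A minimal sketch of the pattern this comment describes: extend the OpenAI-compatible base class and override fetchOpenAIModels so the fetched models are regrouped. The method visibility, parameters, and model fields shown here are assumptions, not the file's exact code.

```ts
// Sketch of the override pattern (signature and model fields are assumptions).
import { OpenAICompatibleProvider } from './openAICompatibleProvider'

export class JiekouProvider extends OpenAICompatibleProvider {
  protected async fetchOpenAIModels(options?: { timeout?: number }) {
    // Delegate the API call and error handling to the parent...
    const models = await super.fetchOpenAIModels(options)
    // ...then only customize presentation: group everything under JieKou.AI.
    return models.map((model) => ({ ...model, group: 'JieKou.AI' }))
  }
}
```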
src/renderer/src/components/icons/ModelIcon.vue (1)
65-65: LGTM! Icon registration follows the established pattern. The JieKou icon import and mapping are correctly added, following the same pattern as other provider icons.
Also applies to: 137-137
src/main/presenter/llmProviderPresenter/index.ts (3)
51-51: LGTM! Import follows the established pattern.
221-222: LGTM! Provider instantiation for id='jiekou' is correct. This case will properly instantiate JiekouProvider when the provider configuration with id: 'jiekou' is enabled.
277-278: Verify necessity of apiType='jiekou' case. This fallback case handles provider.apiType === 'jiekou', but the provider configuration in src/main/presenter/configPresenter/providers.ts (lines 96-110) specifies apiType: 'openai', not 'jiekou'. Since the provider.id-based case (line 221-222) will match first for the default JieKou.AI configuration, this apiType case is currently unreachable unless:
- A user manually creates a custom provider with apiType: 'jiekou', OR
- The configuration is changed in the future
Recommendation: Either remove this case as unnecessary, or update the provider configuration to use apiType: 'jiekou' if JieKou-specific handling is intended. See the related comment on providers.ts.
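To make the reachability argument concrete, here is a small self-contained sketch of the branch order. The real factory constructs provider instances; this sketch only tags which branch a given configuration would take, and the function name and return labels are invented for illustration.

```ts
// Sketch of the branch order discussed above (not the actual factory code).
type ProviderEntry = { id: string; apiType: string }

function resolveProviderKind(provider: ProviderEntry): 'jiekou-by-id' | 'jiekou-by-apiType' | 'other' {
  if (provider.id === 'jiekou') return 'jiekou-by-id' // id-based case (lines 221-222)
  if (provider.apiType === 'jiekou') return 'jiekou-by-apiType' // fallback case (lines 277-278)
  return 'other'
}

// Default JieKou.AI config per the review: id 'jiekou', apiType 'openai'
console.log(resolveProviderKind({ id: 'jiekou', apiType: 'openai' })) // 'jiekou-by-id'
// A hypothetical custom provider would need apiType 'jiekou' to reach the fallback:
console.log(resolveProviderKind({ id: 'my-custom-provider', apiType: 'jiekou' })) // 'jiekou-by-apiType'
```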
@codex review
Codex Review: Didn't find any major issues. Swish!
* style(settings): format about page link handler (#1016)
* style(ollama): format model config handlers (#1018)
* fix: think content scroll issue (#1023)
* fix: remove shimmer for think content
* chore: update screen shot and fix scroll issue
* chore: update markdown renderer
* fix: import button bug and prevent backup overwriting during import (#1024)
* fix(sync): fix import button bug and prevent backup overwriting during import
* fix(sync): fix import button bug and prevent backup overwriting during import
* fix(sync): fix import button bug and prevent backup overwriting during import
* refactor(messageList): refactor message list ui components (#1026)
* feat: remove new thread button, add clean button.
* refactor(messageList): refactor message list ui components
* feat: add configurable fields for chat settings
  - Introduced ConfigFieldHeader component for consistent field headers.
  - Added ConfigInputField, ConfigSelectField, ConfigSliderField, and ConfigSwitchField components for various input types.
  - Created types for field configurations in types.ts to standardize field definitions.
  - Implemented useChatConfigFields composable to manage field configurations dynamically.
  - Added useModelCapabilities and useModelTypeDetection composables for handling model-specific capabilities and requirements.
  - Developed useSearchConfig and useThinkingBudget composables for managing search and budget configurations.
* feat: implement input history management in prompt input
  - Added `useInputHistory` composable for managing input history and navigation.
  - Implemented methods for setting, clearing, and confirming history placeholders.
  - Integrated arrow key navigation for browsing through input history.
  feat: enhance mention data handling in prompt input
  - Created `useMentionData` composable to aggregate mention data from selected files and MCP resources.
  - Implemented watchers to update mention data based on selected files, MCP resources, tools, and prompts.
  feat: manage prompt input configuration with store synchronization
  - Developed `usePromptInputConfig` composable for managing model configuration.
  - Implemented bidirectional sync between local config and chat store.
  - Added debounced watcher to reduce updates and improve performance.
  feat: streamline TipTap editor operations in prompt input
  - Introduced `usePromptInputEditor` composable for managing TipTap editor lifecycle and content transformation.
  - Implemented methods for handling mentions, pasting content, and clearing editor content.
  feat: handle file operations in prompt input
  - Created `usePromptInputFiles` composable for managing file selection, paste, and drag-drop operations.
  - Implemented methods for processing files, handling dropped files, and clearing selected files.
  feat: manage rate limit status in prompt input
  - Developed `useRateLimitStatus` composable for displaying and polling rate limit status.
  - Implemented methods for handling rate limit events and computing status icons, classes, and tooltips.
* refactor(artifacts): migrate component logic to composables and update documentation
  - Refactor ArtifactDialog.vue to use composables for view mode, viewport size, code editor, and export functionality
  - Simplify HTMLArtifact.vue by removing drag-resize logic and using fixed viewport dimensions
  - Clean up MermaidArtifact.vue styling and structure
  - Update component refactoring guide to reflect new patterns and best practices
  - Adjust prompt input composable to allow delayed editor initialization
  - Update internationalization files for new responsive label
* fix(lint): unused variables
* fix(format): format code
* CodeRabbit Generated Unit Tests: Add renderer unit tests for components and composables
* feat: implement input history management in chat input component
  - Added `useInputHistory` composable for managing input history and placeholder navigation.
  - Implemented methods for setting, clearing, and confirming history placeholders.
  - Integrated arrow key navigation for cycling through input history.
  feat: enhance mention data handling in chat input
  - Created `useMentionData` composable to manage mention data aggregation.
  - Implemented watchers for selected files and MCP resources/tools/prompts to update mention data.
  feat: manage prompt input configuration and synchronization
  - Developed `usePromptInputConfig` composable for managing model configuration.
  - Implemented bidirectional sync between local config refs and chat store.
  - Added debounced watcher to reduce updates to the store.
  feat: manage prompt input editor operations
  - Introduced `usePromptInputEditor` composable for handling TipTap editor operations.
  - Implemented content transformation, mention insertion, and paste handling.
  - Added methods for handling editor updates and restoring focus.
  feat: handle prompt input files management
  - Created `usePromptInputFiles` composable for managing file operations in prompt input.
  - Implemented file selection, paste, drag-drop, and prompt files integration.
  feat: implement rate limit status management
  - Developed `useRateLimitStatus` composable for managing rate limit status display and polling.
  - Added methods for retrieving rate limit status icon, class, tooltip, and wait time formatting.
* feat: enhance chat input component with context length management and settings integration
* feat: update model configuration and enhance error handling in providers
* feat: add MCP tools list component and integrate with chat settings
  feat: enhance artifact dialog with improved error handling and localization
  fix: update Mermaid artifact rendering error handling and localization
  fix: improve input settings error handling and state management
  fix: update drag and drop composable to handle drag events correctly
  fix: update Vitest configuration for better project structure and alias resolution
* fix(i18n): add unknownError translation
---------
Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* feat: add Poe provider integration and icon support (#1028)
* feat: add Poe provider integration and icon support
* chore: format and lint
---------
Co-authored-by: zerob13 <zerob13@gmail.com>
* fix: make auto scroll works (#1030)
* fix: allow settings window links to open externally (#1029)
* fix(settings): allow target blank links
* fix: harden settings window link handling
* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval (#1021)
* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval
  - Fixed request header for managing OAuth tokens and retrieving API tokens.
  - Enhanced model definitions and added new models for better compatibility.
* fix: remove privacy related log
* fix: OAuth 2.0 for slow_down response
* fix: handle lint errors
* fix: provider fetched from publicdb
* fix(githubCopilotProvider): update request body logging format for clarity
* fix(githubCopilotProvider): improve error handling and logging in device flow
* feat(theme): fix message paragraph gap and toolcall block (#1031)
Co-authored-by: deepinsect <deepinsect@github.com>
* fix: scroll to bottom (#1034)
* fix: add debounce for renderer
* feat: add max wait for renderer
* chore(deps): upgrade markdown renderer add worker support
* chore: bump markdown version
* fix(build): use es module worker format (#1037)
* feat: remove function deleteOllamaModel (#1036)
* feat: remove function deleteOllamaModel
* fix(build): use es module worker format (#1037)
---------
Co-authored-by: duskzhen <zerob13@gmail.com>
* perf: update dependencies to use stream-monaco and bump vue-renderer-markdown version (#1038)
* feat(theme): add markdown layout style and table style (#1039)
* feat(theme): add markdown layout style and table style
* fix(lint): remove props
---------
Co-authored-by: deepinsect <deepinsect@github.com>
* feat: support effort and verbosity (#1040)
* chore: bump up version
* feat: add jiekou.ai as LLM provider (#1041)
* feat: add jiekou.ai as LLM provider
* fix: change api type to jiekou
---------
Co-authored-by: zerob13 <zerob13@gmail.com>
* chore: update provider db
---------
Co-authored-by: 韦伟 <xweimvp@gmail.com>
Co-authored-by: Happer <ericted8810us@gmail.com>
Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: cp90 <153345481+cp90-pixel@users.noreply.github.com>
Co-authored-by: Cedric <14017092+douyixuan@users.noreply.github.com>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: cnJasonZ <gbdzxalbb@qq.com>
Add JieKou.AI as a new LLM provider.
Summary by CodeRabbit