Feature/enhance provider api | OpenRouter and PPIO support dynamically fetching model parameters #527
Conversation
Walkthrough

This update introduces dynamic fetching and configuration of model metadata for both the OpenRouter and PPIO providers, including new TypeScript interfaces and enhanced capability detection. It also updates the reasoning capability flag for the 'minimax-m1-80k' model in both default and provider-specific settings, reflecting improved reasoning support.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Provider as Provider (OpenRouter/PPIO)
    participant API
    participant ConfigPresenter
    User->>Provider: fetchOpenAIModels()
    Provider->>API: Fetch models list
    API-->>Provider: Return model metadata
    Provider->>ConfigPresenter: Update model configuration if changed
    Provider-->>User: Return processed MODEL_META array
```
Actionable comments posted: 4
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (1)
10-23: Same optional-field issue as the OpenRouter interface
`features`, `context_size`, etc. can be missing; mark them optional to match runtime data.
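A minimal sketch of how such fields could be marked optional, with an explicit fallback at the read site (the interface shape and the 4096 default are assumptions for illustration, not the PR's actual code):

```typescript
// Hypothetical PPIO model entry: `features` and `context_size` may be
// absent from the API response, so they are declared optional.
interface PPIOModelEntry {
  id: string
  features?: string[]
  context_size?: number
}

// Read the optional field with an explicit fallback instead of
// propagating `undefined` into downstream arithmetic.
function contextSizeOf(m: PPIOModelEntry): number {
  return m.context_size ?? 4096 // assumed default when the API omits the field
}
```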
🧹 Nitpick comments (5)
src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts (3)
234-256: `configChanged` comparison risks NPE & misses deep equality
Besides the `undefined` issue above, comparing primitives on possibly-undefined values is fine, but once guarded, you can collapse the check:

```ts
const configChanged = JSON.stringify(existingConfig) !== JSON.stringify(newConfig)
```

Not performance-critical here (< 200 models).
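A guarded version of that collapsed check might look like the following sketch (the config shape is assumed for illustration; note that `JSON.stringify` comparison is sensitive to key order, which is acceptable for small, consistently-built configs):

```typescript
interface ModelConfig {
  contextLength?: number
  maxTokens?: number
  reasoning?: boolean
}

// Treat a missing config as "changed"; otherwise fall back to a
// JSON.stringify comparison, adequate for small plain objects.
function configChanged(existing: ModelConfig | undefined, next: ModelConfig): boolean {
  if (existing === undefined) return true
  return JSON.stringify(existing) !== JSON.stringify(next)
}
```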
259-276: MODEL_META misses `type` – information lost
You build `modelMeta` without carrying over `type` (embedding/chat).
If `existingConfig.type === ModelType.Embedding`, this metadata will silently drop it, leading to a chat-only assumption elsewhere.

```diff
  reasoning: hasReasoning || existingConfig.reasoning || false,
+ type: existingConfig.type
```
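One way to avoid dropping stored fields is to spread the existing config and override only what was re-detected; a sketch with assumed names (the real `MODEL_META` shape lives in `src/shared/presenter.d.ts`):

```typescript
enum ModelType {
  Chat = 'chat',
  Embedding = 'embedding'
}

interface ExistingConfig {
  type?: ModelType
  reasoning?: boolean
}

// Spread-then-override preserves `type` (and any other stored fields)
// instead of silently rebuilding the metadata from scratch.
function mergeMeta(existing: ExistingConfig, hasReasoning: boolean) {
  return {
    ...existing,
    reasoning: hasReasoning || existing.reasoning || false
  }
}
```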
276-283: Remove noisy console log before shipping
`console.log` of every fetch floods production logs and leaks model counts.
Use a debug logger or feature flag instead.

src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (2)
204-218: Persist `type` and `reasoning` into MODEL_META
As with OpenRouter, `type` and the newly detected `reasoning` capability are dropped:

```diff
- reasoning: existingConfig.reasoning || false
+ reasoning: existingConfig.reasoning || false,
+ type: existingConfig.type
```
222-227: Debug log – strip in production
Remove or guard the `console.log` to keep server logs clean.
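A sketch of the suggested guard — routing logs through a flag-controlled helper (the environment-variable name is hypothetical; adapt it to the project's conventions):

```typescript
type LogSink = (...args: unknown[]) => void

// Returns a logger that only forwards to the sink when enabled,
// keeping production logs clean while preserving debug output locally.
function makeDebugLog(enabled: boolean, sink: LogSink = console.log): LogSink {
  return (...args) => {
    if (enabled) sink(...args)
  }
}

// Wire it to an (assumed) environment flag:
const debugLog = makeDebugLog(process.env.DEEPCHAT_DEBUG === '1')
```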
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/main/presenter/configPresenter/modelDefaultSettings.ts (1 hunks)
- src/main/presenter/configPresenter/providerModelSettings.ts (1 hunks)
- src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts (3 hunks)
- src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (1)
src/shared/presenter.d.ts (1)
MODEL_META (439-452)
src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts (1)
src/shared/presenter.d.ts (1)
MODEL_META (439-452)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (2)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
1351-1362: Reasoning flag looks good – check for downstream assumptions
Changing `reasoning` to `true` is consistent with the provider-side update.
Just make sure any feature-gating elsewhere (UI badges, prompts, pricing, etc.) now treats this model as "reasoning-capable"; otherwise the flag will have no visible effect.

src/main/presenter/configPresenter/providerModelSettings.ts (1)
468-481: Keep the flag in sync with dynamic fetch
Setting `reasoning: true` here aligns the static config with the new dynamic fetch logic – ✅.
Double-check that `PPIOProvider.fetchOpenAIModels` does not overwrite this back to `false` (it currently takes `existingConfig.reasoning`; see the comment in that file).
Pull Request Description (translated from Chinese)
Is your feature request related to a problem? Please describe.
Parses the extra parameters in the models endpoints of PPIO and OpenRouter, so each model's token limits and other parameters can be set accurately and automatically.
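A sketch of the kind of parsing the PR describes — mapping extra fields from a provider's models endpoint into per-model settings. The field names (`context_length`, `max_completion_tokens`, `supported_parameters`) mirror typical OpenRouter-style metadata but are assumptions here, as are the fallback values:

```typescript
interface RawModel {
  id: string
  context_length?: number
  max_completion_tokens?: number
  supported_parameters?: string[]
}

interface ModelSettings {
  contextLength: number
  maxTokens: number
  reasoning: boolean
}

// Convert one raw API entry into concrete settings, with conservative fallbacks
// for anything the endpoint omits.
function toSettings(raw: RawModel): ModelSettings {
  const contextLength = raw.context_length ?? 4096
  return {
    contextLength,
    maxTokens: raw.max_completion_tokens ?? Math.floor(contextLength / 2),
    reasoning: raw.supported_parameters?.includes('reasoning') ?? false
  }
}
```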