feat(config): refactor provider architecture to protocol-based model_list #492
Conversation
…ine LLM model names.
…call compatibility
- Add ModelConfig struct with protocol prefix support (openai/, anthropic/, etc.)
- Implement GetModelConfig with round-robin load balancing
- Add CreateProviderFromConfig factory for protocol-based routing
- Add ModelRegistry for thread-safe endpoint selection
- Maintain full backward compatibility with legacy providers config
- Update README.md and README.zh.md with model_list documentation
- Add migration guide at docs/migration/model-list-migration.md

Supported protocols: openai, anthropic, antigravity, claude-cli, codex-cli, github-copilot, openrouter, groq, deepseek, cerebras, qwen, zhipu, gemini

Closes sipeed#283

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
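The round-robin selection described above could be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the field names follow the example config in this PR, while the registry internals are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// ModelConfig sketches one model_list entry; ModelName and Model follow
// the example config in this PR, everything else is an assumption.
type ModelConfig struct {
	ModelName string // unique identifier, e.g. "glm-4"
	Model     string // protocol-prefixed model, e.g. "openai/glm-4"
	APIBase   string
}

// ModelRegistry sketches thread-safe round-robin selection across
// entries that share the same model_name.
type ModelRegistry struct {
	mu      sync.Mutex
	entries map[string][]ModelConfig
	next    map[string]int
}

func NewModelRegistry(list []ModelConfig) *ModelRegistry {
	r := &ModelRegistry{
		entries: make(map[string][]ModelConfig),
		next:    make(map[string]int),
	}
	for _, mc := range list {
		r.entries[mc.ModelName] = append(r.entries[mc.ModelName], mc)
	}
	return r
}

// GetModelConfig returns the next endpoint for name in round-robin order.
func (r *ModelRegistry) GetModelConfig(name string) (ModelConfig, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	list := r.entries[name]
	if len(list) == 0 {
		return ModelConfig{}, fmt.Errorf("no model config for %q", name)
	}
	mc := list[r.next[name]%len(list)]
	r.next[name]++
	return mc, nil
}
```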
Refactor command handlers into separate files to improve code organization and maintainability. Each command (agent, auth, cron, gateway, migrate, onboard, skills, status) now has its own dedicated file.

Restructure provider creation to support the new model_list configuration system, which enables zero-code addition of OpenAI-compatible providers. Move legacy provider logic to a separate file for backward compatibility.

Move configuration functions from config.go to separate files (defaults.go, migration.go) for better organization.
- Move OAuth helper functions to factory_provider.go
- Add auto-migration in LoadConfig: old providers -> model_list
- Add Workspace field to ModelConfig for CLI-based providers
- Fix OAuth handling to use auth store instead of raw APIKey
- Update tests to use new model_list configuration format

This eliminates the giant switch-case in legacy_provider.go, achieving the goal of "zero-code provider addition" from the design document (issue sipeed#283).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
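The auto-migration step mentioned above (old providers -> model_list) could look something like this sketch. The LegacyProvider shape and field names here are assumptions for illustration, not the repository's actual types.

```go
package main

// LegacyProvider sketches one entry of the old vendor-based providers
// config; the field names are assumptions.
type LegacyProvider struct {
	Name    string // provider/vendor name, e.g. "deepseek"
	Model   string // configured model, e.g. "deepseek-chat"
	APIBase string
}

// ModelConfig sketches the new model_list entry the migration produces.
type ModelConfig struct {
	ModelName string
	Model     string
	APIBase   string
}

// migrateProviders converts each legacy provider entry into a
// protocol-prefixed model_list entry, using the provider name as
// the protocol prefix.
func migrateProviders(old []LegacyProvider) []ModelConfig {
	out := make([]ModelConfig, 0, len(old))
	for _, p := range old {
		out = append(out, ModelConfig{
			ModelName: p.Model,
			Model:     p.Name + "/" + p.Model,
			APIBase:   p.APIBase,
		})
	}
	return out
}
```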
1. Add VLLM default API base (http://localhost:8000/v1)
- Previously returned empty string, causing provider creation to fail
2. Implement MaxTokensField configuration
- Add maxTokensField field to HTTPProvider
- Add NewHTTPProviderWithMaxTokensField constructor
- Use configured field name for max_tokens parameter
- Fall back to model-based detection for backward compatibility
3. Add tests for VLLM, deepseek, ollama default API bases
Example config usage:
{
  "model_name": "glm-4",
  "model": "openai/glm-4",
  "max_tokens_field": "max_completion_tokens"
}
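On the provider side, applying the configured field name when building the request body could be sketched as below. This is an illustration of the idea, assuming the fallback default is the plain max_tokens field; the helper name is hypothetical.

```go
package main

// buildTokenLimit sketches how a configured max_tokens_field might be
// applied when assembling the request body: the configured name wins,
// otherwise the conventional "max_tokens" key is used.
func buildTokenLimit(body map[string]any, field string, maxTokens int) {
	if field == "" {
		field = "max_tokens" // assumed legacy default
	}
	body[field] = maxTokens
}
```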
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Preserve user's configured model during config migration (issue sipeed#5)
- Simplify ExtractProtocol using strings.Cut
- Extract NormalizeToolCall to shared utility, removing ~70 lines of duplicate code
- Clean up unused fields in providerMigrationConfig struct

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
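The strings.Cut simplification mentioned above could look like this. A minimal sketch; the default protocol returned when no prefix is present is an assumption.

```go
package main

import "strings"

// ExtractProtocol splits a protocol-prefixed model string such as
// "openai/glm-4" into its protocol and bare model name in a single
// strings.Cut call.
func ExtractProtocol(model string) (protocol, name string) {
	proto, rest, ok := strings.Cut(model, "/")
	if !ok {
		// No prefix: assume the OpenAI-compatible default.
		return "openai", model
	}
	return proto, rest
}
```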
Add validation to ensure model_name is unique across all entries in model_list. This prevents potential conflicts when multiple model configs share the same model_name identifier.
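The uniqueness check described above amounts to a single pass with a set; a sketch, with the function name and error wording assumed:

```go
package main

import "fmt"

// validateUniqueModelNames rejects a model_list in which two entries
// share the same model_name identifier.
func validateUniqueModelNames(names []string) error {
	seen := make(map[string]bool, len(names))
	for _, n := range names {
		if seen[n] {
			return fmt.Errorf("duplicate model_name %q in model_list", n)
		}
		seen[n] = true
	}
	return nil
}
```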
When no provider field is set but model is specified, use the user's model as ModelName for the first provider. This maintains backward compatibility with old configs that relied on implicit provider selection and ensures GetModelConfig(model) can find the model by its configured name.
Append emergency compression note to the original system prompt instead of creating a separate system message. Some APIs like Zhipu reject two consecutive system messages.
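The fix described above can be sketched as follows: mutate the existing system message rather than appending a second one. The Message shape and helper name are assumptions for illustration.

```go
package main

// Message is a minimal chat-message shape assumed for this sketch.
type Message struct {
	Role    string
	Content string
}

// appendSystemNote appends the emergency-compression note to the
// existing system prompt instead of emitting a second system message,
// since some APIs (e.g. Zhipu) reject two consecutive system messages.
func appendSystemNote(msgs []Message, note string) []Message {
	for i := range msgs {
		if msgs[i].Role == "system" {
			msgs[i].Content += "\n\n" + note
			return msgs
		}
	}
	// No system message yet: prepend exactly one.
	return append([]Message{{Role: "system", Content: note}}, msgs...)
}
```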
…d configuration

- Move provider creation logic to factory_provider.go with protocol-based approach
- Add OpenAIProviderConfig with WebSearch support and embedded ProviderConfig
- Add maxTokensField to OpenAI-compatible provider for configurable token field
- Introduce new providers: Ollama, DeepSeek, GitHubCopilot, Antigravity, Qwen
- Remove redundant CreateProvider function from factory.go
- Add ThoughtSignature field to FunctionCall for tool response handling
- Remove duplicate Name field assignment in tool loop
- Update tests to reflect new provider configuration structure
func authHelp() {
	fmt.Println("\nAuth commands:")
nit: better to have a const for this message instead of printing multiple lines.
Please feel free to ignore, as this PR is already quite big. :)
yinwm left a comment
Reply to @xiaket in pkg/providers/antigravity_provider.go regarding the Project ID question:
The fallback Project ID rising-fact-p41fc comes from PR #343 (introduced by @mrbeandev); it originates from the pi-ai/OpenCode project (the same source as the OAuth credentials).
It is only a fallback value, used only when fetching the user's own Project ID fails. Normally the code first tries to obtain the Project ID from the user's OAuth token, and falls back to this default only when that fails.
Relevant code:
// First try to get project ID from auth credential
if c.projectID != "" {
return c.projectID
}
// Fallback to default project ID (from pi-ai/OpenCode)
return "rising-fact-p41fc"
Reply to @xiaket regarding leaking creds and the ToS violation: these OAuth Client ID and Client Secret values are public client identifiers, not secrets.
This is not a ToS violation; it is the standard OAuth public-client flow.
Address review comment from @xiaket - the "Supported providers" message was printed in multiple places. Now extracted as a constant. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolved conflicts:
- pkg/config/config.go: Removed duplicate DefaultConfig() (already in defaults.go)
- pkg/config/defaults.go: Updated Temperature to *float64 (nil default)

Upstream changes included:
- Temperature changed from float64 to *float64 (nil means use provider default)
- New HeartbeatConfig and DevicesConfig
- Various agent and tool improvements

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolved conflicts:
- pkg/config/config.go: Removed duplicate DefaultConfig() (already in defaults.go)
Upstream changes:
- Added Session.DMScope default value ("main")
- Various channel improvements
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Refactor to use protocol-based model_list configuration instead of adding Anthropic-style API support. Zhipu GLM coding plan can use the existing zhipu/ protocol with a custom api_base.

Changes:
- Add glm-4.7 model config to config.example.json
- Document Zhipu GLM coding plan endpoint configuration in README
- Add Chinese documentation for Zhipu GLM coding plan in README.zh.md

The new architecture (PR sipeed#492) supports zero-code provider addition through protocol-based model_list configuration.

Addressing reviewer feedback:
- Use the new protocol-based provider approach from PR sipeed#492
- Zhipu GLM coding plan can use the OpenAI-style (zhipu/) protocol
- No code changes needed, just configuration
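A hedged sketch of what such a model_list entry might look like, following the max_tokens_field example earlier in this PR. The api_base value below is a placeholder, not the real coding-plan endpoint (that URL is documented in the README):

{
  "model_name": "glm-4.7",
  "model": "zhipu/glm-4.7",
  "api_base": "https://<zhipu-coding-plan-endpoint>/v1"
}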
…otocol

feat(config): refactor provider architecture to protocol-based model_list
Summary

This PR implements the protocol-based provider refactoring as described in #283. It moves from vendor-based provider configuration to a unified model_list approach that enables zero-code addition of new providers.

Changes

openai/ prefix

Design Rationale

The new model_list configuration:

Related Issues
🤖 Generated with Claude Code