
Conversation

zerob13 (Collaborator) commented Aug 28, 2025

add TokenFlux.ai provider support

  • Add API support
  • Add model list support

#613

Summary by CodeRabbit

  • New Features
    • Added TokenFlux as a selectable AI provider, including model discovery and compatibility with existing chat and text generation features.
    • Enabled configuration of TokenFlux via base URL and API key.
    • Added TokenFlux provider icon to improve visual identification in the UI.

Add comprehensive TokenFlux.ai integration with OpenAI-compatible API support:
- Implement TokenFluxProvider extending OpenAICompatibleProvider
- Add provider configuration with official endpoints and documentation links
- Support dynamic model fetching with TokenFlux-specific format parsing
- Include vision and function calling capability detection from API response
- Add TokenFlux icon integration in ModelIcon component
- Enable seamless UI integration with existing provider settings

Resolves #613
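
For orientation, a minimal sketch of the provider's shape as described above. Import paths, the constructor signature, and the check() return shape are assumptions; the class name, base class, fetchOpenAIModels, and the /models key check are taken from this PR.

// tokenfluxProvider.ts — illustrative sketch, not the merged implementation.
import { OpenAICompatibleProvider } from './openAICompatibleProvider' // assumed path
import { ConfigPresenter } from '../../configPresenter'               // assumed path
import { LLM_PROVIDER, MODEL_META } from '@shared/presenter'

export class TokenFluxProvider extends OpenAICompatibleProvider {
  constructor(provider: LLM_PROVIDER, configPresenter: ConfigPresenter) {
    super(provider, configPresenter) // chat/text generation delegates to the base class
  }

  // Key check doubling as a health check: a cheap authenticated GET /models.
  public async check(): Promise<{ isOk: boolean; errorMsg: string | null }> {
    try {
      await this.fetchOpenAIModels() // throws on a bad key or unreachable endpoint
      return { isOk: true, errorMsg: null }
    } catch (error) {
      return { isOk: false, errorMsg: error instanceof Error ? error.message : String(error) }
    }
  }

  // Override model discovery to parse TokenFlux's response format and derive
  // per-model capabilities (vision, function calling) before updating config.
  async fetchOpenAIModels(): Promise<MODEL_META[]> {
    // ... TokenFlux-specific parsing; falls back to super.fetchOpenAIModels() on error
    return super.fetchOpenAIModels()
  }
}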
coderabbitai bot (Contributor) commented Aug 28, 2025

Walkthrough

Adds TokenFlux as a new OpenAI-compatible LLM provider: extends default providers config, wires provider instantiation in LLMProviderPresenter, implements TokenFluxProvider with key checks and model discovery, updates model icon mapping, and makes a minor try/catch binding tweak in content enrichment.

Changes

  • Provider config — src/main/presenter/configPresenter/providers.ts
    Adds a TokenFlux entry to DEFAULT_PROVIDERS with id "tokenflux", OpenAI-compatible settings, and websites metadata.
  • Provider wiring — src/main/presenter/llmProviderPresenter/index.ts
    Imports TokenFluxProvider and extends the factory/creation logic to instantiate it when provider.id is "tokenflux" (see the wiring sketch after this list).
  • TokenFlux provider implementation — src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
    New OpenAI-compatible TokenFluxProvider class: delegates chat/text APIs, validates the API key via /models, overrides model fetching to derive capabilities and update config, and adds a health check.
  • UI icon mapping — src/renderer/src/components/icons/ModelIcon.vue
    Imports tokenflux-color.svg and maps "tokenflux" to the icon for model display.
  • Content enrichment minor cleanup — src/main/presenter/threadPresenter/contentEnricher.ts
    Removes the catch binding (now `catch {}`) in extractMainContent; behavior unchanged.
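
As referenced above, a minimal sketch of the id-based wiring. The factory's actual structure, parameter names, and import paths are assumptions; only TokenFluxProvider and the "tokenflux" id check are confirmed.

// index.ts — illustrative wiring only; the real factory may differ.
import { TokenFluxProvider } from './providers/tokenfluxProvider'
import { OpenAICompatibleProvider } from './providers/openAICompatibleProvider' // assumed path
import { ConfigPresenter } from '../configPresenter'                            // assumed path
import { LLM_PROVIDER } from '@shared/presenter'

function createProviderInstance(provider: LLM_PROVIDER, configPresenter: ConfigPresenter) {
  // id-based branch runs before the generic apiType dispatch
  if (provider.id === 'tokenflux') {
    return new TokenFluxProvider(provider, configPresenter)
  }
  // ... existing apiType-based cases for other providers
  return new OpenAICompatibleProvider(provider, configPresenter)
}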

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as User
  participant A as App
  participant L as LLMProviderPresenter
  participant T as TokenFluxProvider
  participant X as TokenFlux API
  participant C as ConfigPresenter

  U->>A: Select TokenFlux / request model list
  A->>L: getProvider("tokenflux")
  L->>T: createProviderInstance()
  A->>T: fetchOpenAIModels()
  T->>X: GET /models (with API key)
  X-->>T: Models list
  T->>C: update per-model config if changed
  T-->>A: MODEL_META[]

  U->>A: Send prompt / messages
  A->>L: create client for "tokenflux"
  L->>T: completions()/generateText()
  T->>X: POST /chat/completions
  X-->>T: Response
  T-->>A: LLMResponse
  A-->>U: Render reply

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Poem

In burrows of code I hop with delight,
TokenFlux lanterns now glow in the night.
Models discovered, keys checked with care,
A shiny new icon with colorful flair.
I thump and I beam—ship it, alright! 🐇✨


zerob13 marked this pull request as ready for review on August 28, 2025 at 05:38
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 76eb459 and 7f9ea3f.

⛔ Files ignored due to path filters (1)
  • src/renderer/src/assets/llm-icons/tokenflux-color.svg is excluded by !**/*.svg
📒 Files selected for processing (5)
  • src/main/presenter/configPresenter/providers.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/index.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1 hunks)
  • src/main/presenter/threadPresenter/contentEnricher.ts (1 hunks)
  • src/renderer/src/components/icons/ModelIcon.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (20)
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

src/renderer/src/**/*.vue: Follow existing component patterns when creating new UI components
Ensure responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states to UI components

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

src/renderer/**/*.{ts,vue}: Use Pinia for frontend state management
From renderer to main, call presenters via the usePresenter.ts composable

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/renderer/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/**/*.vue: Use Vue 3 Composition API for all components
Use Tailwind CSS with scoped styles for component styling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature under src/renderer/src/

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one presenter per functional domain in src/main/presenter/

Files:

  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/configPresenter/providers.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Add new provider configuration entries in configPresenter/providers.ts

Files:

  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Agent Loop layer must manage conversation flow, execute tools via McpPresenter, and standardize events to the frontend

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
🧠 Learnings (17)
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Iconify/Vue for icon implementation.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use the Icon component with lucide icons.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : For cases where lucide icons don't meet requirements, use other Iconify collections as fallback.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Prefer "lucide:" icon family as the primary choice.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Follow naming pattern: {collection}:{icon-name} for icons.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When adding a new LLM provider, create a provider file under providers/

Applied to files:

  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement provider-specific API interactions, convert MCP tools, and normalize streaming responses

Applied to files:

  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement a coreStream method for new providers following the standardized event interface

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should aggregate and yield usage events as part of the standardized stream.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)
src/shared/presenter.d.ts (5)
  • LLM_PROVIDER (495-515)
  • ChatMessage (1272-1284)
  • LLMResponse (842-859)
  • KeyStatus (1361-1367)
  • MODEL_META (481-494)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)
  • TokenFluxProvider (26-213)
🔇 Additional comments (5)
src/renderer/src/components/icons/ModelIcon.vue (2)

43-43: TokenFlux icon mapping added — LGTM

Import and mapping are consistent with the existing pattern.

Also applies to: 115-115


156-175: Confirm dark-mode invert needs for TokenFlux

If the TokenFlux asset has a dark fill, add it to the invert list to keep parity with other providers in dark UI.

Would you like me to add props.modelId.toLowerCase().includes('tokenflux') to the invert block after validation?
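
For context, the invert block under discussion presumably resembles the following — a hedged sketch; the component's actual structure and the membership list are assumptions.

// Inside ModelIcon.vue's <script setup lang="ts"> — illustrative only.
import { computed } from 'vue'

const props = defineProps<{ modelId: string }>()

// Icons with dark fills are CSS-inverted in dark mode; adding 'tokenflux'
// is only correct if its SVG is dark — hence the validation ask above.
const needsDarkInvert = computed(() => {
  const id = props.modelId.toLowerCase()
  return id.includes('openai') || id.includes('tokenflux') // membership list assumed
})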

src/main/presenter/configPresenter/providers.ts (1)

96-110: Provider integration verified; confirm TokenFlux endpoint URLs
Integration confirmed in providers.ts, presenter wiring, and the icon asset. Please double-check that baseUrl (https://tokenflux.ai/v1) and the apiKey, docs, and models URLs point to the intended production endpoints (no staging) and that the /v1 path matches the server's routing.
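
For reference, the entry under review presumably has roughly this shape — every field except id and baseUrl is an assumption, and the websites URLs are deliberately elided.

// providers.ts — illustrative shape of the DEFAULT_PROVIDERS entry, not the actual code.
const tokenFluxEntry = {
  id: 'tokenflux',
  name: 'TokenFlux',                  // assumed display name
  apiType: 'openai',                  // OpenAI-compatible settings
  baseUrl: 'https://tokenflux.ai/v1', // the endpoint flagged above for verification
  enable: false,                      // assumed default
  websites: {
    // official / apiKey / docs / models URLs deliberately elided here
  }
}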

src/main/presenter/llmProviderPresenter/index.ts (1)

22-22: Wiring TokenFluxProvider — LGTM

Import and id-based instantiation follow the existing PPIO pattern and correctly precede the apiType switch.

Also applies to: 170-172

src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)

205-212: Graceful fallback — LGTM

On model fetch errors, logging and delegating to parent implementation is appropriate.
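
The approved pattern presumably looks like this — a fragment-level sketch in which only the log-and-delegate behavior is confirmed; the helper name and log text are hypothetical.

// Shape of lines 205-212 (illustrative):
async fetchOpenAIModels(): Promise<MODEL_META[]> {
  try {
    return await this.fetchTokenFluxModels() // hypothetical TokenFlux-specific fetch/parse
  } catch (error) {
    // Log, then degrade gracefully to the generic OpenAI-compatible model list.
    console.warn('[TokenFluxProvider] model fetch failed, using parent list:', error)
    return super.fetchOpenAIModels()
  }
}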

Comment on lines +153 to +156
// Determine capabilities based on TokenFlux model data
const hasVision = model.supports_vision || false
const hasFunctionCalling = true // Most TokenFlux models should support function calling

coderabbitai bot commented Aug 28, 2025

🛠️ Refactor suggestion

Avoid assuming function-calling support; derive or keep existing

Hardcoding hasFunctionCalling = true can incorrectly flip configs. Prefer deriving from API fields when present; otherwise keep existing config.

-        // Determine capabilities based on TokenFlux model data
-        const hasVision = model.supports_vision || false
-        const hasFunctionCalling = true // Most TokenFlux models should support function calling
+        // Determine capabilities based on TokenFlux model data (fallback to conservative defaults)
+        const hasVision = Boolean((model as any)?.supports_vision)
+        const hasFunctionCalling = Boolean((model as any)?.capabilities?.function_call)
@@
-        const newConfig = {
+        const newConfig = {
           contextLength: contextLength,
           maxTokens: maxTokens,
-          functionCall: hasFunctionCalling,
+          // only update FC when API signals support; otherwise preserve existing
+          functionCall: hasFunctionCalling ?? existingConfig.functionCall,
           vision: hasVision,
           reasoning: existingConfig.reasoning, // Keep existing reasoning setting
           temperature: existingConfig.temperature, // Keep existing temperature
           type: existingConfig.type // Keep existing type
         }
@@
-          existingConfig.functionCall !== newConfig.functionCall ||
+          (hasFunctionCalling !== undefined &&
+            existingConfig.functionCall !== newConfig.functionCall) ||
           existingConfig.vision !== newConfig.vision

If the API doesn’t expose capabilities, consider leaving functionCall untouched.

Also applies to: 165-173, 175-186, 188-201

🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts around
lines 153-156 (and similarly at 165-173, 175-186, 188-201), the code currently
hardcodes hasFunctionCalling = true which can incorrectly override existing
config; instead, derive function-calling support from the TokenFlux model/API
response when a capabilities or function_calling-like field exists (e.g.,
model.capabilities.function_call or model.supports_function_calling), and if
such a field is absent, do not change the existing functionCall
configuration—leave it untouched or carry forward the previous value; update all
similar occurrences to prefer API-derived values and fallback to the current
config rather than forcing true.

zerob13 (Collaborator, Author) replied:

Most TokenFlux models should support function calling

coderabbitai bot replied:

@zerob13 Thank you for that clarification! I appreciate your domain knowledge about TokenFlux models.

Given that most TokenFlux models support function calling, your current approach makes sense as a reasonable default. However, to make the code more robust and future-proof, you might consider a slight modification that defaults to true but allows for exceptions:

// Determine capabilities based on TokenFlux model data
const hasVision = model.supports_vision || false
// Default to true for TokenFlux models, but allow API to override if field exists
const hasFunctionCalling = model.supports_function_calling ?? true

This way, if TokenFlux ever adds a model that doesn't support function calling and exposes this in their API response, the code will handle it gracefully. But for now, it maintains your intended behavior of assuming function calling support by default.

The current implementation is perfectly acceptable given your knowledge of the TokenFlux platform!


✏️ Learnings added
Learnt from: zerob13
PR: ThinkInAIXYZ/deepchat#804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.458Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement provider-specific API interactions, convert MCP tools, and normalize streaming responses

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.

zerob13 merged commit e5a205d into dev on Aug 28, 2025
2 checks passed
neoragex2002 pushed a commit to neoragex2002/deepchat that referenced this pull request Aug 28, 2025
* feat: add TokenFlux.ai provider support

Resolves ThinkInAIXYZ#613

* fix: lint
zerob13 deleted the feature/tokenflux branch on September 21, 2025 at 15:15