Update Gemini and other service default models #947
Merged
Conversation
This commit updates the list of available Gemini models to the latest 2.5 series and removes the older 2.0 Flash models.
- **Added:** `gemini-2.5-flash-lite` as an available model.
- **Removed:** The deprecated `gemini-2.0-flash` and `gemini-2.0-flash-lite` models.
- **Updated:** The documented rate limits for `gemini-2.5-pro` and `gemini-2.5-flash`.
This commit updates the available OpenAI models to include the new GPT-5 series, removes several older models, and sets a new default model for the service.
- **Added:** The new `gpt-5`, `gpt-5-mini`, and `gpt-5-nano` models.
- **Removed:** The `gpt-4o`, `gpt-4o-mini`, `o3`, and `o4-mini` models.
- **New Default:** The default model has been changed from `gpt-4.1-mini` to `gpt-5-mini`.
This commit significantly updates and simplifies the list of available GitHub Models, focusing on the latest GPT-5 series and the existing GPT-4.1 models.
- **Added:** The new `gpt-5`, `gpt-5-mini`, and `gpt-5-nano` models.
- **Removed:** To streamline the available options, models from other providers (DeepSeek, Llama, Cohere, Mistral, Phi) and the `gpt-4o` series.
- **New Default:** The default model for the GitHub service has been updated from `gpt-4.1-mini` to `gpt-5-mini`.
This commit updates the list of available Groq models to include two new preview models from OpenAI.
- **Added:** `openai/gpt-oss-120b` to the list of preview models.
- **Added:** `openai/gpt-oss-20b` to the list of preview models.
This commit refactors the `BuiltInAIService` to improve type safety and maintainability by using enums for model definitions.
- **New `GLMModel` Enum:** A new `GLMModel` enum defines the available models from Zhipu AI, eliminating the use of hardcoded strings.
- **Updated Model List:** The `BuiltInAIService` now populates its `defaultModels` list from the new `GLMModel` and existing `GroqModel` enums. The list of available models has been updated, removing `llama-3.3-70b` and `gemini-2.0-flash-lite`.
- **New Default Model:** `glm-4-flash-250414` is now the default model for the Built-in AI service.
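The enum-based refactor described above might look roughly like the following sketch. The PR does not include the actual source here, so the enum members and the `BuiltInAIService` shape are illustrative assumptions; only the model identifier strings come from the commit messages.

```python
from enum import Enum


class GLMModel(Enum):
    # Hypothetical member names; the identifier string is from the PR.
    GLM_4_FLASH_250414 = "glm-4-flash-250414"


class GroqModel(Enum):
    # Preview models added in the Groq commit above.
    GPT_OSS_120B = "openai/gpt-oss-120b"
    GPT_OSS_20B = "openai/gpt-oss-20b"


class BuiltInAIService:
    # Populate the default model list from the enums instead of
    # hardcoded strings, so a typo becomes an AttributeError at
    # definition time rather than a silent bad model name at runtime.
    default_models = [m.value for m in GLMModel] + [m.value for m in GroqModel]
    default_model = GLMModel.GLM_4_FLASH_250414.value
```

The gain over plain strings is that every call site references an enum member, so renaming or removing a model is a single-definition change that the type checker can follow.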
phlpsong approved these changes on Aug 20, 2025
Closes #943, and updates all LLM service default models.