feat(core): support GOOGLE_GEMINI_BASE_URL for custom API endpoints#20681
AvichalDwivedi2205 wants to merge 2 commits into google-gemini:main
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request allows users to configure custom API endpoints for the Gemini CLI via environment variables. This is useful for developers who want to integrate with local LLM proxies or custom API-compatible services, and it enables local-first development workflows. The implementation includes URL validation and ensures that telemetry reflects the custom endpoint being used.
Code Review
This pull request adds support for custom API endpoints via the GOOGLE_GEMINI_BASE_URL and GEMINI_API_BASE_URL environment variables. The changes include URL validation, updates to telemetry to log the custom endpoint, comprehensive tests, and documentation updates. The implementation is solid, but I've identified one area for improvement in loggingContentGenerator.ts where redundant URL validation logic can be removed to improve maintainability.
Adds support for the GOOGLE_GEMINI_BASE_URL environment variable to override the default Gemini API endpoint. This enables local-first workflows with LLM proxies like Ollama and LiteLLM.

- GOOGLE_GEMINI_BASE_URL takes precedence (matches the request in issue google-gemini#15430)
- GEMINI_API_BASE_URL supported as an alias for compatibility
- URL validation with a clear warning on invalid values
- Endpoint telemetry updated to reflect the custom base URL
- Vitest tests covering all env var combinations and edge cases
- Configuration docs updated

Closes google-gemini#15430
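The precedence and validation behavior described above could look roughly like this (an illustrative sketch, not the PR's actual implementation; the real resolveCustomBaseUrl() may differ in its warning text and protocol handling):

```typescript
// Sketch of the env-var precedence and validation described in the PR summary.
// Names and behavior here are assumptions for illustration only.
function resolveCustomBaseUrl(
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  // GOOGLE_GEMINI_BASE_URL wins; GEMINI_API_BASE_URL is the compatibility alias.
  const raw = env['GOOGLE_GEMINI_BASE_URL'] ?? env['GEMINI_API_BASE_URL'];
  if (!raw) return undefined;
  try {
    // Validate the value; reject anything that is not http(s).
    const parsed = new URL(raw);
    if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
      console.warn(`Ignoring custom base URL with unsupported protocol: ${raw}`);
      return undefined;
    }
    return raw;
  } catch {
    console.warn(`Ignoring invalid custom base URL: ${raw}`);
    return undefined;
  }
}
```

Returning undefined (rather than throwing) on invalid input lets callers spread the result into options objects without extra guards.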
resolveCustomBaseUrl() already validates the URL and returns undefined for invalid input, so new URL() is guaranteed not to throw when customBaseUrl is truthy.
Force-pushed from 60005d4 to fb0c216
```diff
 if (
   config.authType === AuthType.USE_GEMINI ||
   config.authType === AuthType.USE_VERTEX_AI
 ) {
   let headers: Record<string, string> = { ...baseHeaders };
   if (gcConfig?.getUsageStatisticsEnabled()) {
     const installationManager = new InstallationManager();
     const installationId = installationManager.getInstallationId();
     headers = {
       ...headers,
       'x-gemini-api-privileged-user-id': `${installationId}`,
     };
   }
-  const httpOptions = { headers };
+  const customBaseUrl = resolveCustomBaseUrl();
+  const httpOptions = {
+    headers,
+    ...(customBaseUrl && { baseUrl: customBaseUrl }),
+  };
```
In lines 232–233, resolveCustomBaseUrl() is called inside the USE_GEMINI || USE_VERTEX_AI branch, so the custom baseUrl is applied to httpOptions for both auth types:

```ts
if (
  config.authType === AuthType.USE_GEMINI ||
  config.authType === AuthType.USE_VERTEX_AI
) {
  // ...
  const customBaseUrl = resolveCustomBaseUrl();
  const httpOptions = {
    headers,
    ...(customBaseUrl && { baseUrl: customBaseUrl }), // applied for both
  };
}
```

However, in loggingContentGenerator.ts, _getEndpointUrl() (L214–235) returns from the Vertex AI check (Case 2, L215) before the custom URL check (Case 3, L226) is ever reached:
```ts
// Case 2 (L215) — returns early, never reaches Case 3
if (genConfig?.vertexai) {
  return { address: `${location}-aiplatform.googleapis.com`, port: 443 };
}

// Case 3 (L226) — unreachable when vertexai is true
const customBaseUrl = resolveCustomBaseUrl();
```

So if a user sets GOOGLE_GEMINI_BASE_URL=http://my-proxy:4000 with Vertex AI auth, the actual request goes to http://my-proxy:4000, but the telemetry event records us-central1-aiplatform.googleapis.com:443 as the server address. Since telemetry should reflect where the request actually went, it should record { address: 'my-proxy', port: 4000 } instead.
One possible solution is to move the custom URL check in _getEndpointUrl() before the Vertex AI check, so telemetry accurately reflects the actual endpoint:
```ts
// packages/core/src/core/loggingContentGenerator.ts
// _getEndpointUrl() — replace L214–L235 with the custom URL check first:

// Case 2 (moved up): Custom base URL
const customBaseUrl = resolveCustomBaseUrl();
if (customBaseUrl) {
  const parsed = new URL(customBaseUrl);
  const port = parsed.port
    ? parseInt(parsed.port, 10)
    : parsed.protocol === 'https:'
      ? 443
      : 80;
  return { address: parsed.hostname, port };
}

// Case 3 (moved down): Vertex AI default
if (genConfig?.vertexai) { /* ... */ }
```

Let me know what you think.