
fix(vertexai): support API key auth via Gemini Developer API#87

Merged
avivhalfon merged 7 commits into main from ah/TLP-1190/feat-vertexai-apikey-fix-v2
Jan 14, 2026

Conversation

@avivhalfon (Contributor) commented Jan 13, 2026

Important

Adds API Key authentication support to Vertex AI, updates documentation, and modifies tests to reflect these changes.

  • Behavior:
    • Adds API Key and Service Account authentication modes to VertexAIProvider in provider.rs.
    • Updates VertexAIProviderConfig in dto.rs to make project_id and location optional.
    • Modifies fetch_live_config() in config_provider_service.rs to handle optional project_id and location.
  • Documentation:
    • Updates README.md with examples for API Key and Service Account authentication.
  • Tests:
    • Updates tests in tests.rs and provider_api_tests.rs to reflect changes in VertexAIProviderConfig.
    • Adds tests for API Key authentication in vertexai/tests.rs.
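The behavior described above can be sketched as a small auth-mode resolver. This is a hypothetical illustration, not the crate's actual code; the names `AuthMode` and `resolve_auth_mode` are invented for the sketch, and only the rule (an API key makes `project_id`/`location` optional, otherwise both are required) comes from the PR summary:

```rust
// Illustrative sketch only; names are not from the actual implementation.
#[derive(Debug, PartialEq)]
enum AuthMode {
    ApiKey,         // routes to the Gemini Developer API
    ServiceAccount, // routes to Vertex AI; needs project_id and location
}

fn resolve_auth_mode(
    api_key: &str,
    project_id: Option<&str>,
    location: Option<&str>,
) -> Result<AuthMode, String> {
    if !api_key.trim().is_empty() {
        // API key mode: project_id and location become optional.
        return Ok(AuthMode::ApiKey);
    }
    match (project_id, location) {
        (Some(_), Some(_)) => Ok(AuthMode::ServiceAccount),
        _ => Err("project_id and location are required when no api_key is provided".to_string()),
    }
}
```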

This description was created by Ellipsis for d9e5269. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • New Features

    • Two Vertex AI authentication modes added: API Key and Service Account, plus a test-mode endpoint.
  • Documentation

    • README updated with credential examples and a table mapping auth modes to recommended endpoints/use cases.
  • Bug Fixes

    • Improved validation message to require both project and location when applicable.
  • Chores

    • Provider config deserialization tightened to reject unknown fields.
  • Tests

    • Tests updated to reflect project_id and location now represented as optional values.


coderabbitai bot (Contributor) commented Jan 13, 2026

📝 Walkthrough

Adds optional multi-auth support to the Vertex AI provider (API Key or Service Account), refactors token selection and request routing for chat and embeddings, makes project_id and location optional in provider config, updates config transformation logic, tests, and README documentation.

Changes

  • Documentation (README.md): Adds two authentication modes and examples (api_key for the Gemini Developer API, credentials_path for a Vertex AI service account), removes an old inline credential comment, and adds a mapping table for endpoints.
  • Vertex AI provider, auth & routing (src/providers/vertexai/provider.rs): Introduces uses_api_key() and get_oauth_token(); removes get_auth_token(). Refactors the chat and embeddings flows to branch between the test endpoint (dummy token), the Gemini Developer API (API key via x-goog-api-key), and Vertex AI (OAuth Bearer). Centralizes request-body construction, endpoint selection, token handling, and logging; updates response parsing.
  • Vertex AI provider tests (src/providers/vertexai/tests.rs): Updates the expected panic message in the validation test to require both project_id and location.
  • Management DTOs (src/management/dto.rs): VertexAIProviderConfig.project_id and location changed from String to Option<String>; adds serde(deny_unknown_fields) on the struct.
  • Config transformation (src/management/services/config_provider_service.rs): transform_provider_dto now inserts project_id and location into params only when those Option fields are Some(...).
  • Integration & API tests (tests/pipelines_api_integration_tests.rs, tests/provider_api_tests.rs): Tests now construct VertexAIProviderConfig with Some(...) for project_id and location and assert Option-wrapped values accordingly.
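The config-transformation change can be illustrated with a minimal sketch: optional fields are copied into the params map only when present. The function name here is an assumption for illustration; only the insert-when-Some behavior comes from the summary above:

```rust
use std::collections::HashMap;

// Sketch of the transform_provider_dto change: optional config fields are
// inserted into params only when they are Some. Function name is assumed.
fn optional_params(
    project_id: Option<String>,
    location: Option<String>,
) -> HashMap<String, String> {
    let mut params = HashMap::new();
    if let Some(p) = project_id {
        params.insert("project_id".to_string(), p);
    }
    if let Some(l) = location {
        params.insert("location".to_string(), l);
    }
    params
}
```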

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Provider as VertexAI Provider
    participant Auth as Auth Selector
    participant GeminiAPI as Gemini Developer API
    participant VertexAI as Vertex AI API
    participant TestEP as Test Endpoint

    Client->>Provider: request (chat_completions / embeddings)
    Provider->>Auth: determine auth mode (test / api_key / service_account)
    alt Test Mode
        Auth-->>Provider: test
        Provider->>TestEP: POST (dummy token)
        TestEP-->>Provider: test response
    else API Key Mode
        Auth-->>Provider: api_key
        Provider->>GeminiAPI: POST with x-goog-api-key header
        GeminiAPI-->>Provider: Gemini-format response
    else Service Account Mode
        Auth-->>Provider: service_account
        Provider->>Provider: get_oauth_token()
        Provider->>VertexAI: POST with Bearer token
        VertexAI-->>Provider: Vertex-format response
    end
    Provider-->>Client: normalized response
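The API-key and service-account branches of the diagram can be sketched as an endpoint/header selector. The URL shapes below are assumptions based on Google's public Gemini and Vertex AI endpoints, not lifted from the PR, and the function name is illustrative:

```rust
// Sketch of the routing in the diagram; URL paths are assumptions.
// Returns (endpoint URL, (header name, header value)).
fn select_route(
    api_key: &str,
    project: &str,
    location: &str,
    model: &str,
) -> (String, (&'static str, String)) {
    if !api_key.is_empty() {
        // Gemini Developer API: global host, key in the x-goog-api-key header.
        (
            format!("https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"),
            ("x-goog-api-key", api_key.to_string()),
        )
    } else {
        // Vertex AI: regional host, OAuth Bearer token in Authorization.
        (
            format!("https://{location}-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}/publishers/google/models/{model}:generateContent"),
            ("Authorization", "Bearer <oauth-token>".to_string()),
        )
    }
}
```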

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇 I hopped where tokens split in two,

A key or account to choose and view.
I stitched the routes and logged each test,
Project and location tucked to rest.
Hooray — the rabbit's patch is through!

🚥 Pre-merge checks: ✅ 3 passed, ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 42.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Linked Issues check ❓ Inconclusive: The linked issue TLP-1190 lacks detailed coding requirements. While the PR appears to implement API key authentication for VertexAI (related to BYOK), the issue description is empty, making full compliance validation impossible. Resolution: provide detailed coding requirements in TLP-1190 to enable a thorough compliance assessment.

✅ Passed checks (3 passed)
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The PR title clearly summarizes the main change, adding API key authentication support for VertexAI via the Gemini Developer API, which aligns with the primary objective described in the PR objectives.
  • Out of Scope Changes check ✅ Passed: All changes are focused on supporting API key authentication for VertexAI: configuration schema updates (dto.rs), implementation (provider.rs), service logic (config_provider_service.rs), documentation (README.md), and corresponding tests. No unrelated changes detected.




📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e522bae and d9e5269.

📒 Files selected for processing (1)
  • src/management/dto.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Build Docker Images (hub, Dockerfile, linux/amd64)
  • GitHub Check: Build Docker Images (migrations, Dockerfile.db-based-migrations, -migrations, linux/amd64)
  • GitHub Check: Build Docker Images (migrations, Dockerfile.db-based-migrations, -migrations, linux/arm64)
  • GitHub Check: build
  • GitHub Check: Analyze (rust)
🔇 Additional comments (1)
src/management/dto.rs (1)

88-96: LGTM! The deny_unknown_fields attribute correctly prevents untagged enum mismatch.

This is the right approach for ensuring strict deserialization when VertexAIProviderConfig is used within the #[serde(untagged)] ProviderConfig enum. Making project_id and location optional properly supports the API key authentication mode.

Runtime validation in VertexAIProvider::new() correctly enforces that when no api_key is provided, both project_id and location must be present, with an explicit panic and clear error message. Location format is also validated to only allow alphanumeric characters and hyphens.



Comment @coderabbitai help to get the list of available commands and usage tips.

ellipsis-dev bot (Contributor) left a comment

Important

Looks good to me! 👍

Reviewed everything up to b62f945 in 1 minute and 43 seconds.
  • Reviewed 367 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. README.md:241
  • Draft comment:
    Configuration examples look clear. Consider adding a brief note explaining when to choose API key mode versus service account mode for better user guidance.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
2. src/providers/vertexai/provider.rs:51
  • Draft comment:
    Consider caching the OAuth token instead of reading and authenticating on every request. Repeated file I/O and authenticator creation (lines 51-61) could hurt performance under load.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. src/providers/vertexai/provider.rs:56
  • Draft comment:
    Instead of panicking when credentials are missing, consider returning an error for better error handling. This would improve resiliency in production environments.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
4. src/providers/vertexai/provider.rs:41
  • Draft comment:
    The test mode check is implemented differently across methods (using is_some_and here vs map_or elsewhere). Consider consolidating the logic for consistency.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
5. src/providers/vertexai/provider.rs:354
  • Draft comment:
    Ensure that the differences between the Vertex AI (service account mode) and Gemini Developer API (API key mode) request bodies are well documented and aligned with the provider’s expected schemas.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_davmGMcNWXXlwbe8

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/providers/vertexai/provider.rs (1)

490-515: Response parsing will fail for Gemini API (API key mode) due to format mismatch.

The code at lines 498-515 expects Vertex AI :predict response format with predictions[].embeddings.values, but the Gemini API :embedContent endpoint (used in API key mode) returns {"embedding": {"values": [...]}} without a predictions array. The .ok_or(StatusCode::INTERNAL_SERVER_ERROR)? call will fail when parsing API key mode responses.

🤖 Fix all issues with AI agents

In src/providers/vertexai/provider.rs:
  • Lines 71-74: Do not call expect on ServiceAccountAuthenticator::builder(sa_key).build().await. Handle the build error and propagate it (via ? or map_err into the crate's error type) so authenticator-creation failures are returned with a contextual message; update the enclosing function to return a Result if necessary.
  • Lines 51-68: Replace the panic-inducing expect() calls around credential lookup, file read, and JSON parse with proper error handling: when resolving self.config.params.get("credentials_path") or std::env::var("GOOGLE_APPLICATION_CREDENTIALS") fails, when reading the file into key_json, or when deserializing into ServiceAccountKey, log the error and return StatusCode::INTERNAL_SERVER_ERROR instead of panicking. Also replace the blocking std::fs::read_to_string call with tokio::fs::read_to_string (awaited) so the async runtime is not blocked, and make sure every failure branch produces a clear log and an INTERNAL_SERVER_ERROR response.
  • Lines 83-84: token.token().unwrap_or_default().to_string() silently returns an empty string on None, which leads to confusing auth failures. Treat a missing or empty token as an error: check token.token() for Some(non-empty) and return a descriptive Err (optionally logging the failure) instead of an empty Ok, so callers know authentication truly failed.
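The third point can be sketched as a tiny helper that refuses empty tokens. This is an illustration of the suggested fix, not the provider's actual code; `raw` mimics the `Option<&str>` the token accessor yields, and the function name is invented:

```rust
// Sketch of the suggested fix: a missing or empty access token becomes a
// hard error rather than silently defaulting to "".
fn require_token(raw: Option<&str>) -> Result<String, String> {
    match raw {
        Some(t) if !t.trim().is_empty() => Ok(t.to_string()),
        _ => Err("service account auth returned no access token".to_string()),
    }
}
```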
🧹 Nitpick comments (1)
src/providers/vertexai/provider.rs (1)

32-35: Consider trimming whitespace before checking.

If api_key contains only whitespace, this would return true but fail authentication. Consider using .trim().is_empty() for robustness.

Suggested improvement
     fn uses_api_key(&self) -> bool {
-        !self.config.api_key.is_empty()
+        !self.config.api_key.trim().is_empty()
     }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f419a67 and b62f945.

📒 Files selected for processing (2)
  • README.md
  • src/providers/vertexai/provider.rs
🧰 Additional context used
🧬 Code graph analysis (1)
src/providers/vertexai/provider.rs (1)
src/providers/vertexai/models.rs (3)
  • serde_json (508-508)
  • from (335-478)
  • from (590-629)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build
  • GitHub Check: Analyze (rust)
🔇 Additional comments (2)
README.md (1)

244-268: LGTM! Clear documentation of the two authentication modes.

The documentation accurately reflects the implementation:

  • API Key mode routes to Gemini Developer API (generativelanguage.googleapis.com)
  • Service Account mode routes to Vertex AI ({location}-aiplatform.googleapis.com)

The YAML examples and reference table provide clear guidance for users to choose the appropriate authentication method.
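As a rough illustration of the two modes the README documents, the configs could look like the following. This is a sketch assembled from the summaries in this PR (field names api_key, project_id, location, credentials_path appear in the discussion); the provider key and placeholder values are assumptions:

```yaml
# API key mode: routes to the Gemini Developer API; project/location not needed.
vertexai:
  api_key: "<your-gemini-api-key>"
---
# Service account mode: routes to Vertex AI; all three fields are required.
vertexai:
  project_id: your-project
  location: us-central1
  credentials_path: /path/to/credentials.json
```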

src/providers/vertexai/provider.rs (1)

184-232: LGTM! Clean three-way branching for authentication modes.

The refactored implementation:

  • Pre-builds the request body once and reuses it across all modes
  • Correctly uses x-goog-api-key header for Gemini Developer API
  • Correctly uses Bearer token for Vertex AI OAuth
  • Endpoints match the documented behavior

@doronkopit5 changed the title from feat(vertexai): support API key auth via Gemini Developer API to fix(vertexai): support API key auth via Gemini Developer API on Jan 14, 2026
README.md Outdated
Comment on lines +252 to +253
project_id: your-project
location: us-central1
A Member commented on the lines above:

If you put an API key, I don't think you need project and location; that's the point, no?

The Contributor (Author) replied:

removed

ellipsis-dev bot (Contributor) left a comment

Important

Looks good to me! 👍

Reviewed 7d9e5a9 in 1 minute and 38 seconds.
  • Reviewed 52 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. README.md:249
  • Draft comment:
    Removed 'project_id' and 'location' for API key auth. Ensure the docs clearly state which parameters are required depending on the auth method.
  • Reason this comment was not posted:
    Comment did not seem useful (usefulness confidence = 0% <= threshold 50%). The comment asks the author to ensure that the documentation is updated to reflect changes in required parameters for API key authentication. This is a request for documentation updates, which is not allowed according to the rules.
2. src/providers/vertexai/provider.rs:105
  • Draft comment:
    The new constructor now conditionally requires project_id and location for service account mode. The panic on missing values is appropriate for misconfiguration, but consider aligning error handling (or providing a more explicit error) rather than using panic.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%). The comment says "consider aligning error handling" but doesn't specify what that means or how to do it. It's vague: what does "aligning" mean? Align with what? The comment also says "or providing a more explicit error", but the current panic message is already quite explicit; no concrete alternative is offered, nor any explanation of why panic is inappropriate here. This is a constructor (fn new), and panicking on misconfiguration in constructors is a common Rust pattern. Perhaps the comment means that returning a Result would be better than panicking, or that the codebase has a broader pattern of constructors returning Results, which would make this inconsistent; even so, the comment doesn't clearly state that. The trait signature for Provider::new likely returns Self, not Result<Self, _>, so the change would require a trait change beyond the scope of this file, and without seeing the trait definition this can't be confirmed. In sum, the comment is too vague and speculative to be actionable, which violates the rule that comments should be actionable and clear.
3. src/providers/vertexai/provider.rs:546
  • Draft comment:
    The test client builder (with_test_client) always validates location via validate_location, which panics if the location string is empty. This is inconsistent with the main constructor that allows an empty location in API key mode. Consider aligning these behaviors.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_ipzEVzocOVGd6rYO

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/providers/vertexai/provider.rs (2)

493-518: Response parsing assumes Vertex AI format, but API key mode uses Gemini API with a different response structure.

When uses_api_key() is true, requests go to the Gemini embedContent endpoint, which returns:

{"embedding": {"values": [...]}}

However, the response parsing at lines 500-518 expects Vertex AI format:

{"predictions": [{"embeddings": {"values": [...]}}]}

This will cause INTERNAL_SERVER_ERROR at line 503 for all API key mode embedding requests since predictions field won't exist.

Proposed fix: Handle both response formats
             // Normal processing for regular API responses
             let gemini_response: serde_json::Value =
                 serde_json::from_str(&response_text).map_err(|e| {
                     error!("Failed to parse response as JSON: {}", e);
                     StatusCode::INTERNAL_SERVER_ERROR
                 })?;

-            // Extract embeddings from updated response format
-            let embeddings = gemini_response["predictions"]
-                .as_array()
-                .ok_or(StatusCode::INTERNAL_SERVER_ERROR)?
-                .iter()
-                .enumerate()
-                .map(|(i, pred)| Embeddings {
-                    object: "embedding".to_string(),
-                    embedding: Embedding::Float(
-                        pred["embeddings"]["values"]
-                            .as_array()
-                            .unwrap_or(&vec![])
-                            .iter()
-                            .filter_map(|v| v.as_f64().map(|f| f as f32))
-                            .collect::<Vec<f32>>(),
-                    ),
-                    index: i,
-                })
-                .collect();
+            // Handle different response formats based on auth mode
+            let embeddings = if self.uses_api_key() {
+                // Gemini embedContent format: {"embedding": {"values": [...]}}
+                let values = gemini_response["embedding"]["values"]
+                    .as_array()
+                    .ok_or_else(|| {
+                        error!("Missing embedding.values in Gemini response");
+                        StatusCode::INTERNAL_SERVER_ERROR
+                    })?;
+                vec![Embeddings {
+                    object: "embedding".to_string(),
+                    embedding: Embedding::Float(
+                        values
+                            .iter()
+                            .filter_map(|v| v.as_f64().map(|f| f as f32))
+                            .collect::<Vec<f32>>(),
+                    ),
+                    index: 0,
+                }]
+            } else {
+                // Vertex AI format: {"predictions": [{"embeddings": {"values": [...]}}]}
+                gemini_response["predictions"]
+                    .as_array()
+                    .ok_or(StatusCode::INTERNAL_SERVER_ERROR)?
+                    .iter()
+                    .enumerate()
+                    .map(|(i, pred)| Embeddings {
+                        object: "embedding".to_string(),
+                        embedding: Embedding::Float(
+                            pred["embeddings"]["values"]
+                                .as_array()
+                                .unwrap_or(&vec![])
+                                .iter()
+                                .filter_map(|v| v.as_f64().map(|f| f as f32))
+                                .collect::<Vec<f32>>(),
+                        ),
+                        index: i,
+                    })
+                    .collect()
+            };
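The core of this split is simply that the embedding values live at different JSON paths per auth mode. A minimal sketch (paths in JSON Pointer syntax, usable with serde_json's `Value::pointer`; field names taken from the review comment above, function name invented; the Vertex pointer shown addresses only the first prediction):

```rust
// Sketch: where the embedding values live, depending on auth mode.
fn embedding_values_path(uses_api_key: bool) -> &'static str {
    if uses_api_key {
        // Gemini embedContent: {"embedding": {"values": [...]}}
        "/embedding/values"
    } else {
        // Vertex AI :predict: {"predictions": [{"embeddings": {"values": [...]}}]}
        "/predictions/0/embeddings/values"
    }
}
```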

537-561: Test helper doesn't handle API key mode where location can be empty.

The with_test_client method unconditionally calls validate_location at line 552, but in API key mode, location_str can be empty. This will cause the test helper to panic with "Invalid location provided" when testing API key authentication.

Consider aligning with the new() constructor logic:

Proposed fix
         let location_str = config
             .params
             .get("location")
             .cloned()
             .unwrap_or_else(|| "".to_string());

-        let location = Self::validate_location(&location_str)
-            .expect("Invalid location provided for test client configuration");
+        let location = if location_str.is_empty() {
+            String::new()
+        } else {
+            Self::validate_location(&location_str)
+                .expect("Invalid location provided for test client configuration")
+        };
♻️ Duplicate comments (2)
src/providers/vertexai/provider.rs (2)

51-84: Previous review feedback still applies: Replace expect() calls and blocking I/O.

The issues in this method were already flagged in previous reviews:

  1. Lines 51-57, 60-61, 67-68, 71-74: expect() calls will panic on misconfiguration
  2. Line 60-61: std::fs::read_to_string is blocking I/O in an async function
  3. Line 84: unwrap_or_default() silently returns empty string on None

Please refer to the previous review comments for suggested fixes.


373-397: Previous review feedback still applies: Warn when multiple inputs are silently reduced to one.

The Gemini API embedContent endpoint only supports single text input, but this code silently takes only the first item from Multiple and MultipleTokenIds variants without warning the user. Please refer to the previous review comment for the suggested warning implementation.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b62f945 and 7d9e5a9.

📒 Files selected for processing (2)
  • README.md
  • src/providers/vertexai/provider.rs
🧰 Additional context used
🧬 Code graph analysis (1)
src/providers/vertexai/provider.rs (1)
src/providers/provider.rs (1)
  • new (13-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build
  • GitHub Check: Analyze (rust)
🔇 Additional comments (4)
README.md (1)

244-265: LGTM! Clear documentation of the two authentication modes.

The documentation correctly shows that API key mode only requires api_key, while service account mode requires project_id, location, and credentials_path. The endpoint mapping table is a helpful addition for users to understand which Google API each auth method uses.

src/providers/vertexai/provider.rs (3)

32-35: LGTM! Clean helper method for auth mode detection.

Simple and effective way to determine which authentication path to use.


105-129: LGTM! Proper conditional validation for service account mode.

The constructor correctly:

  • Makes project_id and location required only for service account mode
  • Skips location validation when using API key mode (empty location is valid)
  • Uses panic! appropriately for startup configuration errors

187-235: LGTM! Clean auth routing implementation for chat completions.

The three-way branching (test/API key/service account) is well-structured:

  • Request body is built once and reused across all paths
  • Each path correctly constructs its endpoint and authentication headers
  • Logging clearly indicates which API is being used


ellipsis-dev bot (Contributor) left a comment

Important

Looks good to me! 👍

Reviewed 3b9d495 in 1 minute and 1 second.
  • Reviewed 13 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src/providers/vertexai/tests.rs:941
  • Draft comment:
    The updated panic message now expects both 'project_id and location' to be provided. Consider renaming the test (e.g. to test_provider_new_missing_project_id_and_location) for clarity.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_GTiri8aXQAiJcxCl

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

ellipsis-dev bot (Contributor) left a comment

Important

Looks good to me! 👍

Reviewed aa141a1 in 2 minutes and 5 seconds.
  • Reviewed 116 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src/management/dto.rs:90
  • Draft comment:
    Changed VertexAIProviderConfig fields to Option; ensure downstream validation if these fields are required for proper authentication.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
2. src/management/services/config_provider_service.rs:238
  • Draft comment:
    The match in transform_pipeline_dto only accepts 'chat', 'completion', or 'embeddings'. Tests use pipeline type 'llm-router', which could cause errors. Update the match arm if 'llm-router' is valid.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. tests/pipelines_api_integration_tests.rs:258
  • Draft comment:
    Test 'test_create_pipeline_with_valid_model_router' uses pipeline type 'llm-router' which may be inconsistent with the supported pipeline types in the service layer.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. tests/provider_api_tests.rs:324
  • Draft comment:
    Hardcoding the expected error message for duplicate provider names could be brittle. Consider matching on an error code or structure instead.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_bP0RMDOhgU6wfoLI

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/provider_api_tests.rs (1)

815-825: Formatting issue detected by CI — run cargo fmt.

The pipeline failure indicates formatting issues around these assertion lines. Please run cargo fmt to fix the code style.

🔧 Suggested fix
     // Verify the configuration was stored and retrieved correctly
     if let ProviderConfig::VertexAI(config) = provider_response.config {
-        assert_eq!(config.project_id, Some("test-project-transform".to_string()));
-        assert_eq!(config.location, Some("us-central1".to_string()));
+        assert_eq!(
+            config.project_id,
+            Some("test-project-transform".to_string())
+        );
+        assert_eq!(config.location, Some("us-central1".to_string()));
         assert_eq!(
             config.credentials_path,
             Some("/path/to/credentials.json".to_string())
         );
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3b9d495 and aa141a1.

📒 Files selected for processing (4)
  • src/management/dto.rs
  • src/management/services/config_provider_service.rs
  • tests/pipelines_api_integration_tests.rs
  • tests/provider_api_tests.rs
🧰 Additional context used
🧬 Code graph analysis (2)
src/management/services/config_provider_service.rs (1)
src/providers/vertexai/provider.rs (1)
  • location (88-91)
tests/provider_api_tests.rs (1)
src/providers/vertexai/provider.rs (1)
  • location (88-91)
🪛 GitHub Actions: build
tests/provider_api_tests.rs

[error] 815-825: cargo fmt --check failed due to formatting issues. Unformatted code detected around the assertion; please run 'cargo fmt' to fix code style issues.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Build Docker Images (hub, Dockerfile, linux/arm64)
  • GitHub Check: Build Docker Images (hub, Dockerfile, linux/amd64)
  • GitHub Check: Build Docker Images (migrations, Dockerfile.db-based-migrations, -migrations, linux/amd64)
  • GitHub Check: Build Docker Images (migrations, Dockerfile.db-based-migrations, -migrations, linux/arm64)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (6)
src/management/services/config_provider_service.rs (1)

171-186: LGTM! Conditional parameter insertion correctly handles optional fields.

The logic properly inserts project_id and location only when present, aligning with the DTO changes. The API key resolution path (lines 181-185) remains intact.

tests/pipelines_api_integration_tests.rs (1)

123-128: LGTM! Test data correctly updated to use Some(...) wrappers.

The test helper properly adapts to the new Option<String> field types for project_id and location.

tests/provider_api_tests.rs (3)

106-117: LGTM! Test data correctly updated for optional fields.

Both initial and updated configs properly use Some(...) wrappers for project_id and location.


209-214: LGTM!

Test payload correctly uses Some(...) for the optional fields.


253-258: LGTM!

API key authentication test correctly supplies optional project_id and location.

src/management/dto.rs (1)

88-95: LGTM! Making project_id and location optional enables API key authentication.

This change correctly supports dual authentication modes: API key (Gemini Developer API) doesn't require project_id/location, while service account (Vertex AI) does. The provider implementation validates this at construction time (lines 106-114 of provider.rs), ensuring that if no api_key is provided, both project_id and location must be present. Requests are appropriately routed to the correct endpoint based on the authentication mode.
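The construction-time validation described above can be sketched roughly as below. Field and function names are illustrative, not the actual `provider.rs` code; the real `VertexAIProviderConfig` lives in `src/management/dto.rs` and carries more fields.

```rust
// Illustrative config shape for the two auth modes.
struct VertexAiConfig {
    api_key: Option<String>,
    project_id: Option<String>,
    location: Option<String>,
}

// Construction-time check: API key auth (Gemini Developer API) needs neither
// project_id nor location; service-account auth (Vertex AI) requires both.
fn validate_auth(cfg: &VertexAiConfig) -> Result<&'static str, String> {
    if cfg.api_key.is_some() {
        return Ok("gemini-developer-api"); // routed to the Gemini Developer API
    }
    match (&cfg.project_id, &cfg.location) {
        (Some(_), Some(_)) => Ok("vertex-ai"), // routed to the Vertex AI endpoint
        _ => Err("project_id and location are required when no api_key is set".into()),
    }
}
```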

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

@avivhalfon avivhalfon force-pushed the ah/TLP-1190/feat-vertexai-apikey-fix-v2 branch from aa141a1 to e522bae on January 14, 2026 12:40
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed e522bae in 1 minute and 33 seconds.
  • Reviewed 119 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src/management/dto.rs:89
  • Draft comment:
    VertexAIProviderConfig: project_id and location made optional. Ensure docs and downstream consumers handle None correctly.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence (useful) = 0% <= threshold 50%. The comment asks the author to ensure that documentation and downstream consumers handle None values correctly. This falls under the category of asking the author to ensure behavior, which is against the rules, and it does not provide a specific code suggestion or ask for a specific test to be written.
2. src/management/services/config_provider_service.rs:171
  • Draft comment:
    In the VertexAI branch of transform_provider_dto, optional fields are handled; consider adding inline comments to clarify behavior when api_key is absent.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. tests/pipelines_api_integration_tests.rs:121
  • Draft comment:
    Updated VertexAI test: project_id and location now wrapped in Some(..). Verify that all API auth modes are properly exercised.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence (useful) = 0% <= threshold 50%. The comment asks the PR author to verify that all API auth modes are properly exercised, which falls under the rule against asking the author to ensure behavior is intended or tested. This makes the comment not useful under the given rules.
4. tests/provider_api_tests.rs:103
  • Draft comment:
    VertexAI provider tests now use Some for project_id and location. Ensure transformation logic correctly handles optional values.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence (useful) = 0% <= threshold 50%. The comment asks the author to ensure that the transformation logic correctly handles optional values. This falls under the category of asking the author to ensure something is tested or verified, which is against the rules.

Workflow ID: wflow_marb0fFqu7Ij4WyF


Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed d9e5269 in 1 minute and 3 seconds.
  • Reviewed 12 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src/management/dto.rs:90
  • Draft comment:
    Good use of #[serde(deny_unknown_fields)] to enforce strict schema for VertexAIProviderConfig. Please ensure that this is intentional compared to other provider configs and update the docs if extra fields might be needed in the future.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence (useful) = 0% <= threshold 50%. The comment asks the author to ensure that the use of #[serde(deny_unknown_fields)] is intentional, which is not allowed according to the rules. It also suggests updating the documentation, which is likewise not allowed. Therefore this comment should be removed.
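What `#[serde(deny_unknown_fields)]` enforces on `VertexAIProviderConfig` can be illustrated with a hand-rolled key check over a plain map. This is a sketch only: serde performs the equivalent check during deserialization, and the known-field set below is inferred from the fields mentioned in this PR.

```rust
use std::collections::BTreeMap;

// Mimics the strict-schema behavior of #[serde(deny_unknown_fields)]:
// any key outside the known set is an error instead of being silently ignored.
fn check_known_fields(input: &BTreeMap<String, String>) -> Result<(), String> {
    const KNOWN: [&str; 4] = ["api_key", "credentials_path", "project_id", "location"];
    for key in input.keys() {
        if !KNOWN.contains(&key.as_str()) {
            return Err(format!("unknown field `{key}`"));
        }
    }
    Ok(())
}
```

A misspelled key such as `projct_id` is rejected rather than dropped, which surfaces config typos early.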

Workflow ID: wflow_uq3lDi9IbvCsuVyz


@avivhalfon avivhalfon merged commit 608164a into main Jan 14, 2026
15 checks passed
@avivhalfon avivhalfon deleted the ah/TLP-1190/feat-vertexai-apikey-fix-v2 branch on January 14, 2026 15:15