
feat: add LLM configuration to YAML config (#37)#44

Merged
Kavirubc merged 2 commits into similigh:main from mahsumaktas:feature/llm-yaml-config
Feb 14, 2026

Conversation

Contributor

@mahsumaktas mahsumaktas commented Feb 13, 2026

Summary

Adds LLM provider/model configuration support to the YAML config file (.simili.yaml), addressing #37.

Currently the LLM provider and model are hardcoded. This PR makes them configurable via:

  1. YAML config (llm.provider, llm.model, llm.api_key, llm.temperature)
  2. Environment variable override (LLM_MODEL, GEMINI_API_KEY)

Changes

Core config

  • Added LLMConfig struct and Config.LLM field in internal/core/config/config.go
  • Defaults: provider: gemini, model: gemini-2.0-flash-lite
  • mergeConfigs() supports merging LLM fields
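
The struct and merge behavior described above can be sketched roughly as follows. This is illustrative only: the field names (Provider, APIKey, Model, Temperature) come from the PR summary, but the mergeLLM helper is a hypothetical stand-in for the repo's actual mergeConfigs logic.

```go
package main

import "fmt"

// LLMConfig mirrors the fields described in the PR summary.
// Temperature is a pointer so "unset" can be distinguished from 0.
type LLMConfig struct {
	Provider    string   `yaml:"provider"`
	APIKey      string   `yaml:"api_key"`
	Model       string   `yaml:"model"`
	Temperature *float64 `yaml:"temperature"`
}

// mergeLLM overlays non-empty override fields onto base defaults.
// Hypothetical helper; the repo's mergeConfigs may differ.
func mergeLLM(base, override LLMConfig) LLMConfig {
	if override.Provider != "" {
		base.Provider = override.Provider
	}
	if override.APIKey != "" {
		base.APIKey = override.APIKey
	}
	if override.Model != "" {
		base.Model = override.Model
	}
	if override.Temperature != nil {
		base.Temperature = override.Temperature
	}
	return base
}

func main() {
	defaults := LLMConfig{Provider: "gemini", Model: "gemini-2.0-flash-lite"}
	temp := 0.3
	merged := mergeLLM(defaults, LLMConfig{Model: "gemini-2.5-flash-lite", Temperature: &temp})
	fmt.Println(merged.Provider, merged.Model, *merged.Temperature)
}
```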

Gemini client

  • NewLLMClient now accepts (apiKey, model string) parameters
  • Falls back to default model when empty

CLI + Web wiring

  • Updated process.go, batch.go, main.go (simili-web)
  • API key: cfg.LLM.APIKey → embedding key fallback → GEMINI_API_KEY env
  • Model: cfg.LLM.Model, overridable via the LLM_MODEL env var

Config & docs

  • Added llm: section to .simili.yaml and example configs
  • Added LLM_MODEL to .env.sample
  • Updated README.md, setup guides

Tests

  • TestLLMConfigDefaults — verifies defaults
  • TestMergeConfigsLLM — verifies config merging
  • TestLoadConfigWithLLM — verifies YAML loading

Validation

go build ./...   # ✅
go test ./...    # ✅ all passed

Config Example

llm:
  provider: gemini
  model: gemini-2.0-flash-lite
  api_key: "${GEMINI_API_KEY}"
  temperature: 0.3

Summary by CodeRabbit

  • New Features

    • LLM settings are now configurable through the configuration file, including provider, API key, model, and temperature parameters.
    • LLM model selection can be overridden via the LLM_MODEL environment variable.
    • Default LLM model set to gemini-2.0-flash-lite.
  • Documentation

    • Configuration examples added for single-repo and multi-repo setups.
    • Updated README with configuration details and development guide.


coderabbitai Bot commented Feb 13, 2026

📝 Walkthrough

This change introduces configurable LLM model selection throughout the system. A new LLMConfig struct is added to manage provider, API key, model, and temperature settings. LLM client initialization is refactored to support configuration-based model selection with environment variable overrides and fallback logic. Configuration files and documentation are updated to reflect the new LLM settings block.

Changes

Cohort / File(s) — Summary

  • Configuration Files (.env.sample, .simili.yaml, DOCS/examples/multi-repo/simili.yaml, DOCS/examples/single-repo/simili.yaml) — Added an LLM configuration block specifying provider (gemini), API key, and default model (gemini-2.0-flash-lite) across all configuration examples and the environment template.
  • Documentation (README.md, DOCS/single-repo-setup.md, DOCS/multi-repo-org-setup.md) — Added configuration examples and guidance instructing users to include the LLM section in simili.yaml, documenting defaults and model overrides.
  • Config Core (internal/core/config/config.go, internal/core/config/config_test.go) — Introduced the new LLMConfig type with Provider, APIKey, Model, and Temperature fields; added the Config.LLM field with defaults; implemented config merging for LLM settings; added three test functions validating defaults, merging, and YAML parsing.
  • CLI Commands (cmd/simili-web/main.go, cmd/simili/commands/batch.go, cmd/simili/commands/process.go) — Updated LLM client initialization across all command entry points to derive the API key from config with an embedding-key fallback, apply model selection from config with the LLM_MODEL environment override, and pass the model parameter to the client constructor.
  • LLM Integration (internal/integrations/gemini/llm.go) — Updated the NewLLMClient signature from single-parameter (apiKey) to two-parameter (apiKey, model); added logic to default an empty model to "gemini-2.0-flash-lite"; LLMClient now stores the provided model instead of a hardcoded value.

Poem

🐰 A config tale with models bright,
Gemini's flash now set just right,
With fallback keys and overrides swift,
The LLM client receives its gift!
From env to config, clean and clear—
Let the defaults bring you cheer! ✨

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: 3 passed, 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 45.45%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (3)
  • Description Check — skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — the title clearly and concisely describes the main feature addition: LLM configuration support in YAML files.
  • Merge Conflict Detection — no merge conflicts detected when merging into main.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@internal/integrations/gemini/llm.go`:
- Around line 105-120: Update the default model string used in NewLLMClient
(which constructs and returns an *LLMClient) to a supported stable model:
replace the current fallback value assigned to the local variable model
("gemini-2.0-flash-lite") with a pinned/stable identifier such as
"gemini-2.0-flash-lite-001" or use a newer model "gemini-2.5-flash-lite" so the
LLMClient.model field is initialized to a non-deprecated model.
🧹 Nitpick comments (1)
internal/core/config/config_test.go (1)

61-84: Consider adding test coverage for the Temperature field.

The tests cover Provider and Model parsing but don't verify Temperature (a *float64 pointer). Since pointer fields have different zero-value semantics, a test ensuring correct parsing and nil-vs-set distinction would strengthen coverage.

💡 Suggested test addition
func TestLoadConfigWithLLMTemperature(t *testing.T) {
	yamlContent := `
llm:
  provider: gemini
  model: gemini-2.0-flash-lite
  temperature: 0.7
`
	cfg, err := parseRaw([]byte(yamlContent))
	if err != nil {
		t.Fatalf("Failed to parse YAML: %v", err)
	}
	if cfg.LLM.Temperature == nil {
		t.Error("Expected LLM.Temperature to be set, got nil")
	} else if *cfg.LLM.Temperature != 0.7 {
		t.Errorf("Expected LLM.Temperature 0.7, got %f", *cfg.LLM.Temperature)
	}
}

Comment on lines +105 to 120
func NewLLMClient(apiKey, model string) (*LLMClient, error) {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, option.WithAPIKey(apiKey))
	if err != nil {
		return nil, fmt.Errorf("failed to create Gemini client: %w", err)
	}

	if model == "" {
		model = "gemini-2.0-flash-lite" // Fast and cost-effective
	}

	return &LLMClient{
		client: client,
-		model:  "gemini-2.0-flash-lite", // Fast and cost-effective
+		model:  model,
	}, nil
}


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Google Gemini API available models 2026 gemini-2.0-flash-lite

💡 Result:

gemini-2.0-flash-lite is (was) a valid Gemini API model, but it’s scheduled for deprecation/shutdown starting February 2026, so in 2026 you should expect it to be at risk of being turned off and plan to migrate. Google lists gemini-2.5-flash-lite as the recommended replacement. [1]

What gemini-2.0-flash-lite is (Gemini API)

  • Model code: gemini-2.0-flash-lite (latest) and stable pinned version gemini-2.0-flash-lite-001 [2]
  • Context window: 1,048,576 input tokens; 8,192 output tokens [2]
  • I/O: audio/images/video/text input → text output [2]
  • Not supported (per model doc): Live API, search grounding, image generation, code execution, “thinking” mode [2]

Deprecation status (relevant for 2026)

  • gemini-2.0-flash-lite and gemini-2.0-flash-lite-001 — shutdown date: "Earliest February 2026". Replacement: gemini-2.5-flash-lite [1]

How to confirm what’s available right now in your project

Google’s docs recommend using the Gemini API “models” documentation and release/deprecation pages to track what’s supported and what’s being shut down. The deprecations table is the key reference for whether a model may stop working. [1]


Replace the default model with a stable or current version.

The model gemini-2.0-flash-lite is scheduled for shutdown in February 2026 and is at immediate risk of being unavailable. Use the stable pinned version gemini-2.0-flash-lite-001 instead, or migrate to gemini-2.5-flash-lite.

Update line 113:

model = "gemini-2.0-flash-lite-001"

Or migrate to a current model:

model = "gemini-2.5-flash-lite"

@mahsumaktas
Contributor Author

mahsumaktas commented Feb 13, 2026

Regarding the two points:

  1. Duplicate model field — This appears to be a diff rendering artifact. The actual code has a single model: model field in the struct literal (line 120). The old hardcoded line was replaced, not duplicated. You can verify in the source file.

  2. Docstring coverage (45.45%) — All newly added functions and types in this PR have proper Go doc comments (LLMConfig, NewLLMClient, etc.). The 45% figure reflects pre-existing repo-wide coverage, not a regression from this PR.

Contributor

@Kavirubc Kavirubc left a comment


LGTM — LLM config via YAML is well structured. Approving as code owner. (Deprecation fix for default model will follow in a separate commit on main.)

@Kavirubc Kavirubc merged commit d8689fe into similigh:main Feb 14, 2026
4 of 5 checks passed
This was referenced Feb 16, 2026


Development

Successfully merging this pull request may close these issues.

[0.2.0v][Feature]: Add LLM configuration to YAML

2 participants