
feat(embedding): add Ollama provider support for local embedding #644

Merged
@qin-ctx merged 1 commit into volcengine:main from chenxiaofei-cxf:main
Mar 16, 2026
Conversation

@chenxiaofei-cxf (Contributor)

Summary

  • Add 'ollama' as a supported embedding provider
  • Ollama runs locally behind an OpenAI-compatible API, so no API key is required
  • Allow the OpenAI provider to work without api_key when api_base is set (supports local OpenAI-compatible servers such as vLLM and LocalAI)
  • Add configuration example and tests for Ollama provider

This enables fully local embedding deployment without cloud API keys.
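The relaxed validation rule described in the summary can be sketched as follows. Note this is an illustrative sketch: the class and field names (EmbeddingConfig, provider, api_key, api_base) are assumptions, not the project's actual identifiers.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the validation rule described above; names are
# assumptions, not the project's actual code.
@dataclass
class EmbeddingConfig:
    provider: str
    api_key: Optional[str] = None
    api_base: Optional[str] = None

    def validate(self) -> None:
        # 'ollama' never needs a key; 'openai' needs one only when no
        # custom api_base (e.g. a local OpenAI-compatible server) is set.
        if self.provider == "openai" and not self.api_key and not self.api_base:
            raise ValueError("api_key is required for provider 'openai'")

# Local Ollama: no key needed.
EmbeddingConfig(provider="ollama", api_base="http://localhost:11434/v1").validate()
# Local OpenAI-compatible server (e.g. vLLM): api_base alone is enough.
EmbeddingConfig(provider="openai", api_base="http://localhost:8000/v1").validate()
```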

CLAassistant commented Mar 16, 2026

CLA assistant check
All committers have signed the CLA.

    if self.provider == "openai":
    -    if not self.api_key:
    +    # Allow missing api_key when api_base is set (e.g. local OpenAI-compatible servers)
    +    if not self.api_key and not self.api_base:
Collaborator

[Bug] Config validation now allows provider="openai" with api_base but no api_key. However, OpenAIDenseEmbedder.__init__ (openai_embedders.py:65-66) still hard-requires api_key:

if not self.api_key:
    raise ValueError("api_key is required")

Also, the OpenAI factory lambda passes cfg.api_key directly (which will be None), unlike the Ollama factory which uses cfg.api_key or "ollama" as a fallback.

This means a user configuring provider="openai" + api_base + no api_key (as shown in Test 7) will pass config validation but crash at runtime when the embedder is created.

Suggested fix: add a fallback in the OpenAI factory lambda (e.g., cfg.api_key or "no-key") and relax the check in OpenAIDenseEmbedder.__init__ to allow missing api_key when api_base is set.
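A minimal sketch of that suggested fix, under assumed signatures (the real OpenAIDenseEmbedder and config objects have more parameters than shown here):

```python
# Sketch of the suggested fix, not the project's actual code: the embedder
# only hard-requires api_key when no api_base is configured, and the factory
# supplies a "no-key" placeholder for local OpenAI-compatible servers.
class OpenAIDenseEmbedder:
    def __init__(self, api_key=None, api_base=None):
        # Relaxed check: a local server identified by api_base does not
        # need a real key.
        if not api_key and not api_base:
            raise ValueError("api_key is required")
        self.api_key = api_key or "no-key"  # placeholder accepted by local servers
        self.api_base = api_base

# Factory with the proposed fallback, mirroring the Ollama factory's
# `cfg.api_key or "ollama"` pattern:
def make_openai_embedder(cfg):
    return OpenAIDenseEmbedder(api_key=cfg.api_key or "no-key",
                               api_base=cfg.api_base)
```

With this, a config of provider="openai" plus api_base and no api_key passes both validation and embedder construction instead of crashing at runtime.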

@qin-ctx merged commit e5abe88 into volcengine:main, Mar 16, 2026
1 check passed
github-project-automation bot moved this from Backlog to Done in the OpenViking project, Mar 16, 2026

Labels: None yet
Projects: Status: Done
3 participants