Add support for custom HTTP headers in VLM models (OpenAI-compatible)#723
Merged
qin-ctx merged 5 commits into volcengine:main on Mar 18, 2026
Conversation
…ends Add extra_headers configuration option for VLM models to support custom HTTP headers (e.g., HTTP-Referer, X-Title) when using OpenAI-compatible providers like OpenRouter.

Changes:
- VLMBase extracts extra_headers from config
- OpenAIVLM passes extra_headers as default_headers to the OpenAI client
- VLMConfig supports extra_headers in providers config
- Add tests for extra_headers functionality
- Update configuration docs (zh/en) and example config

Co-Authored-By: KorenKrita <KorenKrita@gmail.com>
qin-ctx (Collaborator) requested changes on Mar 18, 2026
Two blocking issues found. The core code changes look clean, but the docs/config need adjustment before merge.
- Add extra_headers: Optional[Dict[str, str]] field to VLMConfig
- Migrate extra_headers to providers structure in _migrate_legacy_config
- Remove confusing example-only keys from ov.conf.example
- Add test for flat extra_headers config style

Co-Authored-By: KorenKrita <KorenKrita@gmail.com>
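The legacy-config migration described in this commit could look roughly like the following sketch. The function and key names here are illustrative only, not the actual `_migrate_legacy_config` implementation:

```python
def migrate_extra_headers(config: dict) -> dict:
    """Move a flat top-level 'extra_headers' key under the active provider's
    entry in 'providers', leaving already-nested configs untouched."""
    migrated = dict(config)
    headers = migrated.pop("extra_headers", None)
    if headers:
        provider = migrated.get("provider", "openai")
        providers = migrated.setdefault("providers", {})
        # Do not overwrite a value the user already nested explicitly.
        providers.setdefault(provider, {}).setdefault("extra_headers", headers)
    return migrated
```

Keeping the flat style working (which the added test covers) means existing user configs do not break after the migration.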
# Conflicts:
#   openviking/models/vlm/base.py
Format long assertion lines to pass CI checks. Co-Authored-By: KorenKrita <KorenKrita@gmail.com>
Contributor (Author)

@qin-ctx cr fix done
qin-ctx approved these changes on Mar 18, 2026
Description
Add custom HTTP headers support for the VLM (Vision Language Model) OpenAI-compatible backend. Users can pass custom request headers (e.g., HTTP-Referer and X-Title, required by OpenRouter) via the extra_headers configuration option.

Related Issue
Type of Change
Changes Made
Core code changes:
- openviking/models/vlm/base.py: VLMBase extracts extra_headers from config
- openviking/models/vlm/backends/openai_vlm.py: OpenAIVLM passes extra_headers as default_headers to OpenAI clients (sync/async)
- openviking_cli/utils/config/vlm_config.py: VLMConfig supports extra_headers in providers configuration

Tests:
- tests/models/test_vlm_extra_headers.py with 6 test cases covering sync/async clients, empty config, VLMConfig forwarding, etc.

Documentation and config:
- examples/ov.conf.example with an extra_headers usage example (OpenRouter scenario)
- docs/zh/guides/01-configuration.md and docs/en/guides/01-configuration.md with extra_headers parameter description and usage examples

Testing
Checklist
Additional Notes
Usage example:
```json
{
  "vlm": {
    "provider": "openai",
    "api_key": "your-api-key",
    "model": "gpt-4o",
    "api_base": "https://openrouter.ai/api/v1",
    "extra_headers": {
      "HTTP-Referer": "https://your-site.com",
      "X-Title": "Your App Name"
    }
  }
}
```

This feature only takes effect for OpenAI-compatible VLM providers (e.g., openai, litellm), and does not affect the volcengine provider.
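A minimal sketch of how the configuration above could be wired into the OpenAI SDK. `build_client_kwargs` is a hypothetical helper, not the actual OpenAIVLM code; the openai v1 Python client does accept a `default_headers` keyword, which attaches the headers to every request:

```python
def build_client_kwargs(vlm_config: dict) -> dict:
    """Translate the 'vlm' config section into kwargs for the OpenAI client.
    extra_headers maps to default_headers; sync and async clients take the
    same keyword arguments."""
    kwargs = {
        "api_key": vlm_config["api_key"],
        "base_url": vlm_config.get("api_base"),
    }
    extra_headers = vlm_config.get("extra_headers")
    if extra_headers:  # omit when empty so SDK defaults stay untouched
        kwargs["default_headers"] = dict(extra_headers)
    return kwargs

# Hypothetical usage:
# client = openai.OpenAI(**build_client_kwargs(config["vlm"]))
```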