
Commit a617ba1

feat(provider): add oracle provider (#170)
feat(provider): add oracle provider (#170)

* feat(provider): add oracle provider
* fix(oracle): address review follow-ups
* fix(oracle): clarify model discovery setup
* refactor(providers): resolve config before construction
* fix(oracle): return typed model discovery errors
1 parent 142ba20 commit a617ba1

41 files changed

Lines changed: 1125 additions & 272 deletions


.env.template

Lines changed: 5 additions & 1 deletion
@@ -168,9 +168,13 @@
 
 # Azure OpenAI
 # AZURE_API_KEY=...
-# AZURE_API_BASE=https://your-resource.openai.azure.com/openai/deployments/your-deployment
+# AZURE_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment
 # AZURE_API_VERSION=2024-10-21
 
+# Oracle
+# ORACLE_API_KEY=...
+# ORACLE_BASE_URL=https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/v1
+
 # Ollama (local LLM server)
 # Note: Ollama doesn't require an API key
 # Set base URL to enable (default: http://localhost:11434/v1)

CLAUDE.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ Guidance for AI models (like Claude) working with this codebase.
 
 ## Project Overview
 
-**GOModel** is a high-performance AI gateway in Go that routes requests to multiple AI model providers (OpenAI, Anthropic, Gemini, Groq, xAI, Ollama). LiteLLM killer.
+**GOModel** is a high-performance AI gateway in Go that routes requests to multiple AI model providers (OpenAI, Anthropic, Gemini, Groq, xAI, Oracle, Ollama). LiteLLM killer.
 
 **Go:** 1.26.1
 **Repo:** https://github.com/ENTERPILOT/GOModel
@@ -111,4 +111,4 @@ Full reference: `.env.template` and `config/config.yaml`
 - **Resilience:** Configured via `config/config.yaml` — global `resilience.retry.*` and `resilience.circuit_breaker.*` defaults with optional per-provider overrides under `providers.<name>.resilience.retry.*` and `providers.<name>.resilience.circuit_breaker.*`. Retry defaults: `max_retries` (3), `initial_backoff` (1s), `max_backoff` (30s), `backoff_factor` (2.0), `jitter_factor` (0.1). Circuit breaker defaults: `failure_threshold` (5), `success_threshold` (2), `timeout` (30s)
 - **Metrics:** `METRICS_ENABLED` (false), `METRICS_ENDPOINT` (/metrics)
 - **Guardrails:** Configured via `config/config.yaml` only (except `GUARDRAILS_ENABLED` env var)
-- **Providers:** `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `XAI_API_KEY`, `GROQ_API_KEY`, `OLLAMA_BASE_URL`
+- **Providers:** `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `XAI_API_KEY`, `GROQ_API_KEY`, `ORACLE_API_KEY` (Oracle API key), `ORACLE_BASE_URL` (Oracle OpenAI-compatible base URL), `OLLAMA_BASE_URL`

GETTING_STARTED.md

Lines changed: 8 additions & 2 deletions
@@ -197,8 +197,10 @@ Provider credentials:
 | `GROQ_API_KEY` | Groq |
 | `GROQ_BASE_URL` | Groq (custom endpoint) |
 | `AZURE_API_KEY` | Azure OpenAI |
-| `AZURE_API_BASE` | Azure OpenAI deployment base URL |
+| `AZURE_BASE_URL` | Azure OpenAI deployment base URL |
 | `AZURE_API_VERSION` | Azure OpenAI API version override (default: `2024-10-21`) |
+| `ORACLE_API_KEY` | Oracle |
+| `ORACLE_BASE_URL` | Oracle OpenAI-compatible base URL |
 | `OLLAMA_BASE_URL` | Ollama (default: `http://localhost:11434/v1`) |
 
 
@@ -221,7 +223,11 @@ Setting `CIRCUIT_BREAKER_TIMEOUT=60s` in the environment overrides whatever `tim
 Ollama requires no API key. Even with no YAML and no `OLLAMA_BASE_URL` set, an Ollama provider is registered pointing at `http://localhost:11434/v1`. If you do not want Ollama, make sure no Ollama instance is reachable at that address (the gateway's availability check will remove it from routing if it cannot be reached).
 
 **Azure requires both key and base URL.**
-`AZURE_API_KEY` alone is not enough for auto-discovery. Set `AZURE_API_BASE` to the Azure deployment endpoint as well, otherwise the provider is ignored.
+`AZURE_API_KEY` alone is not enough for auto-discovery. Set `AZURE_BASE_URL` to the Azure deployment endpoint as well, otherwise the provider is ignored.
+
+**Oracle requires both key and base URL.**
+`ORACLE_API_KEY` alone is not enough for auto-discovery. Set `ORACLE_BASE_URL` to the Oracle OpenAI-compatible endpoint, otherwise the provider is ignored.
+If your Oracle endpoint does not return a usable model list, configure `providers.<name>.models` in YAML to seed the router with explicit model IDs.
 
 **Azure ships with a pinned API version by default.**
 If you do not set `AZURE_API_VERSION`, the gateway sends `api-version=2024-10-21`. Override it only when you need a different Azure API version.

README.md

Lines changed: 6 additions & 3 deletions
@@ -6,7 +6,7 @@
 [![Docker Pulls](https://img.shields.io/docker/pulls/enterpilot/gomodel)](https://hub.docker.com/r/enterpilot/gomodel)
 [![Go Version](https://img.shields.io/github/go-mod/go-version/ENTERPILOT/GOModel)](https://github.com/ENTERPILOT/GOModel/blob/main/go.mod)
 
-A high-performance AI gateway written in Go, providing a unified OpenAI-compatible API for multiple AI model providers, full-observability and more.
+A high-performance AI gateway written in Go, providing a unified OpenAI-compatible API for OpenAI, Anthropic, Gemini, xAI, Groq, OpenRouter, Azure OpenAI, Oracle, Ollama, and more.
 
 <a href="docs/dashboard.gif">
 <img src="docs/dashboard.gif" alt="Animated GOModel AI gateway dashboard showing usage analytics, token tracking, and estimated cost monitoring" width="100%">
@@ -37,8 +37,10 @@ docker run --rm -p 8080:8080 \
 -e OPENROUTER_API_KEY="your-openrouter-key" \
 -e XAI_API_KEY="your-xai-key" \
 -e AZURE_API_KEY="your-azure-key" \
--e AZURE_API_BASE="https://your-resource.openai.azure.com/openai/deployments/your-deployment" \
+-e AZURE_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment" \
 -e AZURE_API_VERSION="2024-10-21" \
+-e ORACLE_API_KEY="your-oracle-key" \
+-e ORACLE_BASE_URL="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/v1" \
 -e OLLAMA_BASE_URL="http://host.docker.internal:11434/v1" \
 enterpilot/gomodel
 ```
@@ -70,7 +72,8 @@ Example model identifiers are illustrative and subject to change; consult provid
 | Groq | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |||||||
 | OpenRouter | `OPENROUTER_API_KEY` | `google/gemini-2.5-flash` |||||||
 | xAI (Grok) | `XAI_API_KEY` | `grok-2` |||||||
-| Azure OpenAI | `AZURE_API_KEY` + `AZURE_API_BASE` (`AZURE_API_VERSION` optional) | `gpt-4o` |||||||
+| Azure OpenAI | `AZURE_API_KEY` + `AZURE_BASE_URL` (`AZURE_API_VERSION` optional) | `gpt-4o` |||||||
+| Oracle | `ORACLE_API_KEY` + `ORACLE_BASE_URL` | `openai.gpt-oss-120b` |||||||
 | Ollama | `OLLAMA_BASE_URL` | `llama3.2` |||||||
 
 ✅ Supported ❌ Unsupported

cmd/gomodel/main.go

Lines changed: 3 additions & 1 deletion
@@ -25,6 +25,7 @@ import (
 	"gomodel/internal/providers/ollama"
 	"gomodel/internal/providers/openai"
 	"gomodel/internal/providers/openrouter"
+	"gomodel/internal/providers/oracle"
 	"gomodel/internal/providers/xai"
 	"gomodel/internal/version"
 
@@ -71,7 +72,7 @@ func startApplication(application lifecycleApp, addr string) error {
 
 // @title GOModel API
 // @version 1.0
-// @description High-performance AI gateway routing requests to multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, xAI, Ollama). Drop-in OpenAI-compatible API.
+// @description High-performance AI gateway routing requests to multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, xAI, Oracle, Ollama). Drop-in OpenAI-compatible API.
 // @BasePath /
 // @schemes http
 // @securityDefinitions.apikey BearerAuth
@@ -122,6 +123,7 @@ func main() {
 	factory.Add(openai.Registration)
 	factory.Add(openrouter.Registration)
 	factory.Add(azure.Registration)
+	factory.Add(oracle.Registration)
 	factory.Add(anthropic.Registration)
 	factory.Add(gemini.Registration)
 	factory.Add(groq.Registration)

cmd/recordapi/main.go

Lines changed: 27 additions & 3 deletions
@@ -20,9 +20,12 @@ import (
 	"time"
 )
 
+const oracleDefaultModel = "openai.gpt-oss-120b"
+
 // Provider configurations
 var providerConfigs = map[string]struct {
 	baseURL     string
+	baseURLEnv  string
 	envKey      string
 	authHeader  string
 	contentType string
@@ -57,6 +60,12 @@ var providerConfigs = map[string]struct {
 		authHeader:  "Authorization",
 		contentType: "application/json",
 	},
+	"oracle": {
+		baseURLEnv:  "ORACLE_BASE_URL",
+		envKey:      "ORACLE_API_KEY",
+		authHeader:  "Authorization",
+		contentType: "application/json",
+	},
 }
 
 // Endpoint configurations
@@ -127,6 +136,9 @@ var providerCapabilities = map[string]map[string]bool{
 	"xai": {
 		"responses": true,
 	},
+	"oracle": {
+		"responses": true,
+	},
 }
 
 func endpointRequiresResponsesCapability(endpoint string) bool {
@@ -142,7 +154,7 @@ func providerSupportsResponses(provider string) bool {
 }
 
 func main() {
-	provider := flag.String("provider", "openai", "Provider to test (openai, anthropic, gemini, groq, xai)")
+	provider := flag.String("provider", "openai", "Provider to test (openai, anthropic, gemini, groq, xai, oracle)")
 	endpoint := flag.String("endpoint", "chat", "Endpoint to test (chat, chat_stream, models, responses, responses_stream)")
 	output := flag.String("output", "", "Output file path (required)")
 	model := flag.String("model", "", "Override model in request")
@@ -160,6 +172,15 @@ func main() {
 		os.Exit(1)
 	}
 
+	baseURL := pConfig.baseURL
+	if pConfig.baseURLEnv != "" {
+		baseURL = os.Getenv(pConfig.baseURLEnv)
+		if baseURL == "" {
+			fmt.Fprintf(os.Stderr, "Error: %s environment variable is required\n", pConfig.baseURLEnv)
+			os.Exit(1)
+		}
+	}
+
 	eConfig, ok := endpointConfigs[*endpoint]
 	if !ok {
 		fmt.Fprintf(os.Stderr, "Error: unknown endpoint %q\n", *endpoint)
@@ -181,9 +202,12 @@ func main() {
 	if eConfig.requestBody != nil {
 		reqBody := eConfig.requestBody
 
-		// Override model if specified
+		// Oracle's OpenAI-compatible endpoint expects OCI-hosted model IDs,
+		// so use a provider-specific default instead of the generic gpt-4o-mini fixture.
 		if *model != "" {
 			reqBody["model"] = *model
+		} else if *provider == "oracle" {
+			reqBody["model"] = oracleDefaultModel
 		}
 
 		// Adjust request for different providers
@@ -200,7 +224,7 @@ func main() {
 	}
 
 	// Build URL
-	url := pConfig.baseURL + eConfig.path
+	url := baseURL + eConfig.path
 
 	// Create request
 	req, err := http.NewRequest(eConfig.method, url, bodyReader)

config/config.example.yaml

Lines changed: 10 additions & 1 deletion
@@ -155,10 +155,19 @@ providers:
   # Example: Azure OpenAI
   # azure:
   #   type: "azure"
-  #   base_url: "${AZURE_API_BASE}"
+  #   base_url: "${AZURE_BASE_URL}"
   #   api_key: "${AZURE_API_KEY}"
   #   api_version: "2024-10-21"
 
+  # Example: Oracle OpenAI-compatible endpoint
+  # oracle:
+  #   type: "oracle"
+  #   base_url: "${ORACLE_BASE_URL}"
+  #   api_key: "${ORACLE_API_KEY}"
+  #   models:
+  #     - openai.gpt-oss-120b
+  #     - xai.grok-3
+
   # Example: DeepSeek (OpenAI-compatible)
   # deepseek:
   #   type: "openai"

config/config_test.go

Lines changed: 2 additions & 1 deletion
@@ -16,7 +16,8 @@ func clearProviderEnvVars(t *testing.T) {
 		"XAI_API_KEY", "XAI_BASE_URL",
 		"GROQ_API_KEY", "GROQ_BASE_URL",
 		"OPENROUTER_API_KEY", "OPENROUTER_BASE_URL", "OPENROUTER_SITE_URL", "OPENROUTER_APP_NAME",
-		"AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION",
+		"AZURE_API_KEY", "AZURE_BASE_URL", "AZURE_API_VERSION",
+		"ORACLE_API_KEY", "ORACLE_BASE_URL",
 		"OLLAMA_API_KEY", "OLLAMA_BASE_URL",
 	} {
 		t.Setenv(key, "")

docs/adr/0001-explicit-provider-registration.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 
 ## Context
 
-GOModel supports multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, Ollama, xAI). Each provider must be registered with the factory before use.
+GOModel supports multiple LLM providers, including OpenAI, Anthropic, Gemini, xAI, Groq, OpenRouter, Azure OpenAI, Oracle, Ollama, and custom OpenAI-compatible endpoints. Each provider must be registered with the factory before use.
 
 ## Decision
 
@@ -25,4 +25,4 @@ Use explicit registration in main.go:
 
 ### Negative
 
-- Slightly more boilerplate in main.go (6 explicit Register calls)
+- Slightly more boilerplate in main.go (9 explicit registration calls)

docs/advanced/configuration.mdx

Lines changed: 35 additions & 2 deletions
@@ -131,10 +131,12 @@ Set these to automatically register providers. No YAML configuration required.
 | `OPENROUTER_API_KEY` | OpenRouter |
 | `XAI_API_KEY` | xAI (Grok) |
 | `GROQ_API_KEY` | Groq |
-| `AZURE_API_KEY` | Azure OpenAI (`AZURE_API_BASE` also required) |
+| `AZURE_API_KEY` | Azure OpenAI (`AZURE_BASE_URL` also required) |
+| `ORACLE_API_KEY` | Oracle (`ORACLE_BASE_URL` also required) |
 | `OLLAMA_BASE_URL` | Ollama (no API key needed) |
 
-Most providers can use a custom base URL via `<PROVIDER>_BASE_URL` (for example `OPENAI_BASE_URL`). OpenRouter defaults to `https://openrouter.ai/api/v1` and can be overridden with `OPENROUTER_BASE_URL`. Azure uses `AZURE_API_BASE` for its deployment base URL and accepts an optional `AZURE_API_VERSION` override; otherwise it defaults to `2024-10-21`.
+Most providers can use a custom base URL via `<PROVIDER>_BASE_URL` (for example `OPENAI_BASE_URL`). OpenRouter defaults to `https://openrouter.ai/api/v1` and can be overridden with `OPENROUTER_BASE_URL`. Azure uses `AZURE_BASE_URL` for its deployment base URL and accepts an optional `AZURE_API_VERSION` override; otherwise it defaults to `2024-10-21`. Oracle requires `ORACLE_BASE_URL` because its OpenAI-compatible endpoint is region-specific.
+When using Oracle, `models:` also acts as a fallback inventory if the upstream endpoint does not expose a usable `/models` response.
 
 For OpenRouter, GOModel also sends default attribution headers unless the request already sets them. Override those defaults with `OPENROUTER_SITE_URL` and `OPENROUTER_APP_NAME`.
 
@@ -229,8 +231,19 @@ The simplest way to add providers. GOModel checks for well-known API key environ
 export OPENAI_API_KEY="sk-..."           # Registers "openai" provider
 export ANTHROPIC_API_KEY="sk-ant-..."    # Registers "anthropic" provider
 export GEMINI_API_KEY="..."              # Registers "gemini" provider
+export XAI_API_KEY="..."                 # Registers "xai" provider
+export GROQ_API_KEY="gsk_..."            # Registers "groq" provider
+export OPENROUTER_API_KEY="sk-or-..."    # Registers "openrouter" provider
+export AZURE_API_KEY="..."               # Registers "azure" provider when paired with AZURE_BASE_URL
+export AZURE_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
+export ORACLE_API_KEY="..."              # Registers "oracle" provider when paired with ORACLE_BASE_URL
+export ORACLE_BASE_URL="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/v1"
+export OLLAMA_BASE_URL="http://localhost:11434/v1"  # Registers "ollama" provider
 ```
 
+GOModel also works with additional OpenAI-compatible providers out of the box
+through YAML provider blocks.
+
 ### YAML Provider Blocks
 
 For more control (custom base URLs, model restrictions, or custom provider names), use the YAML file:
@@ -250,6 +263,15 @@ providers:
     api_key: "..."
     api_version: "2024-10-21"
 
+  # Add Oracle's OpenAI-compatible endpoint
+  oracle:
+    type: oracle
+    base_url: "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/v1"
+    api_key: "..."
+    models:
+      - openai.gpt-oss-120b
+      - xai.grok-3
+
   # Restrict to specific models
   gemini:
     type: gemini
@@ -259,6 +281,17 @@ providers:
       - gemini-1.5-pro
 ```
 
+<Note>
+For Oracle, prefer YAML configuration over env-only auto-discovery. Oracle
+inference can work even when the upstream `/models` endpoint is unavailable,
+so setting `models:` gives GoModel a reliable fallback inventory. See the
+[Oracle guide](/guides/oracle) for the required OCI policy and a tested
+configuration. Automatic model discovery is not yet a reliable, validated
+path for this provider: GoModel can try Oracle's OpenAI-compatible `/models`
+endpoint, but Oracle may not return a usable inventory there. OCI-native
+Oracle model discovery is not integrated yet.
+</Note>
+
 ### Ollama (Local Models)
 
 Ollama does not require an API key. Set the base URL to enable it:
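The fallback-inventory behavior the `<Note>` describes can be sketched as a small selection function. This is a hypothetical illustration of the documented behavior, not the gateway's actual code; the function name `effectiveModels` is invented:

```go
package main

import "fmt"

// effectiveModels mirrors the fallback described in the configuration note:
// prefer models reported by the provider's /models endpoint, and fall back
// to the models: list from YAML when discovery returns nothing usable.
func effectiveModels(discovered, configured []string) []string {
	if len(discovered) > 0 {
		return discovered
	}
	return configured
}

func main() {
	// Discovery failed or came back empty, so the YAML seed list wins.
	fmt.Println(effectiveModels(nil, []string{"openai.gpt-oss-120b", "xai.grok-3"}))
}
```

Under this sketch, a populated `/models` response always takes precedence, and the YAML `models:` list only fills in when discovery yields nothing.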
