Commit 99fbed8

feat: normalize provider specific model options type names and ensure they are exported (#12443)
## Background

Provider-specific model options types had inconsistent naming (`*ProviderOptions`, `*CallOptions`, `*ChatOptions`, etc.), and many were not exported from their packages, making it impossible for consumers to use `satisfies` for type-safe `providerOptions`. This needs to be normalized before the v7 beta release, especially in preparation for renaming `providerOptions` to `options`.

## Summary

Normalizes all provider-specific model options type names across 31 packages to follow a consistent `*ModelOptions` convention and ensures they are all exported:

- Renamed types to follow the `{Provider}{ModelType}{OptionalModelSuffix}Options` pattern, e.g.:
  - `AnthropicProviderOptions` → `AnthropicLanguageModelOptions`
  - `OpenAIResponsesProviderOptions` → `OpenAILanguageModelResponsesOptions`
- Renamed Zod schemas to match, e.g. `anthropicProviderOptions` → `anthropicLanguageModelOptions`
- Exported 13 types from 8 packages that were previously internal-only, e.g.:
  - `GroqTranscriptionModelOptions`
  - `OpenAITranscriptionModelOptions`
  - `FalSpeechModelOptions`
- Added deprecated aliases for all previously exported names across 23 packages to maintain backward compatibility
- Fixed `@ai-sdk/groq`: it already had a transcription model options type, but it was in the wrong file, under the wrong name, and not exported
- Updated ~190 example files to use `satisfies *Options` on the inner objects of `providerOptions`
- Updated examples across the documentation to use the new type names, and added the types to examples where they were missing but relevant
- Added the new naming convention and export requirements to `contributing/providers.md`

All changes are fully backward compatible; there are no breaking changes.
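The deprecated-alias approach described above can be sketched as follows. This is an illustrative sketch, not the shipped code: the option shape shown is a simplified stand-in for the real `AnthropicLanguageModelOptions` exported by `@ai-sdk/anthropic`.

```typescript
// Sketch of the backward-compatible rename pattern. The option shape is a
// simplified stand-in; the real type in @ai-sdk/anthropic has more fields.
export type AnthropicLanguageModelOptions = {
  thinking?: { type: 'enabled' | 'disabled'; budgetTokens?: number };
};

/**
 * @deprecated Use `AnthropicLanguageModelOptions` instead.
 */
export type AnthropicProviderOptions = AnthropicLanguageModelOptions;

// Existing code that references the old name keeps compiling, because the
// deprecated alias and the new name are the same type.
export const legacy: AnthropicProviderOptions = {
  thinking: { type: 'enabled', budgetTokens: 12000 },
};
export const current: AnthropicLanguageModelOptions = legacy;
```

Because the alias is a plain type alias rather than a copy, the old and new names stay assignable in both directions, which is what makes the rename non-breaking.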
## Manual Verification

- `pnpm type-check:full` passes with no errors
- `pnpm build` succeeds for all packages
- `pnpm prettier-check` passes

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

Consider adding options types for providers that don't have them yet, and add `satisfies` to those examples accordingly. For example:

- #8241
- #12435
- #12437
- #12438
- #12439

## Related Issues

Fixes #12269
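The `satisfies *Options` pattern that this change applies across the examples looks like the following in consumer code. The type shape here is a simplified stand-in for the real `GoogleLanguageModelOptions` from `@ai-sdk/google`:

```typescript
// Simplified stand-in for the exported options type; the real
// GoogleLanguageModelOptions in @ai-sdk/google is broader.
type GoogleLanguageModelOptions = {
  thinkingConfig?: {
    includeThoughts?: boolean;
    thinkingLevel?: 'low' | 'high';
  };
};

// `satisfies` on the inner provider object checks the options against the
// exported type without widening the inferred type, so a typo such as
// `thinkingLvl` becomes a compile-time error instead of a silently
// ignored key.
const providerOptions = {
  google: {
    thinkingConfig: { includeThoughts: true, thinkingLevel: 'low' },
  } satisfies GoogleLanguageModelOptions,
};
```

This is why the exports matter: without the type being exported from the provider package, consumers have nothing to write after `satisfies`.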
1 parent 6d2dadc commit 99fbed8

File tree

353 files changed: +1833 -1016 lines


.changeset/light-crabs-heal.md

Lines changed: 34 additions & 0 deletions

```diff
@@ -0,0 +1,34 @@
+---
+'@ai-sdk/black-forest-labs': patch
+'@ai-sdk/openai-compatible': patch
+'@ai-sdk/amazon-bedrock': patch
+'@ai-sdk/google-vertex': patch
+'@example/ai-functions': patch
+'@example/next-openai': patch
+'@ai-sdk/elevenlabs': patch
+'@ai-sdk/moonshotai': patch
+'@ai-sdk/togetherai': patch
+'@ai-sdk/anthropic': patch
+'@ai-sdk/fireworks': patch
+'@ai-sdk/replicate': patch
+'@ai-sdk/deepgram': patch
+'@ai-sdk/deepseek': patch
+'@example/angular': patch
+'@example/express': patch
+'@ai-sdk/alibaba': patch
+'@ai-sdk/baseten': patch
+'@ai-sdk/gateway': patch
+'@ai-sdk/klingai': patch
+'@ai-sdk/cohere': patch
+'@ai-sdk/google': patch
+'@ai-sdk/openai': patch
+'@ai-sdk/prodia': patch
+'@ai-sdk/azure': patch
+'@example/hono': patch
+'@ai-sdk/groq': patch
+'@ai-sdk/luma': patch
+'@ai-sdk/fal': patch
+'@ai-sdk/xai': patch
+---
+
+feat: normalize provider specific model options type names and ensure they are exported
```

AGENTS.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -281,3 +281,4 @@ When uncertain about expected artifacts, ask for clarification.
 - Change public APIs without updating documentation
 - Use `require()` for imports
 - Add new dependencies without running `pnpm update-references`
+- Modify `content/docs/08-migration-guides` or `packages/codemod` as part of broader codebase changes
```

content/cookbook/00-guides/17-gemini.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -46,7 +46,7 @@ console.log(text);
 Gemini 3 models can use enhanced reasoning through thinking mode, which improves their ability to solve complex problems. You can control the thinking level using the `thinkingLevel` provider option:

 ```ts
-import { google, GoogleGenerativeAIProviderOptions } from '@ai-sdk/google';
+import { google, GoogleLanguageModelOptions } from '@ai-sdk/google';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -58,7 +58,7 @@ const { text } = await generateText({
         includeThoughts: true,
         thinkingLevel: 'low',
       },
-    } satisfies GoogleGenerativeAIProviderOptions,
+    } satisfies GoogleLanguageModelOptions,
   },
 });

````

content/cookbook/00-guides/18-claude-4.mdx

Lines changed: 4 additions & 4 deletions

````diff
@@ -48,7 +48,7 @@ console.log(text);
 Claude 4 enhances the extended thinking capabilities first introduced in Claude 3.7 Sonnet—the ability to solve complex problems with careful, step-by-step reasoning. Additionally, both Opus 4 and Sonnet 4 can now use tools during extended thinking, allowing Claude to alternate between reasoning and tool use to improve responses. You can enable extended thinking using the `thinking` provider option and specifying a thinking budget in tokens. For interleaved thinking (where Claude can think in between tool calls) you'll need to enable a beta feature using the `anthropic-beta` header:

 ```ts
-import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
+import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { generateText } from 'ai';

 const { text, reasoningText, reasoning } = await generateText({
@@ -57,7 +57,7 @@ const { text, reasoningText, reasoning } = await generateText({
   providerOptions: {
     anthropic: {
       thinking: { type: 'enabled', budgetTokens: 15000 },
-    } satisfies AnthropicProviderOptions,
+    } satisfies AnthropicLanguageModelOptions,
   },
   headers: {
     'anthropic-beta': 'interleaved-thinking-2025-05-14',
@@ -86,7 +86,7 @@ In a new Next.js application, first install the AI SDK and the Anthropic provide
 Then, create a route handler for the chat endpoint:

 ```tsx filename="app/api/chat/route.ts"
-import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
+import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { streamText, convertToModelMessages, type UIMessage } from 'ai';

 export async function POST(req: Request) {
@@ -101,7 +101,7 @@ export async function POST(req: Request) {
     providerOptions: {
       anthropic: {
         thinking: { type: 'enabled', budgetTokens: 15000 },
-      } satisfies AnthropicProviderOptions,
+      } satisfies AnthropicLanguageModelOptions,
     },
   });

````

content/cookbook/00-guides/20-sonnet-3-7.mdx

Lines changed: 4 additions & 4 deletions

````diff
@@ -50,7 +50,7 @@ const { reasoning, text } = await generateText({
 Claude 3.7 Sonnet introduces a new extended thinking—the ability to solve complex problems with careful, step-by-step reasoning. You can enable it using the `thinking` provider option and specifying a thinking budget in tokens:

 ```ts
-import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
+import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { generateText } from 'ai';

 const { text, reasoningText, reasoning } = await generateText({
@@ -59,7 +59,7 @@ const { text, reasoningText, reasoning } = await generateText({
   providerOptions: {
     anthropic: {
       thinking: { type: 'enabled', budgetTokens: 12000 },
-    } satisfies AnthropicProviderOptions,
+    } satisfies AnthropicLanguageModelOptions,
   },
 });

@@ -85,7 +85,7 @@ In a new Next.js application, first install the AI SDK and the Anthropic provide
 Then, create a route handler for the chat endpoint:

 ```tsx filename="app/api/chat/route.ts"
-import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
+import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { streamText, convertToModelMessages, type UIMessage } from 'ai';

 export async function POST(req: Request) {
@@ -97,7 +97,7 @@ export async function POST(req: Request) {
     providerOptions: {
       anthropic: {
         thinking: { type: 'enabled', budgetTokens: 12000 },
-      } satisfies AnthropicProviderOptions,
+      } satisfies AnthropicLanguageModelOptions,
     },
   });

````

content/docs/03-agents/05-configuring-call-options.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -153,7 +153,7 @@ await newsAgent.generate({
 Configure provider settings dynamically:

 ```ts
-import { openai, OpenAIProviderOptions } from '@ai-sdk/openai';
+import { openai, OpenAILanguageModelResponsesOptions } from '@ai-sdk/openai';
 import { ToolLoopAgent } from 'ai';
 import { z } from 'zod';

@@ -167,7 +167,7 @@ const agent = new ToolLoopAgent({
     providerOptions: {
       openai: {
         reasoningEffort: options.taskDifficulty,
-      } satisfies OpenAIProviderOptions,
+      } satisfies OpenAILanguageModelResponsesOptions,
     },
   }),
 });
````

content/docs/03-ai-sdk-core/38-video-generation.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -223,7 +223,7 @@ You can configure the polling timeout using provider-specific options. Each prov

 ```tsx highlight={"10-12"}
 import { experimental_generateVideo as generateVideo } from 'ai';
-import { fal, type FalVideoProviderOptions } from '@ai-sdk/fal';
+import { fal, type FalVideoModelOptions } from '@ai-sdk/fal';

 const { video } = await generateVideo({
   model: fal.video('luma-dream-machine/ray-2'),
@@ -232,7 +232,7 @@ const { video } = await generateVideo({
   providerOptions: {
     fal: {
       pollTimeoutMs: 600000, // 10 minutes
-    } satisfies FalVideoProviderOptions,
+    } satisfies FalVideoModelOptions,
   },
 });
 ```
````

content/docs/03-ai-sdk-core/45-provider-management.mdx

Lines changed: 3 additions & 3 deletions

````diff
@@ -233,13 +233,13 @@ Here is an example that implements the following concepts:
 - setup an OpenAI-compatible provider with custom api key and base URL (here: `custom > *`)
 - setup model name aliases (here: `anthropic > fast`, `anthropic > writing`, `anthropic > reasoning`)
 - pre-configure model settings (here: `anthropic > reasoning`)
-- validate the provider-specific options (here: `AnthropicProviderOptions`)
+- validate the provider-specific options (here: `AnthropicLanguageModelOptions`)
 - use a fallback provider (here: `anthropic > *`)
 - limit a provider to certain models without a fallback (here: `groq > gemma2-9b-it`, `groq > qwen-qwq-32b`)
 - define a custom separator for the provider registry (here: `>`)

 ```ts
-import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
+import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { xai } from '@ai-sdk/xai';
 import { groq } from '@ai-sdk/groq';
@@ -286,7 +286,7 @@ export const registry = createProviderRegistry(
          type: 'enabled',
          budgetTokens: 32000,
        },
-      } satisfies AnthropicProviderOptions,
+      } satisfies AnthropicLanguageModelOptions,
     },
   },
 }),
````

content/providers/01-ai-sdk-providers/00-ai-gateway.mdx

Lines changed: 14 additions & 14 deletions

````diff
@@ -538,7 +538,7 @@ for await (const part of result.fullStream) {
 Track usage per end-user and categorize requests with tags:

 ```ts
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -548,7 +548,7 @@ const { text } = await generateText({
     gateway: {
       user: 'user-abc-123', // Track usage for this specific end-user
       tags: ['document-summary', 'premium-feature'], // Categorize for reporting
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
   },
 });
 ```
@@ -568,7 +568,7 @@ The AI Gateway provider accepts provider options that control routing behavior a
 You can use the `gateway` key in `providerOptions` to control how AI Gateway routes requests:

 ```ts
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -578,7 +578,7 @@ const { text } = await generateText({
     gateway: {
       order: ['vertex', 'anthropic'], // Try Vertex AI first, then Anthropic
       only: ['vertex', 'anthropic'], // Only use these providers
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
   },
 });
 ```
@@ -634,7 +634,7 @@ The following gateway provider options are available:
 You can combine these options to have fine-grained control over routing and tracking:

 ```ts
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -644,7 +644,7 @@ const { text } = await generateText({
     gateway: {
       order: ['vertex'], // Prefer Vertex AI
       only: ['anthropic', 'vertex'], // Only allow these providers
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
   },
 });
 ```
@@ -654,7 +654,7 @@ const { text } = await generateText({
 The `models` option enables automatic fallback to alternative models when the primary model fails:

 ```ts
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -663,7 +663,7 @@ const { text } = await generateText({
   providerOptions: {
     gateway: {
       models: ['openai/gpt-5-nano', 'gemini-2.0-flash'], // Fallback models
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
   },
 });

@@ -681,7 +681,7 @@ that have zero data retention policies. When `zeroDataRetention` is `false` or n
 specified, there is no enforcement of restricting routing.

 ```ts
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -690,7 +690,7 @@ const { text } = await generateText({
   providerOptions: {
     gateway: {
       zeroDataRetention: true,
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
   },
 });
 ```
@@ -700,8 +700,8 @@ const { text } = await generateText({
 When using provider-specific options through AI Gateway, use the actual provider name (e.g. `anthropic`, `openai`, not `gateway`) as the key:

 ```ts
-import type { AnthropicProviderOptions } from '@ai-sdk/anthropic';
-import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import type { AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
+import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';

 const { text } = await generateText({
@@ -710,10 +710,10 @@ const { text } = await generateText({
   providerOptions: {
     gateway: {
       order: ['vertex', 'anthropic'],
-    } satisfies GatewayProviderOptions,
+    } satisfies GatewayLanguageModelOptions,
     anthropic: {
       thinking: { type: 'enabled', budgetTokens: 12000 },
-    } satisfies AnthropicProviderOptions,
+    } satisfies AnthropicLanguageModelOptions,
   },
 });
 ```
````

0 commit comments