
Commit 64a8fae

chore: remove obsolete model IDs for Anthropic, Google, OpenAI, xAI (#12923)
## Background

Follow-up to:

- #12807
- #12808
- #12809
- #12810

These issues were opened with type "New provider", so "New models" in this context does not mean the listed models are new, but rather that they are _all_ models that the provider API supports. This PR removes any models that were still in our codebase but not in that list.

## Summary

- Removes obsolete model IDs across the four providers, erring on the side of keeping a model ID if it _might_ still work:
  - e.g. aliases (a model version that lacks the date suffix of a still-supported model, or a supported model version with "-latest" appended)
  - model IDs that are deemed obsolete here but may be present on other providers (e.g. Gateway, Amazon Bedrock, Google Vertex) remain untouched there
- Replaces usage in our examples with suitable newer replacement models
- Replaces usage in documentation code snippets and removes mentions in documentation model lists and model tables

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
1 parent d5f76bd commit 64a8fae

118 files changed · +397 additions · −525 deletions


.changeset/gentle-eyes-sing.md

Lines changed: 9 additions & 0 deletions

```diff
@@ -0,0 +1,9 @@
+---
+'@ai-sdk/google-vertex': patch
+'@ai-sdk/anthropic': patch
+'@ai-sdk/google': patch
+'@ai-sdk/openai': patch
+'@ai-sdk/xai': patch
+---
+
+chore: remove obsolete model IDs for Anthropic, Google, OpenAI, xAI
```

content/cookbook/00-guides/20-sonnet-3-7.mdx

Lines changed: 6 additions & 0 deletions

```diff
@@ -6,6 +6,12 @@ tags: ['getting-started']
 
 # Get started with Claude 3.7 Sonnet
 
+<Note type="warning">
+  This guide is deprecated. [Claude 3.7 Sonnet was retired on February 19,
+  2026](https://platform.claude.com/docs/en/about-claude/model-deprecations#2025-10-28-claude-sonnet-3-7-model)
+  and can no longer be used with the Anthropic API.
+</Note>
+
 With the [release of Claude 3.7 Sonnet](https://www.anthropic.com/news/claude-3-7-sonnet), there has never been a better time to start building AI applications, particularly those that require complex reasoning capabilities.
 
 The [AI SDK](/) is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like Claude 3.7 Sonnet alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
```

content/docs/02-foundations/02-providers-and-models.mdx

Lines changed: 41 additions & 54 deletions (large diff not rendered)

content/docs/02-foundations/03-prompts.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -309,7 +309,7 @@ import { google } from '@ai-sdk/google';
 import { generateText } from 'ai';
 
 const result = await generateText({
-  model: google('gemini-1.5-flash'),
+  model: google('gemini-2.5-flash'),
   messages: [
     {
       role: 'user',
```

content/docs/03-ai-sdk-core/30-embeddings.mdx

Lines changed: 0 additions & 1 deletion

```diff
@@ -233,7 +233,6 @@ Several providers offer embedding models:
 | [OpenAI](/providers/ai-sdk-providers/openai#embedding-models) | `text-embedding-3-small` | 1536 |
 | [OpenAI](/providers/ai-sdk-providers/openai#embedding-models) | `text-embedding-ada-002` | 1536 |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai#embedding-models) | `gemini-embedding-001` | 3072 |
-| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai#embedding-models) | `text-embedding-004` | 768 |
 | [Mistral](/providers/ai-sdk-providers/mistral#embedding-models) | `mistral-embed` | 1024 |
 | [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-english-v3.0` | 1024 |
 | [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-multilingual-v3.0` | 1024 |
```
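For orientation, the surviving table rows can be exercised with the AI SDK's `embed` helper. A minimal sketch, not part of this change: the model choice, input value, and an `OPENAI_API_KEY` in the environment are assumptions for illustration.

```typescript
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

// Embed a single value; the resulting vector should have the
// dimension listed in the table (1536 for text-embedding-3-small).
const { embedding, usage } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

console.log(embedding.length, usage.tokens);
```

The same call shape works for any provider row in the table; only the `model` factory changes.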

content/docs/04-ai-sdk-ui/02-chatbot.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -903,7 +903,7 @@ Check out the [stream protocol guide](/docs/ai-sdk-ui/stream-protocol) for more
 ## Reasoning
 
 Some models such as DeepSeek `deepseek-r1`
-and Anthropic `claude-3-7-sonnet-20250219` support reasoning tokens.
+and Anthropic `claude-sonnet-4-5-20250929` support reasoning tokens.
 These tokens are typically sent before the message content.
 You can forward them to the client with the `sendReasoning` option:
 
```
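For context, the `sendReasoning` option this hunk touches is passed to `toUIMessageStreamResponse` on the server. A hedged sketch of a Next.js-style route handler, assuming the updated Anthropic model ID and a standard AI SDK UI setup:

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic('claude-sonnet-4-5-20250929'),
    messages: convertToModelMessages(messages),
  });

  // Forward reasoning tokens to the client alongside the text parts.
  return result.toUIMessageStreamResponse({ sendReasoning: true });
}
```

On the client, reasoning arrives as `reasoning` parts of the UI message stream, ahead of the message content.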

content/docs/07-reference/01-ai-sdk-core/42-custom-provider.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -22,8 +22,8 @@ import { customProvider } from 'ai';
 export const myOpenAI = customProvider({
   languageModels: {
     // replacement model with custom settings:
-    'gpt-4': wrapLanguageModel({
-      model: openai('gpt-4'),
+    'gpt-5': wrapLanguageModel({
+      model: openai('gpt-5'),
       middleware: defaultSettingsMiddleware({
         settings: {
           providerOptions: {
```
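Read in full, the updated snippet might look roughly like this. This is a sketch rather than the file's actual contents: the `reasoningEffort` setting and the `fallbackProvider` are illustrative assumptions, not part of the diff.

```typescript
import {
  customProvider,
  defaultSettingsMiddleware,
  wrapLanguageModel,
} from 'ai';
import { openai } from '@ai-sdk/openai';

export const myOpenAI = customProvider({
  languageModels: {
    // replacement model with custom settings:
    'gpt-5': wrapLanguageModel({
      model: openai('gpt-5'),
      middleware: defaultSettingsMiddleware({
        settings: {
          providerOptions: {
            // illustrative default, applied to every call through this alias
            openai: { reasoningEffort: 'low' },
          },
        },
      }),
    }),
  },
  // unknown model IDs fall through to the underlying provider
  fallbackProvider: openai,
});
```

Callers then use `myOpenAI.languageModel('gpt-5')` and pick up the defaults without repeating them at each call site.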

content/providers/01-ai-sdk-providers/01-xai.mdx

Lines changed: 9 additions & 18 deletions

````diff
@@ -142,7 +142,7 @@ The following optional provider options are available for xAI chat models:
 You can use the xAI Responses API with the `xai.responses(modelId)` factory method for server-side agentic tool calling. This enables the model to autonomously orchestrate tool calls and research on xAI's servers.
 
 ```ts
-const model = xai.responses('grok-4-fast');
+const model = xai.responses('grok-4-fast-non-reasoning');
 ```
 
 The Responses API provides server-side tools that the model can autonomously execute during its reasoning process:
@@ -186,7 +186,7 @@ import { xai } from '@ai-sdk/xai';
 import { generateText } from 'ai';
 
 const { text, sources } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'What are the latest developments in AI?',
   tools: {
     web_search: xai.tools.webSearch({
@@ -220,7 +220,7 @@ The X search tool enables searching X (Twitter) for posts, with filtering by han
 
 ```ts
 const { text, sources } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'What are people saying about AI on X this week?',
   tools: {
     x_search: xai.tools.xSearch({
@@ -266,7 +266,7 @@ The code execution tool enables the model to write and execute Python code for c
 
 ```ts
 const { text } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt:
     'Calculate the compound interest for $10,000 at 5% annually for 10 years',
   tools: {
@@ -281,7 +281,7 @@ The view image tool enables the model to view and analyze images:
 
 ```ts
 const { text } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'Describe what you see in the image',
   tools: {
     view_image: xai.tools.viewImage(),
@@ -295,7 +295,7 @@ The view X video tool enables the model to view and analyze videos from X (Twitt
 
 ```ts
 const { text } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'Summarize the content of this X video',
   tools: {
     view_x_video: xai.tools.viewXVideo(),
@@ -309,7 +309,7 @@ The MCP server tool enables the model to connect to remote [Model Context Protoc
 
 ```ts
 const { text } = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'Use the weather tool to check conditions in San Francisco',
   tools: {
     weather_server: xai.tools.mcpServer({
@@ -404,7 +404,7 @@ import { xai } from '@ai-sdk/xai';
 import { streamText } from 'ai';
 
 const { fullStream } = streamText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   prompt: 'Research AI safety developments and calculate risk metrics',
   tools: {
     web_search: xai.tools.webSearch(),
@@ -438,7 +438,7 @@ import { xai, type XaiLanguageModelResponsesOptions } from '@ai-sdk/xai';
 import { generateText } from 'ai';
 
 const result = await generateText({
-  model: xai.responses('grok-4-fast'),
+  model: xai.responses('grok-4-fast-non-reasoning'),
   providerOptions: {
     xai: {
       reasoningEffort: 'high',
@@ -779,20 +779,11 @@ console.log('Sources:', await result.sources);
 | `grok-4-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-3` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-3-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-3-fast` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-3-fast-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-3-mini` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `grok-3-mini-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `grok-3-mini-fast` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `grok-3-mini-fast-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `grok-2` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-2-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-2-1212` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-2-vision` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-2-vision-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-2-vision-1212` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-beta` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-vision-beta` | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 
 <Note>
   The table above lists popular models. Please see the [xAI
````
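As a quick orientation for the surviving model IDs, a plain chat call with a current Grok model might look like this. The model choice and prompt are illustrative, and an `XAI_API_KEY` in the environment is assumed:

```typescript
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

// Plain chat completion with a current Grok model; swap in
// xai.responses('grok-4-fast-non-reasoning') when server-side
// tools like webSearch or xSearch are needed.
const { text } = await generateText({
  model: xai('grok-4'),
  prompt: 'Summarize the current Grok model lineup in one sentence.',
});

console.log(text);
```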

content/providers/01-ai-sdk-providers/03-openai.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -1507,7 +1507,7 @@ The following optional provider options are available for OpenAI chat models:
 
 OpenAI has introduced the `o1`,`o3`, and `o4` series of [reasoning models](https://platform.openai.com/docs/guides/reasoning).
 Currently, `o4-mini`, `o3`, `o3-mini`, and `o1` are available via both the chat and responses APIs. The
-models `codex-mini-latest` and `computer-use-preview` are available only via the [responses API](#responses-models).
+model `gpt-5.1-codex-mini` is available only via the [responses API](#responses-models).
 
 Reasoning models currently only generate text, have several limitations, and are only supported using `generateText` and `streamText`.
```

content/providers/01-ai-sdk-providers/05-anthropic.mdx

Lines changed: 14 additions & 16 deletions

````diff
@@ -220,7 +220,7 @@ The `speed` option accepts `'fast'` or `'standard'` (default behavior).
 
 ### Reasoning
 
-Anthropic has reasoning support for `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-3-7-sonnet-20250219` models.
+Anthropic has reasoning support for `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-sonnet-4-5-20250929` models.
 
 You can enable it using the `thinking` provider option
 and specifying a thinking budget in tokens.
@@ -258,7 +258,7 @@ import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
 import { generateText } from 'ai';
 
 const result = await generateText({
-  model: anthropic('claude-3-7-sonnet-20250219'),
+  model: anthropic('claude-sonnet-4-5-20250929'),
   prompt: 'Continue our conversation...',
   providerOptions: {
     anthropic: {
@@ -509,7 +509,7 @@ Cache control for tools:
 
 ```ts
 const result = await generateText({
-  model: anthropic('claude-3-5-haiku-latest'),
+  model: anthropic('claude-haiku-4-5'),
   tools: {
     cityAttractions: tool({
       inputSchema: z.object({ city: z.string() }),
@@ -537,7 +537,7 @@ Here's an example:
 
 ```ts
 const result = await generateText({
-  model: anthropic('claude-3-5-haiku-latest'),
+  model: anthropic('claude-haiku-4-5'),
   messages: [
     {
       role: 'user',
@@ -1302,18 +1302,16 @@ and the `mediaType` should be set to `'application/pdf'`.
 
 ### Model Capabilities
 
-| Model                      | Image Input         | Object Generation   | Tool Usage          | Computer Use        | Web Search          | Tool Search         | Compaction          |
-| -------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
-| `claude-opus-4-6`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `claude-sonnet-4-6`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
-| `claude-opus-4-5`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
-| `claude-haiku-4-5`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
-| `claude-sonnet-4-5`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
-| `claude-opus-4-1`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
-| `claude-opus-4-0`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
-| `claude-sonnet-4-0`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
-| `claude-3-7-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
-| `claude-3-5-haiku-latest`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
+| Model               | Image Input         | Object Generation   | Tool Usage          | Computer Use        | Web Search          | Tool Search         | Compaction          |
+| ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `claude-opus-4-6`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `claude-sonnet-4-6` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
+| `claude-opus-4-5`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
+| `claude-haiku-4-5`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
+| `claude-sonnet-4-5` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
+| `claude-opus-4-1`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
+| `claude-opus-4-0`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
+| `claude-sonnet-4-0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |                     |
 
 <Note>
   The table above lists popular models. Please see the [Anthropic
````
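The reasoning hunks above change only the model ID; the `thinking` provider option itself is unchanged. A hedged end-to-end sketch, assuming the updated model, an `ANTHROPIC_API_KEY` in the environment, and a `budgetTokens` value chosen purely for illustration:

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

// Extended thinking: the model reasons inside a token budget
// before producing its final answer.
const { text, reasoningText } = await generateText({
  model: anthropic('claude-sonnet-4-5-20250929'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    },
  },
});

console.log(reasoningText); // the model's reasoning, when exposed
console.log(text);
```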
