Commit 85e476d

Authored by vercel-ai-sdk[bot], jerilynzheng, mclenhard, claude
Backport: feat (provider/gateway): add disallowPromptTraining provider option (#14039)
This is an automated backport of #13726 to the release-v6.0 branch. FYI @jerilynzheng

Co-authored-by: Jerilyn Zheng <zheng.jerilyn@gmail.com>
Co-authored-by: mat lenhard <mclenhard@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 39c6a0e commit 85e476d

File tree

5 files changed (+128 −28 lines)


.changeset/dry-radios-compare.md

Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+---
+'@ai-sdk/gateway': patch
+---
+
+feat (provider/gateway): add disallowPromptTraining gateway provider option
```

content/providers/01-ai-sdk-providers/00-ai-gateway.mdx

Lines changed: 50 additions & 28 deletions
````diff
@@ -29,7 +29,7 @@ For most use cases, you can use the AI Gateway directly with a model string:
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'openai/gpt-5',
+  model: 'openai/gpt-5.4',
   prompt: 'Hello world',
 });
 ```
@@ -39,7 +39,7 @@ const { text } = await generateText({
 import { generateText, gateway } from 'ai';
 
 const { text } = await generateText({
-  model: gateway('openai/gpt-5'),
+  model: gateway('openai/gpt-5.4'),
   prompt: 'Hello world',
 });
 ```
@@ -169,7 +169,7 @@ You can create language models using a provider instance. The first argument is
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'openai/gpt-5',
+  model: 'openai/gpt-5.4',
   prompt: 'Explain quantum computing in simple terms',
 });
 ```
@@ -215,7 +215,7 @@ availableModels.models.forEach(model => {
 
 // Use any discovered model with plain string
 const { text } = await generateText({
-  model: availableModels.models[0].id, // e.g., 'openai/gpt-4o'
+  model: availableModels.models[0].id, // e.g., 'openai/gpt-5.4'
   prompt: 'Hello world',
 });
 ```
@@ -326,7 +326,7 @@ It returns a `GatewayGenerationInfo` object with the following fields:
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'anthropic/claude-sonnet-4',
+  model: 'anthropic/claude-sonnet-4.6',
   prompt: 'Write a haiku about programming',
 });
 
@@ -339,7 +339,7 @@ console.log(text);
 import { streamText } from 'ai';
 
 const { textStream } = await streamText({
-  model: 'openai/gpt-5',
+  model: 'openai/gpt-5.4',
   prompt: 'Explain the benefits of serverless architecture',
 });
 
@@ -381,7 +381,7 @@ import { generateText, stepCountIs } from 'ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = await generateText({
-  model: 'openai/gpt-5-mini',
+  model: 'openai/gpt-5.4-mini',
   prompt: 'What is the Vercel AI Gateway?',
   stopWhen: stepCountIs(10),
   tools: {
@@ -410,7 +410,7 @@ The Perplexity Search tool enables models to search the web using [Perplexity's
 import { gateway, generateText } from 'ai';
 
 const result = await generateText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt: 'Search for news about AI regulations in January 2025.',
   tools: {
     perplexity_search: gateway.tools.perplexitySearch(),
@@ -428,7 +428,7 @@ You can also configure the search with optional parameters:
 import { gateway, generateText } from 'ai';
 
 const result = await generateText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt:
     'Search for news about AI regulations from the first week of January 2025.',
   tools: {
@@ -482,7 +482,7 @@ The tool works with both `generateText` and `streamText`:
 import { gateway, streamText } from 'ai';
 
 const result = streamText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt: 'Search for the latest news about AI regulations.',
   tools: {
     perplexity_search: gateway.tools.perplexitySearch(),
@@ -512,7 +512,7 @@ The Parallel Search tool enables models to search the web using [Parallel AI's S
 import { gateway, generateText } from 'ai';
 
 const result = await generateText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt: 'Research the latest developments in quantum computing.',
   tools: {
     parallel_search: gateway.tools.parallelSearch(),
@@ -530,7 +530,7 @@ You can also configure the search with optional parameters:
 import { gateway, generateText } from 'ai';
 
 const result = await generateText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt: 'Find detailed information about TypeScript 5.0 features.',
   tools: {
     parallel_search: gateway.tools.parallelSearch({
@@ -591,7 +591,7 @@ The tool works with both `generateText` and `streamText`:
 import { gateway, streamText } from 'ai';
 
 const result = streamText({
-  model: 'openai/gpt-5-nano',
+  model: 'openai/gpt-5.4-nano',
   prompt: 'Research the latest AI safety guidelines.',
   tools: {
     parallel_search: gateway.tools.parallelSearch(),
````
````diff
@@ -622,7 +622,7 @@ import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'openai/gpt-5',
+  model: 'openai/gpt-5.4',
   prompt: 'Summarize this document...',
   providerOptions: {
     gateway: {
@@ -723,7 +723,7 @@ import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'anthropic/claude-sonnet-4',
+  model: 'anthropic/claude-sonnet-4.6',
   prompt: 'Explain quantum computing',
   providerOptions: {
     gateway: {
@@ -752,7 +752,7 @@ The following gateway provider options are available:
 
   Specifies fallback models to use when the primary model fails or is unavailable. The gateway will try the primary model first (specified in the `model` parameter), then try each model in this array in order until one succeeds.
 
-  Example: `models: ['openai/gpt-5-nano', 'gemini-2.0-flash']` will try the fallback models in order if the primary model fails.
+  Example: `models: ['openai/gpt-5.4-nano', 'gemini-3-flash-preview']` will try the fallback models in order if the primary model fails.
 
 - **user** _string_
 
@@ -780,7 +780,12 @@ The following gateway provider options are available:
 
 - **zeroDataRetention** _boolean_
 
-  Restricts routing requests to providers that have zero data retention policies.
+  Restricts routing requests to providers that have zero data retention agreements with Vercel for AI Gateway. If there are no providers available for the model with zero data retention, the request will fail. BYOK credentials are skipped when `zeroDataRetention` is set to `true` to ensure that requests are only routed to providers that support ZDR compliance. Request-level ZDR is only available for Vercel Pro and Enterprise plans.
+
+- **disallowPromptTraining** _boolean_
+
+  Restricts routing requests to providers that have agreements with Vercel for AI Gateway to not use prompts for model training. If there are no providers available for the model that disallow prompt training, the request will fail. BYOK credentials are skipped when `disallowPromptTraining` is set to `true` to ensure that requests are only routed to providers that do not train on prompt data.
+
 
 - **providerTimeouts** _object_
 
@@ -797,7 +802,7 @@ import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'anthropic/claude-sonnet-4',
+  model: 'anthropic/claude-sonnet-4.6',
   prompt: 'Write a haiku about programming',
   providerOptions: {
     gateway: {
@@ -817,34 +822,32 @@ import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'openai/gpt-4o', // Primary model
+  model: 'openai/gpt-5.4', // Primary model
   prompt: 'Write a TypeScript haiku',
   providerOptions: {
     gateway: {
-      models: ['openai/gpt-5-nano', 'gemini-2.0-flash'], // Fallback models
+      models: ['openai/gpt-5.4-nano', 'gemini-3-flash-preview'], // Fallback models
     } satisfies GatewayProviderOptions,
   },
 });
 
 // This will:
-// 1. Try openai/gpt-4o first
-// 2. If it fails, try openai/gpt-5-nano
-// 3. If that fails, try gemini-2.0-flash
+// 1. Try openai/gpt-5.4 first
+// 2. If it fails, try openai/gpt-5.4-nano
+// 3. If that fails, try gemini-3-flash-preview
 // 4. Return the result from the first model that succeeds
 ```
 
 #### Zero Data Retention Example
 
-Set `zeroDataRetention` to true to ensure requests are only routed to providers
-that have zero data retention policies. When `zeroDataRetention` is `false` or not
-specified, there is no enforcement of restricting routing.
+Set `zeroDataRetention` to true to route requests to providers that have zero data retention agreements with Vercel for AI Gateway. If there are no providers available for the model with zero data retention, the request will fail. When `zeroDataRetention` is `false` or not specified, there is no enforcement of restricting routing. BYOK credentials are skipped when `zeroDataRetention` is set to `true` to ensure that requests are only routed to providers that support ZDR compliance. Request-level ZDR is only available for Vercel Pro and Enterprise plans.
 
 ```ts
 import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'anthropic/claude-sonnet-4.5',
+  model: 'anthropic/claude-sonnet-4.6',
   prompt: 'Analyze this sensitive document...',
   providerOptions: {
     gateway: {
@@ -854,6 +857,25 @@ const { text } = await generateText({
 });
 ```
 
+#### Disallow Prompt Training Example
+
+Set `disallowPromptTraining` to true to route requests to providers that have agreements with Vercel for AI Gateway to not use prompts for model training. If there are no providers available for the model that disallow prompt training, the request will fail. When `disallowPromptTraining` is `false` or not specified, there is no enforcement of restricting routing. BYOK credentials are skipped when `disallowPromptTraining` is set to `true` to ensure that requests are only routed to providers that do not train on prompt data.
+
+```ts
+import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import { generateText } from 'ai';
+
+const { text } = await generateText({
+  model: 'anthropic/claude-sonnet-4.6',
+  prompt: 'Analyze this proprietary business data...',
+  providerOptions: {
+    gateway: {
+      disallowPromptTraining: true,
+    } satisfies GatewayProviderOptions,
+  },
+});
+```
+
 ### Provider-Specific Options
 
 When using provider-specific options through AI Gateway, use the actual provider name (e.g. `anthropic`, `openai`, not `gateway`) as the key:
@@ -864,7 +886,7 @@ import type { GatewayProviderOptions } from '@ai-sdk/gateway';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: 'anthropic/claude-sonnet-4',
+  model: 'anthropic/claude-sonnet-4.6',
   prompt: 'Explain quantum computing',
   providerOptions: {
     gateway: {
````
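Both routing restrictions documented above follow the same rule: filter the candidate providers, skip BYOK credentials, and fail the request when nothing remains. As a reading aid, that selection logic can be sketched as a small self-contained filter (all type and provider names below are hypothetical illustrations; the real routing happens inside the Gateway service, not in the SDK):

```typescript
// Hypothetical model of a candidate provider as the Gateway might see it.
interface CandidateProvider {
  name: string;
  zeroDataRetention: boolean; // has a ZDR agreement with Vercel
  trainsOnPrompts: boolean;   // uses prompt data for model training
  byok: boolean;              // bring-your-own-key credential
}

interface RoutingRestrictions {
  zeroDataRetention?: boolean;
  disallowPromptTraining?: boolean;
}

// Sketch of the documented behavior: restrict routing, skip BYOK,
// and fail when no provider satisfies the restrictions.
function filterProviders(
  candidates: CandidateProvider[],
  options: RoutingRestrictions,
): CandidateProvider[] {
  let result = candidates;
  if (options.zeroDataRetention) {
    result = result.filter(p => p.zeroDataRetention && !p.byok);
  }
  if (options.disallowPromptTraining) {
    result = result.filter(p => !p.trainsOnPrompts && !p.byok);
  }
  if (result.length === 0) {
    throw new Error('No providers available under the requested restrictions');
  }
  return result;
}
```

This is only a conceptual sketch of the contract the docs describe, not the Gateway's actual implementation.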
Lines changed: 27 additions & 0 deletions
```diff
@@ -0,0 +1,27 @@
+import type { GatewayProviderOptions } from '@ai-sdk/gateway';
+import { streamText } from 'ai';
+import { run } from '../lib/run';
+
+run(async () => {
+  const result = streamText({
+    model: 'openai/gpt-oss-120b',
+    prompt: 'Tell me the history of the tenrec in a few sentences.',
+    providerOptions: {
+      gateway: {
+        disallowPromptTraining: true,
+      } satisfies GatewayProviderOptions,
+    },
+  });
+
+  for await (const textPart of result.textStream) {
+    process.stdout.write(textPart);
+  }
+
+  console.log();
+  console.log('Token usage:', await result.usage);
+  console.log('Finish reason:', await result.finishReason);
+  console.log(
+    'Provider metadata:',
+    JSON.stringify(await result.providerMetadata, null, 2),
+  );
+});
```
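The example above imports `run` from `../lib/run`, a helper in the examples package whose source is not part of this diff. A minimal stand-in, assuming it simply awaits the async main function and reports failures without throwing, might look like:

```typescript
// Hypothetical stand-in for the examples' `run` helper (an assumption,
// not the repo's actual implementation): await an async main function
// and surface failures via a non-zero exit code.
async function run(fn: () => Promise<void>): Promise<void> {
  try {
    await fn();
  } catch (error) {
    console.error(error);
    process.exitCode = 1;
  }
}
```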

packages/gateway/src/gateway-language-model.test.ts

Lines changed: 40 additions & 0 deletions
```diff
@@ -1535,5 +1535,45 @@ describe('GatewayLanguageModel', () => {
         },
       });
     });
+
+    it('should pass zeroDataRetention option', async () => {
+      prepareJsonResponse({
+        content: { type: 'text', text: 'Test response' },
+      });
+
+      await createTestModel().doGenerate({
+        prompt: TEST_PROMPT,
+        providerOptions: {
+          gateway: {
+            zeroDataRetention: true,
+          },
+        },
+      });
+
+      const requestBody = await server.calls[0].requestBodyJson;
+      expect(requestBody.providerOptions).toEqual({
+        gateway: { zeroDataRetention: true },
+      });
+    });
+
+    it('should pass disallowPromptTraining option', async () => {
+      prepareJsonResponse({
+        content: { type: 'text', text: 'Test response' },
+      });
+
+      await createTestModel().doGenerate({
+        prompt: TEST_PROMPT,
+        providerOptions: {
+          gateway: {
+            disallowPromptTraining: true,
+          },
+        },
+      });
+
+      const requestBody = await server.calls[0].requestBodyJson;
+      expect(requestBody.providerOptions).toEqual({
+        gateway: { disallowPromptTraining: true },
+      });
+    });
   });
 });
```
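Both tests pin down the same invariant: whatever the caller puts under `providerOptions.gateway` reaches the request body unchanged. That pass-through can be sketched with a hypothetical `buildRequestBody` helper (an illustration, not the SDK's internal code):

```typescript
// Hypothetical sketch of the pass-through the tests assert: provider
// options are forwarded verbatim, and omitted entirely when absent.
interface GatewayRequestBody {
  prompt: unknown;
  providerOptions?: Record<string, Record<string, unknown>>;
}

function buildRequestBody(
  prompt: unknown,
  providerOptions?: Record<string, Record<string, unknown>>,
): GatewayRequestBody {
  return providerOptions === undefined
    ? { prompt }
    : { prompt, providerOptions };
}
```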

packages/gateway/src/gateway-provider-options.ts

Lines changed: 6 additions & 0 deletions
```diff
@@ -59,6 +59,12 @@ const gatewayProviderOptions = lazySchema(() =>
      * used.
      */
     zeroDataRetention: z.boolean().optional(),
+    /**
+     * Whether to filter by only providers that do not train on prompt data.
+     * When enabled, only providers that have agreements with Vercel AI Gateway
+     * to not use prompts for model training will be used.
+     */
+    disallowPromptTraining: z.boolean().optional(),
     /**
      * Per-provider timeouts for BYOK credentials in milliseconds.
      * Controls how long to wait for a provider to start responding
```
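Because both flags are declared as `z.boolean().optional()`, the schema accepts their absence but rejects non-boolean values. A dependency-free approximation of that check (a sketch only; the package's actual validation is zod-based via `lazySchema`):

```typescript
// Approximates the optional-boolean validation the zod schema performs
// for the two gateway routing flags: absent is fine, any present value
// must be a boolean.
function validateGatewayBooleanOptions(input: Record<string, unknown>): void {
  for (const key of ['zeroDataRetention', 'disallowPromptTraining'] as const) {
    const value = input[key];
    if (value !== undefined && typeof value !== 'boolean') {
      throw new TypeError(`${key} must be a boolean when provided`);
    }
  }
}
```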
