
feat(provider): add new top-level reasoning parameter to spec and support it in generateText and streamText#13553

Merged
felixarntz merged 18 commits into main from
fa/v7-top-level-reasoning
Mar 19, 2026

Conversation


@felixarntz felixarntz commented Mar 17, 2026

Background

Reasoning/thinking configuration has historically been handled entirely via providerOptions, requiring provider-specific knowledge from callers and making it impossible to write portable reasoning code.

Summary

Adds a top-level reasoning parameter to generateText and streamText (and the underlying LanguageModelV4 call options spec), as proposed in #12516. The parameter accepts a flat enum — 'provider-default' | 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh' — aligned with the OpenAI/OpenRouter convention. For the v4 spec, 'provider-default' is omitted as that can be resolved at the generateText and streamText level by simply omitting the value.
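As a sketch of that resolution step, both an omitted value and an explicit 'provider-default' can collapse to undefined before the call reaches the spec (type and function names here are illustrative, not the SDK's actual internals):

```typescript
// The flat reasoning enum, as exposed on generateText/streamText.
type ReasoningSetting =
  | 'provider-default'
  | 'none'
  | 'minimal'
  | 'low'
  | 'medium'
  | 'high'
  | 'xhigh';

// The narrower set the v4 spec needs if 'provider-default' is resolved
// earlier, at the generateText/streamText level.
type SpecReasoning = Exclude<ReasoningSetting, 'provider-default'>;

// Hypothetical resolution step: an omitted value and an explicit
// 'provider-default' both become undefined, i.e. "let the provider decide".
function resolveSpecReasoning(
  reasoning?: ReasoningSetting,
): SpecReasoning | undefined {
  return reasoning === undefined || reasoning === 'provider-default'
    ? undefined
    : reasoning;
}
```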

Existing providerOptions for each provider remain supported, both to help with a smooth transition path and to continue to support cases where a specific providerOptions behavior may be more granular than what the new top-level reasoning parameter allows.

If reasoning-related keys are present in providerOptions, they take full precedence and the top-level reasoning parameter is ignored, so existing code continues to work without changes.
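The precedence rule can be sketched like this (the function name, the reasoningKeys parameter, and the option shapes are assumptions for illustration, not the SDK's internals):

```typescript
type ProviderOptions = Record<string, Record<string, unknown>>;

// Hypothetical precedence check: if the caller already configured reasoning
// through providerOptions (e.g. a `thinking` key), the top-level parameter
// is ignored so existing code keeps its behavior.
function pickEffectiveReasoning(
  topLevelReasoning: string | undefined,
  providerOptions: ProviderOptions,
  providerId: string,
  reasoningKeys: readonly string[], // keys treated as reasoning config
): string | undefined {
  const options = providerOptions[providerId] ?? {};
  const hasProviderReasoning = reasoningKeys.some(key => key in options);
  return hasProviderReasoning ? undefined : topLevelReasoning;
}
```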

Two helper functions are added to provider-utils to make provider-side mapping straightforward:

  • mapReasoningToProviderEffort — maps the enum to a provider's native effort string, emitting a compatibility warning if coercion is needed.
  • mapReasoningToProviderBudget — maps the enum to a token budget by multiplying the model's max output tokens by a percentage, clamped between a min and max budget.
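In spirit, the two helpers behave roughly like the following sketch (the signatures, level ordering, and warning format are assumptions; the real helpers live in provider-utils):

```typescript
type Effort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';

const EFFORT_ORDER: readonly Effort[] = [
  'none', 'minimal', 'low', 'medium', 'high', 'xhigh',
];

// Effort mapping: pass supported levels through, coerce unsupported ones to
// the nearest supported level and record a compatibility warning.
function mapEffort(
  reasoning: Effort,
  supported: readonly Effort[],
  warnings: string[],
): Effort {
  if (supported.includes(reasoning)) return reasoning;
  const target = EFFORT_ORDER.indexOf(reasoning);
  const nearest = supported.reduce((best, level) =>
    Math.abs(EFFORT_ORDER.indexOf(level) - target) <
    Math.abs(EFFORT_ORDER.indexOf(best) - target)
      ? level
      : best,
  );
  warnings.push(`reasoning '${reasoning}' coerced to '${nearest}'`);
  return nearest;
}

// Budget mapping: a percentage of the model's max output tokens, clamped
// into a provider-specific [minBudget, maxBudget] range.
function mapBudget(
  maxOutputTokens: number,
  percentage: number,
  minBudget: number,
  maxBudget: number,
): number {
  const budget = Math.round(maxOutputTokens * percentage);
  return Math.min(maxBudget, Math.max(minBudget, budget));
}
```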

Provider migration status

Manual Verification

I ran all the relevant updated examples for each migrated provider, verifying they still work as before.

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Future Work

  • Migrate remaining providers (potentially via this PR, or in a follow up PR)
  • Consider deprecating use of certain providerOptions values that provide no benefit over the new top-level reasoning parameter

Related Issues

Fixes #12516

@tigent tigent bot added the following labels on Mar 17, 2026: ai/core, ai/provider, documentation, feature, provider/amazon-bedrock, provider/anthropic, provider/google, provider/openai

| Value | Behavior |
| -------------------- | --------------------------------------------------------------------------- |
| `'provider-default'` | Explicitly use the provider's default reasoning behavior (same as omitting) |
Collaborator


(default when omitted) - then the 1st sentence below can be removed

Comment thread content/docs/03-ai-sdk-core/26-reasoning.mdx Outdated
Comment thread examples/ai-functions/src/generate-text/amazon/bedrock-reasoning.ts
Comment thread packages/ai/src/generate-text/generate-text.test.ts Outdated
Comment thread packages/ai/src/prompt/call-settings.ts Outdated
Comment on lines +20 to +23
'abortSignal' | 'headers' | 'maxRetries' | 'reasoning'
> & {
reasoning?: LanguageModelV4CallOptions['reasoning'];
} {
Collaborator


This is strange. With my suggestion it could be removed.

Collaborator Author


I worked on adjusting this, however we still need it if we want the v4 spec to always expect a value here (i.e. not support undefined); because CallSettings will have reasoning optional whereas LanguageModelV4CallOptions will have reasoning required.

Alternatively, if we do support undefined in the v4 spec for reasoning, it would be a bit odd because undefined would be the same as passing 'provider-default'.

That was why I initially left the 'provider-default' value off in the spec. I agree supporting it makes sense, but then not sure about undefined / no value. WDYT?

Collaborator


that is where you would set the default afaik

Collaborator Author


Now that we fully align reasoning in language model V4 with CallSettings (including making the parameter optional), this workaround is no longer needed.

Comment thread packages/ai/src/prompt/prepare-call-settings.ts Outdated
Comment thread packages/provider/src/language-model/v4/language-model-v4-call-options.ts Outdated
@felixarntz
Collaborator Author

@lgrammel Updated the language model v4 spec to also include provider-default in 8fea1bb#diff-27fe0f538da8213755c31c428d3f3afd26fcd7698d2d134492dd07ee2aa69155, based on our earlier discussion.

Going to merge this once CI has passed, then I'll follow up with the PRs for the other relevant providers.

@felixarntz felixarntz merged commit 3887c70 into main Mar 19, 2026
18 checks passed
@felixarntz felixarntz deleted the fa/v7-top-level-reasoning branch March 19, 2026 16:43

vercel-ai-sdk bot commented Mar 19, 2026

🚀 Published in:

Package Version
ai 7.0.0-beta.27
@ai-sdk/alibaba 2.0.0-beta.10
@ai-sdk/amazon-bedrock 5.0.0-beta.10
@ai-sdk/angular 3.0.0-beta.27
@ai-sdk/anthropic 4.0.0-beta.10
@ai-sdk/assemblyai 3.0.0-beta.7
@ai-sdk/azure 4.0.0-beta.13
@ai-sdk/baseten 2.0.0-beta.8
@ai-sdk/black-forest-labs 2.0.0-beta.6
@ai-sdk/bytedance 2.0.0-beta.6
@ai-sdk/cerebras 3.0.0-beta.8
@ai-sdk/cohere 4.0.0-beta.6
@ai-sdk/deepgram 3.0.0-beta.6
@ai-sdk/deepinfra 3.0.0-beta.8
@ai-sdk/deepseek 3.0.0-beta.7
@ai-sdk/devtools 1.0.0-beta.4
@ai-sdk/elevenlabs 3.0.0-beta.6
@ai-sdk/fal 3.0.0-beta.6
@ai-sdk/fireworks 3.0.0-beta.8
@ai-sdk/gateway 4.0.0-beta.17
@ai-sdk/gladia 3.0.0-beta.6
@ai-sdk/google 4.0.0-beta.14
@ai-sdk/google-vertex 5.0.0-beta.19
@ai-sdk/groq 4.0.0-beta.7
@ai-sdk/huggingface 2.0.0-beta.8
@ai-sdk/hume 3.0.0-beta.6
@ai-sdk/klingai 4.0.0-beta.7
@ai-sdk/langchain 3.0.0-beta.27
@ai-sdk/llamaindex 3.0.0-beta.27
@ai-sdk/lmnt 3.0.0-beta.6
@ai-sdk/luma 3.0.0-beta.6
@ai-sdk/mcp 2.0.0-beta.10
@ai-sdk/mistral 4.0.0-beta.6
@ai-sdk/moonshotai 3.0.0-beta.8
@ai-sdk/open-responses 2.0.0-beta.7
@ai-sdk/openai 4.0.0-beta.13
@ai-sdk/openai-compatible 3.0.0-beta.8
@ai-sdk/perplexity 4.0.0-beta.7
@ai-sdk/prodia 2.0.0-beta.8
@ai-sdk/provider 4.0.0-beta.4
@ai-sdk/provider-utils 5.0.0-beta.6
@ai-sdk/react 4.0.0-beta.27
@ai-sdk/replicate 3.0.0-beta.7
@ai-sdk/revai 3.0.0-beta.7
@ai-sdk/rsc 3.0.0-beta.28
@ai-sdk/svelte 5.0.0-beta.27
@ai-sdk/togetherai 3.0.0-beta.8
@ai-sdk/valibot 3.0.0-beta.6
@ai-sdk/vercel 3.0.0-beta.8
@ai-sdk/vue 4.0.0-beta.27
@ai-sdk/xai 4.0.0-beta.14

felixarntz added a commit that referenced this pull request Mar 20, 2026
(#13648)

## Background

The new top-level `reasoning` parameter was added to the AI SDK spec and
core in #13553. Providers need to be migrated to translate this
parameter into their native reasoning/thinking configuration.

## Summary

Migrates 7 providers to support the new top-level `reasoning` parameter:

- **deepseek**: Maps `reasoning` to existing thinking support (already
had infrastructure, just needed wiring)
- **groq**: Maps `reasoning` to `reasoning_effort` in provider options
- **xai**: Maps `reasoning` to `reasoning_effort` for both chat and
responses models
- **openai-compatible**: Maps `reasoning` to `reasoning_effort` in
provider options
- **open-responses**: Maps `reasoning` to `reasoning.effort` in the
responses format
- **alibaba**: Maps `reasoning` to `enable_thinking` + `thinking_budget`
via token budget calculation
- **cohere**: Maps `reasoning` to `thinking.type` +
`thinking.token_budget` via token budget calculation
- **fireworks**: Uses **openai-compatible**
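For the effort-style providers above, the wiring amounts to something like this sketch (the function and its precedence handling are illustrative; only the `reasoning_effort` field name comes from the list above):

```typescript
// Hypothetical request-builder fragment: translate the top-level setting
// into the provider's native `reasoning_effort` field, letting an explicit
// providerOptions value win if the caller set one.
function buildReasoningArgs(
  topLevelReasoning: string | undefined,
  nativeEffort: string | undefined, // from providerOptions, takes precedence
): { reasoning_effort?: string } {
  const effort = nativeEffort ?? topLevelReasoning;
  return effort === undefined ? {} : { reasoning_effort: effort };
}
```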

Together with #13649, this completes the work on #12516.

## Manual Verification

Relevant examples were updated or added, then run for verification.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

See #12516.
felixarntz added a commit that referenced this pull request Mar 20, 2026
…rameter (#13649)

## Background

A new top-level `reasoning` parameter was added to the AI SDK spec in
#13553 and is supported in `generateText`/`streamText`. Providers that
don't natively support reasoning configuration need to emit an
unsupported warning when a custom reasoning value is passed, rather than
silently ignoring it.

## Summary

- Added unsupported-feature warnings to `perplexity`, `mistral`, and
`prodia` providers when `isCustomReasoning(reasoning)` returns `true`
- Added documentation to `architecture/provider-abstraction.md`
explaining how providers should handle the `reasoning` parameter (effort
mapping, budget mapping, or unsupported warning).

Together with #13648, this completes the work on #12516.

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

See #12516.
@gr2m gr2m mentioned this pull request Apr 9, 2026
felixarntz added a commit that referenced this pull request Apr 10, 2026
## Summary

Fixes the changesets for the following v7 major PRs after the fact to
mark them as `major`, for proper referencing in future changelog / docs:
- #13352 
- #13816
- #12880 
- #13553
- #13971
- #14150


Development

Successfully merging this pull request may close these issues.

Make reasoning / thinking configuration part of top-level spec
