Comparing changes
base repository: vercel/ai
base: ai@6.0.84
head repository: vercel/ai
compare: ai@6.0.85
- 11 commits
- 95 files changed
- 3 contributors
Commits on Feb 12, 2026
refactor(provider/anthropic): finish fixture migration for doGenerate tests (#12511)

## background

anthropic tests were partially migrated to the fixture-based pattern - doStream and some doGenerate tests already used `prepareJsonFixtureResponse` / `prepareChunksFixtureResponse`, but 3 `prepareJsonResponse` helper definitions (55 total calls) still remained

## summary

- record `anthropic-text.json` and `anthropic-text.chunks.txt` fixtures from the real API (`claude-sonnet-4-5-20250929`)
- replace 23 `prepareJsonResponse({})` calls (tests that only check request body/headers) with `prepareJsonFixtureResponse('anthropic-text')`
- inline `server.urls` assignments for 10 main doGenerate tests that need specific response content (reasoning, usage, citations, tool calls, etc.)
- inline `server.urls` assignments for 8 web search tests and delete the scoped helper
- inline `server.urls` assignments for 4 code execution tests and delete the scoped helper
- remove the main `prepareJsonResponse` function definition and the unused `Citation`/`JSONObject` imports
Commit: 321a0bc
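As a rough sketch of the fixture-based pattern these migrations target: instead of hand-constructing a fake API response inline, the test loads a recorded response from `__fixtures__/`. The helper name matches the one referenced above, but its signature and the fixture contents here are illustrative assumptions, not the repo's actual implementation.

```typescript
import { mkdtempSync, readFileSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Stand-in fixtures directory; in the real tests this is a checked-in
// __fixtures__/ folder next to the test file.
const fixturesDir = mkdtempSync(join(tmpdir(), 'fixtures-'));

// Hypothetical sketch of a fixture-loading helper: read a recorded JSON
// response by name instead of building mock data inline.
function prepareJsonFixtureResponse(name: string): unknown {
  const raw = readFileSync(join(fixturesDir, `${name}.json`), 'utf8');
  return JSON.parse(raw);
}

// Simulate a recorded fixture (a trimmed real API response would live here).
writeFileSync(
  join(fixturesDir, 'anthropic-text.json'),
  JSON.stringify({ content: [{ type: 'text', text: 'Hello' }] }),
);

const body = prepareJsonFixtureResponse('anthropic-text');
```

A test that only checks the request body or headers can then reuse the same fixture for every case, which is what the 23 `prepareJsonResponse({})` replacements above do.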
refactor(provider/openai): migrate all tests to fixture pattern (#12514)
## background

openai provider tests used inline `prepareJsonResponse` / `prepareStreamResponse` helpers that constructed fake API responses. this made tests fragile and inconsistent with the fixture-based pattern used by other migrated providers

## summary

- record real API fixtures for all 6 openai test files (chat, responses, completion, embedding, transcription, image)
- replace all `prepareJsonResponse` / `prepareStreamResponse` helpers with `prepareJsonFixtureResponse` / `prepareChunksFixtureResponse` that read from `__fixtures__/` files
- inline `server.urls` for tests that check specific response content (finish reasons, logprobs, usage overrides, etc.)
- trim large response data per conventions (embedding vectors to 5 values, image base64 to ~100 chars, transcription words to 5)
- fix missing `.filter(line => line.trim().length > 0)` in chat `prepareChunksFixtureResponse` to match the reference pattern
- zero `prepareJsonResponse` or `prepareStreamResponse` calls remaining in the openai package
Commit: 55c969c
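The `.filter(line => line.trim().length > 0)` fix mentioned above can be illustrated with a minimal sketch; the chunk format here is an assumption for illustration, not the actual fixture contents.

```typescript
// Minimal sketch of replaying a recorded .chunks.txt stream fixture.
// Recorded files can contain blank separator or trailing lines; without the
// filter, those empty strings would be replayed as malformed stream chunks.
const fixture = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  '',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  '',
].join('\n');

// Split into individual chunks and drop blank lines before replaying.
const chunks = fixture
  .split('\n')
  .filter(line => line.trim().length > 0);
```

Only the two `data:` lines survive the filter, so the replayed stream matches what the real API would have sent.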
fix(provider/xai): handle inconsistent cached token reporting (#12485)
## background

xAI's token reporting is inconsistent across models. most models report `prompt_tokens`/`input_tokens` inclusive of cached tokens (like OpenAI), but some models (e.g. `grok-4-1-fast-non-reasoning`) report them exclusive of cached tokens, where `cached_tokens > prompt_tokens`

## summary

- detect which reporting style xAI is using based on whether `cached_tokens <= prompt_tokens`
- when inclusive (normal): subtract cached from prompt to get noCache (OpenAI pattern)
- when exclusive (anomalous): prompt tokens already represent noCache, add cached for total (Anthropic pattern)
- applies to both chat completions and responses APIs
- add unit tests for the non-inclusive reporting edge case
- add responses usage test file

## verification

<details>
<summary>gateway bug case (cached > prompt)</summary>

```
before: total=4142, noCache=-186, cacheRead=4328
after:  total=8470, noCache=4142, cacheRead=4328
```

</details>

<details>
<summary>normal case (cached <= prompt)</summary>

```
raw: input_tokens: 12, cached_tokens: 3
sdk: noCache: 9, cacheRead: 3, total: 12
```

</details>

## checklist

- [x] tests have been added / updated (for bug fixes / features)
- [ ] documentation has been added / updated (for bug fixes / features)
- [x] a _patch_ changeset for relevant packages has been added (run `pnpm changeset` in root)
- [x] i have reviewed this pull request (self-review)
Commit: 7ccb902
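The normalization described above can be sketched as follows. This is not the SDK's actual code; the field and function names are assumptions for illustration, but the arithmetic matches the verification numbers in the PR.

```typescript
interface RawUsage {
  promptTokens: number; // prompt_tokens / input_tokens as reported by xAI
  cachedTokens: number; // cached_tokens as reported by xAI
}

interface NormalizedUsage {
  noCache: number;
  cacheRead: number;
  total: number;
}

// Detect the reporting style from the relationship between the two counts,
// then normalize to { noCache, cacheRead, total }.
function normalizeXaiUsage({ promptTokens, cachedTokens }: RawUsage): NormalizedUsage {
  if (cachedTokens <= promptTokens) {
    // inclusive style (OpenAI pattern): prompt already counts cached tokens,
    // so subtract cached from prompt to get the uncached portion
    return {
      noCache: promptTokens - cachedTokens,
      cacheRead: cachedTokens,
      total: promptTokens,
    };
  }
  // exclusive style (Anthropic pattern): prompt tokens already represent the
  // uncached portion, so add cached tokens to get the total
  return {
    noCache: promptTokens,
    cacheRead: cachedTokens,
    total: promptTokens + cachedTokens,
  };
}
```

For the gateway bug case above (prompt=4142, cached=4328) this yields total=8470, noCache=4142, cacheRead=4328; for the normal case (prompt=12, cached=3) it yields noCache=9, cacheRead=3, total=12.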
This PR was opened by the [Changesets release](https://github.com/changesets/action) GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine, whenever you add more changesets to main, this PR will be updated.

# Releases

## @ai-sdk/xai@3.0.56

### Patch Changes

- 7ccb902: fix(provider/xai): handle inconsistent cached token reporting

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
vercel-ai-sdk[bot] authored Feb 12, 2026
Commit: fabdb4d
test(provider/google-vertex): migrate to fixture-based tests (#12497)
## background

google-vertex embedding and image model tests used inline `prepareJsonResponse` helpers with hand-constructed mock data. this normalizes them to the fixture-based pattern per #12270.

## summary

- record real API fixtures from Vertex AI (text-embedding-005 for embeddings, imagen-4.0-generate-001 for images)
- trim embedding vectors to 5 values per vector, trim image base64 to 100 chars
- replace all `prepareJsonResponse` with `prepareJsonFixtureResponse` loading from `__fixtures__/`
- convert all `toStrictEqual` assertions to `toMatchInlineSnapshot()`
- add full result snapshot tests for both embedding and image models
- add standalone response headers and response metadata tests
- no tests lost (embedding: 11 -> 13, image: 22 -> 23)
Commit: 0a72ae4
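The trimming conventions mentioned above (5 values per embedding vector, ~100 chars of image base64) could look like this minimal sketch; the helper names are hypothetical, introduced only to make the convention concrete.

```typescript
// Hypothetical helpers for trimming recorded fixtures so they stay small
// while preserving the real response's shape.
function trimEmbedding(values: number[], keep = 5): number[] {
  return values.slice(0, keep);
}

function trimBase64(data: string, keep = 100): string {
  return data.slice(0, keep);
}

// A 768-dimension Vertex AI embedding shrinks to its first 5 values, and a
// multi-kilobyte image payload to its first 100 base64 characters.
const trimmedVector = trimEmbedding([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]);
const trimmedImage = trimBase64('A'.repeat(5000));
```

The point of trimming is that snapshot assertions still exercise the real response structure without committing megabytes of recorded data.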
refactor(provider/deepgram): migrate transcription tests to fixture pattern (#12517)

## background

part of the ongoing effort to normalize provider tests to use real API response fixtures instead of inline mock data (#12270)

## summary

- record real transcription fixture from deepgram nova-3 API with language detection enabled
- replace inline `prepareJsonResponse` helper with `prepareJsonFixtureResponse` that reads from `__fixtures__/deepgram-transcription.json`
- use `beforeEach` with fixture for standard tests, inline `server.urls` for edge cases (custom detected language, undefined language)
- speech model test unchanged (uses `prepareAudioResponse` for binary audio data, not part of fixture migration scope)
Commit: bcf4d5d
refactor(provider/elevenlabs): migrate transcription tests to fixture pattern (#12526)

## background

part of the test fixture migration effort (#12270) - replacing inline `prepareJsonResponse` helpers with real API response fixtures

## summary

- record real transcription fixture from elevenlabs `scribe_v1` API
- replace inline `prepareJsonResponse` with `prepareJsonFixtureResponse` reading from `__fixtures__/`
- organize tests into describe blocks (transcription, response headers, response metadata, no additional formats)
- use `toMatchSnapshot()` for response assertions, `toMatchInlineSnapshot()` for other assertions
- speech tests left as-is (binary `prepareAudioResponse` is not part of fixture migration scope)
Commit: d695488
Commits on Feb 13, 2026
refactor(provider/fal): migrate transcription tests to fixture-based pattern (#12535)

## background

fal transcription tests used an inline `prepareJsonResponse` helper that constructed fake response data directly in the test file - this makes tests harder to maintain and doesn't reflect real API responses

## summary

- record 2 real API response fixtures from fal wizper model (queue submit, transcription result)
- replace `prepareJsonResponse` with `prepareJsonFixtureResponse` that reads from fixture files
- organize tests into describe blocks (`transcription`, `response headers`, `response metadata`)
- use `toMatchSnapshot()` for response assertions
Commit: 03d9b27
refactor(provider/revai): migrate tests to fixture-based pattern (#12528)

## background

revai tests used an inline `prepareJsonResponse` helper that constructed fake response data directly in the test file - this makes tests harder to maintain and doesn't reflect real API responses

## summary

- record 3 real API response fixtures from rev.ai (job submit, job status, transcript)
- replace `prepareJsonResponse` with `prepareJsonFixtureResponse` that reads from fixture files
- organize tests into describe blocks (`transcription`, `response headers`, `response metadata`)
- use `toMatchSnapshot()` for response assertions
Commit: e5dc2ba
fix (provider/gateway): image/video error handler (#12506)
# Add Gateway Timeout Error Handling Examples

## Background

Proper error handling for timeout scenarios is crucial for applications using the AI Gateway. This PR improves the error handling for timeout situations and adds examples demonstrating how these errors are handled.

## Summary

- Fixed error handling in Gateway models by making `asGatewayError` async in both image and video model implementations
- Added three new examples demonstrating timeout error handling for:
  - Image generation (`generate-image/gateway-timeout.ts`)
  - Video generation (`generate-video/gateway-timeout.ts`)
  - Text streaming (`stream-text/gateway-timeout.ts`)

Each example uses undici with an extremely short timeout (1ms) to deliberately trigger timeout errors, showing how the Gateway SDK catches and provides helpful error messages with troubleshooting guidance.

## Manual Verification

Tested each example by running them with the AI Gateway API key set. Confirmed that they properly trigger timeout errors and display the expected error information including the original error cause.

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [ ] I have reviewed this pull request (self-review)
Commit: e858654
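The `asGatewayError` change can be sketched roughly as follows; `GatewayError`, `callModel`, and `generateImage` are illustrative stand-ins rather than the SDK's actual implementation, and the thrown error message is a simulated timeout.

```typescript
class GatewayError extends Error {}

// Stand-in for the SDK's asGatewayError, now async: mapping a low-level
// failure (e.g. an undici timeout) to a gateway error may itself await work.
async function asGatewayError(cause: unknown): Promise<GatewayError> {
  const error = new GatewayError(`gateway request failed: ${String(cause)}`);
  (error as Error & { cause?: unknown }).cause = cause;
  return error;
}

// Simulated model call that times out, as in the 1ms-timeout examples.
async function callModel(): Promise<string> {
  throw new Error('UND_ERR_HEADERS_TIMEOUT');
}

async function generateImage(): Promise<string> {
  try {
    return await callModel();
  } catch (cause) {
    // The fix: await the async mapper so callers receive a GatewayError
    // (with the original cause attached), not a pending Promise thrown
    // as the rejection value.
    throw await asGatewayError(cause);
  }
}
```

The `throw await` is the essential part: without awaiting, the rejection value would be a `Promise<GatewayError>` rather than the error itself, so `instanceof GatewayError` checks in caller code would fail.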
This PR was opened by the [Changesets release](https://github.com/changesets/action) GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine, whenever you add more changesets to main, this PR will be updated.

# Releases

## ai@6.0.85

### Patch Changes

- Updated dependencies [e858654]
  - @ai-sdk/gateway@3.0.45

## @ai-sdk/angular@2.0.86

### Patch Changes

- ai@6.0.85

## @ai-sdk/gateway@3.0.45

### Patch Changes

- e858654: fix (provider/gateway): Fixed error handling in Gateway models by making asGatewayError async in both image and video model implementations.

## @ai-sdk/langchain@2.0.91

### Patch Changes

- ai@6.0.85

## @ai-sdk/llamaindex@2.0.85

### Patch Changes

- ai@6.0.85

## @ai-sdk/react@3.0.87

### Patch Changes

- ai@6.0.85

## @ai-sdk/rsc@2.0.85

### Patch Changes

- ai@6.0.85

## @ai-sdk/svelte@4.0.85

### Patch Changes

- ai@6.0.85

## @ai-sdk/vue@3.0.85

### Patch Changes

- ai@6.0.85

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
vercel-ai-sdk[bot] authored Feb 13, 2026
Commit: 0be03a4
You can try running this command locally to see the comparison on your machine:
git diff ai@6.0.84...ai@6.0.85