• SSH into running Vercel Sandboxes with the CLI

    You can now open secure, interactive shell sessions to running Sandboxes with the Vercel Sandbox CLI.

    pnpm i -g sandbox
    sandbox login
    sandbox create # If you don't have a running Sandbox to SSH into
    sandbox ssh <sandbox-id>

    Note: While you’re connected, the Sandbox timeout is automatically extended in 5-minute increments, up to a total of 5 hours, to help avoid unexpected disconnections.

    Learn more in the Sandbox CLI docs.

  • Enhanced Observability for Hono and Express projects

    Users can opt in to an experimental build mode for Hono and Express projects.

    This mode lets you filter logs by route, similar to Next.js, and it updates the build pipeline with better module resolution.

    • Relative imports no longer require file extensions

    • TypeScript path aliases are supported

    • Improved ESM and CommonJS interoperability

    To enable it, set VERCEL_EXPERIMENTAL_BACKENDS=1 in your project’s environment variables.
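
    For instance, with the experimental mode enabled, a Hono route module could rely on both extension-less relative imports and tsconfig path aliases. This is a rough sketch: the ./lib/users and @/middleware/logger modules below are hypothetical placeholders.

    import { Hono } from 'hono';
    // Relative import without a file extension (hypothetical module)
    import { getUser } from './lib/users';
    // TypeScript path alias resolved from tsconfig.json (hypothetical module)
    import { logger } from '@/middleware/logger';

    const app = new Hono();

    app.use('*', logger);
    app.get('/users/:id', (c) => c.json(getUser(c.req.param('id'))));

    export default app;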

    Jeff See

  • OpenResponses API now supported on Vercel AI Gateway

    Vercel AI Gateway is a day 0 launch partner for the OpenResponses API, an open-source specification from OpenAI for multi-provider AI interactions.

    OpenResponses provides a unified interface for text generation, streaming, tool calling, image input, and reasoning across providers.

    AI Gateway supports OpenResponses for:

    • Text generation: Send messages and receive responses from any supported model.

    • Streaming: Receive tokens as they're generated via server-sent events.

    • Tool calling: Define functions that models can invoke with structured arguments.

    • Image input: Send images alongside text for vision-capable models.

    • Reasoning: Enable extended thinking with configurable effort levels.

    • Provider fallbacks: Configure automatic fallback chains across models and providers.

    Use OpenResponses with your AI Gateway key, and switch models across providers by changing the model string.

    const response = await fetch('https://ai-gateway.vercel.sh/v1/responses', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.VERCEL_AI_GATEWAY_KEY}`,
      },
      body: JSON.stringify({
        model: 'anthropic/claude-sonnet-4.5',
        input: [
          {
            type: 'message',
            role: 'user',
            content: 'Explain quantum computing in one sentence.',
          },
        ],
      }),
    });

    You can also use OpenResponses for more complex cases, like tool calling.

    const response = await fetch('https://ai-gateway.vercel.sh/v1/responses', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.VERCEL_AI_GATEWAY_KEY}`,
      },
      body: JSON.stringify({
        model: 'zai/glm-4.7',
        input: [{ type: 'message', role: 'user', content: 'What is the weather in SF?' }],
        tools: [
          {
            type: 'function',
            name: 'get_weather',
            description: 'Get current weather for a location',
            parameters: {
              type: 'object',
              properties: { location: { type: 'string' } },
              required: ['location'],
            },
          },
        ],
      }),
    });
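
    Streaming uses the same endpoint. As a rough sketch, and assuming the spec mirrors the OpenAI Responses API it is derived from (a stream flag in the request and a server-sent event response body), a streaming request could look like this:

    const response = await fetch('https://ai-gateway.vercel.sh/v1/responses', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.VERCEL_AI_GATEWAY_KEY}`,
      },
      body: JSON.stringify({
        model: 'anthropic/claude-sonnet-4.5',
        input: [
          { type: 'message', role: 'user', content: 'Write a haiku about the ocean.' },
        ],
        // Assumption: streaming is requested with `stream: true`, matching the
        // OpenAI Responses API format.
        stream: true,
      }),
    });

    // Read the server-sent event stream chunk by chunk as tokens arrive.
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      process.stdout.write(decoder.decode(value));
    }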

    Read the OpenResponses API documentation or view the specification.

  • Node.js runtime now defaults to version 24 for Vercel Sandbox

    Vercel Sandbox for Node.js now uses Node.js 24 by default, keeping the Sandbox runtime aligned with the latest Node.js features and performance improvements.

    If you don’t explicitly configure a runtime, Sandbox will use Node.js 24 (as shown below).

    main.ts
    import { Sandbox } from "@vercel/sandbox";

    async function main() {
      const sandbox = await Sandbox.create();
      const version = await sandbox.runCommand("node", ["-v"]);
      console.log(`Node.js version: ${await version.stdout()}`);
    }

    main().catch(console.error);
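
    If you need a specific version rather than the new default, you can pass a runtime option to Sandbox.create. The value below is illustrative; the currently supported runtime identifiers are listed in the Sandbox documentation.

    import { Sandbox } from "@vercel/sandbox";

    // Pin a runtime explicitly instead of relying on the Node.js 24 default.
    // "node22" is illustrative; check the Sandbox docs for supported values.
    const sandbox = await Sandbox.create({ runtime: "node22" });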

    Read the Sandbox documentation to learn more.

    Andy Waller

  • Access Perplexity Web Search on Vercel AI Gateway with any model

    You can now give any model the ability to search the web using Perplexity through Vercel's AI Gateway.

    AI Gateway supports Perplexity Search as a universal web search tool: unlike native search tools that are exclusive to specific providers, it can be added to any model, regardless of provider.

    To use Perplexity Search with the AI SDK, import gateway from @ai-sdk/gateway and pass gateway.tools.perplexitySearch() as the perplexity_search entry in the tools parameter of any model call.

    import { generateText } from "ai"
    import { gateway } from "@ai-sdk/gateway"

    const result = await generateText({
      model: "openai/gpt-5.2",
      tools: {
        perplexity_search: gateway.tools.perplexitySearch(),
      },
      prompt: "What changed in Next.js this week?",
    })

    console.log(result.text)

    Some example use cases include:

    Models without native search: Enable web search on models like zai/glm-4.7, or any other model whose provider doesn't expose a built-in search tool.

    import { streamText } from "ai"
    import { gateway } from "@ai-sdk/gateway"

    const result = await streamText({
      model: "zai/glm-4.7",
      prompt:
        "What are the latest AI safety guidelines " +
        "published by major tech companies?",
      tools: {
        perplexity_search: gateway.tools.perplexitySearch({
          maxResults: 5,
          searchRecencyFilter: "month",
          searchLanguageFilter: ["en"],
        }),
      },
    })

    Developer tooling and CI assistants: Get current package versions, recently merged PRs, release notes, or docs updates.

    import { generateText } from "ai"
    import { gateway } from "@ai-sdk/gateway"

    const { text } = await generateText({
      model: "minimax/minimax-m2.1",
      prompt:
        "What breaking changes were introduced in " +
        "Next.js 16.1? Check the latest release notes " +
        "and migration guide.",
      tools: {
        perplexity_search: gateway.tools.perplexitySearch({
          maxResults: 5,
          searchDomainFilter: [
            "github.com",
            "nextjs.org",
            "vercel.com",
          ],
          searchRecencyFilter: "month",
        }),
      },
    })

    Consistency with fallbacks: Maintain search behavior across multiple providers without rewriting search logic.

    import { streamText } from "ai"
    import { gateway } from "@ai-sdk/gateway"

    const result = await streamText({
      model: "meta/llama-3.3-70b",
      prompt:
        "What are the latest critical CVEs disclosed " +
        "for Node.js in the past week?",
      tools: {
        perplexity_search: gateway.tools.perplexitySearch({
          maxResults: 5,
          searchDomainFilter: [
            "nodejs.org",
            "cve.mitre.org",
            "github.com",
          ],
        }),
      },
      providerOptions: {
        order: ["cerebras", "togetherai"],
      },
    })

    For more information, see the AI Gateway Perplexity Web Search docs.

  • GPT 5.2 Codex now available on Vercel AI Gateway

    You can now access GPT 5.2 Codex through Vercel's AI Gateway, with no other provider accounts required. GPT 5.2 Codex combines GPT 5.2's strength in professional knowledge work with GPT 5.1 Codex Max's agentic coding capabilities.

    GPT 5.2 Codex is better at long-running coding tasks than its predecessors and handles complex work like large refactors and migrations more reliably. The model has stronger vision performance for more accurate processing of screenshots and charts shared while coding. GPT 5.2 Codex also surpasses GPT 5.1 Codex Max in cyber capabilities, outperforming the previous model in OpenAI's Professional Capture-the-Flag (CTF) cybersecurity eval.

    To use GPT 5.2 Codex with the AI SDK, set the model to openai/gpt-5.2-codex:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.2-codex',
      prompt:
        `Take the attached prototypes, diagram, and reference screenshots
        to build a production app for customer analytics dashboards.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

    AI Gateway: Track top AI models by usage

    The AI Gateway model leaderboard ranks the most used models over time by total token volume across all traffic through the Gateway, and it updates regularly.

    View the leaderboard

  • AI Voice Elements

    Today we're releasing a brand-new set of components for AI Elements designed to work with the Transcription and Speech functions of the AI SDK, helping you build the next generation of voice agents, transcription services, and apps powered by natural language.

    Persona

    The Persona component displays an animated AI visual that responds to different conversational states. Built with Rive WebGL2, it provides smooth, high-performance animations for various AI interaction states including idle, listening, thinking, speaking, and asleep. The component supports multiple visual variants to match different design aesthetics.

    npx ai-elements@latest add persona

    Speech Input

    The SpeechInput component provides an easy-to-use interface for capturing voice input in your application. It uses the Web Speech API for real-time transcription in supported browsers (Chrome, Edge), and falls back to MediaRecorder with an external transcription service in browsers that don't support the Web Speech API (Firefox, Safari).

    npx ai-elements@latest add speech-input
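
    As a rough illustration of that fallback strategy, here is a sketch built only on standard browser APIs; it is not the component's actual source.

    // Illustrative sketch of the fallback described above, not the component's internals.
    const SpeechRecognitionImpl =
      (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

    if (SpeechRecognitionImpl) {
      // Chrome/Edge: real-time transcription with the Web Speech API
      const recognition = new SpeechRecognitionImpl();
      recognition.interimResults = true;
      recognition.onresult = (event: any) => {
        const latest = event.results[event.results.length - 1];
        console.log(latest[0].transcript);
      };
      recognition.start();
    } else {
      // Firefox/Safari: capture audio with MediaRecorder and send it to an
      // external transcription service instead
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      recorder.ondataavailable = (e) => {
        // upload e.data (a Blob) to your transcription endpoint here
      };
      recorder.start();
    }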

    Transcription

    The Transcription component provides a flexible render props interface for displaying audio transcripts with synchronized playback. It automatically highlights the current segment based on playback time and supports click-to-seek functionality for interactive navigation.

    npx ai-elements@latest add transcription

    Audio Player

    The AudioPlayer component provides a flexible and customizable audio playback interface built on top of media-chrome. It features a composable architecture that allows you to build audio experiences with custom controls, metadata display, and seamless integration with AI-generated audio content.

    npx ai-elements@latest add audio-player

    Microphone Selector

    The MicSelector component provides a flexible and composable interface for selecting microphone input devices. Built on shadcn/ui's Command and Popover components, it features automatic device detection, permission handling, dynamic device list updates, and intelligent device name parsing.

    npx ai-elements@latest add mic-selector

    Voice Selector

    The VoiceSelector component provides a flexible and composable interface for selecting AI voices. Built on shadcn/ui's Dialog and Command components, it features a searchable voice list with support for metadata display (gender, accent, age), grouping, and customizable layouts. The component includes a context provider for accessing voice selection state from any nested component.

    npx ai-elements@latest add voice-selector