  • Postgres connections now work through Sandbox firewall

    Vercel Sandbox can now connect to hosted Postgres databases, including Neon, Supabase, AWS RDS, Nile, and Prisma Postgres. To enable a connection, add the database host to your Sandbox's allowed domains.

    Background

    When SNI-based filtering is used with Vercel Sandbox, the sandbox firewall restricts outbound network access by checking the domain name during a connection's TLS handshake. This works seamlessly for HTTPS traffic, where the domain is visible at the start of the connection.

    Postgres, however, negotiates TLS differently. A Postgres client first opens a plain TCP connection and then upgrades to TLS. Because the domain isn't available when the firewall first needs it, Postgres connections through a standard domain-restricted Sandbox would fail.

    What changed

    The Sandbox firewall now adjusts for the Postgres TLS negotiation flow. It detects the protocol's startup sequence, waits for the TLS upgrade, and then applies your domain policy before forwarding the connection to the database. No changes are needed to your code or database configuration.
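    The startup sequence the firewall looks for is well defined: before any TLS bytes, a Postgres client sends an 8-byte SSLRequest message, an Int32 length of 8 followed by the Int32 magic code 80877103, per the Postgres wire protocol. As an illustrative sketch (not Vercel's actual implementation), a proxy could recognize that message like this:

```typescript
// Illustrative sketch, not Vercel's implementation: recognize the
// Postgres SSLRequest message that precedes the TLS upgrade.
// Per the Postgres wire protocol, it is 8 bytes: an Int32 length (8)
// followed by the Int32 magic code 80877103.
function isPostgresSSLRequest(firstBytes: Buffer): boolean {
  return (
    firstBytes.length >= 8 &&
    firstBytes.readInt32BE(0) === 8 &&
    firstBytes.readInt32BE(4) === 80877103
  );
}
```

    After spotting this message, a proxy can wait for the server's "S" reply and the client's TLS ClientHello, which finally carries the SNI hostname to check against the allowlist.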

    Connecting to a hosted database

    Here's a full example: create a Sandbox, install a Postgres client, lock down the network to only the database host, and run a query.

    import { Sandbox } from '@vercel/sandbox';

    const { PGHOST, PGUSER, PGPASSWORD, PGDATABASE } = process.env;
    const connectionString = `postgres://${PGUSER}:${PGPASSWORD}@${PGHOST}:5432/${PGDATABASE}?sslmode=require`;

    // Start with unrestricted network access to install dependencies.
    const sandbox = await Sandbox.create();
    await sandbox.runCommand({
      cmd: 'sudo',
      args: ['dnf', 'install', '-y', 'postgresql15'],
    });

    // Lock the sandbox down to only the database host before running untrusted code.
    await sandbox.updateNetworkPolicy({
      allowDomains: [PGHOST!],
    });

    const result = await sandbox.runCommand({
      cmd: 'psql',
      args: [connectionString, '-c', 'SELECT now();'],
    });
    console.log(await result.stdout());

    Important to know

    • TLS is required: Domain-based rules rely on the hostname being visible during the TLS handshake, so clients must connect with sslmode=require or higher. If your database doesn't support TLS, you can allow it by IP range instead. Most managed Postgres providers require TLS by default.

    • GSSAPI encryption is not supported: Clients using gssencmode=prefer will fall back to TLS automatically; gssencmode=require will not connect.

    • No silent downgrades: If a client uses sslmode=prefer and the database doesn't support TLS, the connection will fail rather than silently falling back to plain-text.
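    Because domain rules only apply when the client actually negotiates TLS, it can help to validate a connection string before handing it to a locked-down sandbox. A minimal sketch (the helper name is ours, not part of the SDK):

```typescript
// Check that a Postgres connection string requests TLS.
// "require", "verify-ca", and "verify-full" all trigger the TLS
// upgrade the firewall needs; "disable", "allow", and "prefer" may not.
function enforcesTLS(connectionString: string): boolean {
  const mode = new URL(connectionString).searchParams.get('sslmode');
  return mode !== null && ['require', 'verify-ca', 'verify-full'].includes(mode);
}
```

    Running a check like this before `updateNetworkPolicy` turns a confusing mid-connection failure into an early, explicit error.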

    Learn more about the Sandbox firewall.

    Brandon Tuttle

  • Grok 4.3 on AI Gateway

    Grok 4.3 is now available on Vercel AI Gateway. The model has a 1M token context window and improvements in accuracy, tool calling, and instruction following.

    To use Grok 4.3, set model to xai/grok-4.3 in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'xai/grok-4.3',
      prompt: 'Analyze this dataset and summarize the key trends.',
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Custom tags available in beta on Vercel Sandbox

    As teams scale isolated environments for AI agents, code generation, or dev workflows, keeping track of which sandbox belongs to whom, and why, becomes critical. Custom tags allow you to organize, filter, and manage Vercel Sandboxes at scale. Each sandbox supports up to five tags.

    Organize by environment, team, or customer

    Tags are flexible by design. Use them to separate staging from production, attribute usage to specific teams, or isolate sandboxes per customer in multi-tenant platforms:

    const sandbox = await Sandbox.create({
      name: "my-sandbox",
      tags: { env: "staging" },
    });

    Update tags as context changes

    Promote a sandbox from staging to production, reassign ownership, or mark it for cleanup without recreating it:

    await sandbox.update({
      tags: { env: "production", team: "infra" },
    });

    Easily track your sandboxes

    Filter sandboxes by any tag to quickly surface the ones that matter. This is useful for dashboards, cleanup scripts, or routing logic that needs to find all sandboxes matching a specific environment or team:

    const productionSandboxes = await Sandbox.list({
      tags: { env: "production" },
    });
    console.log(
      "Production sandboxes:",
      productionSandboxes.sandboxes.map((s) => s.name),
    ); // my-sandbox

    Use cases

    • AI agents at scale: Tag sandboxes by session, user, or agent run to track which execution environment belongs to which workflow.

    • Multi-tenant platforms: Isolate and filter sandboxes per customer or workspace, making billing attribution and cleanup straightforward.

    • Team-level visibility: Attribute sandbox usage to specific teams for cost tracking or capacity planning.
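    For cleanup scripts and routing logic, the tag-filter semantics can also be mirrored locally, for example to pre-select candidates from an already-fetched list. A sketch under the assumption (consistent with the examples above, but worth verifying against the SDK docs) that a filter is a conjunction: every requested key must match, and extra tags on the sandbox are ignored:

```typescript
// Assumed tag-matching semantics: a sandbox matches a filter when every
// key in the filter is present with the same value; tags on the sandbox
// that are not in the filter are ignored.
type Tags = Record<string, string>;

function matchesTags(sandboxTags: Tags, filter: Tags): boolean {
  return Object.entries(filter).every(([key, value]) => sandboxTags[key] === value);
}
```

    A nightly job could then pass only the matching sandboxes to whatever stop or delete call your workflow uses.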

    This feature is in beta and requires upgrading to the beta SDK and CLI packages. Learn more in the documentation.

    Andy Waller

  • Vercel now supports the Pro plan in Stripe Projects

    You can now sign up for or upgrade to a Vercel Pro plan directly from Stripe Projects using shared payment tokens (SPTs). Agents and developers can manage plan changes programmatically from the Stripe CLI, without leaving their workflow.

    What’s new

    • Provision or upgrade to Vercel Pro directly from the Stripe CLI

    • Support for both upgrade and downgrade flows

    • Powered by shared payment tokens for secure, streamlined billing

    This builds on our Stripe Projects launch in developer preview by enabling end-to-end provisioning and billing in one place. Instead of switching between dashboards, you can now handle infrastructure setup and plan management directly from the terminal.

    Getting started

    If you’re already using Stripe Projects and have set up billing via stripe projects billing add, you can upgrade your Vercel plan from the CLI by running stripe projects add vercel/pro.

    If you are new to Stripe Projects, install the plugin and initialize your project:

    stripe plugin install projects
    stripe projects init my-app
    stripe projects add vercel/pro

    Tony Pan, Marc Brakken, Bhrigu Srivastava

  • Native Deployment Checks are now available

    You can now run lint and typecheck on every Vercel deployment, in parallel with the build. Native Deployment Checks are available to every team and join your existing Deployment Checks alongside GitHub and Marketplace integrations.

    Once you add a check from your project's Build and Deployment settings, Vercel runs the matching script from your package.json on each deployment, skipping the check if no matching script exists. You can mark a check as required to hold the deployment from production until it passes, and choose which environments each check runs on.
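    For example, a package.json exposing both scripts might look like the following (the eslint and tsc commands are illustrative; any command your scripts run will work):

```json
{
  "scripts": {
    "lint": "eslint .",
    "typecheck": "tsc --noEmit"
  }
}
```

    With these scripts present, both checks run in parallel with the build; if you later remove one of the scripts, the corresponding check is skipped rather than failed.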

    When a Native Deployment Check fails on a pull request, Vercel Agent investigates the failure and suggests a fix you can review and merge.


    Cody W, Jeffrey A, Shay C, Marcos G, William B

  • Hobby projects now default to 30-day deployment retention

    Starting April 29th, the maximum retention policy for Hobby plans will be capped at 30 days. Deployments outside your retention window will be automatically removed. This excludes your 10 most recent production deployments and any aliased deployments, which continue to be preserved regardless of retention settings.

    Pro and Enterprise plans are not affected.

    Learn more about Deployment Retention.

  • GPT-5.5 on AI Gateway

    GPT-5.5 is now available on Vercel AI Gateway.

    There are 2 variants: GPT-5.5 and GPT-5.5 Pro. Both models are tuned for long-running agentic work across coding, computer use, knowledge work, and scientific research, and are more token-efficient than the previous generation.

    GPT-5.5 is stronger at agentic coding and long-horizon work where the model needs to hold context across a large system and carry changes through the surrounding codebase. Paired with computer-use skills, it can operate real software and turn raw material into documents, spreadsheets, or slide presentations.

    GPT-5.5 Pro is built for demanding, multi-step work where response quality matters more than latency. Early testing shows gains in business, legal, education, data science, and technical research workflows that involve critiquing work over multiple passes and stress-testing arguments.

    To use GPT-5.5, set model to openai/gpt-5.5 or openai/gpt-5.5-pro in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.5', // or 'openai/gpt-5.5-pro'
      prompt: `Migrate our user settings page from REST to the new
        GraphQL schema, update the affected components and tests,
        and open a PR with a summary of the changes.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • DeepSeek V4 on AI Gateway

    DeepSeek V4 is now available on Vercel AI Gateway.

    There are 2 model variants: DeepSeek V4 Pro and DeepSeek V4 Flash. A 1M token context window is the default across both models.

    DeepSeek V4 Pro focuses on agentic coding, formal mathematical reasoning, and long-horizon workflows. It handles feature development, bug fixing, and refactoring across stacks, with tool use that works across harnesses like MCP workflows and agent frameworks. It also writes clear, well-structured long-form documents.

    DeepSeek V4 Flash performs close to V4 Pro on reasoning and holds up on simpler agent tasks, with a smaller parameter size for faster responses and lower API cost. It's a good fit for high-volume workloads and latency-sensitive use cases.

    To use DeepSeek V4, set model to deepseek/deepseek-v4-pro or deepseek/deepseek-v4-flash in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'deepseek/deepseek-v4-pro', // or 'deepseek/deepseek-v4-flash'
      prompt: `Audit this repository for unsafe concurrent access patterns,
        propose a refactor that introduces proper synchronization,
        and open the changes as a PR with a migration plan.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.