---
title: "Guide Documents > Utilization Cases > Vercel AI SDK"
---

import { Callout, Tabs } from "nextra/components";

import LocalSource from "../../../components/LocalSource";

## `toVercelTools()` function

<Tabs items={["@typia/vercel", "ILlmController", "IHttpLlmController", "HttpLlm.controller"]}> <Tabs.Tab>

```typescript
export function toVercelTools(props: {
  controllers: Array<ILlmController | IHttpLlmController>;
  prefix?: boolean | undefined;
}): Record<string, Tool>;
```

</Tabs.Tab> <Tabs.Tab> </Tabs.Tab> <Tabs.Tab> </Tabs.Tab> <Tabs.Tab> </Tabs.Tab> </Tabs>

Vercel AI SDK integration for typia.

`toVercelTools()` converts TypeScript classes or OpenAPI documents into a Vercel AI SDK `Record<string, Tool>` at once.

Every class method becomes a tool, JSDoc comments become tool descriptions, and TypeScript types become JSON schemas — all at compile time. For OpenAPI documents, every API endpoint is converted to a Vercel tool with schemas from the specification.

Lenient JSON parsing, type coercion, and validation feedback are all embedded automatically — the complete function calling harness that turns unreliable LLM output into 100% correct structured data.
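As a rough illustration of the coercion step, here is a hedged sketch (not `@typia/vercel` internals; names and logic are illustrative) of the kind of pre-validation nudge that turns near-miss LLM output into the expected type:

```typescript
// Hedged sketch: the kind of type coercion a function calling harness
// applies before validation. Illustrative only, not the library's code.
function coerceNumber(value: unknown): number | undefined {
  if (typeof value === "number") return value;
  // LLMs often emit numbers as strings, e.g. "25" instead of 25.
  if (typeof value === "string" && value.trim() !== "") {
    const parsed = Number(value);
    if (!Number.isNaN(parsed)) return parsed;
  }
  return undefined; // genuinely invalid: left for validation feedback
}
```

Values that cannot be coerced fall through to validation, whose feedback is described below.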

## Setup

```bash
npm install @typia/vercel ai
npm install typia
npx typia setup
```

## From TypeScript Class

<Tabs items={["Vercel AI Tools", "Calculator", "BbsArticleService", "IBbsArticle"]}> <Tabs.Tab>

```typescript
import { openai } from "@ai-sdk/openai";
import { toVercelTools } from "@typia/vercel";
import { generateText, Tool } from "ai";
import typia from "typia";

import { Calculator } from "./Calculator";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    typia.llm.controller<Calculator>("calculator", new Calculator()),
  ],
});

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "What is 10 + 5?",
  tools,
});
```

</Tabs.Tab> <Tabs.Tab> </Tabs.Tab> <Tabs.Tab> </Tabs.Tab> <Tabs.Tab> </Tabs.Tab> </Tabs>

Create controllers from TypeScript classes with `typia.llm.controller<Class>()`, and pass them to `toVercelTools()`.

- `controllers`: Array of controllers created via `typia.llm.controller<Class>()` or `HttpLlm.controller()`
- `prefix`: When `true` (the default), tool names are formatted as `{controllerName}_{methodName}`. Set to `false` to use bare method names.
<Callout type="warning">
**Type Restrictions**

Every method's parameter type must be a keyworded object type with static keys — not a primitive, array, or union. The return type must also be an object type or `void`. Primitive return types like `number` or `string` are not allowed; wrap them in an object (e.g., `{ value: number }`). See `typia.llm.application()` Restrictions for details.
</Callout>
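For instance, a minimal class that satisfies these restrictions might look like this (a hypothetical example, not taken from the typia test suite):

```typescript
// Hypothetical class obeying the restrictions above: each method takes a
// single object parameter with static keys and returns an object type.
class Calculator {
  /** Add two numbers. (JSDoc becomes the tool description.) */
  add(props: { x: number; y: number }): { value: number } {
    return { value: props.x + props.y };
  }

  // ❌ Not allowed as a tool method: primitive parameters and a
  // primitive return type.
  // add(x: number, y: number): number;
}
```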

## From OpenAPI Document

```typescript
import { toVercelTools } from "@typia/vercel";
import { HttpLlm } from "@typia/utils";
import { Tool } from "ai";

const tools: Record<string, Tool> = toVercelTools({
  controllers: [
    HttpLlm.controller({
      name: "shopping",
      document: await fetch(
        "https://shopping-be.wrtn.ai/editor/swagger.json",
      ).then((r) => r.json()),
      connection: {
        host: "https://shopping-be.wrtn.ai",
        headers: { Authorization: "Bearer ********" },
      },
    }),
  ],
});
```

Create controllers from OpenAPI documents with `HttpLlm.controller()`, and pass them to `toVercelTools()`.

- `name`: Controller name, used as the prefix for tool names
- `document`: Swagger/OpenAPI document (v2.0, v3.0, or v3.1)
- `connection`: HTTP connection info, including `host` and optional `headers`

## The Function Calling Harness

`toVercelTools()` embeds lenient JSON parsing, type coercion, and validation feedback in every tool — all automatically. When validation fails, the error is returned as text content with inline `// ❌` comments at each invalid property:

```jsonc
{
  "name": "John",
  "age": "twenty", // ❌ [{"path":"$input.age","expected":"number"}]
  "email": "not-an-email", // ❌ [{"path":"$input.email","expected":"string & Format<\"email\">"}]
  "hobbies": "reading" // ❌ [{"path":"$input.hobbies","expected":"Array<string>"}]
}
```

The LLM reads this feedback and self-corrects on the next turn.

In the AutoBe project (an AI-powered backend code generator by Wrtn Technologies), qwen3-coder-next achieved only a 6.75% raw function calling success rate on compiler AST types. With the complete harness, it reached 100% — across all four tested Qwen models.

If the harness can handle compiler ASTs, it can handle any type and any use case.

```typescript
// Compiler AST may be the hardest type structure possible
//
// Unlimited union types + unlimited depth + recursive references
export type IExpression =
  | IBooleanLiteral
  | INumericLiteral
  | IStringLiteral
  | IArrayLiteralExpression   // <- recursive (contains IExpression[])
  | IObjectLiteralExpression  // <- recursive (contains IExpression)
  | INullLiteral
  | IUndefinedKeyword
  | IIdentifier
  | IPropertyAccessExpression // <- recursive
  | IElementAccessExpression  // <- recursive
  | ITypeOfExpression         // <- recursive
  | IPrefixUnaryExpression    // <- recursive
  | IPostfixUnaryExpression   // <- recursive
  | IBinaryExpression         // <- recursive (left & right)
  | IArrowFunction            // <- recursive (body is IExpression)
  | ICallExpression           // <- recursive (args are IExpression[])
  | INewExpression            // <- recursive
  | IConditionalPredicate     // <- recursive (then & else branches)
  | ... // 30+ expression types total
```
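To make the recursion concrete, here is a tiny self-contained subset of such a union together with an evaluator — a sketch for illustration, not the actual AutoBe AST:

```typescript
// Illustrative two-member subset of a recursive expression union (not
// the AutoBe AST): a binary node nests IExpr on both sides, so valid
// values can be arbitrarily deep.
type IExpr =
  | { type: "literal"; value: number }
  | { type: "binary"; operator: "+" | "*"; left: IExpr; right: IExpr };

function evaluate(e: IExpr): number {
  switch (e.type) {
    case "literal":
      return e.value;
    case "binary": {
      const left = evaluate(e.left);
      const right = evaluate(e.right);
      return e.operator === "+" ? left + right : left * right;
    }
  }
}
```

An LLM emitting such a value must get the discriminant, the operator, and both nested branches right at every depth — which is exactly where validation feedback earns its keep.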

## Structured Output

Use `typia.llm.parameters<T>()` with Vercel's `jsonSchema()` to generate structured output with validation:

```typescript
import { openai } from "@ai-sdk/openai";
import { dedent, LlmJson } from "@typia/utils";
import { generateObject, jsonSchema } from "ai";
import typia, { tags } from "typia";

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number & tags.Minimum<0> & tags.Maximum<100>;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: jsonSchema<IMember>(typia.llm.parameters<IMember>(), {
    validate: (value) => {
      const result = typia.validate<IMember>(value);
      if (result.success) return { success: true, value: result.data };
      return {
        success: false,
        error: new Error(LlmJson.stringify(result)),
      };
    },
  }),
  prompt: dedent`
    I am a new member of the community.

    My name is John Doe, and I am 25 years old.
    I like playing basketball and reading books,
    and joined to this community at 2022-01-01.
  `,
});
```

The resulting `object`:

```javascript
{
  email: 'john.doe@example.com',
  name: 'John Doe',
  age: 25,
  hobbies: [ 'playing basketball', 'reading books' ],
  joined_at: '2022-01-01'
}
```

The `IMember` interface is the single source of truth. `typia.llm.parameters<IMember>()` generates the JSON schema, and `typia.validate<IMember>()` validates the output — all from the same type.

## Error Handling

<Tabs items={["Error Handling Test", "Calculator"]}> <Tabs.Tab> </Tabs.Tab> <Tabs.Tab> </Tabs.Tab> </Tabs>

When a tool execution throws a runtime error (e.g., division by zero), `@typia/vercel` catches the exception and returns `{ error: true, message: "Error: Division by zero is not allowed" }`. This is different from validation errors — validation errors indicate wrong argument types, while runtime errors indicate the function itself failed.
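The distinction can be sketched in plain TypeScript — this is illustrative, not `@typia/vercel` internals, and the function names are hypothetical:

```typescript
// Hedged sketch of the runtime-error path described above: an executor
// wraps the tool call and maps a thrown exception to the
// { error, message } shape. Names are illustrative.
function divide(props: { x: number; y: number }): { value: number } {
  if (props.y === 0) throw new Error("Division by zero is not allowed");
  return { value: props.x / props.y };
}

function executeTool(
  props: { x: number; y: number },
): { value: number } | { error: true; message: string } {
  try {
    return divide(props); // runtime error: the function itself failed
  } catch (e) {
    return { error: true, message: `Error: ${(e as Error).message}` };
  }
}
```

Validation errors, by contrast, are caught before the function body ever runs and are reported through the feedback mechanism shown earlier.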