
NodeLLM

An architectural layer for integrating Large Language Models in Node.js.

Provider-agnostic by design.

Integrating multiple LLM providers often means juggling different SDKs, API styles, and update cycles. NodeLLM provides a single, unified, production-oriented API for interacting with 540+ models across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, and more) that stays consistent even when providers change.


🛑 What NodeLLM is NOT

NodeLLM represents a clear architectural boundary between your system and LLM vendors.

NodeLLM is NOT:

  • A wrapper around a single provider SDK (like openai or @google/generative-ai)
  • A prompt-engineering framework
  • An agent playground or experimental toy

🏗️ Why NodeLLM?

Direct integrations often become tightly coupled to specific providers, making it difficult to adapt as models evolve. LLMs should be treated as infrastructure, and NodeLLM helps you build a stable foundation that persists regardless of which model is currently "state of the art."

NodeLLM helps solve architectural problems, not just provide API access. It serves as the core integration layer for LLMs in the Node.js ecosystem.

Strategic Goals

  • Provider Isolation: Decouple your services from vendor SDKs.
  • Production-Ready: Native support for streaming, automatic retries, and unified error handling (sketched just after this list).
  • Predictable API: Consistent behavior for Tools, Vision, and Structured Outputs across all models, now including full parity for streaming.
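
Because retries and error normalization happen inside NodeLLM, application code can treat provider failures as a single catch path. A minimal sketch, assuming only the unified API shown in this README (the error shape is illustrative):

import { NodeLLM } from "@node-llm/core";

const chat = NodeLLM.chat("gpt-4o");

try {
  const response = await chat.ask("Summarize our incident report");
  console.log(response.content);
} catch (err) {
  // One catch path regardless of provider; by this point NodeLLM's
  // automatic retries have already been exhausted.
  console.error("LLM request failed:", err);
}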

⚡ The Architectural Path

import { NodeLLM } from "@node-llm/core";

// 1. Zero-Config (NodeLLM automatically reads NODELLM_PROVIDER and API keys)
const chat = NodeLLM.chat("gpt-4o");

// 2. Chat (High-level request/response)
const response = await chat.ask("Explain event-driven architecture");
console.log(response.content);

// 3. Streaming (Standard AsyncIterator)
for await (const chunk of chat.stream("Explain event-driven architecture")) {
  process.stdout.write(chunk.content);
}

🎯 Real-World Example: Brand Perception Checker

Built with NodeLLM: multi-provider AI analysis, tool calling, and structured outputs working together.

View Example →


🔧 Strategic Configuration

NodeLLM provides a flexible, lazily initialized configuration system designed for enterprise use. It is ESM-safe and resolves only when your first request is made, which eliminates the common dotenv race condition where configuration is read before dotenv has populated process.env.

import { createLLM } from "@node-llm/core";

// Recommended for multi-provider pipelines
const llm = createLLM({
  openaiApiKey: process.env.OPENAI_API_KEY,
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  ollamaApiBase: process.env.OLLAMA_API_BASE
});

// Support for custom endpoints (e.g., Azure or LocalAI)
const azureLlm = createLLM({
  openaiApiKey: process.env.AZURE_KEY,
  openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
});
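
Because configuration is lazy, values set on process.env before your first request are picked up even if dotenv loads late. A minimal zero-config sketch (the provider name and key value are placeholders):

import { NodeLLM } from "@node-llm/core";

// Nothing to construct: NodeLLM reads these when the first request is made
process.env.NODELLM_PROVIDER = "openai";
process.env.OPENAI_API_KEY = "sk-...";

const chat = NodeLLM.chat("gpt-4o");
await chat.ask("ping");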

Full Configuration Guide →



🔮 Capabilities

💬 Unified Chat

Stop rewriting code for every provider. NodeLLM normalizes inputs and outputs into a single, predictable mental model.

import { NodeLLM } from "@node-llm/core";

// Uses NODELLM_PROVIDER from environment (defaults to GPT-4o)
const chat = NodeLLM.chat();
await chat.ask("Hello world");
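
The same call shape holds regardless of the backing provider, so switching is a one-line change (provider and model name as shown elsewhere in this README):

// Identical code against a different provider
await NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask("Hello world");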

👁️ Smart Vision & Files

Pass images, PDFs, or audio files directly to both ask() and stream(). We handle the heavy lifting: fetching remote URLs, base64 encoding, and MIME type mapping.

await chat.ask("Analyze this interface", {
  files: ["./screenshot.png", "https://example.com/spec.pdf"]
});
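
The same files option applies to stream(); a sketch, assuming stream() accepts the same options object as ask():

for await (const chunk of chat.stream("Walk me through this interface", {
  files: ["./screenshot.png"]
})) {
  process.stdout.write(chunk.content);
}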

🛠️ Auto-Executing Tools

Define tools once; NodeLLM manages the recursive execution loop for you, keeping your controller logic clean. Works seamlessly with both regular chat and streaming!

import { Tool, z } from "@node-llm/core";

// Class-based DSL
class WeatherTool extends Tool {
  name = "get_weather";
  description = "Get current weather";
  schema = z.object({ location: z.string() });

  async execute({ location }) {
    return `Sunny in ${location}`;
  }
}

// Now the model can use it automatically
await chat.withTool(WeatherTool).ask("What's the weather in Tokyo?");

// Lifecycle Hooks for Error & Flow Control
chat.onToolCallError((call, err) => "STOP");

Full Tool Calling Guide →

🔍 Comprehensive Debug Logging

Enable detailed logging for all API requests and responses across every feature and provider:

// Set environment variable
process.env.NODELLM_DEBUG = "true";

// Now see detailed logs for every API call:
// [NodeLLM] [OpenAI] Request: POST https://api.openai.com/v1/chat/completions
// { "model": "gpt-4o", "messages": [...] }
// [NodeLLM] [OpenAI] Response: 200 OK
// { "id": "chatcmpl-123", ... }

Covers Chat, Streaming, Images, Embeddings, Transcription, and Moderation across all providers!

✨ Structured Output

Get type-safe, validated JSON back using Zod schemas.

import { z } from "@node-llm/core";
const Product = z.object({ name: z.string(), price: z.number() });

const res = await chat.withSchema(Product).ask("Generate a gadget");
console.log(res.parsed.name); // Full type-safety

🎨 Image Generation

await NodeLLM.paint("A cyberpunk city in rain");
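
A sketch of persisting the result; the base64 field name below is hypothetical, so check the docs for the actual response shape:

import { writeFile } from "node:fs/promises";

const image = await NodeLLM.paint("A cyberpunk city in rain");
// `image.base64` is a hypothetical field name, for illustration only
await writeFile("city.png", Buffer.from(image.base64, "base64"));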

🎤 Audio Transcription

await NodeLLM.transcribe("meeting-recording.wav");
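
And reading the transcript back; the `text` field is likewise an assumption about the response shape:

const transcript = await NodeLLM.transcribe("meeting-recording.wav");
console.log(transcript.text); // `text` is a hypothetical field name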

⚡ Scoped Parallelism

Isolated contexts let you run multiple providers in parallel without global configuration side effects.

const [gpt, claude] = await Promise.all([
  // Each call branches off into its own isolated context
  NodeLLM.withProvider("openai").chat("gpt-4o").ask(prompt),
  NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask(prompt)
]);

🧠 Deep Reasoning

Direct access to the thought process of models like DeepSeek R1 or OpenAI o1/o3 using the .reasoning field.

const res = await NodeLLM.chat("deepseek-reasoner").ask("Solve this logical puzzle");
console.log(res.reasoning); // Chain-of-thought

🚀 Why use this over official SDKs?

| Feature | NodeLLM | Official SDKs | Architectural Impact |
| --- | --- | --- | --- |
| Provider Logic | Transparently Handled | Exposed to your code | Low Coupling |
| Streaming | Standard AsyncIterator | Vendor-specific Events | Predictable Data Flow |
| Streaming + Tools | Automated Execution | Manual implementation | Seamless UX |
| Tool Loops | Automated Recursion | Manual implementation | Reduced Boilerplate |
| Files/Vision | Intelligent Path/URL handling | Base64/Buffer management | Cleaner Service Layer |
| Configuration | Centralized & Global | Per-instance initialization | Easier Lifecycle Mgmt |

📋 Supported Providers

| Provider | Supported Features |
| --- | --- |
| OpenAI | Chat, Streaming + Tools, Vision, Audio, Images, Transcription, Reasoning |
| Gemini | Chat, Streaming + Tools, Vision, Audio, Video, Embeddings |
| Anthropic | Chat, Streaming + Tools, Vision, PDF, Structured Output |
| DeepSeek | Chat (V3), Reasoning (R1), Streaming + Tools |
| OpenRouter | Aggregator, Chat, Streaming, Tools, Vision, Embeddings, Reasoning |
| Ollama | Local Inference, Chat, Streaming, Tools, Vision, Embeddings |

📚 Documentation & Installation

npm install @node-llm/core

View Full Documentation ↗

🍿 Try the Live Demo

Want to see it in action? Run this in your terminal:

git clone https://github.com/node-llm/node-llm.git
cd node-llm
npm install
npm run demo

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for more details on how to get started.


🫶 Credits

Heavily inspired by the elegant design of RubyLLM.


📄 License

MIT © NodeLLM contributors
