An architectural layer for integrating Large Language Models in Node.js.
Provider-agnostic by design.
Integrating multiple LLM providers often means juggling different SDKs, API styles, and update cycles. NodeLLM provides a single, unified, production-oriented API that stays consistent even when providers change.
Distributed via npm
## What NodeLLM is NOT
NodeLLM represents a clear architectural boundary between your system and LLM vendors.
NodeLLM is NOT:
- A wrapper around a single provider SDK (like `openai` or `@google/generative-ai`)
- A prompt-engineering framework
- An agent playground or experimental toy
## Why NodeLLM?
Direct integrations often become tightly coupled to specific providers, making it difficult to adapt as models evolve. LLMs should be treated as infrastructure, and NodeLLM helps you build a stable foundation that persists regardless of which model is currently "state of the art."
NodeLLM helps solve architectural problems, not just provide API access. It serves as the core integration layer for LLMs in the Node.js ecosystem.
### Strategic Principles
- Provider Isolation: Decouple your services from vendor SDKs.
- Production-Ready: Native support for streaming, retries, and unified error handling.
- Predictable API: Consistent behavior for Tools, Vision, and Structured Outputs across all models.
## The Architectural Path
```ts
import { NodeLLM } from "@node-llm/core";

// 1. Zero-config (uses NODELLM_PROVIDER and API keys from the environment)
const chat = NodeLLM.chat("gpt-4o");

// 2. Chat (high-level request/response)
const response = await chat.ask("Explain event-driven architecture");
console.log(response.content);

// 3. Streaming (standard AsyncIterator)
for await (const chunk of chat.stream("Explain event-driven architecture")) {
  process.stdout.write(chunk.content);
}
```
## Strategic Configuration
NodeLLM provides a flexible configuration system designed for enterprise use:

```ts
import { createLLM } from "@node-llm/core";

// Recommended for multi-provider pipelines: use createLLM()
// to switch providers at the framework level.
const llm = createLLM({ provider: "anthropic" });

// Custom endpoints are supported (e.g., Azure or LocalAI).
const azureLLM = createLLM({
  openaiApiKey: process.env.AZURE_KEY,
  openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
});
```
## Capabilities
### Unified Chat
Stop rewriting code for every provider. NodeLLM normalizes inputs and outputs into a single, predictable mental model.
```ts
import { NodeLLM } from "@node-llm/core";

// Uses NODELLM_PROVIDER from the environment (defaults to GPT-4o)
const chat = NodeLLM.chat();
await chat.ask("Hello world");
```
### Smart Vision & Files
Pass images, PDFs, or audio files directly. We handle the heavy lifting: fetching remote URLs, base64 encoding, and MIME type mapping.
```ts
await chat.ask("Analyze this interface", {
  files: ["./screenshot.png", "https://example.com/spec.pdf"]
});
```
### Auto-Executing Tools
Define tools once using our clean Class-Based DSL; NodeLLM manages the recursive execution loop for you.
```ts
import { Tool, z } from "@node-llm/core";

class WeatherTool extends Tool {
  name = "get_weather";
  description = "Get current weather";
  schema = z.object({ loc: z.string() });

  async handler({ loc }) {
    return `Sunny in ${loc}`;
  }
}

await chat.withTool(WeatherTool).ask("Weather in Tokyo?");
```
### Structured Output
Get type-safe, validated JSON back using Zod schemas.
```ts
import { z } from "@node-llm/core";

const Product = z.object({ name: z.string(), price: z.number() });
const res = await chat.withSchema(Product).ask("Generate a gadget");
console.log(res.parsed.name); // Full type-safety
```
### Security & Compliance
Implement custom security, PII detection, and compliance logic using pluggable asynchronous hooks (beforeRequest and afterResponse).
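NodeLLM defines where these hooks run; what they do is up to you. As a minimal, self-contained sketch of the pattern (the `ChatRequest`/`ChatResponse` shapes and the `redactEmails` helper below are illustrative assumptions, not NodeLLM's actual types):

```ts
// Illustrative shapes only; NodeLLM's real request/response types may differ.
type ChatRequest = { messages: { role: string; content: string }[] };
type ChatResponse = { content: string };

// Hypothetical PII scrubber: mask email addresses before text leaves your system.
function redactEmails(text: string): string {
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]");
}

// beforeRequest: scrub outgoing messages before they reach the provider.
async function beforeRequest(req: ChatRequest): Promise<ChatRequest> {
  return {
    messages: req.messages.map((m) => ({ ...m, content: redactEmails(m.content) })),
  };
}

// afterResponse: audit or post-process the model's reply on the way back.
async function afterResponse(res: ChatResponse): Promise<ChatResponse> {
  return { ...res, content: redactEmails(res.content) };
}
```

Because both hooks are async, they can also call out to external compliance services before letting a request or response through.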
### Smart Context Isolation
Stop worrying about prompt injection or instruction drift. NodeLLM automatically separates system instructions from the conversation history, providing a higher level of protection and strictness while automatically mapping roles like OpenAI's developer role.
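One way to picture this separation: instructions live in their own slot and are merged into the provider payload only at request time, under the strictest role that provider supports. A simplified, standalone sketch of the idea (`toOpenAIMessages` is a hypothetical helper, not the library's internal code):

```ts
type Message = { role: "user" | "assistant"; content: string };
type ProviderMessage = { role: "developer" | "user" | "assistant"; content: string };

// Instructions are kept apart from the mutable conversation history and merged
// only when the payload is built, under OpenAI's privileged "developer" role.
function toOpenAIMessages(instructions: string, history: Message[]): ProviderMessage[] {
  return [{ role: "developer", content: instructions }, ...history];
}
```

Keeping instructions out of the history means user turns can never displace or overwrite them.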
### Image Generation
```ts
await NodeLLM.paint("A cyberpunk city in rain");
```
### Audio Transcription
```ts
await NodeLLM.transcribe("meeting-recording.wav");
```
### Scoped Parallelism
Run multiple providers in parallel safely without global configuration side effects using isolated contexts. You can also override credentials (API keys) for specific instances.
```ts
const [gpt, claude] = await Promise.all([
  // Each call branches off into its own isolated context
  NodeLLM.withProvider("openai").chat("gpt-4o").ask(prompt),
  NodeLLM.withProvider("anthropic", { anthropicApiKey: "..." })
    .chat("claude-3-5-sonnet")
    .ask(prompt)
]);
```
### Deep Reasoning
Direct access to the thought process of models like DeepSeek R1 or OpenAI o1/o3 via the `.reasoning` field.
```ts
const res = await NodeLLM.chat("deepseek-reasoner").ask("Solve this logical puzzle");
console.log(res.reasoning); // Chain-of-thought
```
## Why Use This Over Official SDKs?
| Feature | NodeLLM | Official SDKs | Architectural Impact |
|---|---|---|---|
| Provider Logic | Transparently Handled | Exposed to your code | Low Coupling |
| Streaming | Standard AsyncIterator | Vendor-specific Events | Predictable Data Flow |
| Tool Loops | Automated Recursion | Manual implementation | Reduced Boilerplate |
| Files/Vision | Intelligent Path/URL handling | Base64/Buffer management | Cleaner Service Layer |
| Configuration | Centralized & Global | Per-instance initialization | Easier Lifecycle Mgmt |
## Supported Providers
| Provider | Supported Features |
|---|---|
| OpenAI | Chat, Streaming, Tools, Vision, Audio, Images, Transcription, Reasoning, Smart Developer Role |
| Google Gemini | Chat, Streaming, Tools, Vision, Audio, Video, Embeddings |
| Anthropic | Chat, Streaming, Tools, Vision, PDF, Structured Output |
| DeepSeek | Chat (V3), Reasoning (R1), Tools, Streaming |
| OpenRouter | Aggregator, Chat, Streaming, Tools, Vision, Embeddings, Reasoning |
| Ollama | Local Inference, Chat, Streaming, Tools, Vision, Embeddings |
## Try The Live Demo
Want to see NodeLLM in action? We have a full-stack, multi-provider example application ready for you.
```sh
git clone https://github.com/node-llm/node-llm.git
cd node-llm
npm install
npm run demo
```
## Contributing
We welcome contributions! Please see our Contributing Guide for more details on how to get started.
## Credits
Heavily inspired by the elegant design of RubyLLM.
Upgrading to v1.6.0? Read the Migration Guide to understand the new strict provider requirements and typed error hierarchy.