I keep seeing the same story in teams that ship fast: they need one language for the browser and the server, they need to handle thousands of concurrent connections without exploding infrastructure costs, and they need a runtime that feels simple but scales to real production traffic. That’s why I still reach for Node.js when I’m building APIs, CLIs, event-driven backends, or automation pipelines. It’s not magic; it’s a JavaScript runtime built on Chrome’s V8 engine that executes code outside the browser. The interesting part is how its event loop and non-blocking I/O let you handle concurrency with a single-threaded model.
If you’re new to Node.js, you don’t need a mountain of theory to get value. You need a clear mental model, a few core modules, and a setup that doesn’t trip you up. I’ll walk you through the essentials I rely on day-to-day: how Node runs your code, how modules work, how the event loop behaves, and how to build a small HTTP server you can actually run. Along the way, I’ll call out common mistakes, when Node is a great fit, and when you should use something else. You’ll finish with practical steps you can apply immediately and a plan for what to learn next.
Why Node.js Feels Different
When I first learned server-side JavaScript, I assumed it would feel like browser JavaScript with a few extra APIs. The reality is a bit more profound: Node.js flips the server model from “one thread per request” to “one thread handling many requests” using asynchronous, non-blocking I/O. That single change opens a new class of applications—real-time dashboards, chat systems, event-stream processors—where latency and concurrency matter more than raw CPU performance.
Here’s a simple analogy I use: imagine a restaurant with one skilled waiter who can multitask. When a table orders, the waiter relays the order to the kitchen and immediately moves on to another table, instead of standing and waiting for the food to cook. That’s Node’s event loop. The waiter doesn’t cook (CPU-bound tasks), but they do coordinate the flow, so everyone gets served quickly.
This is why Node feels “fast” under load: it spends less time blocked. You should still be careful with CPU-heavy tasks, but for I/O-heavy work—web servers, file access, databases, queues—Node is in its comfort zone.
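To make the waiter analogy concrete, here is a minimal sketch (the labels are purely illustrative) showing how a timer callback is deferred while synchronous code keeps running:

```javascript
// Non-blocking behavior in miniature: the timer is delegated, and its
// callback runs only after all synchronous code has finished.
const order = [];

order.push('take order');            // synchronous: runs first

setTimeout(() => {
  order.push('food ready');          // async: runs last, once the loop is free
  console.log(order.join(' -> '));   // prints: take order -> serve another table -> food ready
}, 0);

order.push('serve another table');   // synchronous: runs second, no waiting
```

Even with a zero-millisecond delay, the timer callback never interrupts the code already running; it waits its turn in the queue.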
Installation and a Quick Sanity Check
On a modern machine, installing Node.js is straightforward. I recommend installing an LTS version unless you have a specific reason to chase the newest features. After installing, verify your environment:
node -v
npm -v
If those commands return versions, you’re ready. I still like creating a new project with a minimal package.json even for experiments:
mkdir node-project
cd node-project
npm init -y
That file becomes your project’s map: it defines dependencies, scripts, and metadata. Even in 2026, npm remains the simplest default, though many teams also use pnpm or yarn for performance and workspace features. You can switch later; the fundamentals stay the same.
Environment basics that save headaches
- Node vs npm: Node runs your JS files; npm (or pnpm/yarn) installs packages and runs scripts.
- Project vs global installs: Prefer local installs so versions stay consistent in CI.
- .npmrc or .pnpmrc: Great for private registries or caching, but keep configs versioned.
- Engine constraints: Use the engines field in package.json if you’re on a team to avoid version drift.
Your First Server, the Real Way
Here’s a minimal HTTP server I’ve used in demos for years, updated with small comments to make the flow clear:
const http = require('http');

const server = http.createServer((req, res) => {
  // Set a simple response header
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // Send the response body
  res.write('Hello World!');
  res.end();
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
Run it with:
node app.js
Open http://localhost:3000 and you’ll see the response. This tiny example is deceptively powerful because it demonstrates how Node handles requests: an event-driven handler ((req, res) => {}) that can respond quickly and then release the event loop back to handle more work.
A frequent mistake I see is forgetting to end the response. If you call res.write() but never res.end(), the client waits forever. Another is doing heavy CPU work directly in the request handler, which blocks the event loop. Keep request handlers fast, defer CPU work, and you’ll avoid most performance pitfalls.
A better “hello” for real use
If you want something closer to a production-ish handler, include a few guardrails:
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method !== 'GET') {
    res.writeHead(405, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'Method not allowed' }));
    return;
  }
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok', time: new Date().toISOString() }));
    return;
  }
  res.writeHead(404, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'Not found' }));
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
Even basic routing and status codes make local testing much more realistic.
Node’s Architecture: The Event Loop in Practice
I don’t need you to memorize the internal phases of the event loop, but you should understand how work moves through Node so you can debug and design effectively. Here’s the mental model I use:
- JavaScript executes on a single main thread. That’s the code you write.
- I/O tasks get delegated. File reads, network calls, timers—these are handled by libuv and the OS.
- Callbacks get queued. When an I/O task completes, its callback is pushed back to the event loop.
- The event loop keeps spinning. It picks up ready callbacks and runs them, one at a time.
This model explains why Node can handle high concurrency: while your code waits on I/O, Node can accept and process other requests. It also explains why CPU-heavy work is dangerous—there is only one thread for your JavaScript, so if you block it, you block everything.
In production, you often combine this with a process manager or container orchestration to run multiple Node processes in parallel. That gives you multi-core usage without leaving Node’s mental model. You can also use worker threads for specific CPU-bound tasks, but I treat those as an advanced tool rather than a default.
Event loop lag in plain English
If the event loop is “lagging,” it means callbacks are waiting longer than expected before they run. That delay shows up as sluggish APIs, late timers, and overall jitter. It usually comes from one of three causes:
- CPU-heavy code running on the main thread
- Too many synchronous operations (like readFileSync or parsing giant JSON payloads)
- Unbounded queues or ignored backpressure in streams
A simple fix is to push CPU work into a worker, or chunk it so it yields back to the loop.
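Here is a minimal sketch of the chunking approach: process a slice of the work, then yield back to the event loop with setImmediate before taking the next slice. The chunk size and the per-item work are just placeholders:

```javascript
// Chunked processing: do a bounded amount of work per turn of the event
// loop, yielding with setImmediate so other callbacks can run in between.
function processInChunks(items, chunkSize, onDone) {
  let index = 0;
  let total = 0;

  function next() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      total += items[index]; // stand-in for real per-item work
    }
    if (index < items.length) {
      setImmediate(next);    // yield: let pending I/O callbacks run first
    } else {
      onDone(total);
    }
  }

  next();
}

processInChunks([1, 2, 3, 4, 5], 2, (sum) => {
  console.log('sum:', sum); // prints: sum: 15
});
```

The total latency of the job goes up slightly, but the event loop never stalls for longer than one chunk.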
Modules: The Core Building Blocks
Node.js was built around modules, and I still think it’s one of its best features. A module is simply a file (or a package) that exports code you can reuse. You’ll encounter three module types:
- Built-in modules like http, fs, path, and events.
- Local modules: your own files that you import with ./ or ../ paths.
- External modules installed from the package registry.
Even in 2026, many codebases still use CommonJS (require) for compatibility, while others use ES Modules (import). Node supports both. My guidance is simple: pick one style per project, and make it consistent. If you’re starting fresh and you don’t need older tooling, ES Modules feel more modern:
import http from 'node:http';

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World!');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
To enable ES Modules, you can set "type": "module" in package.json or use the .mjs extension.
Common built-in modules I use constantly
- http: Raw HTTP server capabilities.
- fs: File system access (read/write/stream).
- path: Cross-platform file paths.
- events: Event-driven patterns.
- crypto: Hashes, HMACs, encryption helpers.
These are enough for many scripts and CLI tools. You’ll reach for external packages once your needs grow beyond the core.
Module boundaries and why they matter
Good modules reduce surface area. When you hide complexity behind a module, your app becomes easier to test and reason about. I usually keep modules small and focused:
- db.js exports database helpers only
- server.js handles wiring and startup
- routes/ contains route handlers
- services/ contains business logic
This separation saves time when you debug or onboard new teammates.
Asynchronous Patterns You’ll Use Every Day
Non-blocking I/O is the heart of Node. If you ignore it, you’ll write code that “works” but doesn’t scale. I rely on three patterns: callbacks, Promises, and async/await.
Callbacks (legacy but still around)
Callbacks are the original Node style. Many older APIs still use them:
const fs = require('fs');

fs.readFile('notes.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('Read failed:', err);
    return;
  }
  console.log('File contents:', data);
});
Promises (my default for new code)
Promises are more composable and cleaner for error handling:
const fs = require('fs/promises');

async function readNotes() {
  try {
    const data = await fs.readFile('notes.txt', 'utf8');
    console.log('File contents:', data);
  } catch (err) {
    console.error('Read failed:', err);
  }
}

readNotes();
async/await (readable and expressive)
async/await is just syntax on top of Promises. I use it almost everywhere because it reads like synchronous code without blocking the event loop.
Common mistake: forgetting to await inside try blocks or mixing callbacks and Promises in the same function. If you convert a callback-style function to Promises, do it fully, not halfway.
Practical pattern: parallelizing async work
One of the simplest performance wins is running independent async calls in parallel:
const fs = require('fs/promises');

async function loadConfig() {
  const [env, settings] = await Promise.all([
    fs.readFile('env.json', 'utf8'),
    fs.readFile('settings.json', 'utf8')
  ]);
  return {
    env: JSON.parse(env),
    settings: JSON.parse(settings)
  };
}
You’ll see this in APIs that need to hit multiple services before responding.
Event-Driven Design: Why It Matters
Node’s event-driven architecture is more than a buzzword. It’s a way of building systems where actions trigger reactions without blocking the main flow. You see it everywhere: servers emitting request events, streams emitting data events, and custom event emitters in your own code.
Here’s a small example using EventEmitter:
const EventEmitter = require('events');

class OrderProcessor extends EventEmitter {
  process(order) {
    this.emit('received', order);
    // Simulate async work
    setTimeout(() => {
      this.emit('completed', { id: order.id, status: 'done' });
    }, 50);
  }
}

const processor = new OrderProcessor();

processor.on('received', (order) => {
  console.log('Order received:', order.id);
});

processor.on('completed', (result) => {
  console.log('Order complete:', result);
});

processor.process({ id: 'ORD-1007' });
This pattern becomes incredibly powerful when you’re modeling workflows, integrating with queues, or writing plugins. You can add new listeners without changing the core processing logic. That’s a maintainability win.
Edge case: event leaks
It’s easy to accidentally attach too many listeners and cause memory warnings. If you’re adding listeners in a loop, make sure you remove them or use once when appropriate:
processor.once('completed', (result) => {
  console.log('Completed once:', result);
});
Files, Streams, and Real-World I/O
Most production Node apps live and breathe I/O: file reads, network requests, database calls. If you load an entire file into memory, you might be fine for small files, but you’ll struggle with large datasets. Streams solve that by processing data in chunks.
Here’s a practical example using file streams:
const fs = require('fs');

const readStream = fs.createReadStream('large-report.csv', 'utf8');

readStream.on('data', (chunk) => {
  console.log('Read chunk size:', chunk.length);
});

readStream.on('end', () => {
  console.log('Finished reading file');
});

readStream.on('error', (err) => {
  console.error('Stream error:', err);
});
When I build pipelines or data processors, streams keep memory usage stable, often in the tens of megabytes instead of gigabytes. That’s a practical performance win you can measure in production.
Backpressure in practice
Backpressure is what happens when a fast producer outruns a slow consumer. If you ignore it, your memory usage climbs until the process crashes. The stream API applies backpressure automatically when you use pipe:
const fs = require('fs');

const input = fs.createReadStream('big.log');
const output = fs.createWriteStream('big-copy.log');

input.pipe(output);
For transformations, use the stream module’s pipeline utility to handle errors and backpressure cleanly.
Handling HTTP Requests Properly
The built-in http module is great for learning, but real-world apps need routing, middleware, and validation. That’s where frameworks like Express shine. Still, you can learn a lot by writing a minimal router yourself:
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok', time: Date.now() }));
    return;
  }
  if (req.method === 'POST' && req.url === '/echo') {
    let body = '';
    req.on('data', (chunk) => {
      body += chunk;
    });
    req.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ youSent: body }));
    });
    return;
  }
  res.writeHead(404, { 'Content-Type': 'text/plain' });
  res.end('Not Found');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
This example shows a key detail: request bodies arrive in chunks. If you try to read the body synchronously, you’ll miss data. Handle it as a stream and respond after end fires.
Practical improvements for handling requests
- Set timeouts: Prevent hanging connections.
- Validate input: Never trust external data.
- Limit payload size: Protect memory and prevent abuse.
- Use req.headers: Content type matters when parsing.
A quick example of a manual payload limit:
const MAX_BODY = 1e6; // ~1MB

let body = '';
req.on('data', (chunk) => {
  body += chunk;
  if (body.length > MAX_BODY) {
    res.writeHead(413, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'Payload too large' }));
    req.destroy();
  }
});
Express (and Friends) as a Practical Default
If you’re building APIs, a minimal framework saves you time. Express is still the most common, but other frameworks exist. I use Express when I need quick iteration and plenty of middleware.
Here’s a simple Express API with routes and middleware:
const express = require('express');
const app = express();

app.use(express.json());

app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

app.post('/notes', (req, res) => {
  const { title, body } = req.body || {};
  if (!title) {
    res.status(400).json({ error: 'title required' });
    return;
  }
  res.status(201).json({ id: Date.now(), title, body: body || '' });
});

app.listen(3000, () => {
  console.log('API running on port 3000');
});
That’s a good baseline. You can add logging, auth, and rate-limiting as you grow.
Working with the File System Safely
Node gives you deep access to the file system, which is great and dangerous. I prefer asynchronous operations and always handle errors explicitly.
Common file operations
- Read: fs.readFile or fs.promises.readFile
- Write: fs.writeFile or fs.promises.writeFile
- Append: fs.appendFile
- List directories: fs.readdir
- Check stats: fs.stat
Here’s a safe write with error handling:
const fs = require('fs/promises');

async function saveReport(data) {
  try {
    await fs.writeFile('report.json', JSON.stringify(data, null, 2), 'utf8');
  } catch (err) {
    console.error('Write failed:', err);
    throw err;
  }
}
Path handling matters
Never concatenate paths manually; use path.join to avoid cross-platform bugs and path traversal mistakes.
const path = require('path');

const fullPath = path.join(__dirname, 'logs', 'app.log');
Networking Beyond HTTP
HTTP is the most common surface, but Node can do more:
- TCP/UDP: build custom protocols or lightweight services
- WebSockets: real-time bi-directional communication
- DNS: basic lookups and resolver logic
Even if you never implement these directly, knowing they exist helps you choose the right tool for the job.
Error Handling That Doesn’t Bite You Later
Error handling is where most “works on my machine” apps die in production. I use three layers:
- Local try/catch for async functions
- Middleware-level error handlers for web frameworks
- Process-level handlers for unhandled exceptions and rejections (mostly for logging and safe shutdown)
A basic process-level safety net for CLI apps:
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  process.exitCode = 1;
});

process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1);
});
I’m careful using this in servers because a graceful shutdown is better than a hard crash, but logging is still essential.
Testing the Basics Early
Testing isn’t just for huge apps. A tiny API benefits from a few sanity tests. Node now has a built-in test runner, which means you can start without extra tools.
A minimal example:
import test from 'node:test';
import assert from 'node:assert/strict';

test('simple math', () => {
  assert.equal(2 + 2, 4);
});
As your app grows, you’ll probably add a test framework or use HTTP testing tools, but you don’t need them on day one.
When Node.js Is the Right Tool
I don’t push Node.js for every backend, but I’m quick to recommend it in these scenarios:
- Real-time apps: chat, notifications, live dashboards.
- API gateways: lots of network calls, orchestration, and data shaping.
- Serverless functions: quick cold starts, async I/O heavy tasks.
- CLI tools and automation: strong ecosystem, easy package distribution.
- Full-stack JavaScript teams: fewer context switches and shared language.
Node thrives when you’re I/O-bound or need fast iteration. It’s less ideal for CPU-heavy workloads like video encoding, machine learning inference, or complex image processing. In those cases, I either offload CPU work to a specialized service or use worker threads with care.
When Node.js Is Not the Best Fit
You should be honest about where Node struggles. I’d avoid it as the primary runtime in these situations:
- CPU-heavy batch processing: it will block the event loop.
- Hard real-time systems: latency jitter can be unpredictable.
- Legacy ecosystems: if you must rely on libraries that only exist in another language.
That doesn’t mean you can’t use Node at all—just split responsibilities. I often pair Node with a separate service in Rust, Go, or Python for the CPU-heavy parts. That way you keep Node’s strengths without forcing it to do everything.
Common Mistakes and How I Avoid Them
Here are the mistakes I still see in 2026, and the fixes that prevent production incidents:
- Blocking the event loop: Avoid synchronous file I/O (fs.readFileSync) in request handlers. Use async versions instead.
- Uncaught promise rejections: Always await or handle errors. Add a global handler for safety in CLI apps.
- Too many dependencies: Use core modules when you can. Smaller dependency graphs mean fewer security updates.
- Ignoring backpressure: When streaming data, respect flow control or you’ll spike memory.
- Logging secrets: Don’t log request bodies or tokens in production. Redact or filter logs by default.
If you only fix one thing, fix event loop blocking. It’s the root cause behind “Node is slow” complaints I see in incident reports.
Performance Considerations You Can Measure
Performance in Node is rarely about raw CPU cycles. It’s about I/O efficiency, memory use, and latency under concurrency. Here are metrics I track:
- Response time percentiles: the 95th and 99th percentile are often more important than average.
- Event loop lag: when it spikes beyond 10–20ms, users feel it.
- Heap size and GC pauses: memory bloat can cause unpredictable stalls.
In my experience, simple API endpoints in Node can typically respond in the 10–15ms range when hitting in-memory caches, while database-backed endpoints often land around 40–120ms depending on query complexity. Those numbers aren’t guarantees, but they’re useful baselines for alerting and diagnostics.
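Event loop lag is cheap to measure yourself: schedule a repeating timer and compare when it should have fired against when it actually did. This is a rough sketch; the interval and the 20ms reporting threshold are arbitrary choices you should tune:

```javascript
// Rough event loop lag monitor: a timer that should fire every intervalMs
// reports how much later than expected it actually ran.
function monitorLag(intervalMs = 500) {
  let last = Date.now();
  const timer = setInterval(() => {
    const now = Date.now();
    const lag = now - last - intervalMs; // extra delay beyond the interval
    if (lag > 20) {
      console.warn(`Event loop lag: ${lag}ms`);
    }
    last = now;
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for monitoring
}

monitorLag();
```

If the warnings line up with traffic spikes or specific endpoints, you have a strong hint about where blocking work is hiding.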
Lightweight profiling without heavy tooling
- console.time / console.timeEnd for rough hotspots
- Built-in inspector for CPU profiles
- Heap snapshots to track memory leaks
You don’t need to profile every day, but doing it early prevents surprises later.
Traditional vs Modern Patterns
As teams evolve, Node patterns evolve too. Here’s a quick comparison I use when modernizing codebases:
| Traditional pattern | Modern approach |
| --- | --- |
| Callback-heavy control flow | async/await |
| One monolithic server file | Modular routes and services |
| Queries scattered across handlers | Service layer or repository |
| Plain console.log everywhere | Structured logs + tracing |
| Hardcoded configuration | Environment-based config loaders |
| Ad-hoc input checks | Schema validation on input |
Practical Scenario: Build a Simple JSON API
Let’s take the basics and make a small API that handles JSON safely, logs requests, and has a clean structure.
Structure:
- server.js: entry point
- routes/notes.js: route handlers
- services/notes.js: data logic
server.js
const http = require('http');
const { handleNotes } = require('./routes/notes');

const server = http.createServer((req, res) => {
  const start = Date.now();
  res.setHeader('Content-Type', 'application/json');
  if (req.url.startsWith('/notes')) {
    handleNotes(req, res).then(() => {
      const ms = Date.now() - start;
      console.log(`${req.method} ${req.url} ${ms}ms`);
    });
    return;
  }
  res.statusCode = 404;
  res.end(JSON.stringify({ error: 'Not found' }));
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
routes/notes.js
const { listNotes, createNote } = require('../services/notes');

async function handleNotes(req, res) {
  if (req.method === 'GET' && req.url === '/notes') {
    const notes = await listNotes();
    res.statusCode = 200;
    res.end(JSON.stringify({ data: notes }));
    return;
  }
  if (req.method === 'POST' && req.url === '/notes') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    await new Promise((resolve) => req.on('end', resolve));
    let payload;
    try {
      payload = JSON.parse(body || '{}');
    } catch {
      res.statusCode = 400;
      res.end(JSON.stringify({ error: 'Invalid JSON' }));
      return;
    }
    if (!payload.title) {
      res.statusCode = 400;
      res.end(JSON.stringify({ error: 'title required' }));
      return;
    }
    const note = await createNote(payload.title, payload.body || '');
    res.statusCode = 201;
    res.end(JSON.stringify({ data: note }));
    return;
  }
  res.statusCode = 405;
  res.end(JSON.stringify({ error: 'Method not allowed' }));
}

module.exports = { handleNotes };
services/notes.js
let notes = [];

async function listNotes() {
  return notes;
}

async function createNote(title, body) {
  const note = { id: Date.now(), title, body };
  notes.push(note);
  return note;
}

module.exports = { listNotes, createNote };
This is intentionally simple but already separates concerns. If you swap the in-memory array for a database later, you only update services/notes.js.
Edge Cases and How to Handle Them
Node apps break in predictable ways. Planning for these avoids most incidents:
- Large payloads: Add a max body size and fail fast.
- Slow clients: Set server timeouts and keep-alive configuration.
- JSON parsing failures: Always wrap JSON parsing in try/catch.
- Unexpected errors: Normalize error responses so clients can handle them.
- Resource leaks: Close DB connections, file handles, and timers.
A practical timeout:
server.setTimeout(30_000); // 30 seconds
Alternative Approaches for Common Problems
Node gives you many options for the same goal. Pick the one that matches your scale and team.
Serving APIs
- Minimal http module for learning or tiny services
- Express or Fastify for typical APIs
- Specialized frameworks when you need performance or schema-first design
Scheduling jobs
- setInterval for simple periodic tasks
- External job queues for reliability and retries
Parsing input
- Manual parsing for tiny payloads
- Schema validation libraries for safer production APIs
Handling CPU-heavy work
- Worker threads for local offload
- Separate services for heavy tasks
Scaling Node.js Without Losing Your Mind
Scaling Node is mostly about avoiding single-process limits and handling state properly.
Horizontal scaling
- Run multiple Node processes behind a load balancer.
- Use containers or process managers to spread across cores.
- Keep state external (database, cache), not in-memory, so any instance can serve requests.
Stateless mindset
If your server stores user sessions in memory, scaling becomes painful. I prefer stateless servers where all state is stored in a database or cache, so scaling is just a matter of adding instances.
Connection pooling
Database connections are expensive. Use a pool instead of opening a new connection per request. Most DB drivers support pooling out of the box.
Logging and Observability Basics
I treat logging as part of the product, not an afterthought. Useful logs answer: what happened, where, and how long it took.
- Structured logs: JSON logs are easier to search and analyze.
- Request IDs: Connect logs across services.
- Error context: Include user ID or endpoint but not secrets.
- Latency: Log timing for each request.
Even if you’re just writing to stdout, structured logging pays off when you deploy.
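A structured logger doesn't need a library to start with. Here is a minimal sketch: one JSON object per line on stdout, with whatever context fields the caller passes in (the field names below are just examples):

```javascript
// Tiny structured logger: emits one JSON object per line so log
// aggregators can parse fields instead of regexing free-form text.
function log(level, message, context = {}) {
  const entry = {
    level,
    message,
    time: new Date().toISOString(),
    ...context, // e.g. requestId, route, durationMs
  };
  console.log(JSON.stringify(entry));
  return entry; // returned for convenience in tests
}

log('info', 'request handled', {
  requestId: 'req-42',
  route: '/notes',
  durationMs: 12,
});
```

When you outgrow this, a dedicated logging library adds levels, child loggers, and redaction, but the line-of-JSON contract stays the same, so nothing downstream has to change.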
Security Basics for Node Beginners
Security isn’t optional, even for small apps. I stick to a few essentials:
- Never trust input: Validate and sanitize.
- Use HTTPS: Terminate TLS at a proxy or load balancer.
- Protect secrets: Use environment variables, not hardcoded strings.
- Keep dependencies updated: Fewer dependencies, fewer risks.
- Limit request sizes: Avoid memory attacks.
You don’t need to be a security expert to avoid the obvious pitfalls.
Modern Tooling That Makes Life Easier
Even if you’re learning, modern tooling saves hours:
- Formatters: keep code consistent
- Linters: catch bugs early
- Type checking: catch whole classes of mistakes
- Dev reload: speed up feedback loops
Node basics don’t require all of these, but they become essential once your codebase grows.
A Quick Note on TypeScript
TypeScript isn’t required for Node, but I like it for medium-to-large projects because it catches errors before runtime. It adds a small learning curve but makes refactors safer.
If you stay in JavaScript, that’s fine too—just be strict with validation and tests.
Practical Node CLI Example
A lot of Node usage happens outside web servers: in scripts and automation. Here’s a tiny CLI that reads a file and prints a summary:
const fs = require('fs/promises');

async function run() {
  const file = process.argv[2];
  if (!file) {
    console.error('Usage: node cli.js <file>');
    process.exit(1);
  }
  const data = await fs.readFile(file, 'utf8');
  const lines = data.split('\n').length;
  console.log(JSON.stringify({ file, lines }));
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
It’s small, but it shows how Node’s file APIs and process arguments work together.
Deployment: What Actually Matters
Deployment choices depend on your environment, but a few principles stay consistent:
- Use an LTS Node version for stability.
- Set NODE_ENV=production for optimized behavior.
- Graceful shutdown: handle SIGTERM to stop cleanly.
- Health checks: always provide a simple endpoint.
A basic graceful shutdown pattern:
process.on('SIGTERM', () => {
  console.log('Received SIGTERM, shutting down');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});
This avoids cutting off in-flight requests when your process stops.
My Minimal “Production Readiness” Checklist
If I’m sending a Node service to production, I look for:
- Health endpoint (/health)
- Structured logs with request IDs
- Environment config and secrets handling
- Timeouts and payload limits
- Metrics or at least basic latency logging
- Tests for critical paths
If I can check those boxes, I’m comfortable shipping.
A Roadmap for Learning Next
Once you’re comfortable with Node basics, here’s what I usually explore next:
- Build a small API with a real database
- Add authentication (JWT or sessions)
- Implement validation for all inputs
- Add logging and metrics
- Test the main paths
- Deploy and monitor
Each step makes your app more reliable and closer to production quality.
Final Thoughts
Node.js basics aren’t about memorizing APIs—they’re about internalizing a way of thinking. Once you understand the event loop, asynchronous patterns, and modules, you can build almost anything. The ecosystem is huge, but you don’t need the entire ecosystem to be effective. A handful of core modules and a good mental model can carry you far.
If you’re just starting, focus on building small, real projects: a tiny API, a CLI tool, or a file processor. Each project will reveal a different part of Node’s strengths. And as you grow, keep Node’s core idea in mind: do I/O efficiently, keep the main thread free, and build systems that respond fast under load. That mindset will serve you well long after the basics.