Understanding Node.js require(): Lifecycle, Caching, and Practical Patterns

I’ve debugged enough “why is this module behaving twice” issues to know that understanding require() is not optional if you build serious Node.js systems. The first time you hit a circular dependency, or you upgrade a service to run in multiple processes, or you try to load the same file from two paths and get two instances, the module system stops being magic and starts being a critical piece of engineering. If you work in 2026 with modern Node runtimes, AI-assisted refactors, and mixed ESM/CommonJS codebases, this knowledge saves real time.

What I’m going to do here is walk you through the actual lifecycle of a CommonJS module as Node executes require(): resolution and loading, wrapping, execution, returning exports, and caching. I’ll show you how each step influences real-world behavior, how you can observe it, and how to avoid mistakes. I’ll keep it technical but practical, using runnable examples and short analogies to make the moving parts stick.

Why require() Still Matters in 2026

ES modules are mainstream, but CommonJS isn’t going away soon. I still see require() in:

  • Legacy production services that can’t flip to ESM without large migrations.
  • Tools, test runners, and CLIs that still ship CommonJS by default.
  • Bundler plugin ecosystems that expose CommonJS entry points.
  • Mixed repositories where build scripts use require() while app code uses import.

So yes, ESM is the future, but understanding require() is still essential. I recommend knowing the lifecycle cold, because you’ll use it to reason about runtime behavior, memory, and performance.

The Five Steps in the require() Lifecycle

At a high level, Node’s require() does five things, in order:

  • Resolves and loads the module
  • Wraps the module code in a function
  • Executes the function wrapper
  • Returns exports
  • Caches the module

I like to think of this like checking out a library book. First you locate it (resolve), then you open the book within a private reading room (wrap), you read the book (execute), you write down the notes (exports), and you remember you already read it so you don’t re-read later (cache). Now let’s get precise.

Step 1: Resolving and Loading

When you call require(), Node first decides what you’re asking for, then it loads the file contents. The resolution logic is strict and predictable, which is good once you understand it.

Resolution Rules I Rely On

  • If the argument is a core module name (like fs, path, crypto), Node resolves it to a built-in module.
  • If the argument starts with ./, ../, or /, Node treats it as a file path.
  • If it’s neither core nor a path, Node treats it as a package name and searches node_modules.

File vs Folder Resolution

For paths or package roots, Node looks for:

  • A file matching the exact path.
  • The same path plus supported extensions (.js, .json, .node).
  • If it’s a directory, it checks package.json for a main entry.
  • If main is missing, it falls back to index.js (and other extensions).

Here’s a minimal example you can run:

// ./app.js
const config = require('./config');
console.log(config);

// ./config/index.js
module.exports = { env: 'dev', region: 'us-east-1' };

You don’t explicitly use index.js, but Node will load it. This behavior is convenient, but I recommend being explicit in large codebases to avoid confusion with new teammates and AI refactors.

Common Mistake: Duplicate Module Instances

Node’s cache key is the fully resolved absolute path, so path quirks like ./services/../services/logger normalize to the same entry and do not create duplicates. Real duplicates come from paths that resolve differently for the same file on disk: on a case-insensitive filesystem (macOS and Windows defaults), require('./services/Logger') and require('./services/logger') both load the file, but the cache keys differ, so you get two separate instances in memory. The same thing happens when nested node_modules directories contain two copies of a package.

// ./services/logger.js
module.exports = { count: 0 };

// ./app.js — on a case-insensitive filesystem
const a = require('./services/logger');
const b = require('./services/Logger');

console.log(a === b); // false: two cache entries, two instances

I’ve seen this cause strange behavior with singleton configs, DB connections, and global caches. Use consistent paths and tools like ESLint rules or path aliases to prevent this.

Practical Diagnostic: Print require.resolve

When a resolution bug shows up, I log the resolved path to see what Node thinks the module is:

// ./debug-resolve.js
console.log(require.resolve('./services/logger'));

If two require() statements resolve to different absolute paths, they’re different module instances, even if they point to the “same” file through different path strings. This is often the root cause of “singleton broke” issues.

Package Resolution and the exports Field

When you require a package, Node checks its package.json. In modern packages, the exports field can change which files are exposed to consumers. This is a frequent source of “it works in dev but not in prod” issues if you reach into internal package paths.

Rule of thumb: if you’re using require('some-package/lib/internal') and it works today, it may break tomorrow because the package author might hide it behind exports. If you must reach into internal paths, pin the package version or wrap that internal path behind your own abstraction.
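Here is a hypothetical package.json illustrating the mechanism (the file names are made up). With this exports map, require('some-package/utils') works, while require('some-package/lib/internal') throws ERR_PACKAGE_PATH_NOT_EXPORTED even though the file exists on disk:

```json
{
  "name": "some-package",
  "main": "./lib/index.js",
  "exports": {
    ".": "./lib/index.js",
    "./utils": "./lib/utils.js"
  }
}
```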

Step 2: Wrapping

Once Node loads the file contents, it doesn’t execute the raw code directly. Instead, it wraps the module in a function with a known signature. That function defines the module’s private scope.

Node’s internal wrapper looks like this (simplified):

(function (exports, require, module, __filename, __dirname) {
  // Your module code lives here
});

This is why top-level variables are not global, and why you get access to module, exports, require, __filename, and __dirname without defining them. You can inspect it yourself:

// ./show-wrapper.js
console.log(require('module').wrapper);

When I mentor juniors, I explain it like a staging area: each module gets its own backstage room. You can see the stage manager (module), a pass to export props (exports), and a contact list (require). The audience never sees the backstage mess unless you export it.

Why Wrapping Matters for Security and Isolation

Wrapping gives each file its own scope. That’s a major reason CommonJS is more predictable than a script tag dump. Without wrapping, any global variable would leak, and you’d have constant naming collisions. In larger systems, this is the difference between sane architecture and chaos.

A Practical Use of the Wrapper: Local “Globals”

You can create intentionally shared state within a module without exposing it:

// ./rate-limiter.js
let tokens = 100;

module.exports = function takeToken() {
  if (tokens <= 0) return false;
  tokens -= 1;
  return true;
};

No other module can reach tokens unless you expose it. This makes internal state management simpler and safer.

Step 3: Execution

Now Node executes the wrapper. This is the moment when your module’s top-level code runs. It runs once per process, per resolved path, because of caching (we’ll get there).

This is where side effects happen:

  • console.log() statements run
  • Database connections can open
  • Timers can start
  • Shared state can be created

I recommend keeping top-level side effects small and deterministic. If you want a module to export a function that creates a connection on demand, put that logic inside the exported function, not at top level.

Example: Top-Level Side Effects

// ./db.js
const pool = createPool({ host: 'db', max: 10 });
console.log('DB pool created');

module.exports = pool;

When you require('./db'), the pool is created immediately. That might be fine, but if you require it inside a script that you expect to run quickly, you just introduced a network dependency at startup.

If you want lazy behavior:

// ./db.js
let pool;

module.exports = function getPool() {
  if (!pool) {
    pool = createPool({ host: 'db', max: 10 });
  }
  return pool;
};

That moves the side effect into an explicit call.

Execution Order and “Why Did This Run First?”

If Module A require()s Module B, B runs before A completes. That means any top-level work in B happens before A’s module.exports is finalized. That can matter for configuration modules, logger setup, or environment initialization.

A pattern I recommend is: do environment loading and configuration validation in a single, explicit entry module (often index.js or bootstrap.js) and keep other modules clean. This gives you a predictable order and keeps module loading deterministic.
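A minimal sketch of that "validate first" idea, written as a pure helper the entry module can call before requiring anything with side effects (the variable names and required list here are illustrative, not from the original article):

```javascript
// Validate required environment variables before loading side-effecting modules.
function validateEnv(env, required) {
  const missing = required.filter(name => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing env vars: ${missing.join(', ')}`);
  }
}

// In ./bootstrap.js, call this first, then do the side-effecting requires:
validateEnv({ NODE_ENV: 'test', REGION: 'us-east-1' }, ['NODE_ENV', 'REGION']);
// require('./db'); require('./metrics'); ...
```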

Step 4: Returning Exports

After execution, Node returns the module’s exports. This is where a lot of mistakes happen, mostly around module.exports vs exports.

The Rule I Use

  • module.exports is the actual exported value.
  • exports is just a reference to module.exports.

So if you do this:

exports.add = (a, b) => a + b;

It works, because you’re adding a property to the existing object.

But if you do this:

exports = function add(a, b) { return a + b; };

You break the link. exports now points to a new function, but module.exports still points to the original object. The module exports an empty object instead of your function.
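You can see the mechanics without even writing a module file. This self-contained simulation mimics what Node does: it hands your code a local binding that starts out pointing at module.exports.

```javascript
// Simulate the exports/module.exports aliasing inside a CommonJS wrapper.
const fakeModule = { exports: {} };
let exports_ = fakeModule.exports; // what Node passes to your code

exports_.add = (a, b) => a + b;    // fine: mutates the shared object
exports_ = function add() {};      // breaks the link: only the local binding changes

console.log(typeof fakeModule.exports.add);   // 'function' — the property stuck
console.log(fakeModule.exports === exports_); // false — the reassignment was lost
```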

I recommend always writing one of these two patterns consistently:

// Pattern A: Single export
module.exports = function add(a, b) { return a + b; };

// Pattern B: Multiple exports
module.exports = { add, subtract, multiply };

Practical Example

// ./math.js
function add(a, b) { return a + b; }
function subtract(a, b) { return a - b; }

module.exports = { add, subtract };

// ./app.js
const math = require('./math');
console.log(math.add(3, 4)); // 7

Keep it simple and explicit. In teams, I push for a shared convention to avoid debugging “why is this undefined?” at 2 a.m.

Exports as a Function vs Object

Choosing between a function export and object export is a design decision:

  • Function export: good for single-purpose modules, factories, or classes.
  • Object export: good for utility libraries or multiple related functions.

I default to a single function export when the module has one main responsibility, because it’s easier to reason about and test. I use object exports when I know the module will evolve to include multiple helpers.

Step 5: Caching

Caching is the part that surprises people, especially in testing or REPL workflows. Node caches modules after the first require(). Every future require() returns the same module.exports object reference.

Here’s an example you can run:

// ./counter.js
let count = 0;
count += 1;
console.log('module executed, count =', count);

module.exports = {
  getCount: () => count,
};

// ./app.js
const a = require('./counter');
const b = require('./counter');

console.log(a.getCount()); // 1
console.log(b.getCount()); // 1
console.log(a === b);      // true

Output:

  • The “module executed” log appears only once.
  • a and b are the same object.

This caching is great for performance, but you must remember that mutable exports are shared across all require calls.

Clearing the Cache (When You Must)

In tests or dynamic loading, you might want to reload a module:

// ./reload.js
const id = require.resolve('./counter');
delete require.cache[id];

const fresh = require('./counter');

I use this in test environments, but I avoid it in production code. It’s a foot-gun if you do it casually.

Caching and Multi-Process Systems

If you run Node with clustering or multiple processes, each process has its own module cache. That means a “singleton” in code is only a singleton per process, not per machine. If you want cross-process shared state, you need a shared store (Redis, database, IPC, etc.) or a coordination layer.

This is why a module-level cache is great for performance but not for distributed consistency.

A Full Walkthrough Example

Here’s a more complete example that shows all five steps in action.

// ./lib/config.js

console.log(‘config module executed‘);

module.exports = { featureFlag: true };

// ./lib/app.js

const config = require(‘./config‘);

module.exports = function run() {

if (config.featureFlag) {

console.log(‘Feature is on‘);

}

};

// ./index.js

const run = require(‘./lib/app‘);

run();

What happens:

  • Node resolves ./lib/app and ./lib/config.
  • Each module is wrapped.
  • config executes first, printing once.
  • app exports run.
  • Both modules are cached.

If you require ./lib/app again later, config will not execute again.

Common Mistakes and How I Avoid Them

Here are the mistakes I see most often, and the fixes that actually work in practice:

Mistake 1: Assuming require() Runs Every Time

If you expect a module to run on every require(), you’ll get burned. Put repeatable logic in a function and export it.

Mistake 2: Mutating Shared State Unintentionally

If a module exports an object and you mutate it, every other part of your app sees the same mutation. That’s by design, but it can be dangerous. I recommend freezing config objects or exporting factory functions if you need isolation.

// ./config.js
const config = Object.freeze({ region: 'us-east-1' });

module.exports = config;
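One caveat worth knowing: Object.freeze is shallow, so nested objects remain mutable. A small recursive sketch (assuming the config tree has no cycles) freezes the whole structure:

```javascript
// Recursively freeze a config tree. Shallow Object.freeze alone would
// leave nested objects like `retry` mutable.
function deepFreeze(obj) {
  for (const value of Object.values(obj)) {
    if (value !== null && typeof value === 'object') {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const config = deepFreeze({ region: 'us-east-1', retry: { max: 3 } });

try {
  config.retry.max = 99; // throws in strict mode, silently ignored otherwise
} catch (e) {}
console.log(config.retry.max); // 3
```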

Mistake 3: Circular Dependencies

Circular requires can return partially initialized exports. Node handles it by exporting what it has at that moment, but you can get undefined or half-built objects.

If you can’t avoid a cycle, restructure to export functions that resolve dependencies at call time, not at module load time.

Mistake 4: Different Paths for Same Module

Use absolute paths or consistent relative paths. In big repos, I often set up a path alias and configure tooling to enforce it.

Mistake 5: Hidden Startup Cost

People require() heavy modules at startup and then complain that boot time is slow. This isn’t just about size; it’s also about what runs at top level. When I optimize cold start, I first move heavy work out of module scope and into explicit init functions.

Mistake 6: Changing module.exports After Async Code

If you do something like this:

// ./bad-async.js
module.exports = {};

setTimeout(() => {
  module.exports.ready = true;
}, 1000);

Consumers that require the module will see the empty object at load time and may read ready before it exists. If you need async initialization, export an async function or a promise instead.

// ./good-async.js
module.exports = async function init() {
  await new Promise(r => setTimeout(r, 1000));
  return { ready: true };
};

When to Use require() vs When Not To

I’ll make this opinionated because generic advice wastes time.

Use require() when:

  • You are in a CommonJS environment (many CLI tools, older libraries).
  • You need dynamic loading based on runtime conditions.
  • You are writing scripts that will run in older Node runtimes without ESM support.

Avoid require() when:

  • You control the runtime and can use ESM consistently.
  • You rely on tree-shaking for bundling.
  • You want import syntax benefits like static analysis and top-level await.

Traditional vs Modern Approach

Here’s a direct comparison that helps in decision-making:

Traditional (CommonJS) vs. Modern (ESM):

  • Syntax: const fs = require('fs') vs. import fs from 'fs'
  • Dynamic loading: trivial with require() vs. done through import() in ESM
  • Caching and execution: implicit vs. explicit, spec-defined evaluation order
  • Tooling: works in most older tooling vs. requires Node ESM configuration

I still reach for require() in scripts and internal tools because it’s fast to use and easy to reason about, but I avoid mixing it with ESM unless I have to.

Performance Considerations

Most require() calls are cheap after the first load. In production apps, I typically see warm require() calls in the sub-millisecond range, while cold loads can be a few milliseconds depending on disk and module size. The exact numbers will vary by machine and workload, but you should still follow a few rules:

  • Avoid require() inside hot loops. It’s cached, but it still does a hash lookup and can slow tight loops.
  • If you need dynamic modules, batch them during startup or use import() with async control.
  • Keep module initialization light; defer heavy work.

A Simple Micro-Benchmark Pattern

When you need to verify a suspicion, you can run a crude benchmark:

// ./bench-require.js
console.time('first');
require('./heavy-module');
console.timeEnd('first');

console.time('second');
require('./heavy-module');
console.timeEnd('second');

This won’t give lab-grade accuracy, but it will show the shape of the cost and whether caching is doing what you expect.

Preloading and Startup Strategy

For services where cold start matters (serverless or autoscaling), I often do a startup pass that preloads the essential modules and does minimal initialization. If a module is heavy, I move the heavy work to an explicit init() that can be called after the service is warmed.

Edge Cases You Should Know

Native Addons (.node)

If a module resolves to a .node file, Node will load a compiled native addon. That’s still part of the same lifecycle, but loading time can be significantly higher and errors can be more complex.

The practical takeaway: treat .node addons as “heavy modules” and avoid loading them in hot paths if possible.

JSON Modules

You can require() JSON, and Node will parse it once and cache the resulting object. That object is mutable unless you freeze it.

// ./settings.json
{ "retry": 3 }

// ./app.js
const settings = require('./settings.json');
settings.retry = 10; // This mutation is shared!

If you expect the JSON to be immutable configuration, freeze it, or clone it before use.

Requiring a Directory Without main

If you require('./foo') and ./foo/package.json has no main, Node falls back to ./foo/index.js. I personally avoid relying on this in big projects because it makes module boundaries less explicit. The more people and tools involved, the more explicit you want to be.

Requiring a File That Doesn’t Exist Yet

If you dynamically generate a file and then require it in the same process, you might hit a race condition. Always ensure file creation is complete before requiring it. I’ve seen flaky tests caused by asynchronous writes followed by immediate require().

Circular Dependencies: How to Survive Them

Circular dependencies are a recurring pain. Node handles them by returning the partially initialized module.exports of the module that is still loading. That means require() may return an object that doesn’t have all the properties you expect yet.

An Example of a Cycle

// ./a.js
const b = require('./b');
module.exports = { name: 'A', fromB: b.name };

// ./b.js
const a = require('./a');
module.exports = { name: 'B', fromA: a.name };

Depending on load order, one of these fromX values will be undefined.

Practical Fixes

  • Move shared dependencies into a third module.
  • Export functions that access dependencies at call time, not at module load.
  • Avoid deep mutual imports across layers. Enforce boundaries (e.g., “models can’t import services”).

I’ve found that most cycles are architectural signals: two modules are doing too much or are coupled in a way that should be made explicit.

Debugging require() in Real Systems

When a module behaves oddly, I walk through this checklist:

  • What exact string is passed to require()?
  • How does Node resolve it (core, path, node_modules)?
  • Has it already been required and cached?
  • Are there side effects at top level?
  • Is anything mutating exported objects?
  • Do any circular dependencies exist?

That mental model saves a lot of time, and you can automate parts of it with tracing tools or by temporarily logging require.cache during debugging sessions.

Quick require.cache Inspection

You can dump cache keys to see what’s loaded:

// ./dump-cache.js
console.log(Object.keys(require.cache));

If you see two entries that point to the same file under different paths, you’ve found a duplicate instance problem.

Practical Patterns I Recommend

Pattern: Export a Factory

When you need isolated instances:

// ./client.js
module.exports = function createClient(options) {
  return { options, id: Math.random().toString(36).slice(2) };
};

Pattern: Lazy Initialization

When startup time matters:

// ./metrics.js
let client;

module.exports = function getClient() {
  if (!client) {
    client = createMetricsClient();
  }
  return client;
};

Pattern: Immutable Config

When you want to avoid accidental mutation:

// ./config.js
const config = Object.freeze({
  env: process.env.NODE_ENV || 'development',
  region: process.env.REGION || 'us-east-1',
});

module.exports = config;

Pattern: Runtime-Selected Modules

When you need conditional logic based on environment:

// ./logger.js
const isProd = process.env.NODE_ENV === 'production';

module.exports = isProd
  ? require('./logger-prod')
  : require('./logger-dev');

This is common in CLIs and services where you want different backends without bundling everything.

Pattern: Wrapper Module to Normalize Exports

If you depend on packages that export differently (some use default exports, some use named exports), build a wrapper:

// ./http-client.js
const axios = require('axios');

module.exports = function request(options) {
  return axios(options);
};

This keeps the rest of your codebase consistent even if the underlying library changes.

Working with Mixed ESM and CommonJS

In 2026, a lot of repos are mixed. I still see:

  • App code in ESM
  • CLI scripts in CommonJS
  • Test configs in CommonJS
  • Dependencies with a mix of exports entries

Here’s how I keep my sanity:

  • Keep boundaries clear: ESM app code should import from CommonJS with careful wrappers.
  • Don’t require() ESM unless you know your runtime supports it: older Node versions throw ERR_REQUIRE_ESM, while recent releases (Node 22+) can require() synchronous ESM graphs.
  • If you need to import CommonJS from ESM, use import and treat it as a default export.

A safe pattern is to define a bridge file. If your main app is ESM but you need a CommonJS dependency, wrap it in a small adapter and keep it isolated. This avoids peppering interoperability logic across your codebase.

Production Considerations: Deployment, Monitoring, Scaling

Deployment and Cold Starts

If your service cold-starts frequently, the cost of module loading can dominate. I usually measure:

  • Time to require the main entry file
  • Time spent in top-level initialization
  • Which modules show up in early require stack traces

If cold starts are too slow, I split initialization into two phases: “load lightweight modules quickly” and “init heavy dependencies after the service is listening.”

Monitoring for Module-Related Bugs

I’ve shipped a few incidents caused by accidental double-loading. The fix was almost always: add a startup log that records the resolved paths of the core modules and ensure they’re identical across environments. That’s a simple, low-cost guardrail.

Scaling Across Processes and Containers

Remember: each Node process has its own cache. In clustered or containerized deployments, you’ll have multiple independent caches. If a module holds state that must be consistent across workers (like a rate limiter), you need shared state outside the module system.

Testing Strategies with require()

Tests are the place where module cache surprises show up first. When you use a single process test runner, module state leaks between tests unless you reset it.

Strategy 1: Clear Cache in Setup

// ./test-setup.js
const clear = id => { delete require.cache[require.resolve(id)]; };

beforeEach(() => {
  clear('../src/config');
  clear('../src/db');
});

Strategy 2: Design for Stateless Modules

A better long-term approach is to keep modules stateless and inject dependencies. If each test constructs its own instance using a factory export, you rarely need to clear caches.

Strategy 3: Run Tests in Isolated Processes

Some test runners can isolate each test file in a separate process. That’s a good tradeoff when debugging caching-related bugs, because each process gets a clean module cache by default.

A Deeper Mental Model: require() as a Loader + Registry

I think of require() as two systems:

  • A loader that resolves a string into a file and runs it in a wrapper.
  • A registry (the cache) that stores exports under a key.

When something feels “off,” it’s usually one of these:

  • The loader resolved a different file than you intended.
  • The registry is already populated from an earlier run.
  • You mutated the export in a way you didn’t expect.

Once you see those as separate components, debugging becomes more mechanical.

AI-Assisted Workflows and require() in 2026

AI refactors are great, but they can introduce subtle module bugs. I’ve seen:

  • module.exports = fn replaced with exports.default = fn (breaking CommonJS consumers).
  • Path changes that accidentally create duplicate module instances.
  • Auto-updates that add ESM exports fields and break require() for internal paths.

My guardrails:

  • Run a quick “module integrity” test after refactors: check that core modules resolve to the expected paths and that require() returns the expected shape.
  • Prefer explicit export patterns and avoid relying on implicit exports behavior.
  • Keep high-level modules small and predictable; tools are less likely to introduce subtle bugs in small modules.

Alternative Approaches and When They Make Sense

Sometimes require() isn’t the right tool. Alternatives include:

  • import() for dynamic loading in ESM.
  • Dependency injection frameworks or container registries for large apps.
  • Build-time bundling to make module boundaries explicit.

If you’re building a long-lived service with complex dependencies, a DI container can make dependencies explicit and reduce the “magic” of require(). But for small tools, require() is still the fastest and cleanest choice.

A Real-World Scenario: Config, Env, and Hot Reload

Imagine a dev server that reloads configuration when env vars change. Naively, you might do:

// ./config.js
module.exports = {
  region: process.env.REGION || 'us-east-1',
};

If you change REGION and re-require config, you’ll still see the old value because the module is cached. The correct approach is to export a function:

// ./config.js
module.exports = function getConfig() {
  return { region: process.env.REGION || 'us-east-1' };
};

That way you can re-read process.env whenever you call it. This is a pattern I use often in dev tooling.
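The load-time-vs-call-time difference is easy to demonstrate in a single file, without the module cache involved at all:

```javascript
// A value captured once at load time vs. a function that re-reads
// process.env on every call.
const atLoad = { region: process.env.REGION || 'us-east-1' };
const getConfig = () => ({ region: process.env.REGION || 'us-east-1' });

process.env.REGION = 'eu-west-1';
console.log(getConfig().region); // 'eu-west-1' — the function sees the change
// atLoad.region still holds whatever REGION was when this line executed
```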

Security and Supply Chain Implications

The require() system can be a supply chain attack surface because it loads files by path. A few practical tips:

  • Avoid dynamic require() with user input.
  • Validate paths when loading plugins.
  • Prefer explicit allowlists for plugin names.

If you build plugin systems, I recommend mapping plugin names to absolute paths yourself rather than letting arbitrary strings reach require().

A Quick Checklist for Production-Ready Modules

When I review a module in production code, I look for:

  • Are exports explicit and stable?
  • Are top-level side effects minimal?
  • Does the module rely on mutable shared state?
  • Does it behave correctly under multiple processes?
  • Are require() paths consistent and canonical?

If the answer is “yes” across those, the module is usually solid.

Key Takeaways

require() is simple on the surface, but it’s also one of the most important mechanisms in Node. The five-step lifecycle—resolve, wrap, execute, export, cache—explains almost every weird runtime behavior you’ll see. Once you internalize that, you can predict module behavior, debug faster, and design modules that scale cleanly.

If you take only one thing from this, take this: require() is not a function that “just loads a file.” It’s a loader, a wrapper, a runtime executor, and a registry all in one. Treat it with that level of respect, and your code will be more predictable, more testable, and much easier to maintain.
