Production bugs around async flows still surprise experienced JavaScript teams. I see the same pattern over and over: someone thinks a value is ready, reads it too early, and now your API response is empty, your UI flickers, or your logs show events out of order. Most of these bugs are not about syntax. They come from misunderstanding callbacks.
A callback is simply a function you pass into another function so it can run later. That sounds basic, but this one idea powers timers, event listeners, file I/O, network requests, streams, and most plugin systems. If you understand callbacks deeply, you can reason about execution order, avoid race conditions, and read older codebases with confidence. If you only know async/await, you will still hit walls when you touch event APIs or Node core modules.
I want to make this practical. I will show where callbacks are still the right tool, where they become painful, and how I decide between callbacks, Promises, and async/await in modern code. You will get runnable examples, error-handling patterns that hold up in production, and a clear migration path for callback-heavy code.
Callback Basics That Actually Matter
At face value, a callback is straightforward:
function greet(name, callback) {
console.log("Hello, " + name);
callback();
}
function sayBye() {
console.log("Goodbye!");
}
greet("Ajay", sayBye);
This prints the greeting first, then goodbye. The important detail is not "passing a function." The important detail is who controls when that function runs.
When you call greet("Ajay", sayBye), you are handing control to greet. From that moment, greet decides whether to call callback now, later, once, many times, or never. That control transfer is the core mental model.
I recommend you ask these four questions whenever you see a callback parameter:
- When is it called?
- How many times is it called?
- With what arguments?
- What happens if it throws?
If those answers are unclear, the API is risky, even if the code "works" in a demo.
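Those answers can live right next to the parameter itself. Here is a sketch for a hypothetical pollUntil helper (the name and shape are mine, not a standard API) with the contract written into its doc comment:

```javascript
/**
 * Polls `check` until it returns true, then reports readiness.
 * The four answers, written into the contract:
 * - When is `onReady` called?  After `check()` first returns true.
 * - How many times?            At most once.
 * - With what arguments?       The number of attempts it took.
 * - What if it throws?         The error escapes the timer tick (uncaught).
 */
function pollUntil(check, onReady, intervalMs = 100) {
  let attempts = 0;
  const timer = setInterval(() => {
    attempts += 1;
    if (check()) {
      clearInterval(timer);
      onReady(attempts);
    }
  }, intervalMs);
}
```

A caller can now answer all four questions without reading the implementation, which is exactly the point.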
Here is a tiny but useful upgrade: naming your callbacks by intent.
function processOrder(order, onSuccess, onFailure) {
try {
const receipt = { id: "R-1024", total: order.total };
onSuccess(receipt);
} catch (error) {
onFailure(error);
}
}
onSuccess and onFailure tell you more than cb. I still see cb everywhere, but clear names reduce mistakes, especially when teams move quickly.
Callbacks are often introduced as a beginner concept, yet this is where architecture starts: API contracts, ownership of timing, and failure behavior.
Synchronous Callbacks: Small Pattern, Big Payoff
Not all callbacks are async. Many of the best callback designs are synchronous and intentionally simple.
Array methods are the classic example:
const prices = [29, 49, 99];
const withTax = prices.map((price) => price * 1.08);
const premium = withTax.filter((price) => price > 50);
console.log(withTax); // roughly [31.32, 52.92, 106.92] (expect floating-point noise)
console.log(premium); // roughly [52.92, 106.92]
map and filter call your callback immediately for each element. No event loop delay. No hidden thread. Just deterministic execution.
I use synchronous callbacks heavily for three tasks:
- Data shaping (map, reduce, sort with custom comparators)
- Policy injection (pass different behavior into shared workflows)
- Domain rules (pricing, validation, scoring)
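The comparator case is worth a quick look, since sort is the same injection pattern with a two-argument callback:

```javascript
// Policy injection via comparators: one sort workflow, two ordering rules.
const orders = [{ total: 99 }, { total: 29 }, { total: 49 }];

const byTotalAsc = (a, b) => a.total - b.total;
const byTotalDesc = (a, b) => b.total - a.total;

// Spread first so the original array stays untouched (sort mutates in place).
const cheapestFirst = [...orders].sort(byTotalAsc);
const priciestFirst = [...orders].sort(byTotalDesc);

console.log(cheapestFirst[0].total); // 29
console.log(priciestFirst[0].total); // 99
```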
A practical example:
function calculate(a, b, operation) {
return operation(a, b);
}
function add(x, y) {
return x + y;
}
function multiply(x, y) {
return x * y;
}
console.log(calculate(5, 3, add)); // 8
console.log(calculate(5, 3, multiply)); // 15
This pattern keeps core workflows stable while behavior changes through small callback functions.
Two mistakes I often fix:
- Hiding side effects inside callbacks passed to array methods.
- Mixing sync and async callbacks in the same API.
For example, if a function sometimes calls back right away and sometimes after setTimeout, callers get weird timing bugs. Pick one execution model and stick to it.
If your callback API is synchronous, document that clearly. If it is asynchronous, document that clearly. Ambiguity causes more bugs than complexity.
Asynchronous Callbacks and the Event Loop
JavaScript executes one call stack at a time, but asynchronous APIs schedule callback execution for later. That is why this runs in a surprising order:
console.log("Start");
setTimeout(() => {
console.log("Inside setTimeout");
}, 2000);
console.log("End");
Output:
Start
End
Inside setTimeout
setTimeout registers a callback and returns immediately. The runtime stores your callback and queues it to run once the delay has expired and the call stack is clear.
I explain this with a restaurant analogy: placing an order does not block you at the counter until the meal is ready. You get a number, step aside, and your number is called later. The callback is that number.
Real async callback use cases:
- Network calls
- File system access in Node.js
- User events in browsers
- Stream chunks and completion handlers
- Background tasks in worker APIs
A safe fetch wrapper with callback style:
function fetchTodo(onDone, onError) {
fetch("https://jsonplaceholder.typicode.com/todos/1")
.then((response) => response.json())
.then((data) => onDone(data))
.catch((error) => onError(error));
}
fetchTodo(
(todo) => {
console.log("Fetched:", todo.title);
},
(error) => {
console.error("Request failed:", error.message);
}
);
Yes, this uses Promises under the hood. That is normal in modern code. Callback-friendly wrappers are still useful when integrating with older APIs, plugin systems, or UI frameworks expecting handlers.
Performance-wise, callback scheduling overhead is usually tiny compared with I/O. In typical app code, callback dispatch often lands in the low microseconds to sub-millisecond range, while network latency is usually 20-300ms and disk access may be 1-20ms depending on workload. In other words, focus on correctness and API clarity first.
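If you want to sanity-check that claim on your own runtime, a rough micro-measurement is easy to sketch. Treat the result as order-of-magnitude only; it varies by engine, JIT warm-up, and what the callback actually does:

```javascript
// Rough dispatch-cost probe: invoke a trivial callback many times and average.
function dispatch(cb) {
  cb();
}

const iterations = 1_000_000;
const start = performance.now(); // global in modern browsers and Node.js
let counter = 0;
for (let i = 0; i < iterations; i++) {
  dispatch(() => {
    counter += 1;
  });
}
const elapsedMs = performance.now() - start;
console.log(`~${((elapsedMs / iterations) * 1000).toFixed(3)}µs per dispatch (rough)`);
```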
Error-First Callbacks: Still Relevant in Node.js
If you work with older Node modules or internal tooling, you will see the error-first callback convention:
function callback(error, result) {
// ...
}
error is non-null on failure. result is valid on success.
Example:
function divide(a, b, done) {
if (b === 0) {
done(new Error("Cannot divide by zero"), null);
return;
}
done(null, a / b);
}
divide(10, 2, (error, result) => {
if (error) {
console.error(error.message);
return;
}
console.log("Result:", result);
});
I still recommend this shape when you need compatibility with callback-based interfaces. It gives a consistent calling contract and plays well with utility helpers.
But there are strict rules you should follow:
- Call the callback exactly once.
- Never call success and error for the same operation.
- Return after calling the callback to avoid fall-through bugs.
- Standardize error types.
A common production bug looks like this:
function saveRecord(record, done) {
if (!record.id) {
done(new Error("Missing id"));
}
// Oops: this still runs and calls done again.
done(null, { ok: true });
}
The fix is simple: early return.
function saveRecord(record, done) {
if (!record.id) {
done(new Error("Missing id"));
return;
}
done(null, { ok: true });
}
If double-callback bugs keep showing up, add a guard wrapper:
function once(fn) {
let called = false;
return (...args) => {
if (called) return;
called = true;
fn(...args);
};
}
Then wrap handlers at boundaries. This one helper can remove whole classes of race bugs in callback-heavy systems.
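To see the guard in action, here is a sketch with a deliberately buggy function (the hypothetical flakySave below) that fires its callback twice:

```javascript
// Guard wrapper: drops any invocation after the first.
function once(fn) {
  let called = false;
  return (...args) => {
    if (called) return;
    called = true;
    fn(...args);
  };
}

// Deliberately buggy: calls its callback twice.
function flakySave(done) {
  done(null, "first");
  done(null, "second");
}

let calls = 0;
let lastResult = null;
flakySave(
  once((error, result) => {
    calls += 1;
    lastResult = result;
  })
);

console.log(calls, lastResult); // 1 "first" — the second invoke is silently dropped
```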
Event-Driven Callbacks in the Browser
Browser programming is callback-driven by design. Clicks, key presses, form submissions, visibility changes, media events, drag operations, intersection observers, resize observers, and many accessibility hooks all depend on callbacks.
Basic click handler:
const button = document.getElementById("saveButton");
button.addEventListener("click", () => {
console.log("Saved");
});
This looks trivial, but there are production details you should not skip.
First, avoid anonymous handlers when you need cleanup:
function handleSaveClick() {
console.log("Saved");
}
button.addEventListener("click", handleSaveClick);
// Later, when component/view unmounts
button.removeEventListener("click", handleSaveClick);
If you attach listeners and forget to remove them in dynamic UIs, memory usage grows and handlers fire multiple times.
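One habit that makes cleanup hard to forget: return the teardown from the same helper that attaches the listener. Here is a sketch (listen is my name, not a DOM API); it works on any EventTarget, so it is shown here with a plain one:

```javascript
// Attach a listener and hand back its cleanup in one move.
function listen(target, type, handler, options) {
  target.addEventListener(type, handler, options);
  return () => target.removeEventListener(type, handler, options);
}

// Works with any EventTarget, DOM element or otherwise.
const target = new EventTarget();
let count = 0;
const off = listen(target, "ping", () => {
  count += 1;
});

target.dispatchEvent(new Event("ping")); // handled
off();
target.dispatchEvent(new Event("ping")); // ignored after cleanup
console.log(count); // 1
```

In a component, you store the returned function and call it on unmount, so subscription and teardown always travel together.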
Second, understand event object lifetimes and shape:
document.addEventListener("keydown", (event) => {
if ((event.ctrlKey || event.metaKey) && event.key.toLowerCase() === "s") {
event.preventDefault();
console.log("Custom save shortcut triggered");
}
});
Third, keep callbacks thin. Heavy computation inside UI event callbacks can block rendering and input handling. If work is expensive, schedule it or move it to a worker.
As a rule of thumb, if a callback in response to direct user input takes more than roughly 8-16ms on common hardware, users may notice lag. Split work, batch updates, and defer non-critical tasks.
In modern front-end stacks, you often wrap event callbacks in framework abstractions. The underlying model is still the same: the runtime triggers your function later, and you must treat timing and cleanup as first-class concerns.
Callback Hell: How It Starts and How to Stop It
Nested callbacks are not automatically bad. The issue is uncontrolled nesting, duplicated error branches, and hidden state passing.
Classic shape:
function step1(callback) {
setTimeout(() => {
console.log("Step 1 completed");
callback();
}, 500);
}
function step2(callback) {
setTimeout(() => {
console.log("Step 2 completed");
callback();
}, 500);
}
function step3(callback) {
setTimeout(() => {
console.log("Step 3 completed");
callback();
}, 500);
}
step1(() => {
step2(() => {
step3(() => {
console.log("All steps completed");
});
});
});
This is readable at three steps. At ten steps with branching and retries, it becomes hard to reason about.
I use these techniques before rewriting everything:
- Name every callback function.
- Lift nested functions to top-level helpers.
- Centralize error handling.
- Pass a context object instead of long argument chains.
Refactor pattern:
function runPipeline(done) {
step1(onStep1);
function onStep1(error, result1) {
if (error) return done(error);
step2(result1, onStep2);
}
function onStep2(error, result2) {
if (error) return done(error);
step3(result2, onStep3);
}
function onStep3(error, finalResult) {
if (error) return done(error);
done(null, finalResult);
}
}
Still callback-based, but clearer flow and one terminal callback.
If the workflow is a strict sequence with async operations, moving to Promise chains or async/await is often the better long-term choice. But for event emitters, streams, and repeated notifications, callbacks remain natural.
The key is not "callbacks bad, Promises good." The key is choosing the control-flow model that matches the job.
Promises and async/await: Better Defaults for Sequencing
In 2026, I treat Promises and async/await as defaults for one-shot async tasks. They flatten control flow and make error paths consistent with try/catch.
Promise-based version of the step workflow:
function step1() {
return new Promise((resolve) => {
setTimeout(() => {
console.log("Step 1 completed");
resolve("data-from-step-1");
}, 500);
});
}
function step2(input) {
return new Promise((resolve) => {
setTimeout(() => {
console.log("Step 2 completed");
resolve(input + " -> data-from-step-2");
}, 500);
});
}
function step3(input) {
return new Promise((resolve) => {
setTimeout(() => {
console.log("Step 3 completed");
resolve(input + " -> final-data");
}, 500);
});
}
step1()
.then(step2)
.then(step3)
.then((result) => console.log("All steps completed:", result))
.catch((error) => console.error("Pipeline failed:", error));
async/await version:
async function runSteps() {
try {
const r1 = await step1();
const r2 = await step2(r1);
const r3 = await step3(r2);
console.log("All steps completed:", r3);
} catch (error) {
console.error("Pipeline failed:", error);
}
}
runSteps();
Here is how I choose in real projects:
- Event subscriptions: callback style is the best fit; Promises work, but are verbose.
- Sequential workflows: callback style can become nested fast; Promise chains and async/await stay flat.
- Error handling: callback style needs manual branching; Promises give you native try/catch or .catch().
Specific guidance I give teams:
- Use callbacks for event subscriptions and push-style notifications.
- Use Promises or async/await for request-response workflows.
- Do not wrap everything "just because." Keep native API shape when it is already clear.
Bridging Old and New Code Without Breakage
Most real codebases are mixed. You will see callback APIs next to Promise-based services. The migration goal is gradual safety, not big-bang rewrites.
Convert callback API to Promise with promisify pattern:
import fs from "node:fs";
function readFilePromise(path, encoding = "utf8") {
return new Promise((resolve, reject) => {
fs.readFile(path, encoding, (error, data) => {
if (error) {
reject(error);
return;
}
resolve(data);
});
});
}
async function loadConfig() {
try {
const text = await readFilePromise("./config.json");
const config = JSON.parse(text);
console.log("Config loaded", config);
} catch (error) {
console.error("Could not load config:", error.message);
}
}
loadConfig();
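Node also ships this exact conversion as util.promisify, so for standard error-first APIs you usually do not need the hand-written wrapper:

```javascript
import { promisify } from "node:util";
import fs from "node:fs";

// promisify assumes the Node error-first (error, result) callback shape.
const readFileP = promisify(fs.readFile);

async function loadConfig(path) {
  const text = await readFileP(path, "utf8");
  return JSON.parse(text);
}
```

For fs specifically, node:fs/promises exposes a promise API directly and skips the wrapper entirely; promisify earns its keep with third-party or internal error-first APIs.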
Convert Promise API to callback when needed:
function toCallback(promise, done) {
promise
.then((value) => done(null, value))
.catch((error) => done(error));
}
function getUserWithCallback(userId, done) {
const promise = fetch(`https://api.example.com/users/${userId}`).then((r) => r.json());
toCallback(promise, done);
}
I also recommend typed contracts in TypeScript for callback-heavy modules. Even if your app is mostly JavaScript, adding declaration files around boundary modules catches wrong callback shapes and argument order mistakes before they ship.
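Even without a full TypeScript build, JSDoc callback typedefs get you much of the benefit, because TypeScript-aware editors check them in plain .js files. A sketch (the names are illustrative):

```javascript
/**
 * @callback UserDone
 * @param {Error|null} error
 * @param {{ id: string, name: string }} [user] Present only when error is null.
 */

/**
 * @param {string} userId
 * @param {UserDone} done Called exactly once, always asynchronously.
 */
function getUser(userId, done) {
  queueMicrotask(() => {
    done(null, { id: userId, name: "demo" });
  });
}
```

Now a caller who swaps the argument order, or forgets the error parameter, gets a squiggle instead of a production bug.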
Callback Contracts: Design Them Like You Mean It
Most callback pain is really contract pain. When the contract is clear, callback-based code can be boring and reliable. When the contract is vague, every usage becomes a guessing game.
When I design a callback-style API, I make these decisions explicit and consistent:
- Sync or async: Always async is often safer for public APIs.
- Single-shot or multi-shot: One result, or a stream of updates?
- Error channel: error-first callback, a separate onError, or emitted errors?
- Cancellation: can the caller stop it?
- Cleanup: Who removes listeners or frees resources?
Here is a compact example of a callback contract written into code. This function is always async (even for cache hits), calls back at most once, and returns a cancel function.
function createUserLoader() {
const cache = new Map();
function loadUser(userId, done) {
let cancelled = false;
// Always async to avoid "sometimes sync" timing bugs.
queueMicrotask(() => {
if (cancelled) return;
if (cache.has(userId)) {
done(null, cache.get(userId));
return;
}
fetch(`/api/users/${encodeURIComponent(userId)}`)
.then((r) => {
if (!r.ok) throw new Error(`HTTP ${r.status}`);
return r.json();
})
.then((user) => {
if (cancelled) return;
cache.set(userId, user);
done(null, user);
})
.catch((err) => {
if (cancelled) return;
done(err);
});
});
return () => {
cancelled = true;
};
}
return { loadUser };
}
const { loadUser } = createUserLoader();
const cancel = loadUser("123", (err, user) => {
if (err) return console.error(err.message);
console.log("Loaded", user);
});
// If the UI navigates away before the request completes:
cancel();
This style (a callback plus a returned cancel function) is a simple alternative to inventing your own abort protocol. It’s not perfect for everything, but it is easy to teach and hard to misuse.
The "Zalgo" Problem: When Callbacks Sometimes Run Synchronously
One of the most expensive classes of callback bugs comes from a single behavior: a function that sometimes calls the callback synchronously and sometimes asynchronously. People nickname this “releasing Zalgo,” and the name sticks because the results feel cursed.
Here is a realistic bug pattern:
function getCachedOrFetch(key, done) {
const cached = sessionStorage.getItem(key);
if (cached) {
// Synchronous path
done(null, JSON.parse(cached));
return;
}
// Asynchronous path
fetch(`/api/data?key=${encodeURIComponent(key)}`)
.then((r) => r.json())
.then((data) => done(null, data))
.catch((err) => done(err));
}
let loading = true;
getCachedOrFetch("k", (err, data) => {
loading = false;
console.log("loading?", loading);
});
console.log("after call, loading?", loading);
Depending on cache state, loading flips before or after the last console.log. In UIs, that becomes flicker. In servers, that becomes responses that sometimes include fields and sometimes don’t.
My default fix is to make the callback always async:
function getCachedOrFetch(key, done) {
const finish = (...args) => queueMicrotask(() => done(...args));
const cached = sessionStorage.getItem(key);
if (cached) {
try {
finish(null, JSON.parse(cached));
} catch (e) {
finish(e);
}
return;
}
fetch(`/api/data?key=${encodeURIComponent(key)}`)
.then((r) => r.json())
.then((data) => finish(null, data))
.catch((err) => finish(err));
}
Using queueMicrotask is a nice middle ground: it yields control back to the current stack without a full timer delay. (In environments where queueMicrotask isn’t available, Promise.resolve().then(...) is the usual fallback.)
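That fallback can be folded into a small reusable wrapper (a sketch; makeAlwaysAsync is my name for it):

```javascript
// Force any callback to fire asynchronously, whichever code path produced it.
function makeAlwaysAsync(done) {
  const defer =
    typeof queueMicrotask === "function"
      ? queueMicrotask
      : (fn) => Promise.resolve().then(fn); // fallback for older runtimes
  return (...args) => defer(() => done(...args));
}

let state = "before";
const finish = makeAlwaysAsync(() => {
  state = "after";
});

finish();
console.log(state); // "before" — the callback is deferred past the current stack
```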
Defensive Callback Wrappers I Actually Use
I keep a small toolbox of wrappers for callback-heavy modules. They pay for themselves quickly because they turn silent failure modes into loud ones.
1) once: prevent double-invoke
You already saw a basic once. I often use a stricter variant in development that throws if invoked twice, because silent drops can hide serious logic errors.
function onceStrict(fn) {
let called = false;
return (...args) => {
if (called) {
throw new Error("Callback invoked more than once");
}
called = true;
return fn(...args);
};
}
In production you might prefer the silent version to avoid crashing, but in tests and local dev I want it to fail fast.
2) withTimeout: fail instead of hanging
Hanging callbacks are brutal because nothing “looks wrong,” it just never finishes. I wrap high-risk boundaries (network calls, file ops, third-party SDKs) with timeouts.
function withTimeout(done, ms, label = "operation") {
let timer = setTimeout(() => {
timer = null;
done(new Error(`${label} timed out after ${ms}ms`));
}, ms);
return (...args) => {
if (!timer) return;
clearTimeout(timer);
timer = null;
done(...args);
};
}
function simulateNeverCallsBack(done) {
// intentionally empty
}
simulateNeverCallsBack(withTimeout((err) => {
console.error(err.message);
}, 1000, "simulate"));
This turns “stuck” into a normal error path you can handle.
3) safe: isolate exceptions thrown inside callbacks
If you call a callback and it throws, what should happen? In many event systems, an exception inside a handler can break the entire dispatch loop or cause inconsistent state.
When I’m writing a dispatcher or plugin host, I treat handler exceptions as data:
function safe(fn, onError) {
return (...args) => {
try {
return fn(...args);
} catch (err) {
onError(err);
}
};
}
function emit(listeners, value) {
for (const listener of listeners) {
safe(listener, (err) => {
console.error("Listener failed:", err.message);
})(value);
}
}
This keeps one bad listener from taking down the whole system.
Node-Style Callbacks in Practice: Wrapping a Real Boundary
Let’s make this concrete with a mini “write file atomically” helper using Node-style callbacks. This is the kind of function that shows up in CLIs, build tools, and internal services.
Requirements I care about:
- Write to a temp file first.
- Rename to the final path (atomic on most filesystems).
- Ensure the callback is called exactly once.
- Surface meaningful errors.
import fs from "node:fs";
import path from "node:path";
function writeFileAtomic(filePath, data, done) {
done = once(done);
const dir = path.dirname(filePath);
const base = path.basename(filePath);
const tmp = path.join(dir, `.${base}.${process.pid}.${Date.now()}.tmp`);
fs.writeFile(tmp, data, (err) => {
if (err) return done(err);
fs.rename(tmp, filePath, (err2) => {
if (err2) {
// Best-effort cleanup. Don't hide the original rename error.
fs.unlink(tmp, () => done(err2));
return;
}
done(null);
});
});
}
writeFileAtomic("./out.txt", "hello\n", (err) => {
if (err) return console.error("write failed", err.message);
console.log("write ok");
});
This example is intentionally callback-based because the core Node APIs are callback-based in many environments, and because it illustrates a key point: callbacks are a good fit when you are orchestrating multiple “completion” signals with early exits.
If you later want to migrate this to async/await, you can do it by wrapping fs.writeFile and fs.rename as Promises. But the correctness rules (once, return after done, don’t double-call) remain the same.
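For reference, here is what that migration can look like, a sketch using node:fs/promises under the same correctness rules:

```javascript
import { writeFile, rename, unlink } from "node:fs/promises";
import path from "node:path";

async function writeFileAtomicAsync(filePath, data) {
  const dir = path.dirname(filePath);
  const base = path.basename(filePath);
  const tmp = path.join(dir, `.${base}.${process.pid}.${Date.now()}.tmp`);
  try {
    await writeFile(tmp, data);
    await rename(tmp, filePath); // atomic on most filesystems
  } catch (err) {
    await unlink(tmp).catch(() => {}); // best-effort cleanup; keep the original error
    throw err;
  }
}
```

Note how try/catch replaces the early-return discipline: there is one exit path for success, one for failure, and no way to "call back twice."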
Event Emitters, Streams, and Backpressure: Where Callbacks Still Shine
Promises are great for “one result.” They are awkward for “a million results.” That’s why event emitters and streams continue to be callback-heavy.
Event emitters: repeated notifications
A common pattern is: register a callback, receive updates until you unsubscribe.
function createTicker(intervalMs = 1000) {
const listeners = new Set();
let timer = null;
function start() {
if (timer) return;
timer = setInterval(() => {
const now = new Date();
for (const fn of listeners) fn(now);
}, intervalMs);
}
function stop() {
if (!timer) return;
clearInterval(timer);
timer = null;
}
function subscribe(fn) {
listeners.add(fn);
start();
return () => {
listeners.delete(fn);
if (listeners.size === 0) stop();
};
}
return { subscribe };
}
const ticker = createTicker(500);
const unsubscribe = ticker.subscribe((t) => console.log("tick", t.toISOString()));
setTimeout(() => {
unsubscribe();
console.log("unsubscribed");
}, 2200);
Notice the callback contract again: multi-shot updates, and a cleanup function.
Streams: flow control matters
In streaming systems, the important problem is not “how do I wait,” it’s “how do I not get overwhelmed.” That is backpressure: the consumer needs a way to slow down the producer.
Even if you don’t use Node streams directly, you see this idea everywhere: read a chunk, process it, signal readiness for the next chunk. That readiness signal is often a callback.
Here is a tiny “pull-based” chunk processor to demonstrate the shape:
function processChunks(getNextChunk, onChunk, done) {
done = once(done);
function loop() {
getNextChunk((err, chunk) => {
if (err) return done(err);
if (chunk === null) return done(null);
try {
onChunk(chunk);
} catch (e) {
return done(e);
}
// Ask for the next chunk only after processing this one.
loop();
});
}
loop();
}
function makeChunkSource(chunks) {
let i = 0;
return (done) => {
setTimeout(() => {
if (i >= chunks.length) return done(null, null);
done(null, chunks[i++]);
}, 50);
};
}
processChunks(
makeChunkSource(["a", "b", "c"]),
(c) => console.log("chunk", c),
(err) => {
if (err) return console.error("failed", err.message);
console.log("complete");
}
);
That “call me when you’re ready” shape is hard to represent with a single Promise. You can build async iterators for it, but under the hood there is still a callback or queue managing readiness.
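If you do want the async-iterator surface, the bridge from the getNextChunk shape above is short. A sketch:

```javascript
// Wrap a pull-based callback source in an async generator.
// Assumes the (error, chunk) shape with null signalling end-of-stream.
async function* chunks(getNextChunk) {
  while (true) {
    const chunk = await new Promise((resolve, reject) => {
      getNextChunk((err, c) => (err ? reject(err) : resolve(c)));
    });
    if (chunk === null) return;
    yield chunk;
  }
}

// Example source: yields each item asynchronously, then null.
function makeSource(items) {
  let i = 0;
  return (done) =>
    queueMicrotask(() => done(null, i < items.length ? items[i++] : null));
}

for await (const c of chunks(makeSource(["a", "b", "c"]))) {
  console.log("chunk", c);
}
```

The pull discipline is preserved: each `await` of the iterator asks for exactly one more chunk, so the consumer still paces the producer.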
Practical Scenarios: When I Choose Callbacks on Purpose
I still reach for callbacks in modern code when any of these are true:
- The API is subscription-based: onMessage, onChange, onResize, onChunk.
- The operation yields multiple values over time.
- The caller needs direct control over cleanup (unsubscribe) without involving abort controllers.
- I’m building a plugin system where user code is “called back” by my framework.
Plugin system example (callback as an extension point)
function createPipeline() {
const plugins = [];
function use(plugin) {
plugins.push(plugin);
}
function run(input, done) {
done = once(done);
let ctx = { input, output: null, meta: {} };
try {
for (const plugin of plugins) {
// Plugin can mutate ctx or return a new output.
const result = plugin(ctx);
if (result !== undefined) ctx.output = result;
}
done(null, ctx.output);
} catch (err) {
done(err);
}
}
return { use, run };
}
const p = createPipeline();
p.use((ctx) => {
ctx.meta.startedAt = Date.now();
});
p.use((ctx) => String(ctx.input).trim());
p.run(" hello ", (err, out) => {
if (err) return console.error(err.message);
console.log(out);
});
This is synchronous, but the point is the same: callbacks are a clean boundary when one side (the pipeline) owns the control flow and the other side (the plugin) provides behavior.
Practical Scenarios: When I Avoid Callbacks
I avoid callbacks when:
- The operation is one-shot and naturally await-able.
- I need composition: parallel work, retries, cancellation, and timeouts.
- I want stack traces that make sense across async boundaries.
- The call chain already uses Promises; adding callbacks just adds a second control model.
In these cases, I’ll expose a Promise API and optionally provide a callback adapter for legacy integrations.
Performance Considerations That Matter (and the Ones That Don’t)
A lot of callback discussion gets stuck on “are callbacks faster than Promises?” In normal app work, the bigger performance questions are:
- Are you doing unnecessary work inside hot callbacks (like scroll handlers)?
- Are you accidentally scheduling too many callbacks (like thousands of timers)?
- Are you blocking the event loop (CPU work) in a callback that should be I/O-bound?
If a callback is triggered by an input event (scroll, mousemove, resize), the dispatch frequency can be high. That’s where you need throttling or debouncing.
Debounce: run after the user stops
function debounce(fn, waitMs) {
let timer = null;
return (...args) => {
if (timer) clearTimeout(timer);
timer = setTimeout(() => fn(...args), waitMs);
};
}
window.addEventListener(
"resize",
debounce(() => {
console.log("resized, recalculating layout");
}, 150)
);
Throttle: run at most once per interval
function throttle(fn, intervalMs) {
let last = 0;
let scheduled = false;
return (...args) => {
const now = Date.now();
const remaining = intervalMs - (now - last);
if (remaining <= 0) {
last = now;
fn(...args);
return;
}
if (scheduled) return;
scheduled = true;
setTimeout(() => {
scheduled = false;
last = Date.now();
fn(...args);
}, remaining);
};
}
document.addEventListener(
"scroll",
throttle(() => {
// Avoid heavy work every scroll event.
console.log("scroll tick");
}, 200)
);
These are callback patterns, but they directly impact real performance because they reduce how often expensive code runs.
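Debounce behavior is also easy to verify without a DOM. The same debounce as above, restated so the snippet runs standalone, collapses rapid calls into one trailing run:

```javascript
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

let runs = 0;
const bump = debounce(() => {
  runs += 1;
}, 20);

// Three rapid calls: only the last survives the 20ms quiet window.
bump();
bump();
bump();

setTimeout(() => {
  console.log(runs); // 1
}, 60);
```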
Testing Callback-Based Code Without Losing Your Mind
Callback-heavy code is testable, but you need discipline around termination conditions. The two common test failures are:
- The test finishes before the callback fires.
- The callback fires twice and the test does something weird.
A simple pattern is to wrap your test done (or whatever completion signal your test runner uses) with onceStrict.
Pseudo-example:
it("calls back with a user", (done) => {
done = onceStrict(done);
loadUser("123", (err, user) => {
if (err) return done(err);
if (!user || user.id !== "123") return done(new Error("wrong user"));
done();
});
});
If you don’t have a done-style runner, you can wrap callback APIs as Promises and then await in tests. That gives you linear flow without changing production code.
function asPromise(fn) {
return new Promise((resolve, reject) => {
fn((err, value) => {
if (err) reject(err);
else resolve(value);
});
});
}
I like this in tests because it keeps the production API stable and makes assertions simpler.
Observability: Make Callback Order Visible
When something goes wrong in async flows, the first question is “what happened first?” Callbacks make it easy to lose that story unless you log with structure.
Two habits help a lot:
- Add correlation IDs to callback chains.
- Log at boundaries (entry, exit, error) rather than logging every step.
A small example for a request handler:
function withRequestId(handler) {
return (req, res) => {
const requestId = crypto.randomUUID();
handler(req, res, requestId);
};
}
const handler = withRequestId((req, res, requestId) => {
const done = (err, payload) => {
if (err) {
console.error({ requestId, err: err.message }, "request failed");
res.statusCode = 500;
res.end("error");
return;
}
console.log({ requestId }, "request ok");
res.end(JSON.stringify(payload));
};
loadUser(req.query.id, done);
});
You don’t need fancy tooling to benefit from this. Even plain logs become much more useful when callback chains share an ID.
A Clear Migration Path for Callback-Heavy Code
When teams ask me “should we rewrite callbacks to async/await?” I usually answer with a sequence rather than a yes/no.
- Stabilize contracts: decide sync vs async, once vs many, cancellation, error channels.
- Add defensive wrappers at boundaries: once, timeouts, safe dispatch.
- Create adapters: callback-to-Promise and Promise-to-callback.
- Migrate the highest value paths first: complex sequential flows.
- Leave event subscriptions alone unless they are actively causing pain.
A concrete example: migrating a sequential callback flow to Promises while keeping the old API.
function oldStyleGetProfile(userId, done) {
// New implementation underneath.
newStyleGetProfile(userId)
.then((profile) => done(null, profile))
.catch((err) => done(err));
}
async function newStyleGetProfile(userId) {
const user = await fetch(`/api/users/${encodeURIComponent(userId)}`).then((r) => r.json());
const org = await fetch(`/api/orgs/${encodeURIComponent(user.orgId)}`).then((r) => r.json());
return { user, org };
}
This is a low-risk migration because callers don’t change. You gradually rewrite internals, then later you can introduce a Promise-first public API if the project can handle it.
Callback Checklist (What I Look For in Code Review)
When I’m reviewing callback-based code, I scan for the same issues every time:
- Does every error path return after calling done?
- Can the callback be called twice (multiple code paths, event + timeout, retries)?
- Is the callback sometimes sync, sometimes async?
- Are there missing cleanups for event listeners?
- Are timeouts and cancellation handled where they matter?
- If a callback throws, do we know what happens next?
- Are argument orders consistent across the module?
If you fix these, callback-heavy code stops being scary.
Closing Thought: Callbacks Are Not Old, They’re Foundational
I like async/await for request-response work, and I use it daily. But callbacks are still the foundation for the parts of JavaScript that are push-based: UI events, streams, subscriptions, and plugin hooks.
The skill is not memorizing syntax. The skill is reading and designing callback contracts: who owns timing, how errors flow, whether cleanup exists, and what guarantees the caller can rely on.
When you can answer those questions quickly, the async bugs that “still surprise experienced teams” stop being surprises.