I still remember the first time a production dashboard froze because a single for loop ran too long. The browser tab was “alive,” but the UI felt dead — clicks queued, scroll stuttered, and the status badge never updated. That moment pushed me to understand JavaScript’s execution model in a real, practical way. JavaScript runs on a single thread, which means only one chunk of code can execute at a time. That sounds like a bottleneck, yet the web stays responsive, APIs return, timers fire, and users keep typing.
The trick is that JavaScript is single-threaded for executing your code, but the runtime around it is built to stay non-blocking. In modern runtimes, I can kick off a network request or a file read, keep the UI smooth, and only handle the result when it’s ready. If you build web apps, Node services, or even serverless functions, this mental model is the difference between “works on my machine” and “survives real traffic.”
I’m going to unpack why JavaScript is single-threaded, how the event loop makes it non-blocking, and what that means for your code in 2026. You’ll get practical examples, real mistakes to avoid, and clear guidance on when to reach for Workers, queues, or native modules.
Single Thread ≠ Single Task
When I say JavaScript is single-threaded, I mean the JavaScript engine executes one statement at a time in one call stack. There is one “active” execution context for your code. If you call a function, it goes onto the stack. When it returns, it pops off. No parallel execution of your JavaScript code happens inside that same thread.
This is intentional. The early web needed a predictable, safe way to manipulate the DOM. A single thread avoids race conditions around shared state like DOM nodes, layout information, and user events. If two scripts could mutate the DOM at the same time, you’d get nondeterministic UI behavior and subtle bugs that are hard to reproduce.
So why doesn’t this lock everything up? Because the JavaScript engine is only one piece of the runtime. The browser (or Node) hosts the engine and provides extra systems that handle I/O outside the JavaScript thread. That lets JavaScript request work and then step aside while the runtime completes it.
A simple analogy I use: JavaScript is the cashier, not the entire store. The cashier handles one customer at a time, but stockers, delivery trucks, and kitchen staff work in parallel. When the cashier calls for a special item, the work happens elsewhere, and the cashier keeps serving the next customer. That’s non-blocking.
The Call Stack: One Lane, No Overtaking
The call stack is the single lane JavaScript uses for your code. Every synchronous function call enters that lane. This is why a long-running computation blocks the UI.
function countLargeRange() {
let total = 0;
for (let i = 0; i < 2_000_000_000; i += 1) {
total += i;
}
return total;
}
console.log("Start");
countLargeRange();
console.log("End");
If you run that in a browser tab, “End” won’t appear until the loop finishes. Inputs feel frozen because the main thread is busy. The key point: this isn’t a JavaScript flaw, it’s the natural result of a single call stack.
Where non-blocking behavior shines is when you step outside that stack and use asynchronous APIs. The runtime can handle work in parallel while your JavaScript stays free to process the next task when it’s ready.
The Event Loop: The Traffic Controller
The event loop is the runtime’s traffic controller. It watches the call stack and queues, and decides when JavaScript can pick up the next task. I explain it to teams like this:
- The call stack executes your JavaScript, one frame at a time.
- The runtime provides Web APIs (in browsers) or libuv (in Node) to run tasks off the main thread.
- When those tasks finish, their callbacks or promise resolutions go into queues.
- The event loop checks if the stack is clear, then pushes the next queued task onto the stack.
If you understand this, “single-threaded but non-blocking” stops sounding like a contradiction.
Here’s the classic example with timers:
console.log("Start");
setTimeout(() => {
console.log("Timer fired");
}, 2000);
console.log("End");
Output:
Start
End
Timer fired
setTimeout doesn’t block. It schedules work with the runtime and returns immediately. Your code keeps moving. After the timer completes, the callback enters the task queue and runs when the stack is free.
Microtasks vs Macrotasks: Why Promises Feel “Faster”
In modern JavaScript, there are two main queues: the macrotask queue (timers, DOM events, I/O callbacks) and the microtask queue (promise reactions, queueMicrotask). The event loop drains microtasks before moving to the next macrotask. That’s why promise callbacks run sooner than setTimeout(..., 0).
console.log("A");
setTimeout(() => {
console.log("Timer");
}, 0);
Promise.resolve().then(() => {
console.log("Promise");
});
console.log("B");
Output:
A
B
Promise
Timer
Understanding this ordering is how I avoid UI jank and race conditions. If you rely on setTimeout(..., 0) to “wait for the DOM,” you can still get surprising results. Promises resolve before timers, so you need to be explicit about scheduling.
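As a sketch of being explicit about scheduling, the snippet below queues the same kind of work three ways and records the order. It should behave identically in any modern browser or Node runtime.

```javascript
// Record the order in which differently scheduled callbacks run.
const order = [];

setTimeout(() => order.push("macrotask"), 0);  // task (macrotask) queue
queueMicrotask(() => order.push("microtask")); // microtask queue
order.push("sync");                            // current call stack

setTimeout(() => {
  // By now the stack has cleared, microtasks drained, then the 0ms timer ran.
  console.log(order); // ["sync", "microtask", "macrotask"]
}, 10);
```

The synchronous push always wins, the microtask always beats the timer, and no amount of `setTimeout(..., 0)` changes that ordering.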
Web APIs and libuv: The Real Reason I/O Is Non-Blocking
Your JavaScript runs on one thread, but the runtime uses native threads and OS facilities for I/O. In browsers, Web APIs handle timers, network requests, and storage. In Node, libuv manages a thread pool and evented I/O. That is why you can fetch data without freezing the UI.
async function loadProfile(userId) {
const response = await fetch(`/api/users/${userId}`);
if (!response.ok) {
throw new Error("Failed to load profile");
}
const profile = await response.json();
return profile;
}
console.log("Requesting profile...");
loadProfile("user-742")
.then(profile => {
console.log("Profile loaded:", profile.name);
})
.catch(err => {
console.error(err.message);
});
console.log("UI still responsive");
fetch runs outside the JavaScript call stack. The response eventually comes back as a promise resolution (a microtask). Meanwhile, the UI stays active.
In Node, fs.readFile and the built-in fetch behave similarly: the OS handles the actual I/O, and your JavaScript gets notified when it’s done. That is non-blocking in practice.
Why Single-Threaded Was a Smart Tradeoff
From a practical engineering view, single-threaded JavaScript is a feature, not a handicap. Here’s why I’m glad it exists:
1) Deterministic execution: I can trace a bug by walking the call stack. There’s one place where my JavaScript runs.
2) DOM safety: UI state stays consistent, which is critical for rendering and layout.
3) Simpler programming model: You avoid a class of data races that plague UI code in multi-threaded environments.
4) Performance predictability: The main thread does a known set of work. You can measure and profile it reliably.
Does that mean you never need parallelism? No. It means you choose it intentionally via Web Workers, Worker Threads, or native extensions.
Common Mistakes That Turn Non-Blocking into Blocking
I see the same pitfalls on teams, and they all come down to misunderstanding the event loop. These are the ones I watch for:
- Heavy loops on the main thread: Anything that runs longer than a frame budget (typically 10–15ms) can cause jank. I chunk work or move it to a Worker.
- Synchronous JSON parsing of huge payloads: Parsing a 50MB JSON response can block the UI. I switch to streaming parsers or break work into slices.
- Chained microtasks: Resolving thousands of promises back-to-back can starve the macrotask queue. I add an occasional setTimeout or await new Promise(requestAnimationFrame) to yield.
- Blocking crypto or compression: If you do crypto or zlib operations on the main thread, you’ll feel it. Use Worker Threads or native bindings in Node.
Here’s an example of chunking CPU work to keep the UI responsive:
function processLargeArray(items, onProgress) {
let index = 0;
const chunkSize = 10_000;
function processChunk() {
const end = Math.min(index + chunkSize, items.length);
for (; index < end; index += 1) {
// Simulate heavy work
items[index] = items[index] * 2;
}
onProgress(index / items.length);
if (index < items.length) {
// Yield to the event loop so the UI can paint
setTimeout(processChunk, 0);
}
}
processChunk();
}
This doesn’t make JavaScript multi-threaded, but it keeps it non-blocking by yielding control back to the event loop.
When to Use Workers (And When Not To)
You should reach for Workers when you have CPU-heavy work or large data processing that can’t be chunked cleanly. That includes:
- Image manipulation or video processing
- Large JSON parsing or compression
- Encryption or hashing in bulk
- Data analytics in the browser
In Node, Worker Threads let you keep the event loop responsive while running CPU-bound tasks. In the browser, Web Workers offload work away from the UI thread.
On the other hand, you shouldn’t use Workers for everything. They come with serialization overhead, complexity, and messaging costs. If your task runs in under 10–15ms, a Worker is usually not worth it. I treat Workers as a tool for heavy lifting, not a default pattern.
Async Patterns: Traditional vs Modern
JavaScript async patterns have changed a lot since callbacks dominated. Here’s a quick comparison of traditional and modern approaches in 2026:
| Traditional Pattern | Modern Replacement | Recommendation |
| --- | --- | --- |
| Callbacks | Promises | Use promises for composability |
| Nested callbacks | async/await | Use async/await for readability |
| Manual counters | Promise.all / Promise.allSettled | Use Promise.all for independent tasks |
| Error-first callbacks | try/catch | Use try/catch with async/await |
| Custom flags | AbortController | Use AbortController for fetch and custom tasks |

I still see callback-based APIs, but I wrap them in promises to keep code clear and to work naturally with async/await.
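The wrapping itself is usually a small promisify helper. Here’s a minimal sketch; readConfig is a hypothetical error-first callback API, used only for illustration.

```javascript
// Minimal promisify sketch for error-first callback APIs.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => (err ? reject(err) : resolve(result)));
    });
}

// Hypothetical callback-style API, for illustration only.
function readConfig(name, callback) {
  setTimeout(() => callback(null, { name, debug: false }), 10);
}

const readConfigAsync = promisify(readConfig);
readConfigAsync("app").then(config => console.log(config.name)); // "app"
```

In Node, util.promisify does this for you; in the browser, a helper like this is usually all you need.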
Real-World Scenarios: How Non-Blocking Saves You
Here are a few patterns I rely on daily:
1) Rendering + data fetch
The UI should render immediately, even if the API is slow. I show a skeleton state, then swap in real data when the promise resolves. That keeps the main thread free.
2) Debounced user input
Typing should never feel sticky. I debounce expensive operations like search or validation.
function debounce(fn, delayMs) {
let timerId;
return (...args) => {
clearTimeout(timerId);
timerId = setTimeout(() => fn(...args), delayMs);
};
}
const handleSearch = debounce(async (query) => {
const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
const results = await response.json();
console.log("Results:", results);
}, 250);
3) Progressively loaded lists
Instead of rendering thousands of DOM nodes at once, I batch rendering in slices and allow the event loop to handle user input between slices.
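A minimal sketch of that batching, with renderSlice standing in for whatever actually appends DOM nodes (the callback name is my own, not a platform API):

```javascript
// Render a large list in slices, yielding to the event loop between slices.
function renderInSlices(items, renderSlice, sliceSize = 200) {
  let index = 0;
  function next() {
    renderSlice(items.slice(index, index + sliceSize));
    index += sliceSize;
    if (index < items.length) {
      setTimeout(next, 0); // let input and paint happen between slices
    }
  }
  next();
}
```

When the slices touch layout, I drive this with requestAnimationFrame instead of setTimeout so each slice lands on a frame boundary.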
Performance Considerations I Watch in 2026
In 2026, the browser and Node have improved scheduling, but the fundamentals remain. These are the signals I watch:
- Long tasks: Anything over 50ms shows up in DevTools as a long task. I aim for chunks of 10–15ms to keep a smooth 60fps UI.
- Overloaded microtasks: A promise-heavy loop can delay UI rendering. I add breaks with await new Promise(requestAnimationFrame) or setTimeout.
- Idle time: I use requestIdleCallback to schedule low-priority work like preloading or analytics.
- GC pressure: Large allocations can pause the main thread. I reuse buffers and avoid building huge arrays at once.
AI-Assisted Workflows Without Blocking the UI
A modern workflow I use often is on-device inference or remote AI calls. Both can stall the UI if handled poorly. My approach:
- If inference is local (like a small embedding model in the browser), I move it into a Web Worker.
- If inference is remote, I debounce calls and use AbortController to cancel outdated requests.
- For Node services, I route heavy inference to a separate process or a worker pool.
That preserves the event loop and keeps both UI and API servers responsive.
Practical Checklist I Use in Code Reviews
When I review code for event-loop safety, I ask myself:
- Is there any loop or computation that could block for more than 10–15ms?
- Does a promise chain create a giant microtask backlog?
- Are we doing heavy JSON parsing or compression on the main thread?
- Are we ignoring AbortController and wasting work on stale requests?
- Should this be moved to a Worker or a server-side job?
These questions catch most real-world performance problems before they hit users.
Key Takeaways and What I Recommend Next
The big idea is simple: JavaScript runs your code on one thread, but the runtime provides non-blocking APIs so you can keep the app responsive. Once you internalize the call stack, event loop, and task queues, the “single-threaded but non-blocking” phrasing stops feeling contradictory and starts feeling practical.
If you want to get better at this, I suggest three steps. First, open DevTools and watch the main thread while you trigger UI actions — it’s the fastest way to build intuition. Second, refactor one blocking workflow into chunks or a Worker so you feel the difference. Third, add simple performance budgets: if a task regularly exceeds 10–15ms, treat it like a bug and fix it.
Most of the time, you don’t need more threads; you need clearer scheduling. When you do need more parallel work, use Workers or worker pools intentionally. That way, you keep the simplicity of JavaScript’s single-threaded model while still delivering modern, responsive experiences.
If you want help reviewing a specific workflow or measuring long tasks in a real app, tell me what you’re building and I’ll walk through the exact changes I’d make.
Why “Single-Threaded” Refers to Your Code, Not the Entire Runtime
One of the most confusing things about JavaScript is the phrase “single-threaded.” It sounds like the whole system has one thread, when in reality it’s your JavaScript execution that is single-threaded. The runtime uses multiple threads to keep the system responsive.
In the browser, the UI thread is tightly coupled with the JavaScript thread. That’s why layout, painting, and JS share a budget. But the browser also has separate threads for networking, decoding images, and garbage collection. In Node, your JS runs on one thread, while libuv farms out I/O to an internal pool or the OS.
This distinction matters because it shapes how you design software. If you expect parallelism inside JavaScript itself, you’ll run into performance walls. But if you treat JS as a “coordinator” and offload heavy work, you’ll get smooth, scalable systems.
A Mental Model I Use: The “Coordinator and Workers” Pattern
I explain this to junior devs as a coordinator model:
- Coordinator (JS thread): orchestrates tasks, updates state, handles user events.
- Workers (runtime + OS): execute I/O, timers, and heavy work.
The coordinator is fast and nimble. It should make decisions, not carry heavy loads. When the coordinator spends too long crunching data, everything else stops. That’s why the non-blocking model matters: the coordinator stays responsive, delegates heavy work, and picks results up when they’re ready.
When I design systems with this in mind, code becomes clearer and performance becomes consistent.
A Deeper Look at the Event Loop with Realistic Timing
Let’s walk through a slightly more realistic example where you see how tasks interleave. Consider this block:
console.log("1. Start");
setTimeout(() => {
console.log("5. Timer");
}, 0);
Promise.resolve()
.then(() => {
console.log("3. Promise A");
})
.then(() => {
console.log("4. Promise B");
});
console.log("2. End");
You might expect timer and promises to mix, but the actual order is:
1. Start
2. End
3. Promise A
4. Promise B
5. Timer
Why? The event loop finishes the current stack, then drains microtasks (the promise chain), and only then touches the macrotask queue (the timer). The “non-blocking” promise chain is still able to delay timers and UI events if it is huge. That’s why a long chain of microtasks can starve the UI even when you “use async.”
Microtask Starvation: The Hidden Performance Trap
Microtasks are useful because they execute quickly after the current stack clears. But they can starve rendering. I’ve seen apps where thousands of promise callbacks run back-to-back, and the UI never gets a chance to paint for hundreds of milliseconds.
Here’s a simplified example of what not to do:
function createBacklog() {
let p = Promise.resolve();
for (let i = 0; i < 100_000; i += 1) {
p = p.then(() => i);
}
}
createBacklog();
This builds a microtask backlog that can delay rendering and user input. The fix is to yield occasionally with a macrotask break:
async function yieldToBrowser() {
return new Promise(requestAnimationFrame);
}
async function processWithYielding(items) {
for (let i = 0; i < items.length; i += 1) {
// do work
if (i % 5_000 === 0) {
await yieldToBrowser();
}
}
}
I use this pattern when the UI feels “sticky” even though I’m using promises.
Blocking Isn’t Just Computation: Layout and Rendering Can Block Too
Another misconception is that only heavy JavaScript blocks. In reality, layout and rendering can also consume the main thread. If your JS triggers style recalculation or forced layout in a tight loop, you can stall rendering even if the code itself is “small.”
For example, reading layout in a loop forces the browser to recalculate styles and layout repeatedly:
const boxes = document.querySelectorAll(".box");
for (const box of boxes) {
const height = box.offsetHeight; // forces layout
box.style.height = `${height + 10}px`;
}
This seems harmless, but it can cause jank in a large list. The fix is to batch reads and writes:
const boxes = document.querySelectorAll(".box");
const heights = [];
for (const box of boxes) {
heights.push(box.offsetHeight);
}
for (let i = 0; i < boxes.length; i += 1) {
boxes[i].style.height = `${heights[i] + 10}px`;
}
It’s still single-threaded, but now you’ve reduced forced layouts, which keeps the UI more responsive.
Non-Blocking Isn’t Magic: You Can Still Block with Async
Just because you use async/await doesn’t mean you’re safe. If you do heavy work inside an async function, it still blocks the main thread between awaits.
async function parseAndRender(data) {
// This blocks the main thread before the first await
const parsed = JSON.parse(data);
render(parsed);
await Promise.resolve();
}
If data is large, JSON.parse will freeze the UI. The fix is to parse off-thread or chunk the workload. The “async” keyword does not give you a new thread. It only gives you an easier syntax for the event loop.
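A quick way to see this: the synchronous part of an async function runs to completion before anything else, so a pending timer cannot fire during it. This sketch uses an artificial ~100ms busy-wait in place of a heavy parse.

```javascript
// The body of an async function runs synchronously until its first await.
async function blocksAnyway() {
  const start = Date.now();
  while (Date.now() - start < 100) { /* simulated heavy parse */ }
  return Date.now() - start;
}

let timerFired = false;
setTimeout(() => { timerFired = true; }, 10);

blocksAnyway(); // burns ~100ms on the main thread before returning
console.log("timer fired during async call?", timerFired); // false
```

The 10ms timer is overdue long before the function returns, but it still has to wait for the stack to clear.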
The Real Benefits of Single-Threaded UI Code
I’ve shipped systems in multi-threaded UI toolkits before, and I actually prefer JavaScript’s model for the web. The predictability is a big deal. Here’s what I value most:
- Sequential logic: Most UI code reads like a story, not a concurrent puzzle.
- Simpler state management: No locks, no races, no atomic operations for the DOM.
- Debuggability: When a state update happens, you can trace it to a single stack.
That doesn’t mean you ignore concurrency. It means concurrency happens at the boundaries, not in every function.
Practical Example: Keeping a Dashboard Responsive Under Load
Here’s a realistic dashboard pattern I use when a single big data update could freeze the UI. Imagine you have a massive list of transactions arriving every few seconds. The naive approach would parse and render everything in one pass.
Instead, I break it into phases: parsing, computing aggregates, and rendering. Each phase yields to the event loop.
async function processTransactions(rawJson, updateUI) {
const parsed = JSON.parse(rawJson); // still can be heavy
const chunkSize = 2000;
const totals = new Map();
for (let i = 0; i < parsed.length; i += 1) {
const item = parsed[i];
totals.set(item.type, (totals.get(item.type) || 0) + item.amount);
if (i % chunkSize === 0) {
// let the UI breathe
await new Promise(requestAnimationFrame);
}
}
updateUI(totals);
}
This keeps the app usable during spikes. If parsing is the bottleneck, I move parsing into a Worker and only return the aggregate results to the UI thread.
Using Web Workers Without Over-Engineering
Workers are a powerful escape hatch, but they have overhead. Here’s how I decide:
- Use a Worker if the task is CPU-heavy and cannot be chunked without complexity.
- Avoid a Worker if the task completes in a single frame or needs tight DOM access.
- Use a Worker if you can batch work and send back minimal results (not massive objects).
I keep the worker API thin and simple:
// main.js
const worker = new Worker("worker.js");
worker.onmessage = (event) => {
const { result } = event.data;
render(result);
};
worker.postMessage({ items });
// worker.js
self.onmessage = (event) => {
const { items } = event.data;
const result = items.map(x => x * 2); // heavy work
self.postMessage({ result });
};
It’s not fancy, but it’s effective. If I need shared memory or high throughput, I consider SharedArrayBuffer, but that’s a tradeoff I take only when necessary.
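For a taste of what SharedArrayBuffer buys you: both threads see the same memory and coordinate through Atomics instead of copying messages. This standalone sketch shows the API shape without spawning a worker; note that browsers require cross-origin isolation (COOP/COEP headers) before they expose SharedArrayBuffer at all.

```javascript
// SharedArrayBuffer + Atomics: memory that workers can share without copying.
const shared = new SharedArrayBuffer(4);   // 4 bytes = one Int32 slot
const counter = new Int32Array(shared);

Atomics.add(counter, 0, 1);                // atomic increment, safe across threads
Atomics.add(counter, 0, 1);
console.log(Atomics.load(counter, 0));     // 2
```

In a real setup you’d post the buffer to a worker once and have both sides read and write it through Atomics.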
Node.js: Event Loop Responsiveness Under Real Load
In Node, the single-threaded model matters even more because it’s often the backend. If your event loop is blocked, your server can’t handle requests efficiently.
Here’s a common mistake:
app.get("/stats", (req, res) => {
const data = computeStatsSync(bigDataset); // blocks event loop
res.json(data);
});
If computeStatsSync takes 200ms and you have concurrent requests, throughput collapses. The fix: either precompute, run in a worker thread, or move to a separate service.
A safer pattern:
import { Worker } from "node:worker_threads";
function computeInWorker(data) {
return new Promise((resolve, reject) => {
const worker = new Worker("./stats-worker.js", { workerData: data });
worker.on("message", resolve);
worker.on("error", reject);
});
}
app.get("/stats", async (req, res) => {
const data = await computeInWorker(bigDataset);
res.json(data);
});
Yes, it’s more complex, but it protects your event loop and keeps request latency predictable.
Async I/O vs CPU Work: The Critical Distinction
A recurring confusion is mixing I/O and CPU work. JavaScript handles I/O well because the runtime can offload it. But JavaScript doesn’t magically offload CPU work. That’s your job.
I keep this heuristic handy:
- If the bottleneck is waiting (I/O): async/await is usually enough.
- If the bottleneck is computation (CPU): chunk or offload.
This helps me avoid the trap of “I used async, so it should be fine.”
Edge Cases That Surprise Even Experienced Developers
Here are a few edge cases I’ve seen cause real bugs:
1) setTimeout is not precise
Timers are not guaranteed to fire exactly on time. They only fire when the call stack is clear. If you assume a 100ms timer always fires in 100ms, you will see drift.
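You can observe the drift directly. This sketch blocks the thread for ~150ms so a 100ms timer has no chance of firing on time.

```javascript
// A 100ms timer fires late when the call stack stays busy past its deadline.
const start = Date.now();
let lateness = null;

setTimeout(() => {
  lateness = Date.now() - start - 100; // how late the timer actually fired
  console.log(`Timer fired ~${lateness}ms late`);
}, 100);

// Hold the stack for ~150ms so the timer cannot fire at the 100ms mark.
while (Date.now() - start < 150) { /* busy-wait */ }
```

The timer fires roughly 50ms late, because it can only run once the busy-wait releases the stack.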
2) await inside loops can serialize work
This is common when you want parallel requests:
// serial: waits each time
for (const id of ids) {
const res = await fetch(`/api/${id}`);
results.push(await res.json());
}
The fix is to run in parallel with Promise.all when safe:
const promises = ids.map(id => fetch(`/api/${id}`).then(r => r.json()));
const results = await Promise.all(promises);
3) requestAnimationFrame only runs before painting
It’s great for UI tasks, but it won’t run in inactive tabs, and it ties work to the render loop. That can be a feature or a limitation depending on your use case.
Non-Blocking Patterns for “Heavy” Frontend Work
Here are some patterns I use to keep the UI responsive:
Chunked processing
Break work into small slices and yield between them. This is the easiest way to make a big task non-blocking without workers.
Lazy rendering
Render visible content first, then fill in details in idle time. For long lists, virtualization is a must.
Progressive enhancement
Load core UI first, then defer non-critical features (like analytics or extra widgets) using requestIdleCallback.
Async hydration
For server-rendered apps, defer non-critical components so the main thread can handle interactions sooner.
These strategies keep the single thread light and responsive.
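For the progressive-enhancement piece, this is the shape I use: requestIdleCallback is browser-only, so I fall back to a zero-delay timer elsewhere (the fallback is my own convention, not part of any spec).

```javascript
// Defer non-critical work to idle time, with a timer fallback outside browsers.
const scheduleIdle =
  typeof requestIdleCallback === "function"
    ? requestIdleCallback
    : (callback) => setTimeout(callback, 0);

scheduleIdle(() => {
  console.log("low-priority work: preloading, analytics, cache warming");
});
```

The fallback loses the idle-time guarantee, but it still keeps the work out of the critical path.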
Understanding Frame Budgets in Human Terms
I like to think in terms of frames because that’s how users perceive responsiveness. At 60fps, you have about 16ms per frame to do everything: JS, style, layout, paint. If you take 40ms, the browser misses frames and users feel jank.
That’s why I aim for:
- 2–5ms: ideal per interaction, virtually instant.
- 10–15ms: acceptable for occasional tasks.
- 50ms+: noticeable lag and likely a long task warning.
These are not hard rules, but they’re great instincts to develop.
Observability: How I Measure Event Loop Health
You can’t manage what you don’t measure. These are the tools I lean on:
- Browser DevTools Performance panel: find long tasks and layout thrashing.
- Performance Observer: log long tasks programmatically.
- Node event loop delay metrics: measure how much time the loop is blocked.
For a quick browser check, I use this snippet to log long tasks:
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
console.log("Long task:", entry.duration);
}
});
observer.observe({ entryTypes: ["longtask"] });
This gives me a real-time signal when something blocks the main thread.
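On the Node side I want the same signal. In production I reach for monitorEventLoopDelay from node:perf_hooks, but as a dependency-free sketch you can measure how late a repeating timer fires:

```javascript
// Crude event-loop delay probe: a repeating timer measures its own lateness.
function probeEventLoopDelay(intervalMs, onSample) {
  let expected = Date.now() + intervalMs;
  const timer = setInterval(() => {
    onSample(Math.max(0, Date.now() - expected)); // ms the loop was blocked
    expected = Date.now() + intervalMs;
  }, intervalMs);
  return () => clearInterval(timer); // call to stop probing
}

const stop = probeEventLoopDelay(50, delay => {
  if (delay > 20) console.warn(`event loop blocked ~${delay}ms`);
});
setTimeout(stop, 1000); // probe for one second
```

It’s coarse, but sustained nonzero samples are a reliable sign that something synchronous is hogging the loop.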
Practical Example: Non-Blocking Search UI at Scale
Let’s build a realistic search input that avoids blocking. It includes debounce, request cancellation, and a visual loading state.
function createSearchClient(updateUI) {
let abortController = null;
async function search(query) {
if (abortController) {
abortController.abort();
}
abortController = new AbortController();
updateUI({ loading: true });
try {
const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`, {
signal: abortController.signal,
});
const results = await response.json();
updateUI({ loading: false, results });
} catch (err) {
if (err.name !== "AbortError") {
updateUI({ loading: false, error: "Search failed" });
}
}
}
return debounce(search, 200);
}
This feels fast because:
- Old requests are canceled.
- The UI shows a loading state quickly.
- The main thread doesn’t block.
It’s a small pattern that prevents a lot of “sticky UI” complaints.
Alternative Approaches: When Async Isn’t Enough
Sometimes you need more than event loop tricks. Here are my go-to alternatives:
1) Move computation to the server
If data processing is heavy and not privacy-sensitive, do it server-side where you have more CPU and parallelism.
2) Use a worker pool
For Node services, a worker pool can handle CPU tasks without blocking the main thread. This is more predictable than spawning a new worker per request.
3) Native modules
If you need high-performance compute, native extensions can give you speed and parallelism, but they add deployment complexity.
4) WebAssembly
WASM can help for CPU-heavy tasks in the browser, but it’s still single-threaded unless you use threads and shared memory. It’s not a silver bullet, but it can be a big win.
The “Non-Blocking” Myth and the Reality
It’s important to be honest: JavaScript is not magically non-blocking. It is cooperative. You only stay non-blocking if you choose patterns that yield to the event loop. The runtime helps, but it can’t fix a tight loop or a giant JSON parse in your code.
When someone says “JavaScript is non-blocking,” I mentally translate it as: “JavaScript can be non-blocking if you use it correctly.”
That’s a much more accurate description, and it leads to better engineering decisions.
A Debugging Checklist for Event Loop Bugs
When I troubleshoot a “frozen UI” or “slow server” issue, I walk through this checklist:
1) Is there a long synchronous loop? If yes, chunk or offload.
2) Are we parsing large data on the main thread? If yes, stream or worker.
3) Is the microtask queue flooded? If yes, yield with macrotasks.
4) Are we doing layout thrashing? If yes, batch reads/writes.
5) Is GC spiking due to allocations? If yes, reduce churn.
This systematic approach has saved me hours of guesswork.
The “Why” in One Sentence
JavaScript is single-threaded so your code executes predictably and safely, but it stays non-blocking because the runtime delegates slow or parallel work to systems outside your JavaScript thread and uses the event loop to deliver results back at the right time.
Once you internalize this, you stop fighting the language and start designing with it.
Final Takeaways (The Version I Wish I Had Early On)
If I had to condense everything into a simple guide I’d give to a new teammate, it would be this:
- JavaScript runs one thing at a time, so don’t make that one thing heavy.
- Async doesn’t mean parallel; it means “wait without blocking.”
- The event loop is your traffic controller—learn how it schedules tasks.
- If you need parallel CPU work, use Workers or separate services.
- Performance is mostly about slicing work and yielding often.
If you want me to tailor this to a specific app, send me the slowest interaction or endpoint and I’ll map out exactly how I’d make it non-blocking.