You can write JavaScript that “works” and still ship bugs that feel supernatural: a value is suddenly undefined, a loop skips an item, an error disappears, or a condition flips when you add a harmless log. In my experience, those bugs rarely come from exotic language features—they come from everyday statements interacting in ways you didn’t intend.
Statements are the instructions your program executes. That sounds basic, but the practical payoff is huge: when you understand what counts as a statement, how statements compose into blocks, and how control-flow and error-flow statements shape execution, your code becomes easier to reason about under pressure.
I’m going to focus on the statements you actually use in production: declarations (let, const, function), assignments, expression statements (function calls, arithmetic, await), control flow (if, switch, loops), and error flow (try, catch, throw). Along the way, I’ll show the modern patterns I recommend in 2026—guard clauses, data-driven branching, and lint-friendly structure—so you can write code that stays readable even as requirements change.
Statements vs Expressions (and Why That Distinction Saves You)
A quick mental model I rely on:
- Expressions produce a value.
- Statements perform an action in the program’s execution.
JavaScript blurs this line because many expressions can stand alone as expression statements:
// Expression statements (valid statements that are expressions)
console.log("Starting checkout...");
user.total = user.subtotal + user.tax;
await saveOrder(user.order);
That flexibility is powerful, but it’s also where readability can fall apart. When every line is “just an expression statement,” it’s easy to hide meaningful control flow inside clever expressions.
Here’s the rule I use when reviewing code: if a line changes what happens next (branching, looping, early exit, error path, async sequencing), I want to see that as a clear statement (if, return, throw, break, continue, await) rather than implied behavior buried inside an expression.
The semi-colon reality: ASI and “statement boundaries”
JavaScript has Automatic Semicolon Insertion (ASI). Most of the time it helps, but sometimes it changes where one statement ends and the next begins.
The classic footgun looks like this:
function buildUser() {
return
{
id: 123,
name: "Amina"
};
}
console.log(buildUser()); // undefined
Because of ASI, return becomes its own statement and returns undefined. I don’t rely on ASI around return, break, continue, or throw. I keep the returned expression on the same line:
function buildUser() {
return {
id: 123,
name: "Amina"
};
}
There are a couple more ASI traps worth recognizing because they show up in real codebases:
1) throw must be on the same line as the error expression:
function fail() {
// Don’t do this:
// throw
// new Error("Nope");
throw new Error("Nope");
}
2) A line starting with [ or ( can attach to the previous statement if you omit semicolons. This is one reason some teams adopt “always semicolons” even though the language doesn’t require it.
My pragmatic stance: pick one style (semicolons or no semicolons), enforce it with a formatter, and still avoid the known ASI hazard zones. Even with semicolons, keeping return/throw expressions on the same line is simply clearer.
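To make the second trap concrete, here is a minimal demonstration (the function name is hypothetical) of a line starting with [ attaching to the previous statement when semicolons are omitted:

```javascript
function picksCharNotArray() {
  const label = "done"
  // No semicolon above, so the bracket below attaches to `label`:
  // this parses as label[(1, 2)], i.e. label[2], the single character "n".
  const picked = label
  [1, 2]
  return picked
}

console.log(picksCharNotArray()) // "n" — a character, not an array
```

The comma operator inside the brackets makes it even more confusing: the "array literal" silently became a property access.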
Statement blocks: {} are not “just braces”
Blocks group statements and create scope for let/const. A block is an execution boundary you can use deliberately:
let status = "idle";
if (Math.random() > 0.5) {
const requestId = crypto.randomUUID();
status = `loading:${requestId}`;
}
// requestId is not accessible here (block-scoped)
console.log(status);
When you treat blocks as first-class tools, your code gets safer: temporary values stay temporary.
A pattern I use a lot is the “tight block” to prevent accidental reuse:
let result;
{
const raw = await fetch("/api/settings");
const json = await raw.json();
result = normalizeSettings(json);
}
// raw/json can't leak into later edits
return result;
It’s a small thing, but it prevents the common situation where a future change accidentally depends on some intermediate variable that was never meant to become part of the function’s long-term state.
Declaration Statements: let, const, var, and Friends
Declarations are statements that create bindings (names) in a scope. In modern JavaScript, most of your day-to-day declarations should be const and let.
var vs let vs const (what I recommend)
Here’s the guidance I actually follow:
| Keyword | Scope | Hoisting behavior |
| --- | --- | --- |
| `var` | function | hoisted + initialized to `undefined` |
| `let` | block | hoisted but in TDZ |
| `const` | block | hoisted but in TDZ |
TDZ (temporal dead zone) means you can’t read the variable before its declaration line executes.
function example() {
// console.log(token); // ReferenceError (TDZ)
const token = "abc";
console.log(token);
}
That “fail fast” behavior is one reason let/const make bugs easier to catch than var.
A subtle but very practical point: var being initialized to undefined means errors can become “soft.” Instead of crashing early, your program keeps going with undefined and explodes later in a completely different place.
function readConfig() {
if (Math.random() > 0.5) {
var mode = "safe";
}
// mode exists even when the branch didn't run
// mode is undefined here half the time
return mode;
}
With let, you would either structure the code to always assign or you’d get a clear error sooner if you tried to use it before initialization.
const does not mean “immutable value”
It means you can’t reassign the binding. Objects can still be mutated:
const profile = { name: "Diego", tags: [] };
profile.tags.push("premium");
// profile = {} would throw, but mutation is allowed
If you need immutability, you enforce it by convention, by copying, by freezing (rarely), or by using immutable data patterns.
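As a small sketch of the copying and freezing options (the object shape is illustrative):

```javascript
// Freezing: shallow, and best reserved for config-like objects.
const settings = Object.freeze({ theme: "dark", retries: 3 });

// In strict mode (modules), `settings.retries = 5` would throw a TypeError;
// in sloppy mode it is silently ignored. Either way the object is unchanged.

// Copying: produce a changed copy instead of mutating the original.
const updated = { ...settings, retries: 5 };

console.log(settings.retries); // 3
console.log(updated.retries);  // 5
```

Note that `Object.freeze` is shallow: nested objects inside a frozen object can still be mutated.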
In code review, I look for two common mistakes:
- Assuming `const` prevents mutation (it does not).
- Using mutation in a way that hides state changes (for example, mutating an argument object passed into a function, which creates action-at-a-distance for callers).
When I’m writing “transformer” functions, I prefer returning a new object:
function markPaid(order, paymentId) {
return {
...order,
status: "paid",
paymentId
};
}
Function declarations are statements too
A function declaration is a statement that creates a function binding:
function formatPrice(cents) {
return `$${(cents / 100).toFixed(2)}`;
}
console.log(formatPrice(2599));
Function declarations are hoisted (the binding is available before the declaration line). That can be convenient, but I still prefer writing code top-down for readability.
One practical guideline: if a helper is used in exactly one place, I often inline it as an arrow function const helper = () => {} near the call site. If it’s a reusable unit with its own identity, I use a function declaration.
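A small illustration of that guideline, with hypothetical helper names:

```javascript
function summarizeCart(items) {
  // Single-use helper, inlined as an arrow function near its only call site.
  const lineTotal = (item) => item.priceCents * item.quantity;
  return items.reduce((sum, item) => sum + lineTotal(item), 0);
}

// Reusable unit with its own identity: a function declaration.
function formatCents(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

console.log(formatCents(summarizeCart([{ priceCents: 500, quantity: 2 }]))); // "$10.00"
```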
Other declaration statements you’ll see
Depending on your environment, you’ll also encounter:
- `class` declarations
- `import`/`export` declarations (module-level)
- `async function` and `function*` (still declarations)
Example:
class RateLimiter {
constructor(limitPerMinute) {
this.limitPerMinute = limitPerMinute;
}
allow() {
return true;
}
}
export function createLimiter() {
return new RateLimiter(60);
}
Even if you don’t “think in statements,” your runtime does: it executes these declarations in a particular order, and that order affects what’s available when.
A module-specific note: import statements are hoisted and executed before the rest of the module body. That’s great for determinism, but it also means “top-level side effects” are very real. If an imported module does work at import time, you’ve created execution that happens before any of your other statements run.
Assignment Statements: The Smallest Statement With the Biggest Consequences
Assignment is where state changes happen. If you want predictable code, you should make assignments obvious and localized.
Basic assignment and compound assignment
let retries = 0;
retries += 1;
retries *= 2;
Compound assignments (+=, *=, etc.) are fine when the operation is obvious. If I have to pause and parse it, I rewrite it.
One boundary I like: avoid compound assignment when it mixes concerns (like math + rounding + clamping). In that case, I’d rather see a named function:
retries = clampRetries(retries + 1);
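Here is one possible sketch of that named function (`clampRetries` and the cap are hypothetical):

```javascript
const MAX_RETRIES = 5; // hypothetical cap

// One named function owns the math + clamping concern.
function clampRetries(value) {
  return Math.min(Math.max(value, 0), MAX_RETRIES);
}

let retries = 4;
retries = clampRetries(retries + 1); // 5
retries = clampRetries(retries + 1); // still 5 (clamped)
```

The call site now reads as a single intent ("bump and clamp") instead of three fused operations.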
Destructuring assignment (great when it clarifies ownership)
const order = {
id: "ord_901",
customer: { name: "Riya", email: "[email protected]" },
totalCents: 4250
};
const {
id,
customer: { email },
totalCents
} = order;
console.log(id, email, totalCents);
I like destructuring when it reduces noise and makes data shape explicit. I avoid it when it hides where values come from (especially in large parameter lists).
Two practical pitfalls:
1) Destructuring from possibly-null values. You either need a guard clause or a default:
function getEmail(user) {
if (!user) return null;
const { email = null } = user;
return email;
}
2) Renaming everything. Occasional renames are fine, but if you destructure ten properties and rename eight of them, I usually stop and ask: is this function doing too much?
Logical assignment operators (use sparingly)
Modern JS supports:
- `||=` assigns if falsy
- `&&=` assigns if truthy
- `??=` assigns if nullish
The safe one for defaults is usually ??=:
const config = { timeoutMs: 0 };
config.timeoutMs ??= 5000; // keeps 0
console.log(config.timeoutMs); // 0
If you used ||=, you’d accidentally replace 0:
const config2 = { timeoutMs: 0 };
config2.timeoutMs ||= 5000;
console.log(config2.timeoutMs); // 5000 (often wrong)
My rule: ??= for defaults, ||= only when “falsy means missing” is truly intended, and &&= rarely (it’s easy to misread).
Assignment as an expression (don’t smuggle it into conditions)
JavaScript lets you write:
let status;
if (status = "ready") {
console.log("This always runs");
}
This is valid, and it’s also a classic bug. I never intentionally assign inside if conditions. If you truly need it, you can make it explicit with parentheses and a comment, but I recommend avoiding the pattern entirely.
The bigger idea: if a statement’s job is to branch, keep it branching. If its job is to assign, keep it assigning. Mixing those jobs is how you get “it looked right in my head” bugs.
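For completeness, the one place assignment-in-condition is conventional is the read-loop idiom, where extra parentheses and an explicit comparison mark the assignment as intentional (`drain` is a made-up example):

```javascript
// The parentheses plus the explicit `!== undefined` comparison signal
// that the assignment is deliberate (this also satisfies no-cond-assign).
function drain(queue) {
  const seen = [];
  let item;
  while ((item = queue.pop()) !== undefined) {
    seen.push(item);
  }
  return seen;
}

console.log(drain([1, 2, 3])); // [3, 2, 1]
```

Even here, the pattern only works if the queue never legitimately contains `undefined`.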
Expression Statements: When “Doing Something” Is the Point
Expression statements are the bread and butter of most code: function calls, method calls, await, arithmetic, and so on.
Function call statements
sendEmail({
to: "[email protected]",
subject: "Your receipt",
body: "Thanks for your purchase."
});
This is the simplest form: “do this side effect.” The trap is letting side effects spread everywhere. My rule: if a function call changes state, name it like it changes state (saveOrder, enqueueJob, invalidateCache).
I also like making “pure” vs “side-effect” obvious by return values:
- A function that returns a value and has no side effects should be safe to call anytime.
- A function that returns `void` (or an ignored promise) is a code smell unless its purpose is an effect.
await as a statement
In async code, await is often your real control-flow:
async function finalizeCheckout(cart) {
// 1) Persist
const order = await createOrder(cart);
// 2) Pay
const payment = await chargeCard(order.totalCents);
// 3) Confirm
await markOrderPaid(order.id, payment.id);
return order.id;
}
When you write it this way, you can literally read the execution path. That’s the point.
A practical tip: I prefer a short “statement per step” style over clever chaining. It makes debugging, instrumentation, and error handling straightforward.
A performance note: await in loops
The mistake I see most is serializing work by accident:
// Slow: runs one request at a time
for (const userId of userIds) {
const profile = await fetchProfile(userId);
profiles.push(profile);
}
When requests are independent, batch them:
// Faster: runs requests concurrently (within reasonable limits)
const profiles = await Promise.all(
userIds.map((userId) => fetchProfile(userId))
);
In typical network-bound systems, this can turn multi-second waits into a few hundred milliseconds, but you should still cap concurrency when the downstream service has rate limits.
Here’s a simple concurrency-limited pattern I’ve used when I don’t want to introduce a library:
async function mapWithConcurrency(items, limit, mapper) {
const results = new Array(items.length);
let nextIndex = 0;
async function worker() {
while (nextIndex < items.length) {
const current = nextIndex;
nextIndex += 1;
results[current] = await mapper(items[current], current);
}
}
const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
await Promise.all(workers);
return results;
}
const profiles = await mapWithConcurrency(userIds, 5, fetchProfile);
That function is just a structured way of saying: “these statements can run concurrently, but only up to N at a time.” It keeps the performance intent visible.
Control Flow Statements: if, switch, Loops, and Labels
Control flow statements decide what runs, when it runs, and how often. If statements are the steering wheel of your program, loops are the engine.
if statements: I prefer guard clauses
When you have nested ifs, your brain has to hold multiple states at once. Guard clauses reduce indentation and make “exit paths” explicit.
function applyDiscount(order) {
if (!order) {
throw new Error("order is required");
}
if (order.status !== "open") {
return order; // nothing to do
}
if (order.totalCents < 5000) {
return order;
}
return {
...order,
totalCents: Math.round(order.totalCents * 0.9)
};
}
You can read this like a checklist: invalid input, not applicable, not eligible, apply transformation.
Two guard-clause micro-patterns I rely on:
- Validate inputs first (and throw early).
- Return early for “no-op” cases (so the core path stays flat).
This style is also friendlier to logging and metrics: each branch is explicit.
switch statements: use them when you want exhaustiveness
switch is great for closed sets of values (statuses, modes, event types). It’s not great for complex boolean logic.
function labelOrderStatus(status) {
switch (status) {
case "open":
return "Awaiting payment";
case "paid":
return "Paid";
case "shipped":
return "On the way";
case "canceled":
return "Canceled";
default:
// I always include default for runtime safety
return "Unknown";
}
}
Common switch mistake: fall-through
If you forget break (or return) you can fall through into the next case. Sometimes that’s intentional, but most of the time it’s a bug.
I recommend a style where each case returns, which removes break entirely.
If you really do want intentional fall-through, make it unmistakable with an explicit comment and keep the shared code small.
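Here is what unmistakable, intentional fall-through can look like (a hypothetical role-to-permissions example):

```javascript
// Cumulative permissions via deliberate fall-through.
function accessLevel(role) {
  const permissions = [];
  switch (role) {
    case "admin":
      permissions.push("manage-users");
      // falls through: admins also get everything editors get
    case "editor":
      permissions.push("edit-content");
      // falls through: editors also get everything viewers get
    case "viewer":
      permissions.push("read-content");
      break;
    default:
      break;
  }
  return permissions;
}

console.log(accessLevel("editor")); // ["edit-content", "read-content"]
```

The comments double as documentation and keep `no-fallthrough` linters quiet.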
Loops: for, for...of, for...in, while, do...while
Here’s how I decide:
| Loop | Use it for |
| --- | --- |
| `for (let i = 0; i < n; i++)` | index-based work, in-place arrays |
| `for...of` | iterating values (arrays, strings, maps) |
| `for...in` | iterating enumerable keys (rare) |
| `while` | unknown iteration count |
Example of a clean for...of:
const events = [
{ type: "click", id: "btn_pay" },
{ type: "view", id: "page_checkout" }
];
for (const event of events) {
console.log(event.type, event.id);
}
A real-world detail: for...in iterates keys, including inherited enumerable properties. That’s why I almost never use it unless I’m intentionally dealing with plain dictionary-like objects and I’m also guarding with Object.hasOwn.
for (const key in obj) {
if (!Object.hasOwn(obj, key)) continue;
console.log(key, obj[key]);
}
break and continue: powerful, but keep them visible
I’m fine with continue when it removes nesting:
for (const item of cartItems) {
if (item.quantity <= 0) continue;
if (item.isGiftCard) continue;
totalCents += item.priceCents * item.quantity;
}
break is great when you truly want “first match wins.”
A trick I use to keep loop intent readable is to name the goal in a variable:
let firstValidCoupon = null;
for (const coupon of coupons) {
if (!coupon.active) continue;
if (coupon.expiresAt < Date.now()) continue;
firstValidCoupon = coupon;
break;
}
Now the “why” of the break is obvious.
Labelled statements (rare, but useful in multi-loop exits)
Labels exist and can be the cleanest solution in specific cases:
const grid = [
[".", ".", "."],
[".", "X", "."],
[".", ".", "."]
];
let foundAt = null;
search:
for (let row = 0; row < grid.length; row++) {
for (let col = 0; col < grid[row].length; col++) {
if (grid[row][col] === "X") {
foundAt = { row, col };
break search;
}
}
}
console.log(foundAt);
I don’t reach for labels often, but when I do, I name them clearly (like search) so the intent is obvious.
Function-Related Statements: return, yield, and Structuring Execution
Functions aren’t just containers; they’re statement sequences with defined entry/exit behavior.
return: make exits boring and predictable
A function with one exit point is a nice idea, but in JavaScript it can lead to heavy nesting. I prefer multiple returns when they clarify the flow (especially guard clauses).
One mistake to avoid: returning different types based on branches unless your callers truly expect it.
function findUserEmail(user) {
if (!user) return null;
if (!user.email) return null;
return user.email;
}
This always returns either a string or null. That’s easy to handle.
If you’re writing library code, consider being even more explicit: either always throw for invalid input, or always return a sentinel (null/undefined), but don’t mix both styles unless you have a compelling reason.
Generator functions and yield
Generators are niche, but they can be elegant for streaming values without building large arrays.
function* paginate(items, pageSize) {
for (let i = 0; i < items.length; i += pageSize) {
yield items.slice(i, i + pageSize);
}
}
const orders = ["ord1", "ord2", "ord3", "ord4", "ord5"];
for (const page of paginate(orders, 2)) {
console.log(page);
}
I like generators when they simplify memory use or create clean iteration APIs.
In 2026, the more common “streaming” story is async iterators rather than generators, especially for paginated APIs:
async function* fetchPages(fetchPage) {
let cursor = null;
while (true) {
const page = await fetchPage(cursor);
yield page.items;
if (!page.nextCursor) return;
cursor = page.nextCursor;
}
}
for await (const items of fetchPages(apiFetchPage)) {
for (const item of items) {
// process item
}
}
That for await...of is a statement that controls async flow as cleanly as await does.
await + return: don’t add ceremony
If you’re returning the promise result, you can usually return it directly:
async function getCustomer(id) {
return fetch(`/api/customers/${id}`).then((r) => r.json());
}
But I prefer await when I’m going to handle errors or add logging, because it keeps the statement sequence easy to read:
async function getCustomer(id) {
const response = await fetch(`/api/customers/${id}`);
if (!response.ok) {
throw new Error(`Customer fetch failed: ${response.status}`);
}
return response.json();
}
The key isn’t “always await” or “never await.” The key is whether the statement sequence communicates the behavior you want.
Error Flow Statements: throw, try...catch...finally (and Modern Patterns)
Error handling is control flow. Treat it that way.
throw: be specific, and attach context
function requirePositiveInteger(value, fieldName) {
if (!Number.isInteger(value) || value <= 0) {
throw new Error(`${fieldName} must be a positive integer`);
}
}
requirePositiveInteger(0, "pageSize"); // throws: "pageSize must be a positive integer"
In production systems, I often use custom error classes to make catch blocks precise:
class ValidationError extends Error {
constructor(message, details) {
super(message);
this.name = "ValidationError";
this.details = details;
}
}
function validateEmail(email) {
if (typeof email !== "string" || !email.includes("@")) {
throw new ValidationError("Invalid email", { email });
}
}
Two modern practices I like:
- Attach structured context (`details`) for logging.
- Preserve the original error where possible. Newer runtimes support `new Error(message, { cause })`, which is worth using when you’re wrapping errors.
try...catch: scope it tightly
I see code like this too often:
try {
// 40 lines of logic
} catch (error) {
// shrug
}
That style catches too much and makes debugging painful. I recommend wrapping only the statements that can throw and that you can meaningfully handle.
async function loadDashboard(userId) {
const user = await fetchUser(userId);
try {
const widgets = await fetchWidgets(user.plan);
return { user, widgets };
} catch (error) {
// Degrade gracefully: dashboard still loads, but with no widgets
// In a real app, you’d log error + context
console.warn("Failed to load widgets", { userId, plan: user.plan, error });
return { user, widgets: [] };
}
}
Notice what I’m doing here: I’m explicitly deciding what the error means for the product. Widget failure is non-fatal, so I return a safe fallback.
When the error is fatal, I prefer catching only to add context and then rethrow:
async function loadBillingHistory(userId) {
try {
return await fetchBillingHistory(userId);
} catch (error) {
throw new Error("Failed to load billing history", { cause: error });
}
}
finally: cleanup is a statement too
finally runs whether the try block succeeded or threw. It’s perfect for cleanup: closing resources, resetting flags, releasing locks.
async function withLoadingState(setLoading, action) {
setLoading(true);
try {
return await action();
} finally {
setLoading(false);
}
}
Two important gotchas:
1) If you return inside finally, you override any previous return or throw. That’s almost always a bug.
function bad() {
try {
throw new Error("Boom");
} finally {
return "ok"; // hides the error
}
}
2) If finally throws, it also overrides what happened before. Keep finally boring: cleanup only.
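A minimal demonstration of that second gotcha (illustrative names):

```javascript
function alsoBad() {
  try {
    throw new Error("original failure");
  } finally {
    // This error replaces the one already in flight:
    throw new Error("cleanup failed");
  }
}

try {
  alsoBad();
} catch (error) {
  console.log(error.message); // "cleanup failed" — the original is gone
}
```

The original error isn't logged anywhere; it's simply discarded.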
When not to catch
Catching errors is not automatically “responsible.” If you can’t meaningfully handle the error (or if the correct behavior is to fail fast), don’t catch it.
Common anti-patterns I avoid:
- Catching and returning `null` without context.
- Catching and doing nothing.
- Catching just to keep the app “from crashing” when the crash is what exposes a real defect.
A better compromise is to catch, log with context, and rethrow.
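Sketching that compromise (`parseManifest` is hypothetical; it uses the `{ cause }` option mentioned above):

```javascript
function parseManifest(raw) {
  try {
    return JSON.parse(raw);
  } catch (error) {
    // Log with context, then rethrow with the original attached as cause.
    console.error("Manifest parse failed", { length: raw.length });
    throw new Error("Invalid manifest", { cause: error });
  }
}
```

Callers get a domain-level error, while the original `SyntaxError` survives on `error.cause` for debugging.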
The Statement Lifecycle: Parsing, Hoisting, and Execution Order
To write reliable code, it helps to know that JavaScript doesn’t execute your file as a stream of characters. It parses, sets up scope, and then runs.
At a practical level, this means:
- Some statements create bindings that exist “before” their line executes (hoisting).
- Some bindings exist but cannot be used yet (TDZ).
- In modules, imports are processed first.
Hoisting you can use safely
I’m generally okay with function declarations being hoisted because they read nicely when used as helpers:
main();
function main() {
console.log("Starting...");
}
But I’m wary of relying on hoisting for variables, because it encourages code that’s harder to reason about.
TDZ as a feature
The TDZ is one of the best “statement safety features” in modern JavaScript. It turns a silent undefined bug into a loud, immediate ReferenceError.
If you ever find yourself fighting the TDZ, it’s usually a sign the function is doing too much or the statement ordering is unclear.
Modules vs scripts
In modules, import and export statements are part of the module structure. The biggest implication for everyday development is that side effects in module scope happen at load time.
If you want predictable systems, keep module-level statements mostly declarative: exports, constants, type definitions (in TS), and pure helpers. Put runtime effects behind explicit function calls.
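One way to sketch that pattern (`initPool` and the pool shape are made up):

```javascript
// Module scope stays declarative: constants and definitions only.
const DEFAULT_POOL_SIZE = 10;

let pool = null;

// The runtime effect is deferred behind an explicit call instead of
// running at import time.
function initPool(size = DEFAULT_POOL_SIZE) {
  if (pool === null) {
    pool = { size, connections: [] };
  }
  return pool;
}

// Nothing happens at "import time"; callers opt in explicitly:
const p = initPool();
console.log(p.size); // 10
```

Importing this module costs nothing; the effect only runs when a caller chooses to run it.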
Blocks, Scope, and Lifetime Management (A Practical Mindset)
Earlier, I said blocks are execution boundaries. Here’s the mindset shift: treat scopes as a tool to limit how much state a reader must hold.
Prefer the smallest scope that stays readable
When a variable is used only in one branch, declare it in that branch.
if (user.plan === "pro") {
const invoice = await createInvoice(user.id);
await emailInvoice(user.email, invoice);
}
Avoid:
let invoice;
if (user.plan === "pro") {
invoice = await createInvoice(user.id);
}
if (invoice) {
await emailInvoice(user.email, invoice);
}
The second version forces a reader to remember invoice might exist later, and it invites future edits to rely on that accidental state.
Use blocks to avoid “sticky” temporary variables
I mentioned tight blocks earlier. Here’s a more realistic example:
async function updateProfile(userId, input) {
const user = await fetchUser(userId);
{
const sanitized = sanitizeProfileInput(input);
const updated = { ...user.profile, ...sanitized };
await saveProfile(userId, updated);
}
return fetchUser(userId);
}
Everything inside the block is “implementation detail.” The function’s outside statements tell the story: load, update, return.
const as documentation
Using const isn’t just about preventing reassignment; it’s a signal to future-you:
- “This value won’t be replaced.”
- “If this changes, you’ll see it explicitly.”
In complex functions, that signal reduces mental load.
Data-Driven Branching: Replacing if Chains Without Getting Clever
Sometimes an if chain is perfectly fine. But there are cases where you’re really doing a lookup, and using statements to spell it out becomes noisy.
A map table instead of a switch
For simple mappings, I often use an object lookup:
const STATUS_LABEL = {
open: "Awaiting payment",
paid: "Paid",
shipped: "On the way",
canceled: "Canceled"
};
function labelOrderStatus(status) {
return STATUS_LABEL[status] ?? "Unknown";
}
This isn’t “better” than switch universally. It’s better when:
- You’re mapping values to values.
- You don’t need complex per-branch statements.
If each case needs multiple statements (validation, logging, side effects), I go back to switch or if blocks.
Strategy objects for event handling
A common real-world statement mess is event handlers:
function handleEvent(event) {
if (event.type === "click") {
trackClick(event);
return;
}
if (event.type === "view") {
trackView(event);
return;
}
if (event.type === "purchase") {
trackPurchase(event);
return;
}
}
A strategy table keeps statements localized:
const handlers = {
click: trackClick,
view: trackView,
purchase: trackPurchase
};
function handleEvent(event) {
const handler = handlers[event.type];
if (!handler) {
console.warn("Unknown event type", event.type);
return;
}
handler(event);
}
The control-flow statement (if (!handler)) stays explicit, but you avoid repeating branching boilerplate.
Iteration Statements in Production: Concurrency, Cancellation, and Backpressure
Loops aren’t just about “repeat.” In production systems, loops often interact with networks, rate limits, and cancellation.
for await...of for streams and paginated APIs
When you’re consuming an async iterator, for await...of expresses intent cleanly:
async function* streamOrders(api) {
let cursor = null;
while (true) {
const page = await api.fetchOrders({ cursor });
for (const order of page.items) {
yield order;
}
if (!page.nextCursor) return;
cursor = page.nextCursor;
}
}
for await (const order of streamOrders(api)) {
await processOrder(order);
}
This is a statement-level way to say: “keep going until the source ends.”
Cancellation with AbortController
Cancellation is control flow too. I like making it explicit at the statement level.
async function fetchWithTimeout(url, timeoutMs) {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
try {
const response = await fetch(url, { signal: controller.signal });
return response;
} finally {
clearTimeout(timeoutId);
}
}
That try...finally is doing important statement work: no matter what happens, the timeout is cleaned up.
Avoiding accidental quadratic loops
A lot of performance pain comes from statements that look harmless:
for (const item of items) {
// O(n) search inside an O(n) loop
if (items.includes(item.parentId)) {
// ...
}
}
If you need membership checks, turn the collection into a Set once, outside the loop:
const ids = new Set(items.map((x) => x.id));
for (const item of items) {
if (ids.has(item.parentId)) {
// ...
}
}
Same language, same statements—but the execution behavior changes dramatically.
switch With Real Exhaustiveness (Without Lying to Yourself)
I said earlier that switch is great for closed sets. The hard part is enforcing “closed” when the runtime accepts anything.
Runtime safety: default branch
Even if you think you covered every case, include default unless you have strong guarantees.
Developer safety: explicit unreachable
In TypeScript, I often use an assertNever helper to force exhaustiveness in a way the compiler understands. Even in plain JS, I sometimes throw on impossible states.
function assertUnreachable(value) {
throw new Error(`Unreachable case: ${String(value)}`);
}
function reducer(state, action) {
switch (action.type) {
case "init":
return { ...state, ready: true };
case "reset":
return { ...state, ready: false };
default:
return assertUnreachable(action.type);
}
}
Yes, it throws at runtime if you missed a case. That’s the point: if an “impossible” case becomes possible, I want a loud signal.
Edge-Case Statements You Should Recognize (So They Don’t Surprise You)
Most of your code will be the statements we’ve covered. But there are a few statement types that show up occasionally or matter for debugging.
Empty statement
This is valid JavaScript:
if (condition);
That trailing semicolon is an empty statement. It often appears by accident and creates weird behavior. Linters can catch it.
debugger statement
This is a real statement that triggers a breakpoint if devtools are open.
debugger;
I use it locally, never in committed code.
Directive prologue ("use strict")
At the top of a script or function, a string literal can act as a directive:
"use strict";
In modules, strict mode is effectively the default. In older script contexts, strict mode changes statement semantics (for example, it prevents certain silent errors). If you’re maintaining legacy code, it matters.
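A small illustration of one such silent error (the function name is hypothetical; the directive is placed inside the function so it applies regardless of the surrounding context):

```javascript
function leaky() {
  "use strict";
  // In sloppy mode this assignment would silently create a global;
  // in strict mode it throws a ReferenceError instead.
  accidentalGlobal = 42;
}

let threw = false;
try {
  leaky();
} catch (error) {
  threw = error instanceof ReferenceError;
}
console.log(threw); // true
```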
with statement (avoid)
with exists historically, but it makes name resolution unpredictable. In strict mode it’s disallowed. In modern code, I treat it as a “never.”
Labels (already covered)
Labels are rare but legitimate. The important part is readability: if you use them, use them for clarity, not cleverness.
Lint-Friendly Statement Style (My 2026 Defaults)
Tooling matters because it shapes how statements get written. Modern teams rely on formatters and linters to keep statement boundaries and control flow consistent.
Here are the defaults I recommend because they prevent real statement-level bugs:
- `curly`: require braces for multi-line safety (`if (x) { ... }`).
- `no-cond-assign`: prevent accidental `if (x = y)`.
- `no-fallthrough`: catch switch fall-through.
- `no-unsafe-finally`: prevent `return`/`throw` in `finally` from masking errors.
- `eqeqeq`: avoid loose equality surprises.
- `no-var`: push toward block scoping.
- `prefer-const`: document immutability and reduce reassignment.
- `consistent-return`: make return types predictable.
I also like formatting rules that keep statement boundaries obvious:
- Always format `return` expressions on the same line.
- Avoid “dangling else” ambiguity by using braces.
- Keep `try` blocks small and specific.
This isn’t about being strict for its own sake. It’s about making the control-flow and error-flow statements obvious at a glance.
Refactoring Recipes: Turning Tangled Statements Into Readable Flow
When code gets messy, it’s usually because statement intent is mixed together. Here are a few refactors I use constantly.
Recipe 1: Replace nested if with guard clauses
If you see this:
function ship(order) {
if (order) {
if (order.status === "paid") {
if (!order.address) {
throw new Error("Missing address");
}
return createShipment(order);
}
}
return null;
}
I refactor to:
function ship(order) {
if (!order) return null;
if (order.status !== "paid") return null;
if (!order.address) throw new Error("Missing address");
return createShipment(order);
}
Same logic, fewer mental stacks.
Recipe 2: Isolate side effects
If a function mixes calculation and effects, I pull the calculation into a pure helper.
function calculateTotal(items) {
let total = 0;
for (const item of items) {
if (item.quantity <= 0) continue;
total += item.priceCents * item.quantity;
}
return total;
}
async function checkout(cart) {
const totalCents = calculateTotal(cart.items);
const order = await createOrder({ ...cart, totalCents });
await chargeCard(totalCents);
return order;
}
Now your statements tell a clean story: compute, create, charge.
Recipe 3: Turn repeated branching into a table
If you’re branching on a string and calling a function, it’s probably a table.
const commands = {
help: showHelp,
login: loginUser,
logout: logoutUser
};
function runCommand(name, args) {
const cmd = commands[name];
if (!cmd) {
console.error("Unknown command", name);
return;
}
cmd(args);
}
The if (!cmd) statement is still there. You didn’t hide control flow—you clarified the structure.
Checklist: When a Bug Feels “Supernatural,” Look at Statements First
When something makes no sense, I run through this list:
1) Is ASI changing my statement boundaries? (return, throw, lines starting with ( or [).
2) Is a variable scoped differently than I think? (var vs let/const, block boundaries).
3) Did I accidentally assign inside a condition? (= vs ===).
4) Is a loop skipping work due to continue or break? (especially in nested loops).
5) Did a catch swallow an error? (empty handler, overly broad try).
6) Did finally override an error? (return/throw inside finally).
7) Did I serialize async work by awaiting inside a loop? (should it be concurrent?).
Most “haunted” JavaScript bugs become very normal once you see the statement flow clearly.
Closing Thought
Statements are the skeleton of your program: they define execution, structure, and the paths your code can take when everything goes right—and when it doesn’t.
If you get one thing from this, I hope it’s this: write statements that make your intent obvious. Prefer guard clauses over deep nesting, keep block scopes tight, isolate side effects, and handle errors deliberately. That’s how you end up with JavaScript that doesn’t just run—it stays understandable and correct as it grows.


