Your UI feels fast until the data arrives late, arrives wrong, or never arrives at all. I’ve watched a cart page show “Success” while the payment API timed out, and I’ve watched a profile screen show stale data because a slow request overwrote a fast one. JavaScript requests are the thin seam between your interface and reality. When that seam tears, users notice immediately.
I’m going to walk you through the practical ways I send requests today—XMLHttpRequest, fetch(), Axios, and async/await patterns around them—and how I choose the right approach in 2026. I’ll show complete examples you can run in the browser or Node, explain the trade‑offs in plain language, and point out mistakes I still see in production code. If you’re building anything that talks to an API, this is the stuff that decides whether your app feels trustworthy.
The mental model I use: requests as a conversation
Before I touch code, I treat a request like a short conversation with a server: you ask a question, you wait, you interpret the answer, and you decide what to do next. Every API call includes four core pieces:
1) The method (GET, POST, PUT, PATCH, DELETE) — how you’re asking.
2) The URL — who you’re talking to.
3) Headers — how you describe the request and authenticate.
4) The body — what you’re sending, if anything.
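A quick way to see all four pieces in one object is the Request class that fetch uses under the hood. The endpoint and token here are placeholders, not a real API:

```javascript
// Build (but do not send) a request to inspect the four pieces.
// "api.example.com" and "<token>" are hypothetical placeholders.
const req = new Request("https://api.example.com/notes", {
  method: "POST", // how you're asking
  headers: {
    "Content-Type": "application/json", // how you describe the payload
    Authorization: "Bearer <token>", // how you authenticate
  },
  body: JSON.stringify({ text: "hello" }), // what you're sending
});

console.log(req.method, req.url);
```

Nothing goes over the network until you hand this to fetch, which makes it a handy way to reason about a request before sending it.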
In practice, I teach teams to think of requests as “one-shot pipes.” You open the pipe, you send the request, you read the response, you close the pipe. That pipe can break in predictable ways: network errors, slow responses, unexpected status codes, or JSON that doesn’t match what you expected. The tool you choose—XHR, fetch, or Axios—decides how much control you get over each part of that conversation and how much work you have to do yourself.
I also keep two ideas in mind:
- The UI is optimistic by default. If you show a change before the server confirms it, you need a plan to roll it back.
- A response is not data until you validate it. If you treat any JSON as valid, you’re letting the server control your UI.
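The rollback half of that first idea can be sketched in a few lines: apply the change locally, then restore a snapshot if the server says no. saveOnServer is a hypothetical stand-in for whatever request function you use:

```javascript
// Optimistic update with rollback. saveOnServer is a hypothetical
// request function; the shape of the pattern is what matters.
const applyOptimistic = async (state, change, saveOnServer) => {
  const previous = { ...state }; // shallow snapshot; enough for flat state
  Object.assign(state, change); // show the change immediately
  try {
    await saveOnServer(change); // confirm with the server
    return state;
  } catch (error) {
    Object.assign(state, previous); // roll back on failure
    throw error;
  }
};
```

The snapshot is the plan mentioned above: without it, a failed save leaves the UI lying to the user.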
With that mental model, the choice of request method becomes a tooling decision rather than a religious debate.
XMLHttpRequest: still around, and why I still respect it
XMLHttpRequest (XHR) is the old workhorse. It predates modern promises, but it still runs everywhere and exposes low‑level details you sometimes need. I reach for it when I need legacy compatibility, fine‑grained upload progress, or a codebase that already depends on it.
Here’s a complete, runnable example in the browser. It fetches a post, parses JSON, and handles failure. I’m explicit about the status check because XHR won’t reject on HTTP errors.
const xhr = new XMLHttpRequest();
const url = "https://jsonplaceholder.typicode.com/posts/1";
xhr.open("GET", url, true);
xhr.onload = () => {
if (xhr.status === 200) {
try {
const data = JSON.parse(xhr.responseText);
console.log("Post title:", data.title);
} catch (parseError) {
console.error("Failed to parse JSON:", parseError);
}
} else {
console.error("Request failed with status:", xhr.status);
}
};
xhr.onerror = () => {
console.error("Network error or CORS blocked the request");
};
xhr.send();
If you’re in Node, you don’t get XHR by default. You can install a package that implements it, but I rarely do that in 2026 because Node ships with fetch. Still, it’s important to understand XHR because older enterprise apps still use it, and you might have to maintain those apps.
When I do use XHR in modern code, I follow three rules:
- Always add an onerror handler so network failures don’t get lost.
- Always check status before assuming success.
- Always parse JSON inside a try/catch.
XHR also gives you upload progress via xhr.upload.onprogress, which is handy for large file uploads. Fetch can report download progress through response streams, but upload progress is still awkward there, so XHR remains the straightforward choice for that particular use case.
fetch(): the default choice with modern trade‑offs
For browser work in 2026, fetch() is my default. It’s built in, promise‑based, and now available in modern Node runtimes as well. It also mirrors the Request/Response model, which makes code more consistent across environments.
Here’s a basic GET request with clear error handling. Note that fetch only rejects on network failures, not on HTTP error statuses. So I always check response.ok or the status range myself.
const url = "https://jsonplaceholder.typicode.com/posts/1";
fetch(url)
.then((response) => {
if (!response.ok) {
throw new Error(`HTTP ${response.status} while fetching ${url}`);
}
return response.json();
})
.then((data) => {
console.log("Post title:", data.title);
})
.catch((error) => {
console.error("Request failed:", error.message);
});
For POST requests, I pass headers and a JSON body. I also include a request ID so I can trace issues on the server. In real systems, this kind of trace ID saves hours during incident response.
const createPost = async () => {
const url = "https://jsonplaceholder.typicode.com/posts";
const payload = {
title: "Request patterns in 2026",
body: "Real-world request handling tips",
userId: 42,
};
const response = await fetch(url, {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-Request-Id": crypto.randomUUID(),
},
body: JSON.stringify(payload),
});
if (!response.ok) {
throw new Error(`HTTP ${response.status} while creating post`);
}
const created = await response.json();
console.log("Created post id:", created.id);
};
createPost().catch(console.error);
Two fetch‑specific pitfalls I see often:
- No timeout: fetch waits forever by default. I wrap it with AbortController and a timer.
- Overwriting UI with stale responses: if you fire multiple requests, a slow one can return last and overwrite newer data. I fix this by tracking request IDs in state.
Here’s a timeout wrapper I use frequently:
const fetchWithTimeout = async (url, options = {}, timeoutMs = 8000) => {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), timeoutMs);
try {
const response = await fetch(url, {
...options,
signal: controller.signal,
});
return response;
} finally {
clearTimeout(timer);
}
};
fetchWithTimeout("https://jsonplaceholder.typicode.com/posts/1")
.then((response) => response.json())
.then((data) => console.log("Post title:", data.title))
.catch((error) => {
if (error.name === "AbortError") {
console.error("Request timed out");
} else {
console.error("Request failed:", error.message);
}
});
This is the pattern I want you to internalize: fetch is a foundation, but you still need to build your own guardrails.
Axios: batteries included, especially for teams
Axios wraps XMLHttpRequest in the browser and Node’s HTTP stack on the server. I choose it when I want a consistent interface, automatic JSON parsing, and request/response interceptors. It’s also widely used in front‑end frameworks, especially in projects with existing Axios utilities.
Here’s a basic Axios GET:
import axios from "axios";
const url = "https://jsonplaceholder.typicode.com/posts/1";
axios
.get(url)
.then((response) => {
console.log("Post title:", response.data.title);
})
.catch((error) => {
if (error.response) {
console.error("HTTP error:", error.response.status);
} else {
console.error("Network or setup error:", error.message);
}
});
One reason I still recommend Axios for teams is interceptors. They let you define request behavior in one place, which matters when you have multiple services and consistent auth requirements. Here’s a simple example that injects a token and logs latency. The comments highlight the non‑obvious bits.
import axios from "axios";
const api = axios.create({
baseURL: "https://jsonplaceholder.typicode.com",
timeout: 8000,
});
api.interceptors.request.use((config) => {
// Attach a bearer token if available
const token = localStorage.getItem("access_token");
if (token) {
config.headers.Authorization = `Bearer ${token}`;
}
// Add timing metadata for performance logs
config.metadata = { startTime: Date.now() };
return config;
});
api.interceptors.response.use(
(response) => {
const duration = Date.now() - response.config.metadata.startTime;
console.log(`Request took ~${duration}ms`);
return response;
},
(error) => {
if (error.config?.metadata?.startTime) {
const duration = Date.now() - error.config.metadata.startTime;
console.log(`Failed request after ~${duration}ms`);
}
return Promise.reject(error);
}
);
api.get("/posts/1")
.then((response) => console.log("Post title:", response.data.title))
.catch((error) => console.error("Request failed:", error.message));
Axios makes timeouts and JSON parsing feel easy, but it’s still a dependency. In 2026 I balance the convenience against bundle size and the extra surface area for security updates. For a small project, fetch might be enough. For a team with shared patterns, Axios often pays for itself in saved time.
Async/await: the control flow I actually read
Async/await isn’t a request method; it’s the best way to express asynchronous requests. It turns promise chains into linear code, which makes error handling and cleanup far more readable. I use async/await for anything beyond a one‑off request.
Here’s a complete example that requests a resource, checks errors, parses JSON, and handles failure. It uses fetch because that’s the base runtime, but the pattern applies to Axios as well.
const getPostTitle = async (postId) => {
const url = `https://jsonplaceholder.typicode.com/posts/${postId}`;
try {
const response = await fetch(url);
if (!response.ok) {
throw new Error(`HTTP ${response.status} while loading post ${postId}`);
}
const data = await response.json();
return data.title;
} catch (error) {
console.error("Request failed:", error.message);
throw error;
}
};
getPostTitle(1)
.then((title) => console.log("Post title:", title))
.catch(() => {
// Upstream caller can decide how to display errors
});
I also use async/await to control concurrent requests. In a dashboard, I might fetch a user’s profile and their posts at the same time, then wait for both to finish.
const loadDashboard = async () => {
const userUrl = "https://jsonplaceholder.typicode.com/users/1";
const postsUrl = "https://jsonplaceholder.typicode.com/posts?userId=1";
try {
const [userResponse, postsResponse] = await Promise.all([
fetch(userUrl),
fetch(postsUrl),
]);
if (!userResponse.ok || !postsResponse.ok) {
throw new Error("One or more requests failed");
}
const user = await userResponse.json();
const posts = await postsResponse.json();
console.log("User name:", user.name);
console.log("Post count:", posts.length);
} catch (error) {
console.error("Dashboard load failed:", error.message);
}
};
loadDashboard();
This is where async/await shines: you get predictable control flow without nested callbacks or long promise chains. It also lets you plug in retry logic and backoff in a very direct way.
Choosing the right tool in 2026
I don’t choose tools based on habit. I choose them based on the risk and complexity of the request pipeline. Here’s how I break it down in practice.
Quick guidance I use
- Small app or single page: fetch + async/await. Less dependency, more control.
- Team projects with shared behavior: Axios for interceptors, shared error handling, and consistent timeouts.
- Legacy or special cases: XMLHttpRequest for compatibility or upload progress.
Traditional vs modern patterns
When people say “traditional vs modern,” they usually mean “callback-based vs promise-based.” Here’s the concrete difference I show to teams.
- Error handling: check status and parse manually vs response.ok plus try/catch around await.
- Control flow: nested callbacks vs linear async/await.
- Timeouts: manual timers around XHR vs AbortController with fetch or an Axios timeout.
- Shared behavior: repeated per-call logic vs a wrapper or interceptors.
- Debugging: hard-to-trace callback chains vs a single try/catch path.
My rule of thumb on complexity
If a request needs retries, tracing, auth, or careful caching, I treat it as a small subsystem. That’s where Axios or a custom fetch wrapper makes sense. For simple reads, fetch is still great. For older apps, XHR is often the least risky change.
Common mistakes and how I avoid them
I still see the same request mistakes across teams, and they’re all avoidable with small habits.
1) Forgetting that fetch doesn’t reject on HTTP errors.
I always check response.ok or specific status codes before parsing JSON.
2) Letting slow requests overwrite fast ones.
I track the most recent request ID in state and ignore older responses. It’s the same idea as ignoring stale text messages.
3) No timeout or retry strategy.
I set a timeout for user-facing requests. In consumer apps, I usually keep it in the 6–10 second range. In internal tools, I might allow 10–15 seconds.
4) Parsing JSON without validation.
I use a schema validator in front-end code. In 2026, teams often generate these schemas with AI assistance, then review them. The goal is to protect the UI from unexpected shapes.
5) Mixing concerns in UI code.
I keep request logic in a service module and call it from the UI. That way, caching, retries, and tracing don’t clutter components.
6) Ignoring CORS errors.
If a request fails in the browser but works in Postman, I check the server’s CORS settings first. I don’t waste time debugging the client before I verify the server headers.
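On mistake #4, a schema library is the robust option, but even a hand-rolled shape check protects the UI from most surprises. A minimal sketch, assuming a hypothetical profile with a numeric id and string name:

```javascript
// Minimal shape check: verify only the fields the UI depends on
// before letting the response anywhere near the render path.
const isValidProfile = (data) =>
  data !== null &&
  typeof data === "object" &&
  typeof data.id === "number" &&
  typeof data.name === "string";

const safeProfile = (data) => {
  if (!isValidProfile(data)) {
    throw new Error("Unexpected profile shape");
  }
  return data;
};
```

I keep checks like this in the service module, so components only ever see data that has already passed the gate.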
Here’s a simple guard against stale responses using a request token. It fits in a React hook or a vanilla module.
let latestRequestId = 0;
const loadUserProfile = async (userId) => {
const requestId = ++latestRequestId;
const url = `https://jsonplaceholder.typicode.com/users/${userId}`;
const response = await fetch(url);
if (!response.ok) throw new Error("Profile request failed");
const data = await response.json();
// Only use this data if it is the newest request
if (requestId === latestRequestId) {
return data;
}
return null; // Stale response, ignore it
};
That simple pattern prevents a whole class of subtle UI bugs.
Performance, resilience, and edge cases I plan for
Requests are usually fast in local testing and slower in the real world. I plan for variability instead of hoping it won’t happen. The three pillars I design for are latency, reliability, and correctness.
- Latency: I assume “fast” means 200–800ms, “okay” means 800–2000ms, and anything above that needs a loading state that feels intentional.
- Reliability: I assume 1–3% of requests will fail at scale, even with great infrastructure. I want my UI to degrade gracefully.
- Correctness: I assume the server can send unexpected shapes or outdated data. I don’t trust any response blindly.
Here’s how those pillars translate into practical behaviors:
- I show a skeleton UI for reads that often exceed 500–800ms.
- I show an inline error for a user action that fails, and I let them retry without losing form state.
- I log response time buckets (0–500ms, 500–1500ms, 1500ms+) so I can see when the backend is slipping.
I also plan for a couple of edge cases that bite teams repeatedly:
- 429 rate limiting: I treat it like a signal, not a crash. Back off and retry.
- 204 with no body: I don’t call response.json() if there’s no content.
- 502/503 spikes: I avoid infinite retries; I surface a “temporarily unavailable” message and stop.
- Malformed JSON: I catch parsing errors and show a safe fallback.
Here’s a practical response handler that covers those issues without turning into a monster:
const parseJsonIfAny = async (response) => {
if (response.status === 204) return null;
const text = await response.text();
if (!text) return null;
try {
return JSON.parse(text);
} catch (error) {
throw new Error("Invalid JSON in response");
}
};
const request = async (url, options = {}) => {
const response = await fetch(url, options);
if (!response.ok) {
if (response.status === 429) {
throw new Error("Rate limited");
}
throw new Error(`HTTP ${response.status}`);
}
return parseJsonIfAny(response);
};
Retrying, backoff, and idempotency
Retries look simple until you accidentally charge a customer twice. I only retry idempotent requests by default (GET, HEAD, and sometimes PUT or DELETE if the server guarantees idempotency). For POST requests that create resources or charge money, I use a server-side idempotency key, or I don’t retry at all.
Here’s a retry wrapper that uses exponential backoff and respects idempotency:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const requestWithRetry = async (url, options = {}, maxRetries = 2) => {
const method = (options.method || "GET").toUpperCase();
const idempotent = ["GET", "HEAD", "PUT", "DELETE"].includes(method);
let attempt = 0;
while (true) {
try {
const response = await fetch(url, options);
if (response.ok) return response;
if (response.status === 429 || response.status >= 500) {
throw new Error(`Retryable status ${response.status}`);
}
return response; // Non-retryable error
} catch (error) {
if (!idempotent || attempt >= maxRetries) {
throw error;
}
const backoff = 300 * Math.pow(2, attempt); // 300ms, 600ms, 1200ms
await sleep(backoff);
attempt += 1;
}
}
};
If I do need to retry a POST, I add an idempotency key header so the server can safely ignore duplicates:
const createOrder = async (payload) => {
const response = await fetch("/api/orders", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Idempotency-Key": crypto.randomUUID(),
},
body: JSON.stringify(payload),
});
if (!response.ok) throw new Error("Order creation failed");
return response.json();
};
Caching, ETags, and conditional requests
Caching is a performance multiplier, but only when I can trust cache freshness. I lean on three layers:
1) Browser cache: automatic for GET requests with cache headers.
2) In-memory cache: fast for repeat requests during a session.
3) Server-side cache: CDN or gateway for heavy traffic.
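For the in-memory layer, a tiny TTL cache is often all I need. This is a sketch; the right TTL depends entirely on how fresh your data must be:

```javascript
// Minimal in-memory cache with a time-to-live per entry.
const createCache = (ttlMs = 60000) => {
  const entries = new Map();
  return {
    get(key) {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (Date.now() - entry.storedAt > ttlMs) {
        entries.delete(key); // expired; force a refetch
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      entries.set(key, { value, storedAt: Date.now() });
    },
  };
};
```

A cache miss falls through to a normal request; the point is that repeat reads inside a session never touch the network.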
I use conditional requests (ETag or Last-Modified) to avoid refetching unchanged data. It’s a small change with a big win on repeat loads.
const fetchWithEtag = async (url, cached) => {
const headers = {};
if (cached?.etag) {
headers["If-None-Match"] = cached.etag;
}
const response = await fetch(url, { headers });
if (response.status === 304) {
return { data: cached.data, etag: cached.etag };
}
if (!response.ok) throw new Error(`HTTP ${response.status}`);
const data = await response.json();
const etag = response.headers.get("ETag");
return { data, etag };
};
I also avoid caching sensitive or user-specific data in shared caches. I treat anything tied to auth as “private cache only.”
Pagination, infinite scroll, and partial loading
Pagination is a request problem disguised as a UI problem. I choose between page-based and cursor-based pagination based on the API:
- Page-based: easy to implement but can skip or duplicate items when data changes.
- Cursor-based: more consistent for feeds or timelines, but requires server support.
Here’s a cursor-based pattern I use for infinite scroll:
let cursor = null;
let loading = false;
const loadNextPage = async () => {
if (loading) return;
loading = true;
try {
const url = cursor ? `/api/feed?cursor=${cursor}` : "/api/feed";
const response = await fetch(url);
if (!response.ok) throw new Error("Feed request failed");
const data = await response.json();
cursor = data.nextCursor; // server-provided
return data.items;
} finally {
loading = false; // reset even if the request throws
}
};
When I can’t get cursor-based support, I add a defensive check to avoid duplicates on page-based pagination. It’s not perfect, but it prevents the worst UI artifacts.
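That defensive check can be as simple as tracking the IDs you’ve already rendered and dropping repeats. This sketch assumes each item carries a stable id field:

```javascript
// Drop items we've already seen, keyed by a stable id. Assumes the
// API returns an id per item; adjust the key for your data.
const seenIds = new Set();

const dedupeItems = (items) =>
  items.filter((item) => {
    if (seenIds.has(item.id)) return false; // duplicate from a shifted page
    seenIds.add(item.id);
    return true;
  });
```

It won’t catch items that were skipped entirely when the underlying data shifted, but it does stop the same row from rendering twice.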
Streaming responses and large payloads
Large payloads are a real-world edge case. When I need to download big JSON or a CSV, I prefer streaming so the UI stays responsive. Fetch supports readable streams in the browser and Node (with some differences). Here’s a minimal example that reads a response as text in chunks:
const streamText = async (url) => {
const response = await fetch(url);
if (!response.ok) throw new Error("Stream request failed");
const reader = response.body.getReader();
const decoder = new TextDecoder();
let result = "";
while (true) {
const { done, value } = await reader.read();
if (done) break;
result += decoder.decode(value, { stream: true });
}
return result;
};
I don’t stream for everything. Streaming adds complexity, and you need to handle partial parsing. I use it when the payload can exceed a few megabytes or when I want to show incremental progress to the user.
File uploads, progress, and cancellation
Uploads are the most user-visible kind of request. If they’re slow, the UI needs to communicate that with progress. XHR is still the simplest way to get upload progress in the browser:
const uploadFile = (file) => {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
xhr.open("POST", "/api/upload", true);
xhr.upload.onprogress = (event) => {
if (event.lengthComputable) {
const percent = Math.round((event.loaded / event.total) * 100);
console.log(`Upload ${percent}%`);
}
};
xhr.onload = () => {
if (xhr.status === 200) resolve(xhr.responseText);
else reject(new Error(`Upload failed ${xhr.status}`));
};
xhr.onerror = () => reject(new Error("Network error"));
const formData = new FormData();
formData.append("file", file);
xhr.send(formData);
});
};
For fetch-based uploads, I add cancellation support with AbortController so users can bail out if they picked the wrong file. A cancel button should work on every upload.
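Here’s what the fetch side of that can look like. The /api/upload endpoint is a placeholder, and the fetchFn parameter is only there so tests can inject a fake network:

```javascript
// Cancellable upload via fetch. "/api/upload" is a hypothetical
// endpoint; fetchFn defaults to the real fetch.
const startUpload = (file, fetchFn = fetch) => {
  const controller = new AbortController();
  const formData = new FormData();
  formData.append("file", file);
  const done = fetchFn("/api/upload", {
    method: "POST",
    body: formData,
    signal: controller.signal, // abort() rejects `done` with an AbortError
  });
  return { done, cancel: () => controller.abort() };
};
```

The UI keeps the returned cancel function; calling it rejects the pending promise, which is the signal to reset the upload state.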
Authentication: tokens, cookies, and refresh flows
Auth is where request bugs hurt the most. I treat auth like a pipeline with clear entry points:
- Access token: short-lived, used in each request.
- Refresh token: long-lived, used only to renew access.
- CSRF protection: for cookie-based sessions.
Here’s a simple fetch wrapper that refreshes tokens on 401 while avoiding infinite loops:
let refreshing = false;
let refreshPromise = null;
const refreshAccessToken = async () => {
if (refreshing) return refreshPromise;
refreshing = true;
refreshPromise = fetch("/auth/refresh", { method: "POST", credentials: "include" })
.then((r) => {
if (!r.ok) throw new Error("Refresh failed");
return r.json();
})
.finally(() => {
refreshing = false;
});
return refreshPromise;
};
const authFetch = async (url, options = {}) => {
const token = localStorage.getItem("access_token");
const headers = { ...(options.headers || {}) };
if (token) headers.Authorization = `Bearer ${token}`;
let response = await fetch(url, { ...options, headers });
if (response.status === 401) {
const refreshed = await refreshAccessToken();
localStorage.setItem("access_token", refreshed.accessToken);
headers.Authorization = `Bearer ${refreshed.accessToken}`;
response = await fetch(url, { ...options, headers });
}
return response;
};
If I’m using cookie-based auth, I add credentials: "include" and ensure the server sets SameSite correctly. I also use CSRF tokens for state-changing requests. The key is: the browser doesn’t automatically protect you against cross-site writes unless you design for it.
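The CSRF piece usually means echoing a server-planted token in a header on state-changing requests. The cookie and header names below (XSRF-TOKEN, X-XSRF-TOKEN) are common conventions, not a standard; check what your framework expects:

```javascript
// Read a CSRF token from the cookie string and attach it to unsafe
// methods. Cookie/header names are conventions, not a spec.
const readCookie = (cookieString, name) => {
  const match = cookieString
    .split("; ")
    .find((part) => part.startsWith(name + "="));
  return match ? decodeURIComponent(match.slice(name.length + 1)) : null;
};

const withCsrf = (options, cookieString) => {
  const method = (options.method || "GET").toUpperCase();
  if (["GET", "HEAD", "OPTIONS"].includes(method)) return options; // safe methods
  const token = readCookie(cookieString, "XSRF-TOKEN");
  if (!token) return options;
  return {
    ...options,
    headers: { ...(options.headers || {}), "X-XSRF-TOKEN": token },
  };
};
```

In the browser you’d pass document.cookie as the cookie string; keeping it as a parameter makes the helper testable.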
Rate limiting and client-side throttling
When the server tells me to slow down, I treat it as a system constraint. I also proactively limit how many requests a user can trigger in a burst. For example, I debounce search input so I don’t send a request on every keystroke.
const debounce = (fn, delay) => {
let timer;
return (...args) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), delay);
};
};
const search = async (query) => {
const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
if (!response.ok) throw new Error("Search failed");
return response.json();
};
const debouncedSearch = debounce((q) => {
search(q).then((data) => console.log(data));
}, 300);
I also cap concurrency for expensive pages. If a dashboard is about to fire 12 requests at once, I queue them so the server and the browser stay healthy.
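Capping concurrency doesn’t need a library. Here’s a small limiter sketch that runs at most N tasks at a time, where each task is a function returning a promise (for example, () => fetch(url)):

```javascript
// Run async tasks with a concurrency cap, preserving result order.
const runWithLimit = async (tasks, limit = 4) => {
  const results = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const index = next++; // claim the next task synchronously
      results[index] = await tasks[index]();
    }
  };
  // Spawn at most `limit` workers that drain the shared queue.
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    worker
  );
  await Promise.all(workers);
  return results;
};
```

Because JavaScript is single-threaded, the claim-then-await pattern is race-free: no two workers can grab the same index.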
CORS, preflight, and why the browser is stricter than your tools
A request that works in a desktop client or Postman can fail in the browser because the browser enforces CORS. I look for three things:
1) The server sends Access-Control-Allow-Origin for the requesting origin.
2) If custom headers or non-simple methods are used, preflight OPTIONS returns valid headers.
3) Access-Control-Allow-Credentials is set if I’m sending cookies.
When a request fails with a CORS error, I don’t waste time debugging the client. I check server headers first. That habit saves hours.
Browser vs Node: subtle differences that matter
Since fetch is now available in Node, it’s tempting to treat browser and server requests as identical. But there are key differences:
- Cookies: browsers send them automatically (if allowed), Node doesn’t.
- CORS: browsers enforce it; Node doesn’t care.
- Streaming: Node’s streams and browser streams differ in APIs and behaviors.
- Default timeouts: many Node HTTP clients have defaults; browser fetch doesn’t.
When I build shared request utilities, I explicitly separate “browser fetch” from “server fetch.” It avoids surprising behavior in SSR or backend scripts.
Building a small request layer I actually trust
Most bugs come from duplicated request logic. That’s why I build a tiny request layer even for small apps. It usually includes:
- Default base URL
- JSON parsing with content checks
- Error normalization
- Timeout handling
- Optional auth header injection
Here’s a compact example that stays readable:
const requestJson = async (url, options = {}, timeoutMs = 8000) => {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), timeoutMs);
try {
const response = await fetch(url, {
...options,
headers: {
"Content-Type": "application/json",
...(options.headers || {}),
},
signal: controller.signal,
});
const data = await parseJsonIfAny(response);
if (!response.ok) {
const message = data?.message || `HTTP ${response.status}`;
const error = new Error(message);
error.status = response.status;
error.data = data;
throw error;
}
return data;
} finally {
clearTimeout(timer);
}
};
I treat this wrapper as a tiny internal API. The UI calls it, but it never knows about headers, parsing, or timeout details.
Observability: logs, traces, and how I debug production
When a request fails in production, I need visibility. I add three kinds of signals:
- Request IDs: unique per request and returned in server logs.
- Latency logs: simple timings for every call.
- Error categories: network, timeout, auth, server, parse.
Here’s a simple error normalizer I use so the UI can show predictable messages:
const normalizeError = (error) => {
if (error.name === "AbortError") return { type: "timeout", message: "Request timed out" };
if (error.status === 401) return { type: "auth", message: "Please sign in again" };
if (error.status >= 500) return { type: "server", message: "Server error" };
return { type: "network", message: error.message || "Request failed" };
};
This little function pays off every time you add a new API call, because the UI doesn’t have to guess what went wrong.
Testing and mocking requests without lying to yourself
Testing request code is about consistency. I rely on three levels:
1) Unit tests for parsing and error logic.
2) Integration tests against a mock server.
3) End-to-end tests for real flows.
I avoid mocking fetch directly in complex ways because it often diverges from reality. Instead, I mock at the network layer or run a lightweight mock server. The key is to test the behaviors you actually care about: retries, timeouts, and error handling.
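One practical middle ground is writing helpers that accept the fetch function as a parameter, so a test can inject a fake that simulates exactly the status or failure you care about. A sketch of that style, with getJson as a hypothetical helper:

```javascript
// A helper that takes fetchFn as a dependency, so tests can inject
// a fake network without patching globals.
const getJson = async (url, fetchFn = fetch) => {
  const response = await fetchFn(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
};

// In a test, the fake fetch controls exactly what the helper sees:
const fakeFetch = async () => ({
  ok: true,
  status: 200,
  json: async () => ({ title: "mocked" }),
});
```

Production code never passes the second argument, so the real fetch is used; only tests swap it out.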
Practical scenarios: when I use what
Here are real scenarios and what I choose:
- Login flow: fetch wrapper + auth refresh logic. No retries on POST unless idempotent key is used.
- Search box: fetch + debounce + cancel previous request with AbortController.
- File upload: XHR for upload progress; fetch for metadata or smaller files.
- Dashboard: Promise.all for parallel reads, plus caching for stable data.
- Payments: Axios or fetch with explicit error handling, no retries, and server-side idempotency.
If you’re unsure, choose the simplest tool that lets you control failure states. Complexity is the tax you pay for reliability, so only pay it when the feature deserves it.
A deeper look at stale data and race conditions
This is the bug that makes users distrust your app: they see data “jump back” to an older state. The fix isn’t just ignoring old responses; it’s tying responses to the state that produced them.
Here’s a pattern I use for search results:
let activeQuery = "";
const searchWithGuard = async (query) => {
activeQuery = query;
const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
if (!response.ok) throw new Error("Search failed");
const data = await response.json();
if (query !== activeQuery) {
return null; // ignore stale result
}
return data;
};
If the user types quickly, only the last response is used. It’s a tiny guard that prevents a big UX problem.
Error messages that users actually understand
A request error is not a technical problem to the user. It’s a broken promise. I keep messages human and specific:
- “We couldn’t save your changes. Try again.”
- “Your session expired. Please sign in again.”
- “The server is busy. Please retry in a moment.”
The technical details belong in logs, not in the UI.
When I avoid JavaScript requests entirely
Sometimes, the best request is no request. I avoid sending a network call when:
- The data is already on the page (SSR or embedded JSON).
- The user hasn’t finished their input (debounce first).
- A background sync can run later (offline queue).
This is one of the easiest ways to make an app feel faster: don’t ask the network for things you already have.
Alternative approaches: GraphQL, RPC, and realtime
Even though this guide focuses on JavaScript requests, the ecosystem has evolved. I often choose different request shapes based on the problem:
- GraphQL: great when clients need flexible data shapes, but it adds caching complexity.
- RPC (like gRPC-web): strong typing and performance, but more setup.
- WebSockets or SSE: best for realtime updates or live dashboards.
These aren’t replacements for fetch or Axios—they still use HTTP under the hood—but they change how you think about the request lifecycle. I only reach for them when the project needs that flexibility or realtime behavior.
My compact checklist for production-ready requests
When I ship request-heavy features, I run this checklist:
- Do I handle network errors and HTTP errors separately?
- Is there a timeout, and does the UI show a useful message?
- Are stale responses ignored or cancelled?
- Are POST requests idempotent or protected from retries?
- Is JSON validated or at least shape-checked?
- Are tokens refreshed safely without infinite loops?
- Are logs and request IDs in place for debugging?
If I can’t answer yes to those, I don’t ship.
Closing thought: treat requests as product features
The code that sends requests isn’t just plumbing; it’s part of the user experience. A slow request feels like a slow app. A flaky request feels like an unreliable product. A silent failure feels like a broken promise.
When I design request code with the same care I put into UI, users notice. They trust the product. And that trust is what keeps them coming back.
If you only take one thing from this guide, take this: requests are a conversation. When you respect the conversation—timeouts, validation, retries, and clear error handling—your app feels calm and dependable, even when the network isn’t.