I still remember the first time a production dashboard went blank because a data request failed silently. The UI looked fine, the API was healthy, and yet I had a blank screen staring back at me. The root cause was a fetch call that didn’t check the HTTP status, so a 404 turned into a confusing JSON parse error. Since then, I’ve treated data fetching as a first‑class feature, not a quick utility. In this post, I’ll show you how I use the Fetch API to get data reliably, readably, and safely. You’ll see how I structure fetch calls, handle HTTP status codes, manage timeouts, cancel in‑flight requests, parse JSON safely, and design small, testable helper functions that scale with a codebase. I’ll also compare older approaches to modern patterns, point out common mistakes I see in code reviews, and share practical rules of thumb for performance and error handling. If you want fetch calls that are easy to maintain and predictable when things go wrong, this will get you there.
The mental model: request, response, and a promise
When I use fetch, I treat it like sending a letter and getting a sealed envelope back. The act of sending returns a promise immediately. The envelope arrives later. You still need to open it and read it, and you still need to check whether the sender wrote “success” or “error” on the outside.
Fetch returns a Promise that resolves to a Response object. That promise only rejects on network-level failures (like DNS issues or blocked requests). HTTP status codes like 404 or 500 still resolve successfully, so you must check response.ok yourself. This design surprises people at first, so I make it explicit in every utility.
Here’s the bare minimum I consider acceptable in a production codebase:
fetch("https://api.example.com/products/1")
  .then((response) => {
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error("Request failed:", error);
  });
This is not just defensive programming. It’s about telling future you exactly where failures are handled. I use the same pattern whether I’m fetching data for a UI or loading a configuration file.
A clean, modern baseline with async/await
Promises are fine, but in my experience async/await makes error handling and sequencing much clearer. I also find it easier to add logging and timing when a function reads like synchronous code.
async function getProduct(productId) {
  const url = `https://api.example.com/products/${productId}`;
  try {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    const data = await response.json();
    return data;
  } catch (error) {
    console.error("Failed to fetch product:", error);
    throw error; // Let callers decide how to display the error
  }
}

getProduct(1).then((product) => console.log(product));
Why do I rethrow the error? Because data‑fetching helpers should not decide UI behavior. The caller might show a toast, render a fallback, or retry. Keeping that boundary clean is one of the easiest ways to avoid “spaghetti error handling.”
Handling HTTP status codes with intention
The Fetch API gives you the response status, but you decide how to map that into user‑level behavior. I prefer to normalize status handling so the rest of the app can act on explicit error types.
Here’s a pattern I use when I want to distinguish “not found” from “server down” without littering code with status checks:
class HttpError extends Error {
  constructor(message, status) {
    super(message);
    this.name = "HttpError";
    this.status = status;
  }
}

async function fetchJson(url, options = {}) {
  const response = await fetch(url, options);
  if (!response.ok) {
    throw new HttpError(`HTTP ${response.status}`, response.status);
  }
  return response.json();
}

async function run() {
  try {
    const product = await fetchJson("https://api.example.com/products/1");
    console.log(product);
  } catch (error) {
    if (error instanceof HttpError && error.status === 404) {
      console.error("Product not found");
      return;
    }
    console.error("Unexpected error:", error);
  }
}

run();
This is one of those small investments that pays off fast. When new contributors join a project, they don’t have to guess how errors flow. They can pattern‑match against a single error type.
Parsing and validating JSON without surprises
The response.json() call can throw if the body isn’t valid JSON. That includes server errors that return HTML error pages. If you’ve ever seen “Unexpected token < in JSON,” you’ve lived this pain.
Here’s a technique I use when I want more robust parsing with human‑readable errors:
async function safeJson(response) {
  const text = await response.text();
  try {
    return JSON.parse(text);
  } catch (error) {
    throw new Error("Invalid JSON in response body");
  }
}

async function fetchJsonSafe(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return safeJson(response);
}
I don’t always do this, but when I’m integrating with third‑party APIs, it’s worth the extra few lines. It’s the difference between “something went wrong” and “the response format is invalid.”
If you need to validate the shape of the JSON, you can add a lightweight check. I typically avoid heavy schema validation in the client unless the data is critical. A pragmatic approach is to check for required fields and types, then log or throw if they’re missing.
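As a sketch of that lightweight check, here is a tiny validator I might use. The assertShape name and the { field: "type" } spec format are conventions invented for this example, not a standard API:

```javascript
// Hypothetical helper: check required fields and their primitive types
// before the rest of the app uses the parsed data.
function assertShape(data, spec) {
  for (const [field, type] of Object.entries(spec)) {
    if (typeof data?.[field] !== type) {
      throw new Error(`Invalid response shape: "${field}" should be a ${type}`);
    }
  }
  return data;
}

// Usage after parsing (illustrative):
// const product = assertShape(await response.json(), { id: "number", name: "string" });
```

I keep checks like this to a handful of critical fields; full schema validation belongs on the server or in a dedicated library.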
Timeouts and request cancellation
Fetch does not include a built‑in timeout, so if a server hangs, the request can stay open indefinitely. I like to wrap fetch with AbortController so I can cancel requests when they run too long or when a user navigates away.
async function fetchWithTimeout(url, { timeoutMs = 8000, ...options } = {}) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, {
      ...options,
      signal: controller.signal,
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  } catch (error) {
    if (error.name === "AbortError") {
      throw new Error("Request timed out");
    }
    throw error;
  } finally {
    clearTimeout(timeoutId);
  }
}
In UI code, I also use AbortController to prevent stale updates. If a user types fast and triggers multiple requests, I abort the previous one so only the most recent result updates the screen.
let activeController = null;

async function searchProducts(query) {
  if (activeController) {
    activeController.abort();
  }
  activeController = new AbortController();
  const response = await fetch(`https://api.example.com/search?q=${encodeURIComponent(query)}`, {
    signal: activeController.signal,
  });
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}
If you’ve ever had a UI show results for the wrong query, this pattern fixes it instantly.
Request configuration: headers, methods, and body
Although most data reads are GET requests, you often need to send headers for authentication, caching, or content negotiation. I keep a tiny helper to avoid repetition:
function buildOptions({ method = "GET", token, body, headers = {} } = {}) {
  const finalHeaders = {
    ...headers,
  };
  if (token) {
    finalHeaders.Authorization = `Bearer ${token}`;
  }
  if (body) {
    finalHeaders["Content-Type"] = "application/json";
  }
  return {
    method,
    headers: finalHeaders,
    body: body ? JSON.stringify(body) : undefined,
  };
}

async function updateProduct(productId, payload, token) {
  const url = `https://api.example.com/products/${productId}`;
  const options = buildOptions({ method: "PUT", token, body: payload });
  const response = await fetch(url, options);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}
The helper keeps the code readable and prevents mistakes like forgetting to set the content type or accidentally including undefined bodies.
Streaming and large responses
Sometimes you don’t want to read an entire response into memory. Fetch supports streaming via response.body, which is a readable stream. In practice, I only use streaming in a few cases: large downloads, server‑sent data, or when I need progress updates.
Here’s a simplified pattern to read a streaming response into text progressively:
async function streamToString(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    result += decoder.decode(value, { stream: true });
  }
  result += decoder.decode();
  return result;
}
This is not your everyday fetch usage, but it’s good to know it exists. I treat it as a specialized tool, not a default.
Common mistakes I see in code reviews
If you want to level up fast, avoid these patterns. I’ve personally made all of these mistakes at some point.
- Forgetting to check response.ok. This creates confusing bugs and hides server errors.
- Calling response.json() without awaiting it. That returns a promise, not data.
- Returning inside a try block but swallowing errors in catch and not rethrowing. This hides failures from the caller.
- Ignoring timeouts. A hung request can degrade UI responsiveness and leave spinners stuck.
- Passing unescaped user input into URLs. Use encodeURIComponent for query values.
- Assuming JSON responses for every request. Some APIs return empty bodies or plain text.
I use code review time to build shared habits. Even a small helper function that enforces good defaults can eliminate most of these mistakes.
When to use fetch and when not to
Fetch is a solid default for browser requests, but it’s not always the best tool in every environment. Here’s how I decide:
Use fetch when:
- You’re in a modern browser or a runtime that provides a fetch implementation.
- You want a lightweight, standards‑based API with minimal dependencies.
- You need a straightforward way to make HTTP requests without heavy abstractions.
Avoid fetch when:
- You need built‑in retries, caching, or advanced middlewares out of the box.
- You’re in a codebase that already standardizes on a different client.
- You require consistent behavior across older browsers without polyfills.
In enterprise apps, I often wrap fetch in a small “client” module so the rest of the app doesn’t depend on the global API. That makes it easy to swap in a different HTTP client later if requirements change.
Performance and user‑perceived speed
Fetch itself is fast; the real performance wins come from what you do around it. Here are patterns I’ve found practical:
- Avoid sending requests you don’t need. Debounce search input and cancel stale requests.
- Reuse cached data for short periods. In my experience, a cache TTL of 30–60 seconds dramatically reduces repeat calls without noticeable staleness.
- Parallelize independent requests with Promise.all. This typically saves 50–200ms on common dashboards.
- Serialize dependent requests, but only where needed. Don’t chain just because it “feels” ordered.
- Use response compression on the server (gzip or brotli) to reduce payload size. That often saves 20–60% on JSON responses.
If you’re building a UI, I care more about time to first meaningful data than total transfer size. A small “summary” endpoint plus a follow‑up detail request often feels faster than one giant payload.
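To make the parallelization point concrete, here is a minimal sketch; fetchProfile and fetchStats are hypothetical loaders standing in for any independent, promise-returning requests:

```javascript
// Both loaders start immediately; await resolves when the slower one finishes,
// so total latency is roughly max(a, b) instead of a + b.
async function loadInParallel(fetchProfile, fetchStats) {
  const [profile, stats] = await Promise.all([fetchProfile(), fetchStats()]);
  return { profile, stats };
}

// Usage (illustrative):
// const data = await loadInParallel(
//   () => fetchJson("/me"),
//   () => fetchJson("/stats"),
// );
```

Note that Promise.all rejects as soon as any request fails; use Promise.allSettled when partial results are acceptable.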
Traditional vs modern data fetching approaches
I still see legacy patterns in older codebases. Here’s a straightforward comparison based on what I encounter in real projects.
| Approach | Typical setup effort | Best use today |
| --- | --- | --- |
| XMLHttpRequest | Medium and verbose | Legacy browsers or old codebases |
| jQuery $.ajax | Low if jQuery is already present | Existing apps already on jQuery |
| fetch | Low and standards-based | Modern web apps and light clients |

If you’re starting fresh in 2026, I recommend fetch for browser apps, with a light wrapper that enforces your team’s conventions.
A small fetch client I actually use
If you want a reusable pattern, here’s a compact “client” that I would happily ship. It handles timeouts, JSON parsing, and standard error handling in one place. You can drop it into an httpClient.js file and build on it.
class HttpError extends Error {
  constructor(message, status, payload) {
    super(message);
    this.name = "HttpError";
    this.status = status;
    this.payload = payload;
  }
}

async function parseJsonOrNull(response) {
  const text = await response.text();
  if (!text) return null;
  try {
    return JSON.parse(text);
  } catch (error) {
    throw new Error("Invalid JSON in response body");
  }
}

async function request(url, { method = "GET", headers = {}, body, timeoutMs = 8000 } = {}) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, {
      method,
      headers: {
        "Accept": "application/json",
        ...headers,
      },
      body: body ? JSON.stringify(body) : undefined,
      signal: controller.signal,
    });
    const payload = await parseJsonOrNull(response);
    if (!response.ok) {
      throw new HttpError(`HTTP ${response.status}`, response.status, payload);
    }
    return payload;
  } catch (error) {
    if (error.name === "AbortError") {
      throw new Error("Request timed out");
    }
    throw error;
  } finally {
    clearTimeout(timeoutId);
  }
}

// Example usage
async function getUserProfile(userId) {
  return request(`https://api.example.com/users/${userId}`);
}
This gives you a single entry point for data fetching. I like it because I can add metrics, tracing headers, or authentication in one place later.
Edge cases and real‑world scenarios
Here are some situations where fetch behaves differently than you might expect:
- Empty response bodies. A 204 response has no body, so response.json() will throw. I avoid this by using parseJsonOrNull and checking for null.
- Redirects. Fetch follows redirects by default. If you need to detect them, check response.redirected or inspect response.url.
- CORS. If a request fails due to CORS, the browser blocks access to response details. The fix is server-side: ensure correct CORS headers.
- Cookies and credentials. Fetch does not send cookies by default for cross-origin requests. Use credentials: "include" when appropriate.
- Cache behavior. The browser cache can make a response appear “instant,” but stale if you don’t control caching headers. Be intentional about it.
I like to treat these as “known gotchas.” If a bug looks mysterious, I check this list first.
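For the redirect gotcha specifically, here is a small sketch of how I would surface response.redirected and response.url; describeRedirect is a made-up helper name for this example:

```javascript
// Report whether a response was the result of a redirect.
// Accepts anything Response-shaped, which keeps it easy to unit test.
function describeRedirect(response) {
  if (!response.redirected) return null;
  return `Request was redirected to ${response.url}`;
}

// Usage (illustrative):
// const response = await fetch("https://api.example.com/old-path");
// const note = describeRedirect(response);
// if (note) console.warn(note);
```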
A more complete data‑fetching helper with typed results
When I’m working in a team, I want helpers that make success and failure explicit without forcing exceptions everywhere. One pattern I like is returning a result object with ok, data, and error fields. It’s a little more verbose, but it reads well in application code and keeps error handling consistent.
async function requestResult(url, options = {}) {
  try {
    const response = await fetch(url, options);
    const contentType = response.headers.get("content-type") || "";
    const isJson = contentType.includes("application/json");
    const payload = isJson ? await response.json() : await response.text();
    if (!response.ok) {
      return {
        ok: false,
        status: response.status,
        error: payload || `HTTP ${response.status}`,
      };
    }
    return { ok: true, status: response.status, data: payload };
  } catch (error) {
    return { ok: false, status: 0, error: error.message || "Network error" };
  }
}

async function runExample() {
  const result = await requestResult("https://api.example.com/products/1");
  if (!result.ok) {
    console.error("Failed:", result.error);
    return;
  }
  console.log("Product:", result.data);
}
I’m not saying this is better than exceptions for every case, but it’s a nice alternative when you want predictable control flow and minimal try/catch blocks.
A practical pattern for GET, POST, PUT, DELETE
I often standardize a small set of helpers so the rest of the team can write clean calls. Here’s a simple layout that scales without becoming a framework.
const api = {
  get: (url, options = {}) => request(url, { ...options, method: "GET" }),
  post: (url, body, options = {}) =>
    request(url, {
      ...options,
      method: "POST",
      headers: { "Content-Type": "application/json", ...(options.headers || {}) },
      body, // pass the raw body; request() handles JSON.stringify itself
    }),
  put: (url, body, options = {}) =>
    request(url, {
      ...options,
      method: "PUT",
      headers: { "Content-Type": "application/json", ...(options.headers || {}) },
      body, // pass the raw body; request() handles JSON.stringify itself
    }),
  del: (url, options = {}) => request(url, { ...options, method: "DELETE" }),
};

// Usage
async function updateUser(userId, payload) {
  return api.put(`https://api.example.com/users/${userId}`, payload);
}
This adds minimal abstraction but gives you a consistent feel across the codebase. The more important point is that the same request logic is used for every method, so errors and parsing behave the same.
Handling authentication and tokens safely
Most production apps use some form of authentication. With fetch, you typically set an Authorization header or rely on cookies. The key is to avoid sprinkling auth logic everywhere.
I tend to centralize token injection and refresh logic in the fetch wrapper. Here’s a simplified pattern:
let authToken = null;

function setAuthToken(token) {
  authToken = token;
}

async function authRequest(url, options = {}) {
  const headers = {
    ...options.headers,
  };
  if (authToken) {
    headers.Authorization = `Bearer ${authToken}`;
  }
  return request(url, { ...options, headers });
}
If you need to refresh tokens, you can extend the wrapper to detect 401 responses and try a refresh flow once. I usually do this carefully to avoid infinite loops:
- If a request returns 401, attempt refresh once.
- If refresh fails, clear auth and redirect to login.
- If refresh succeeds, retry the original request with the new token.
This is a good example of why I like a central request layer. You can add these behaviors once instead of in every call.
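The three steps above can be sketched as a small wrapper. doRequest and refreshToken are injected placeholders (assumptions for this example), which keeps the retry-once logic independent of any particular auth backend:

```javascript
// Retry a 401 exactly once after a successful token refresh.
// doRequest: () => Promise<Response-like>, refreshToken: () => Promise<boolean>.
async function requestWithRefresh(doRequest, refreshToken) {
  const response = await doRequest();
  if (response.status !== 401) return response;
  const refreshed = await refreshToken(); // attempt refresh once, never loop
  if (!refreshed) {
    throw new Error("Session expired"); // caller clears auth and redirects to login
  }
  return doRequest(); // retry the original request with the new token
}
```

Because doRequest is called again rather than reusing the first response, the retry naturally picks up the new token if your request layer reads it at call time.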
Working with query parameters cleanly
Building URLs by string concatenation is error‑prone. I prefer to use URL and URLSearchParams so the browser handles encoding.
function buildUrl(base, params = {}) {
  const url = new URL(base);
  const search = new URLSearchParams(params);
  url.search = search.toString();
  return url.toString();
}

async function searchProducts(query, page = 1) {
  const url = buildUrl("https://api.example.com/search", {
    q: query,
    page,
  });
  return request(url);
}
This avoids all the little bugs around ?, &, and escaping special characters. It also reads more clearly when you have multiple parameters.
Caching strategies I actually use
Caching is one of those topics that can quickly get over‑engineered. For fetch, I keep it simple and match the problem.
1) In‑memory cache for short TTL
Great for dashboards or repeated lookups.
const cache = new Map();

function getCacheKey(url, options) {
  return JSON.stringify({ url, options });
}

async function cachedRequest(url, options = {}, ttlMs = 60000) {
  const key = getCacheKey(url, options);
  const cached = cache.get(key);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.value;
  }
  const value = await request(url, options);
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
2) Stale‑while‑revalidate
This pattern returns cached data immediately, then refreshes in the background and updates the cache. It feels fast to users and keeps data reasonably fresh.
async function staleWhileRevalidate(url, options = {}, ttlMs = 60000) {
  const key = getCacheKey(url, options);
  const cached = cache.get(key);
  if (cached && cached.expiresAt > Date.now()) {
    // Kick off refresh without blocking
    request(url, options)
      .then((fresh) => cache.set(key, { value: fresh, expiresAt: Date.now() + ttlMs }))
      .catch(() => {});
    return cached.value;
  }
  const fresh = await request(url, options);
  cache.set(key, { value: fresh, expiresAt: Date.now() + ttlMs });
  return fresh;
}
I only add caching if it solves a real problem. I’ve seen teams add cache layers that make debugging harder with little performance benefit.
Retrying failed requests without being reckless
Fetch doesn’t retry by default, which is actually a good thing. Retries can hide issues or overload a struggling server. But in practice, a small retry for transient failures can improve reliability.
Here’s a careful approach I use:
- Only retry on network failures or 5xx responses.
- Use a small backoff (like 200–800ms range).
- Limit to 1–2 retries for user‑facing requests.
async function requestWithRetry(url, options = {}, maxRetries = 2) {
  let attempt = 0;
  while (true) {
    try {
      const response = await fetch(url, options);
      if (!response.ok && response.status >= 500 && response.status < 600) {
        throw new Error(`Server error ${response.status}`);
      }
      return response;
    } catch (error) {
      attempt += 1;
      if (attempt > maxRetries) throw error;
      const delay = 200 * attempt + Math.random() * 200; // linear backoff with jitter
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
If you adopt retries, document them. Silent retries can confuse debugging if a request seems “slow” or “randomly delayed.”
Handling file uploads and non‑JSON payloads
Not every request is JSON. When uploading files, you usually want FormData so the browser sets the proper Content-Type boundary for you.
async function uploadAvatar(file) {
  const form = new FormData();
  form.append("avatar", file);
  const response = await fetch("https://api.example.com/avatar", {
    method: "POST",
    body: form,
  });
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}
Key rule: do not manually set Content-Type when using FormData. The browser will handle it, and manual setting often breaks the request.
Dealing with 204 No Content and empty responses
One of the most common issues I see is code that assumes every response has JSON. A 204 response is success with no body, and calling response.json() will throw.
Here’s a robust helper that handles it gracefully:
async function parseBody(response) {
  if (response.status === 204) return null;
  const contentType = response.headers.get("content-type") || "";
  if (contentType.includes("application/json")) {
    return response.json();
  }
  return response.text();
}
This tiny helper prevents a surprising number of bugs, especially in APIs that use 204 for deletes and updates.
Handling CORS in the real world
CORS is one of those topics that feels mysterious until you realize it’s a browser security policy, not a fetch feature. If CORS blocks a request, you might see a network error without response details, because the browser hides the response.
My checklist:
- Confirm the request is sent to the correct domain and protocol.
- Ensure the server includes Access-Control-Allow-Origin for your site.
- If you need cookies, ensure Access-Control-Allow-Credentials: true is present, and use credentials: "include" in fetch.
- If the request is “non-simple” (custom headers or non-GET/POST), the browser triggers a preflight OPTIONS request. Make sure the server responds to that preflight correctly.
When CORS is wrong, no amount of client‑side fetch code will fix it. It’s a server configuration problem.
Working with cookies and credentials
If your auth uses cookies, you need to opt into sending them. By default, fetch will not include cookies for cross‑origin requests.
fetch("https://api.example.com/profile", {
  credentials: "include",
})
  .then((res) => res.json())
  .then((data) => console.log(data));
Use this carefully. If you send credentials to third‑party domains, you can create security risks and CORS headaches.
Progress updates for downloads
Sometimes you want to show a progress bar for large downloads. Fetch doesn’t provide download progress events directly, but you can read the stream and track bytes manually.
async function downloadWithProgress(url, onProgress) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const contentLength = response.headers.get("content-length");
  const total = contentLength ? parseInt(contentLength, 10) : null;
  const reader = response.body.getReader();
  let received = 0;
  const chunks = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    if (total) {
      onProgress(Math.round((received / total) * 100));
    } else {
      onProgress(null); // Unknown total length
    }
  }
  const blob = new Blob(chunks);
  return blob;
}
This isn’t everyday code, but it’s a lifesaver when you’re building download experiences that feel professional.
Making fetch testable
Testing data‑fetching code is often overlooked. I like to make my fetch helpers accept a fetchImpl parameter so I can inject a mock in tests.
function createClient(fetchImpl = fetch) {
  return async function client(url, options = {}) {
    const response = await fetchImpl(url, options);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  };
}

// Test with a fake fetch
const fakeFetch = async () => ({
  ok: true,
  json: async () => ({ ok: true }),
});

const client = createClient(fakeFetch);
client("/test").then((data) => console.log(data));
This pattern keeps your business logic decoupled from the network. It’s also easier to test error paths without relying on a real API.
A checklist I use before shipping
I keep a short checklist whenever I add a new fetch integration. It helps me avoid the most common problems:
- Do I handle non‑2xx responses explicitly?
- Do I handle empty responses or non‑JSON bodies?
- Is the URL constructed safely (no string concat mistakes)?
- Do I need timeouts or cancellation?
- Are credentials and headers set correctly?
- Is there a plan for retries or error messaging?
This takes two minutes and saves hours of debugging later.
Alternative patterns: hooks and composables
If you’re working in a UI framework, you’ll often wrap fetch in a higher‑level abstraction. Here’s the conceptual shape I use, regardless of framework:
- Keep the fetch logic in a separate module.
- Keep UI state (loading, error, data) in the component or hook.
- Keep cancellation logic near the UI to avoid updating unmounted views.
Example of a minimal hook‑style pattern:
function useFetch(url) {
  const [data, setData] = React.useState(null);
  const [error, setError] = React.useState(null);
  const [loading, setLoading] = React.useState(false);

  React.useEffect(() => {
    let active = true;
    const controller = new AbortController();
    setLoading(true);
    setError(null);

    fetch(url, { signal: controller.signal })
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then((data) => {
        if (active) setData(data);
      })
      .catch((err) => {
        if (err.name !== "AbortError" && active) setError(err);
      })
      .finally(() => {
        if (active) setLoading(false);
      });

    return () => {
      active = false;
      controller.abort();
    };
  }, [url]);

  return { data, error, loading };
}
This separates concerns nicely: the hook manages lifecycle and cancellation, while fetch handles the network call. I like it because you can keep the fetch wrapper and the UI logic decoupled.
Production observability: logging and metrics
When data fetching goes wrong in production, logs are your lifeline. I usually add lightweight instrumentation to my fetch wrapper:
- Log failures with the URL and status code.
- Record timing with performance.now().
- Tag requests with a short request ID for correlation.
async function monitoredRequest(url, options = {}) {
  const start = performance.now();
  try {
    const response = await fetch(url, options);
    const duration = performance.now() - start;
    if (!response.ok) {
      console.warn("Request failed", { url, status: response.status, duration });
      throw new Error(`HTTP ${response.status}`);
    }
    console.log("Request ok", { url, duration });
    return response.json();
  } catch (error) {
    const duration = performance.now() - start;
    console.error("Network error", { url, duration, error });
    throw error;
  }
}
This kind of observability doesn’t require heavy tooling, but it makes debugging far easier. If you already have a logging system, route these events into it.
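For the request-ID idea from the list above, here is a minimal sketch. The X-Request-Id header name is an assumption; use whatever correlation header your backend expects:

```javascript
// Attach a short random ID to a request's headers so client-side logs
// can be correlated with server-side logs for the same call.
function withRequestId(options = {}) {
  const requestId = Math.random().toString(36).slice(2, 10);
  return {
    requestId,
    options: {
      ...options,
      headers: { ...(options.headers || {}), "X-Request-Id": requestId },
    },
  };
}

// Usage (illustrative):
// const { requestId, options } = withRequestId({ method: "GET" });
// console.log("sending", requestId);
// await fetch(url, options);
```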
A compact rules‑of‑thumb section
If you only remember a few things from this whole guide, let it be these:
- Always check response.ok before parsing data.
- Prefer async/await for readable error handling.
- Use AbortController for timeouts and stale request cancellation.
- Treat JSON parsing as a potential failure point.
- Build URLs with URL and URLSearchParams, not string concat.
- Keep fetch logic centralized so behavior is consistent.
A stronger end‑to‑end example
To show how all these pieces fit together, here’s a more realistic example that includes: timeout, auth header, status handling, JSON parsing, and a clean API for the rest of the app.
class HttpError extends Error {
  constructor(message, status, payload) {
    super(message);
    this.name = "HttpError";
    this.status = status;
    this.payload = payload;
  }
}

function createApiClient({ baseUrl, getToken, timeoutMs = 8000 }) {
  return async function apiRequest(path, options = {}) {
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
    const headers = {
      "Accept": "application/json",
      ...(options.headers || {}),
    };
    const token = getToken && getToken();
    if (token) {
      headers.Authorization = `Bearer ${token}`;
    }
    try {
      const response = await fetch(`${baseUrl}${path}`, {
        ...options,
        headers,
        signal: controller.signal,
      });
      let payload = null;
      const contentType = response.headers.get("content-type") || "";
      if (contentType.includes("application/json")) {
        const text = await response.text();
        payload = text ? JSON.parse(text) : null;
      } else {
        payload = await response.text();
      }
      if (!response.ok) {
        throw new HttpError(`HTTP ${response.status}`, response.status, payload);
      }
      return payload;
    } finally {
      // Always clear the timer, even if fetch or parsing throws
      clearTimeout(timeoutId);
    }
  };
}

// Usage
const api = createApiClient({
  baseUrl: "https://api.example.com",
  getToken: () => localStorage.getItem("token"),
  timeoutMs: 7000,
});

async function loadDashboard() {
  try {
    const [profile, stats] = await Promise.all([
      api("/me"),
      api("/stats"),
    ]);
    return { profile, stats };
  } catch (error) {
    if (error instanceof HttpError && error.status === 401) {
      console.error("Auth error, redirecting to login");
      return null;
    }
    console.error("Dashboard load failed", error);
    throw error;
  }
}
This is the level of structure I aim for in real code. It’s not heavy, but it gives you clarity, safety, and flexibility.
Closing thoughts
Fetch is deceptively simple. You can make a request in one line, but real‑world usage needs a bit more care. Once you internalize the mental model—promise resolves on network success, not HTTP success—you can build helpers that make the API safe and predictable. Add a small wrapper, a timeout, and consistent status handling, and you’ll eliminate 80% of common fetch bugs.
I keep coming back to the same theme: treat data fetching as a real feature. It deserves structure, testing, and thoughtful error handling. When you do, your UI becomes more reliable, debugging gets easier, and your team shares a common language around how requests are made and handled. That’s the difference between “it works” and “it’s dependable.”
If you adopt just a few of these patterns, you’ll feel the difference immediately—especially the first time a server hiccup doesn’t take down your interface.