<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Viktor Lázár</title>
    <description>The latest articles on DEV Community by Viktor Lázár (@lazarv).</description>
    <link>https://dev.to/lazarv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1115191%2Fd5ca33e8-94a4-4cdf-9452-400c95556d9c.jpeg</url>
      <title>DEV Community: Viktor Lázár</title>
      <link>https://dev.to/lazarv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lazarv"/>
    <language>en</language>
    <item>
      <title>The Master Builder, Unleashed</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Thu, 07 May 2026 06:40:36 +0000</pubDate>
      <link>https://dev.to/lazarv/the-master-builder-unleashed-48bf</link>
      <guid>https://dev.to/lazarv/the-master-builder-unleashed-48bf</guid>
      <description>&lt;p&gt;There is a particular kind of pain in software work: sitting in a meeting about a thing you already know how to build.&lt;/p&gt;

&lt;p&gt;Not vaguely. Not optimistically. You can see the first version. You can see the shape of the data, the awkward part of the UI, the one integration that will probably hurt, the test that should exist before anyone trusts it, the part that can be ugly for a week, and the part that must be right from the beginning. The work is not done, but the form is already present in your head.&lt;/p&gt;

&lt;p&gt;Then the meeting continues.&lt;/p&gt;

&lt;p&gt;The discussion moves through alignment, ownership, prioritization, stakeholder expectations, dependency mapping, launch risk, follow-up meetings, and the increasingly ceremonial question of who should "drive" the thing. None of those words are fake. Some of them point at real constraints. But the emotional fact remains: the software could have started existing an hour ago.&lt;/p&gt;

&lt;p&gt;This is not the impatience of someone who does not understand organizations. It is the frustration of someone who understands both the work and the organization well enough to feel the gap between them.&lt;/p&gt;

&lt;p&gt;I have spent most of my career building things that were not supposed to fit where I put them: old game engines in the browser, data protocols in JavaScript, React Server Components outside the frameworks that tried to own them.&lt;/p&gt;

&lt;p&gt;That kind of work teaches you something uncomfortable: the hard part is rarely the first line of code. The hard part is keeping the shape of the thing intact while the world asks you to translate it into smaller, safer pieces.&lt;/p&gt;

&lt;p&gt;This is where AI agents change the equation.&lt;/p&gt;

&lt;p&gt;For a long time, the gap between seeing the shape of the thing and getting it built without losing that shape was just the cost of doing serious software. Big products needed big teams. Big teams needed coordination. Coordination needed meetings. The developer who could see the shape of the thing still needed designers, reviewers, frontend engineers, backend engineers, QA, release managers, platform support, security review, product sign-off, and enough calendar space for all of those people to agree that the thing should become real.&lt;/p&gt;

&lt;p&gt;The company owned execution. The individual owned at most a piece of intent.&lt;/p&gt;

&lt;p&gt;AI agents have started to disturb that bargain.&lt;/p&gt;

&lt;h2&gt;The master builder&lt;/h2&gt;

&lt;p&gt;The developer I am talking about is not any developer.&lt;/p&gt;

&lt;p&gt;This is not a beginner with a prompt box. It is not a mid-level engineer asking a model to fill in the parts they do not yet understand. It is not the fantasy that software can now be produced by desire alone, where a person describes an app, accepts the first plausible artifact, and calls the result engineering.&lt;/p&gt;

&lt;p&gt;The person at the center of this shift is closer to the old idea of the master builder.&lt;/p&gt;

&lt;p&gt;A master builder does not merely place bricks. They understand the structure before it exists. They know what can be improvised and what cannot. They know which details are cosmetic, which details are load-bearing, and which shortcuts will become expensive only after the room is full of people. They can work with specialists without being dissolved by specialization, because they carry a model of the whole.&lt;/p&gt;

&lt;p&gt;In software, this is the staff-level engineer, the principal engineer, the technical founder, the experienced IC with taste and ownership, the person who has built enough systems to know that implementation is never just implementation. They can read a product problem and see a system. They can read a system and see the product assumptions hiding inside it. They know when a design is under-specified, when an abstraction is premature, when a test suite is giving false comfort, when the happy path is lying, and when a release is safe enough to learn from.&lt;/p&gt;

&lt;p&gt;That kind of developer was already valuable. AI does not create that value. It gives that value a larger surface to act on.&lt;/p&gt;

&lt;p&gt;The agent is not the builder. The agent is a tool in the builder's workshop.&lt;/p&gt;

&lt;h2&gt;Execution used to be scarce&lt;/h2&gt;

&lt;p&gt;Most software organizations were shaped by a simple historical fact: writing, changing, and maintaining code required human time in large quantities.&lt;/p&gt;

&lt;p&gt;If a roadmap had more work than the current team could do, the answer was usually headcount. More frontend engineers. More backend engineers. More QA. More managers to coordinate the larger group. More process to make sure the larger group did not destroy itself by moving independently. The shape of the organization followed the scarcity of implementation.&lt;/p&gt;

&lt;p&gt;That scarcity made the company powerful. A small team might have a sharper idea, but the large company had the machinery to grind through the implementation. It could assign ten people to a problem, put a manager over them, attach design and product, run research, staff a platform dependency, and push the thing through a release train. The small team could move quickly at the beginning, but the large company could eventually bring mass to bear.&lt;/p&gt;

&lt;p&gt;That is why the old acquisition story made sense. A small company found a shape the market wanted. A large company bought it, copied it, or slowly surrounded it with distribution and resources. The small company had clarity. The large company had execution capacity.&lt;/p&gt;

&lt;p&gt;AI agents do not eliminate the large company's advantages. Distribution still matters. Trust still matters. Compliance, support, procurement, brand, data access, sales channels, regulatory knowledge, and operational maturity still matter. A bank is not replaced by a weekend app. A payments company is not replaced by a clever clone. NASA is not made less capable at space exploration because a web page could be more inspiring.&lt;/p&gt;

&lt;p&gt;But a particular advantage has weakened: the assumption that serious software requires organizational mass before it can be executed.&lt;/p&gt;

&lt;p&gt;That assumption is what &lt;a href="https://www.youtube.com/watch?v=p2aea9dytpE" rel="noopener noreferrer"&gt;Theo was circling in "Software engineering is dead now"&lt;/a&gt;. The provocative title is less interesting than the operational shift underneath it. When code becomes cheaper to produce, the bottleneck moves. The important question stops being "how many engineers can we assign?" and becomes "who understands the problem well enough to direct the work?"&lt;/p&gt;

&lt;p&gt;That is a very different question.&lt;/p&gt;

&lt;h2&gt;The agent changes the unit of leverage&lt;/h2&gt;

&lt;p&gt;The most important thing about AI coding agents is not that they write code.&lt;/p&gt;

&lt;p&gt;It is that they let one coherent intent remain coherent across more of the work.&lt;/p&gt;

&lt;p&gt;Before agents, even a strong engineer had to break their intent apart to get enough capacity. One person could hold the whole shape, but the work had to be distributed across a team. That meant translation. The product shape became tickets. The tickets became implementation slices. The slices moved through people with different contexts, incentives, calendars, and levels of taste. Review tried to recover coherence after the fact.&lt;/p&gt;

&lt;p&gt;Sometimes that worked beautifully. Good teams are real. Collaboration can improve an idea. A second pair of eyes can catch the thing the builder missed. The point is not that teams are bad.&lt;/p&gt;

&lt;p&gt;The point is that teams are expensive, not only in salary but in semantic loss.&lt;/p&gt;

&lt;p&gt;Every handoff risks changing the idea. Every meeting turns part of the artifact back into language. Every approval step asks the work to justify itself before it has had a chance to become visible. Every person added to the loop increases capacity and coordination at the same time. When implementation was scarce, that trade was often worth it. When implementation becomes cheaper, the cost becomes easier to see.&lt;/p&gt;

&lt;p&gt;An AI agent changes the trade because it adds execution without adding a second will.&lt;/p&gt;

&lt;p&gt;That sentence is dangerous if read carelessly, so it needs the adult version immediately: the agent adds mistakes, hallucinations, overconfidence, style drift, security risk, and an endless appetite for plausible wrongness. It must be constrained, reviewed, tested, and corrected. It does not remove engineering discipline.&lt;/p&gt;

&lt;p&gt;But it also does not need to be aligned in the human sense. It does not need a career path, a meeting, a roadmap narrative, a title, a territory, or a week to build context from office politics. It can be pointed at a narrow part of the system, given constraints, corrected when it drifts, and asked to try again. It is not autonomous in the way a teammate is autonomous. That is precisely why it is useful as leverage.&lt;/p&gt;

&lt;p&gt;For the master builder, this is new. The builder can keep the whole artifact in view while delegating pieces of execution to tools that do not dilute the intent. The work still needs judgment. It needs more judgment, not less. But the distance between judgment and execution shrinks.&lt;/p&gt;

&lt;h2&gt;This is not vibe coding&lt;/h2&gt;

&lt;p&gt;This distinction matters because the public language around AI-assisted development has been polluted by "vibe coding."&lt;/p&gt;

&lt;p&gt;Vibe coding is useful as a name for a real phenomenon: someone repeatedly prompts an AI system, accepts whatever looks close enough, and moves forward without deeply understanding the result. It can be fun. It can produce charming prototypes. It can help people explore personal software. It can also produce systems nobody should be asked to maintain.&lt;/p&gt;

&lt;p&gt;The Syntax podcast has been good on this distinction. In &lt;a href="https://syntax.fm/show/887/vibe-coding-is-a-problem" rel="noopener noreferrer"&gt;"Vibe Coding Is a Problem"&lt;/a&gt;, the problem is not that AI helps write code. The problem is the absence of close review, the willingness to stay at the surface, and the illusion that running software is the same thing as understood software. Their later episode, &lt;a href="https://syntax.fm/show/998/how-to-fix-vibe-coding" rel="noopener noreferrer"&gt;"How to Fix Vibe Coding"&lt;/a&gt;, points in the better direction: deterministic tools, linting, quality analysis, headless browsers, task workflows, observability, and tighter feedback loops.&lt;/p&gt;

&lt;p&gt;That is the line.&lt;/p&gt;

&lt;p&gt;The future worth taking seriously is not vibe coding. It is developer-led AI engineering.&lt;/p&gt;

&lt;p&gt;The developer supplies the intent. The developer supplies the taste. The developer supplies the constraints. The developer decides where the agent is allowed to roam and where it must stay on rails. The developer reads the diff. The developer runs the tests. The developer notices when the solution is locally correct but globally wrong. The developer decides whether the artifact deserves to exist.&lt;/p&gt;

&lt;p&gt;The agent accelerates the loop. It does not own the loop.&lt;/p&gt;

&lt;p&gt;This is why AI does not flatten all developers equally. It amplifies what is already there. A developer without judgment can now produce more code than before, which mostly means they can produce more unresolved consequence than before. A developer with judgment can produce more finished thought than before.&lt;/p&gt;

&lt;p&gt;The difference is not typing speed. The difference is taste under acceleration.&lt;/p&gt;

&lt;h2&gt;Quality was never guaranteed by size&lt;/h2&gt;

&lt;p&gt;One of the quiet revelations of this era is that large institutions do not automatically produce better artifacts.&lt;/p&gt;

&lt;p&gt;They can produce extraordinary things. They can coordinate missions, operate infrastructure, satisfy regulators, support millions of users, and preserve knowledge across decades. But the artifact in front of the user is not always where that strength appears.&lt;/p&gt;

&lt;p&gt;NASA's &lt;a href="https://www.nasa.gov/ignition/" rel="noopener noreferrer"&gt;Ignition&lt;/a&gt; page is a useful object to look at for this reason. The underlying subject is enormous: Artemis, commercial lunar transportation, moon base capabilities, lunar terrain vehicles, procurement strategy, timelines, technical ambition. The page itself is largely a resource hub: PDFs, videos, advisories, requests for information, presentations, links. That may be the correct institutional shape for NASA's internal and public obligations. It is not the same thing as a product experience that makes the ambition legible.&lt;/p&gt;

&lt;p&gt;This is not a dunk on NASA. NASA can do things that no web developer can do.&lt;/p&gt;

&lt;p&gt;The point is more specific: institutional seriousness does not automatically become interface quality. A large organization can have the facts, the mission, the budget, the experts, and the public mandate, and still produce a web artifact that feels assembled by process rather than shaped by taste.&lt;/p&gt;

&lt;p&gt;That is exactly the kind of gap an AI-amplified master builder can attack. Not because they know more about lunar transportation than NASA. They do not. Because they can take a pile of material, infer the narrative shape, build an explorable interface, tighten the hierarchy, improve the pacing, test the interactions, and iterate before the institutional process has finished deciding which department owns the page.&lt;/p&gt;

&lt;p&gt;The same pattern shows up in developer tooling. &lt;a href="https://pingdotgg-t3code.mintlify.app/introduction" rel="noopener noreferrer"&gt;T3 Code&lt;/a&gt; is interesting not only as a tool for coding agents, but as an artifact of the new workflow. It is a minimal web GUI around agents like Codex, with sessions, git integration, worktrees, runtime modes, and a developer-facing surface designed around actual agent use. Whether or not that particular product becomes the winner is beside the point. Its existence is a sign of the tempo change. A small team can feel a workflow problem, build directly into it, and ship a tool that makes the new loop more usable.&lt;/p&gt;

&lt;p&gt;The old world made this kind of thing harder. The new world makes it common.&lt;/p&gt;

&lt;h2&gt;The small team becomes dangerous again&lt;/h2&gt;

&lt;p&gt;The small team always had one advantage: fewer people had to agree before the work moved.&lt;/p&gt;

&lt;p&gt;That advantage used to be balanced by a brutal limitation: fewer people could build. A small team could choose quickly but execute slowly once the surface area grew. A large team could choose slowly but execute with force once the organization aligned.&lt;/p&gt;

&lt;p&gt;AI changes the ratio. It gives the small team, and sometimes the single master builder, access to execution capacity that used to require organizational size. It does not give them the large company's distribution, trust, legal department, customer base, or operational maturity. But for many software products, the first decisive question is not "who has the biggest organization?" It is "who can turn a clear product judgment into a working artifact fastest?"&lt;/p&gt;

&lt;p&gt;That is where the small team becomes dangerous.&lt;/p&gt;

&lt;p&gt;Not because bureaucracy is stupid. Bureaucracy is often memory. It is risk encoded as procedure. It is how large systems avoid repeating failures that individuals would happily rediscover. But bureaucracy becomes pathological when it continues to price execution as scarce after execution has become abundant.&lt;/p&gt;

&lt;p&gt;That is the source of the meeting pain.&lt;/p&gt;

&lt;p&gt;The master builder is not angry because other people exist. They are angry because the organization is still spending days converting intent into permission while the toolchain has made it possible to convert intent into a prototype, a test, a diff, a demo, or a shipped internal version. The old process insists on discussing the work in the abstract because it was designed for a world where making the work concrete was expensive.&lt;/p&gt;

&lt;p&gt;In the new world, concreteness is cheap enough to be part of the conversation.&lt;/p&gt;

&lt;p&gt;Instead of six meetings to decide whether an idea is viable, the builder can return with a working version. Instead of arguing about a flow in a document, they can put the flow in front of users. Instead of writing a speculative architecture proposal for a small feature, they can branch, build, test, measure, and throw it away if it fails. The artifact can arrive earlier in the decision process.&lt;/p&gt;

&lt;p&gt;That should make organizations better. Often it will make them uncomfortable first.&lt;/p&gt;

&lt;h2&gt;What still belongs to the team&lt;/h2&gt;

&lt;p&gt;There is an easy but wrong conclusion here: if agents give execution back to individuals, teams no longer matter.&lt;/p&gt;

&lt;p&gt;Teams still matter. They matter most where reality is wider than the artifact.&lt;/p&gt;

&lt;p&gt;A master builder can build a remarkable first version, but production software lives in obligations. Security matters. Accessibility matters. On-call matters. Data retention matters. Customer migration matters. Billing matters. Support matters. Legal review matters. Incident response matters. The larger the promise a product makes to the world, the more the work extends beyond the person who first saw the shape.&lt;/p&gt;

&lt;p&gt;The mistake is not having a team. The mistake is using the team as a substitute for clear intent.&lt;/p&gt;

&lt;p&gt;A healthy team around a master builder should sharpen the artifact, not dissolve it. It should bring constraints into the work at the moment those constraints become real. It should catch risks, improve taste, protect users, and make the result operable. It should not turn every act of building into a negotiation over whether building may begin.&lt;/p&gt;

&lt;p&gt;That is the organizational challenge of AI-assisted engineering. The best teams will learn to let artifacts arrive earlier, then apply discipline around them. The worst teams will keep demanding consensus before concreteness, and they will slowly discover that the builders with the clearest intent have stopped waiting.&lt;/p&gt;

&lt;p&gt;Some will leave to start companies. Some will stay and route around the process. Some will become the people inside large organizations who quietly change the operating model. But the psychological shift is already here: the experienced engineer no longer has to accept that execution belongs somewhere else.&lt;/p&gt;

&lt;h2&gt;The work after code gets cheap&lt;/h2&gt;

&lt;p&gt;When code gets cheap, software does not get easy.&lt;/p&gt;

&lt;p&gt;The hard parts move. Understanding users becomes harder to fake. Taste becomes more visible. QA becomes more important, because the amount of code that can be produced now exceeds the amount of code anyone should trust. Architecture becomes less about preventing people from typing the wrong thing and more about preserving coherence under acceleration. Product judgment becomes load-bearing.&lt;/p&gt;

&lt;p&gt;This is why the master builder matters more, not less.&lt;/p&gt;

&lt;p&gt;The builder is the person who can keep asking the questions the agent cannot answer by itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this the right problem?&lt;/li&gt;
&lt;li&gt;Is this the right shape?&lt;/li&gt;
&lt;li&gt;Did the implementation preserve the intent?&lt;/li&gt;
&lt;li&gt;What did we make harder by making this easy?&lt;/li&gt;
&lt;li&gt;Where is the hidden coupling?&lt;/li&gt;
&lt;li&gt;What would a user misunderstand?&lt;/li&gt;
&lt;li&gt;What will break when the happy path ends?&lt;/li&gt;
&lt;li&gt;Is this good, or merely complete?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those questions were always part of engineering. AI makes them more central because it makes the lower layers faster. When implementation slows down, weak judgment can hide inside the schedule. When implementation speeds up, weak judgment becomes visible almost immediately.&lt;/p&gt;

&lt;p&gt;That is good news for the kind of developer who has spent years building taste, systems sense, and ownership. It is bad news for organizations that treated those people as interchangeable implementation capacity.&lt;/p&gt;

&lt;p&gt;The master builder was never just a ticket processor. The ticket processor is the part AI threatens most directly. The builder is the person who knows what the tickets should have been, which tickets should not exist, and what artifact the tickets are failing to describe.&lt;/p&gt;

&lt;h2&gt;Permission was the bottleneck&lt;/h2&gt;

&lt;p&gt;The deepest change is not that one person can now write more code.&lt;/p&gt;

&lt;p&gt;The deepest change is that one person can now carry an idea farther before asking an organization to believe in it.&lt;/p&gt;

&lt;p&gt;That changes the emotional contract of software work. A developer with a clear idea used to need permission early, because execution required resources. They needed time from other people. They needed a sprint slot. They needed a team. They needed the machinery. The idea had to survive as language long enough to earn the right to become software.&lt;/p&gt;

&lt;p&gt;Now the idea can become software sooner.&lt;/p&gt;

&lt;p&gt;That does not mean it deserves to ship. It does not mean it is correct. It does not mean the builder gets to ignore everyone else. It means the first artifact no longer has to wait for the full social machinery of production software to assemble around it.&lt;/p&gt;

&lt;p&gt;This is the thing many corporate developers feel before they can name it. The meeting hurts because the artifact is now closer than the organization thinks it is. The work is waiting behind a door that used to require a team to open. The builder now has tools in their hands.&lt;/p&gt;

&lt;p&gt;AI agents do not make developers optional. They make engineering judgment more important. They do not remove the need for teams. They remove the automatic advantage of organizational mass. They do not turn software into vibes. They give execution capacity back to the people who can already see the whole thing.&lt;/p&gt;

&lt;p&gt;The master builder is not unleashed because the machine became smart enough to replace them.&lt;/p&gt;

&lt;p&gt;The master builder is unleashed because the machine became useful enough to follow them.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>A Framework Is Not a Platform</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Wed, 06 May 2026 18:32:23 +0000</pubDate>
      <link>https://dev.to/lazarv/a-framework-is-not-a-platform-33ef</link>
      <guid>https://dev.to/lazarv/a-framework-is-not-a-platform-33ef</guid>
      <description>&lt;p&gt;For most of the time we have been writing web applications, two different teams answered two different questions. The framework team decided what the application looked like. The platform team decided where it ran. The line between the two questions held quietly for thirty years, and it held because nobody seriously challenged it.&lt;/p&gt;

&lt;p&gt;Rails decided how a controller talked to a model. Spring decided how a bean was wired. Express decided what a route handler looked like. None of them decided what database, proxy, cache, message bus, CDN, or regional topology the organization bought.&lt;/p&gt;

&lt;p&gt;That separation was not an accident. It was a property of how those frameworks were built. They produced a process. The process did its job. The infrastructure around the process — the CDN, the cache, the queue, the database, the function runtime, the regional layout — was someone else's job, and that someone else worked on a different review cycle, with different KPIs, accountable to different parts of the org chart.&lt;/p&gt;

&lt;p&gt;The line is being erased, and the cleanest place to see it being erased is Next.js 16. Cache Components did not just change caching. They moved an infrastructure decision into a framework API.&lt;/p&gt;

&lt;h2&gt;The handshake we used to have&lt;/h2&gt;

&lt;p&gt;A Node.js web application running on Kubernetes is a clean handshake. The application produces a request handler. The platform team picks the cluster, the ingress, the CDN, the cache backend, the secrets store, the regional topology, the function runtime if there is one. They pick those things based on cost, security posture, vendor portfolio, contractual obligations, the team's existing operational expertise, and whatever standards the org has already paid down.&lt;/p&gt;

&lt;p&gt;The framework's job, in that handshake, is to be agnostic about all of it. The same code runs behind any reverse proxy. The same code uses whatever cache the platform team chose to put in front of it. The same code can be moved between vendors without changes that touch the application's source — only the deployment surface changes, and the deployment surface is a thin layer the platform team owns end-to-end.&lt;/p&gt;

&lt;p&gt;This is what Incremental Static Regeneration looked like in practice. A Next.js application built with ISR produced HTML files and a small revalidation loop. A CDN sat in front. The CDN served the file. Occasionally, within a stale-while-revalidate window, a function regenerated the file in the background. The shape was familiar to every CDN-fronted Node host. Vercel hosted it; Netlify hosted it; Kubernetes with Cloudflare in front hosted it; a bare VPS with nginx and a cron job hosted a recognizable version of it. The economics were similar everywhere because the architecture was platform-neutral, built from a CDN-and-function shape every platform team already understood.&lt;/p&gt;
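&lt;p&gt;The platform-neutral economics follow from the fact that this shape reduces to a standard HTTP caching contract. A sketch of the response headers an ISR-style setup typically sits behind (the values are illustrative): serve from the CDN cache for a minute, keep serving the stale copy while a background function regenerates it.&lt;/p&gt;

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, s-maxage=60, stale-while-revalidate=86400
```

&lt;p&gt;Any CDN that implements stale-while-revalidate semantics can host this contract, which is exactly why every platform team already understood it.&lt;/p&gt;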

&lt;p&gt;That shape is what Cache Components walks away from.&lt;/p&gt;

&lt;h2&gt;What v16 changed&lt;/h2&gt;

&lt;p&gt;Cache Components, the headline feature of Next.js 16, replaces the route-segment caching model with a directive-based one. A page is dynamic by default. The developer marks regions with &lt;code&gt;'use cache'&lt;/code&gt; to opt those regions into caching. The framework prerenders a static shell where it can, streams the dynamic regions when they resolve, and stitches the response together at request time. Inside the page, the model is elegant. I have written about it from the directive-design angle in &lt;a href="https://dev.to/lazarv/the-cache-belongs-to-the-function-6f5"&gt;The Cache Belongs to the Function&lt;/a&gt; and will not repeat that argument here.&lt;/p&gt;
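&lt;p&gt;A minimal sketch of the model, for orientation. The route, API URLs, and component names are hypothetical; the mechanics — dynamic by default, &lt;code&gt;'use cache'&lt;/code&gt; opting a region in, a Suspense boundary marking where the dynamic stream splices into the shell — follow the documented behavior.&lt;/p&gt;

```typescript
// app/products/page.tsx — illustrative sketch, not a complete application.
import { Suspense } from 'react';

// Opted into caching: this subtree can become part of the prerendered shell.
async function ProductList() {
  'use cache';
  const products: { id: string; name: string }[] = await fetch(
    'https://api.example.com/products'
  ).then((r) => r.json());
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}

// Not opted in: dynamic by default, rendered per request and streamed
// into the response behind the Suspense boundary.
async function Recommendations() {
  const recs: string[] = await fetch('https://api.example.com/recs', {
    cache: 'no-store',
  }).then((r) => r.json());
  return <p>{recs.join(', ')}</p>;
}

export default function Page() {
  return (
    <>
      <ProductList />
      <Suspense fallback={<p>Loading recommendations…</p>}>
        <Recommendations />
      </Suspense>
    </>
  );
}
```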

&lt;p&gt;The argument here is not about what &lt;code&gt;'use cache'&lt;/code&gt; looks like to the developer writing it. It is about what the runtime requires of the infrastructure underneath, once the flag is on.&lt;/p&gt;

&lt;p&gt;A page that uses Cache Components is, mechanically, a page whose response is produced per request by the framework's renderer, with cached fragments spliced in. In the general case, the CDN can no longer serve the full response without invoking the renderer. The static parts of the page exist as cached &lt;em&gt;fragments&lt;/em&gt;, not as cacheable artifacts. The renderer must run, even on a request where every fragment is a hit, because the renderer is what knows how to assemble the fragments into a streamed response.&lt;/p&gt;

&lt;p&gt;This is a small architectural change with large consequences. It moves the unit of caching from "a complete response a CDN can serve" to "a piece of a response the renderer assembles." A CDN is the infrastructure that serves complete responses. It is not the infrastructure that assembles responses from pieces. The framework, in choosing the second model, has chosen to be the assembler — which means the framework has become a piece of infrastructure that previously did not exist between the application and the CDN.&lt;/p&gt;

&lt;p&gt;Once the framework is in the request path on every request, three secondary requirements appear, each of which used to be the platform team's choice and is now the framework's demand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cache backend has to exist, because the default in-memory cache is per-process; in practice, the framework expects a &lt;code&gt;cacheHandlers&lt;/code&gt; implementation pointing at a real backing store such as Redis.&lt;/li&gt;
&lt;li&gt;Tag invalidation has to be coordinated across instances, typically by refreshing a local view of shared invalidation state on the request path; in a clustered deployment, that becomes a round trip to shared storage that the application previously did not make.&lt;/li&gt;
&lt;li&gt;The function runtime starts to matter in ways it did not before, because the dynamic-by-default model only amortizes its renderer cost on a platform that multiplexes concurrent requests across warm function invocations; on a platform without that, the cost is paid linearly with traffic.&lt;/li&gt;
&lt;/ul&gt;
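&lt;p&gt;The shared-backend requirement is easiest to see in a toy model. The sketch below is hypothetical — the real &lt;code&gt;cacheHandlers&lt;/code&gt; contract has more surface (streaming values, expiry and revalidation metadata) — but the core shape, a keyed store with tag invalidation, shows why a per-process &lt;code&gt;Map&lt;/code&gt; stops being enough the moment two instances must agree on what is stale.&lt;/p&gt;

```typescript
// Hypothetical sketch of a tagged fragment store. Not the Next.js API:
// only the get/set/invalidate shape that any real backend (e.g. Redis)
// has to provide for the framework's caching model to work.
type Entry = { value: string; tags: string[] };

class SharedCacheStore {
  private entries = new Map<string, Entry>();

  set(key: string, value: string, tags: string[] = []): void {
    this.entries.set(key, { value, tags });
  }

  get(key: string): string | undefined {
    return this.entries.get(key)?.value;
  }

  // Invalidating a tag must be visible to every instance. An in-memory
  // map like this one is exactly what is NOT sufficient in a cluster:
  // the store behind this interface has to be shared.
  invalidateTag(tag: string): void {
    for (const [key, entry] of this.entries) {
      if (entry.tags.includes(tag)) this.entries.delete(key);
    }
  }
}

const store = new SharedCacheStore();
store.set('fragment:products', '<ul>…</ul>', ['products']);
console.log(store.get('fragment:products') !== undefined); // true
store.invalidateTag('products');
console.log(store.get('fragment:products')); // undefined
```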

&lt;p&gt;None of these requirements are illegitimate as choices. They are illegitimate as &lt;em&gt;framework outputs&lt;/em&gt;. The team did not pick Redis because it wanted Redis; the team did not put a per-request lookup on the request path because it wanted one there; the team did not select a function-runtime billing model because it had a view about how Cache Components should amortize. Redis is not the problem. The problem is when Redis stops being an application choice and becomes part of the framework's performance contract.&lt;/p&gt;

&lt;h2&gt;The escape hatches that closed&lt;/h2&gt;

&lt;p&gt;In Next.js 15, the team that wanted to keep the platform-neutral economics had options. Mark a route &lt;code&gt;force-static&lt;/code&gt;. Enable Partial Prerendering per route with &lt;code&gt;experimental_ppr&lt;/code&gt;. Set a route's &lt;code&gt;revalidate&lt;/code&gt; value. Each of those decisions was visible at the route-segment level, and each one was a way for the developer to opt a route into a model the platform team's existing infrastructure already knew how to host.&lt;/p&gt;
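&lt;p&gt;In a route file, each of those escape hatches was a one-line export. The values below are illustrative and the options are alternatives, not a set to combine blindly:&lt;/p&gt;

```typescript
// app/blog/page.tsx — Next.js 15 route-segment options (illustrative values).

// Opt the whole route into static generation: an HTML file a CDN can serve.
export const dynamic = 'force-static';

// ISR: allow the file to go stale and regenerate it at most every 60 seconds.
export const revalidate = 60;

// Or opt this single route into Partial Prerendering while it was experimental:
// export const experimental_ppr = true;

export default function Page() {
  return <h1>Blog</h1>;
}
```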

&lt;p&gt;In v16, with &lt;code&gt;cacheComponents: true&lt;/code&gt;, those options are gone. The migration guide tells you to delete &lt;code&gt;force-dynamic&lt;/code&gt; and &lt;code&gt;force-static&lt;/code&gt;. The &lt;code&gt;experimental_ppr&lt;/code&gt; segment configuration is removed. The &lt;code&gt;revalidate&lt;/code&gt; and &lt;code&gt;fetchCache&lt;/code&gt; exports are replaced by &lt;code&gt;cacheLife&lt;/code&gt; inside &lt;code&gt;'use cache'&lt;/code&gt; boundaries. The route-segment escape hatches that used to let an application express "this page is static, please serve it as a file" are no longer in the API.&lt;/p&gt;
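&lt;p&gt;The replacement lives inside the cached boundary itself rather than at the route level. A sketch, assuming the import shape in the current docs (the function has carried an &lt;code&gt;unstable_&lt;/code&gt; prefix in earlier releases, and the &lt;code&gt;'hours'&lt;/code&gt; profile here is an illustrative choice):&lt;/p&gt;

```typescript
// v16 sketch: lifetime configuration moves from route exports into the
// 'use cache' boundary. `cacheLife` takes a built-in profile name or a
// custom object; it replaces the route-level `revalidate` export.
import { cacheLife } from 'next/cache';

export async function getPosts() {
  'use cache';
  cacheLife('hours');
  const res = await fetch('https://api.example.com/posts');
  return res.json();
}
```

&lt;p&gt;Notice what the move implies: the decision is no longer visible at a level the platform team can see or host differently. It is inside the function, in the framework's vocabulary.&lt;/p&gt;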

&lt;p&gt;The flag is opt-in, today. A team that wants the v15 economics can leave it off. But the docs already treat Cache Components as the recommended path, the dedicated PPR test suites in the repository are migrating away from a separate identity, and the trajectory of any flag that the framework team owns and recommends is well known. Within a release or two, the recommended path becomes the default. Within a release or two after that, the legacy path becomes deprecated. The ability to refuse the new model is on a clock, and the clock is the framework team's.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technically portable, economically captive
&lt;/h2&gt;

&lt;p&gt;The runtime is open source. The contract is documented. The adapters work. By the strict definition of vendor lock-in — &lt;em&gt;you cannot leave&lt;/em&gt; — there is no lock-in. Every claim a salesperson would make about the framework's portability is true.&lt;/p&gt;

&lt;p&gt;The honest definition of lock-in is not the strict one. The honest definition is: &lt;em&gt;you can leave, but the cost of leaving is large enough to change the build-vs-buy decision.&lt;/em&gt; Under that definition, Cache Components introduces a soft form of capture that ISR did not have. The runtime runs anywhere; the cost-effectiveness lives on one platform. Off that platform, the same code shape produces a meaningfully worse cost profile, a meaningfully higher operational burden, and a meaningfully lower performance ceiling.&lt;/p&gt;

&lt;p&gt;The performance ceiling is the part that is hardest to recover. On a platform that owns both the proxy and the function runtime, the static shell of a Cache-Components page can be served from the edge before the renderer is even invoked, with the dynamic stream stitched into the same response over a single connection. This is not a standard CDN primitive. It is not the contract a generic CDN signs with the application in front of it — serve a complete response, or proxy through to the origin and serve that. The handoff between a static shell and a function-produced stream, on the same connection, mid-response, is a vendor-aware proxy/runtime product. It can be built; it has not been standardized; and the team that wants it on Kubernetes is not picking it from a menu of CDN features. They are integrating bespoke pieces, or they are accepting a TTFB floor of "pod-reachable plus first render byte" instead of "edge node plus first static byte." The gap is structural, not operational.&lt;/p&gt;

&lt;p&gt;The question is not whether another platform can build the missing machinery. The question is whether an application framework should require that machinery to recover the economics it used to preserve by default.&lt;/p&gt;

&lt;p&gt;None of this is impossible to operate. It is only impossible to operate &lt;em&gt;optimally&lt;/em&gt;, because the optimum has been moved to a place only one vendor lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern, beyond Next.js
&lt;/h2&gt;

&lt;p&gt;Next.js is the most aggressive case, but it is not the only framework being pulled in this direction, and the direction is a more interesting story than any one framework.&lt;/p&gt;

&lt;p&gt;Remix and React Router 7 sit at the other end of the spectrum, partly by inheritance and partly by deliberate choice. The cache contract has historically been a &lt;code&gt;headers()&lt;/code&gt; function on a loader returning standard &lt;code&gt;Cache-Control&lt;/code&gt; directives. The CDN does what CDNs do; the framework does not need a backing store, a tag manifest, or a request-time invalidation hook. Whether that posture survives future product pressure is an open question, but today the cache story is platform-neutral by construction.&lt;/p&gt;
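
&lt;p&gt;That contract fits in a few lines, with nothing behind it but HTTP. A sketch (the route and &lt;code&gt;loadPricing&lt;/code&gt; are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/routes/pricing.jsx (Remix-style cache contract, illustrative)
export async function loader() {
  return { pricing: await loadPricing() }; // hypothetical data access
}

export function headers() {
  // standard directives; any CDN already knows what to do with these
  return {
    "Cache-Control": "public, s-maxage=3600, stale-while-revalidate=86400",
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;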

&lt;p&gt;SvelteKit and Astro preserve the older bargain through adapters and static-first output. The application produces a generic artifact; the adapter materializes it into a deployment-specific shape only when the application has earned a dynamic runtime. The specifics stay at the deployment seam rather than seeping into the application source.&lt;/p&gt;

&lt;p&gt;Nuxt sits in the middle. Nitro's caching primitives are function-level and storage-pluggable rather than render-coupled, so a Nuxt application can express a cached value without dragging the rendering pipeline into the request path. The framework has caching, but it has not annexed caching as infrastructure.&lt;/p&gt;
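
&lt;p&gt;In Nitro terms that means wrapping a function, not configuring a renderer. A sketch (&lt;code&gt;fetchPricingFromDb&lt;/code&gt; is hypothetical; see the Nitro docs for the exact options):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// server/utils/pricing.js (Nuxt/Nitro, illustrative)
export const getPricing = defineCachedFunction(
  async () =&amp;gt; fetchPricingFromDb(), // hypothetical data access
  {
    name: "pricing",
    maxAge: 60 * 60,        // serve the cached value for up to an hour
    getKey: () =&amp;gt; "all",  // a single shared entry
  }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;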

&lt;p&gt;TanStack Start sits on a different axis altogether. It is router-and-query first, not renderer-and-cache first. Its primitives — TanStack Router, TanStack Query, server functions, loaders — describe what data should flow where, not what infrastructure should hold the cache. The cache lives with the query, function-level and storage-pluggable, the way TanStack Query has always shipped it. The framework does not need a Redis backing store, a tag manifest, or a request-time invalidation hook to be correct; the application's freshness is a property of its queries, not of the framework's renderer. It is a different architecture from Next.js, not a competing implementation of the same one.&lt;/p&gt;
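
&lt;p&gt;In practice that posture reduces to query options. A sketch (&lt;code&gt;fetchPost&lt;/code&gt; is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// freshness as a property of the query, not of a renderer cache (illustrative)
const postQueryOptions = (id) =&amp;gt; ({
  queryKey: ["post", id],
  queryFn: () =&amp;gt; fetchPost(id), // hypothetical fetcher
  staleTime: 60_000,            // "fresh for a minute" lives here, with the query
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;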

&lt;p&gt;The structural caution is general, not aimed at any one project: a framework that adopts the renderer-and-cache architecture without the matching platform machinery inherits the hard part without inheriting the economic advantage.&lt;/p&gt;

&lt;p&gt;Some runtimes refuse this trade by construction. That is the line I have tried to hold in &lt;code&gt;@lazarv/react-server&lt;/code&gt; — a cache primitive that lives with the function, a router that is opt-in rather than load-bearing, a deployment story handled at the build seam rather than at the source. Hono, Fastify, Express, and the older Node frameworks never had this problem because they never tried to absorb infrastructure decisions in the first place. They stay frameworks because they stay small.&lt;/p&gt;

&lt;p&gt;The point is not that every framework should look like the smaller ones. The point is that there is a spectrum, the spectrum has been visible for years, and the choice each framework makes about where to sit on it shapes the economics of every team that picks it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "framework" used to mean
&lt;/h2&gt;

&lt;p&gt;A framework, historically, is a thing you pick up to write an application. The decision is local. The team's senior engineer reads two days of docs, the team's frontend lead does a spike, the team picks one, and the work moves forward. The decision does not require sign-off from security, platform, FinOps, procurement, or an architecture review board. It does not need to, because the framework's blast radius is the application source.&lt;/p&gt;

&lt;p&gt;A platform is a thing you provision. The decision is organizational. It involves vendor risk review, multi-year contracts, integration with the org's authentication and observability, alignment with the org's existing infrastructure, and the long tail of "what happens if this provider gets acquired" thinking. Those reviews exist because the wrong platform decision is hard to walk back, and because the people who feel the consequences are not the same people who made the call.&lt;/p&gt;

&lt;p&gt;When a framework's correctness and performance start to require a specific cache topology, a specific function runtime, a specific proxy behavior, the framework has crossed the category line. Picking it is no longer a local decision. It is a platform decision dressed as a framework decision, and the people who would normally weigh in on a platform decision are not in the room when it is made. The frontend lead picks Next.js because Next.js is what frontend leads pick; the cost of that choice shows up months later, in a Redis bill, in a Lambda invocation count, in a p99 graph that nobody can explain to the CFO without a paragraph of caveats.&lt;/p&gt;

&lt;p&gt;This is the part of the trade that does not recover quickly. Money recovers. A team can switch frameworks; it is painful but bounded. What does not recover is the org's awareness that infrastructure was a thing the org was supposed to choose. The next framework that ships on the same model finds the ground already prepared. Each one normalizes the next.&lt;/p&gt;

&lt;h2&gt;
  
  
  The line we forgot
&lt;/h2&gt;

&lt;p&gt;A framework is not a platform, and a platform should not pretend to be a framework.&lt;/p&gt;

&lt;p&gt;The honest test for any tool wearing the framework label is the one this article has been circling. &lt;em&gt;What infrastructure does it require us to operate? What is the degraded-mode cost if we don't?&lt;/em&gt; A tool whose answers are "your existing Node host, and roughly the same as before" is a framework. A tool whose answers are "vendor-shaped infrastructure, and meaningfully worse" is something else. It does not have to be a worse thing. It does have to be named for what it is, because the people responsible for the answers to those two questions used to be the ones making the decision.&lt;/p&gt;

&lt;p&gt;The dev/ops handshake we used to have was not nostalgia. It was a real division of labor that let frameworks evolve without dragging infrastructure along, and let platforms evolve without rewriting applications. It let teams stay in motion. It let small projects stay small. It let large projects choose where they ran on the basis of their own constraints, not the framework's.&lt;/p&gt;

&lt;p&gt;We are losing that division of labor one framework choice at a time, mostly without noticing, and the cost is showing up in places — bills, latency floors, operational complexity, vendor leverage — that nobody connected to the original decision back when it was just "what should we use to build the app."&lt;/p&gt;

&lt;p&gt;A framework should be replaceable without replacing the infrastructure underneath it. Infrastructure should not become a consequence of the framework. When those two roles invert, the team has stopped owning the most important architectural surface in the system, and the framework's authors have started.&lt;/p&gt;

&lt;p&gt;A framework is not a platform. The two have always known what they were. We are the ones who forgot.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>redis</category>
      <category>architecture</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Time to Yield</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Sun, 03 May 2026 12:10:32 +0000</pubDate>
      <link>https://dev.to/lazarv/time-to-yield-20m8</link>
      <guid>https://dev.to/lazarv/time-to-yield-20m8</guid>
      <description>&lt;p&gt;&lt;em&gt;An SSG benchmark across five React frameworks, from one thousand&lt;br&gt;
pages to half a million.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You're building a marketplace. Or a documentation site. A wiki,&lt;br&gt;
a generated archive, any of a dozen things that ship a static&lt;br&gt;
catalogue at scale. Your CMS has a hundred thousand entries.&lt;br&gt;
You've picked your SSG. You run the build.&lt;/p&gt;

&lt;p&gt;Five minutes. Ten. Twenty. Maybe an hour. Maybe a stack trace.&lt;/p&gt;

&lt;p&gt;You don't know in advance — and the public benchmarks won't tell&lt;br&gt;
you. Most stop at a thousand pages, where most real catalogues&lt;br&gt;
start. The gap between what gets measured and what gets shipped&lt;br&gt;
is where the unpleasant surprises live, and the engineer who has&lt;br&gt;
to ship into that gap usually finds out which side of it their&lt;br&gt;
tool was designed for at deploy time.&lt;/p&gt;

&lt;p&gt;So I built a &lt;a href="https://github.com/lazarv/ssg-bench" rel="noopener noreferrer"&gt;benchmark&lt;/a&gt; for the gap.&lt;/p&gt;


&lt;h2&gt;
  
  
  The benchmark
&lt;/h2&gt;

&lt;p&gt;Five frameworks in a pnpm workspace, each rendering one dynamic&lt;br&gt;
route &lt;code&gt;/posts/[id]&lt;/code&gt; from a shared deterministic data source. Same&lt;br&gt;
content, same shape, idiomatic config per tool. The output has to&lt;br&gt;
be pure deployable static HTML — no Node runtime is allowed at&lt;br&gt;
request time, which is the whole point of SSG. The harness sweeps&lt;br&gt;
&lt;code&gt;PAGE_COUNT&lt;/code&gt; across &lt;code&gt;1k → 10k → 100k → 200k → 300k → 400k → 500k&lt;/code&gt;,&lt;br&gt;
measures wall time, time-to-first-page (TTFP), peak RSS, output&lt;br&gt;
size, and validates a sample of generated HTML actually contains&lt;br&gt;
the right &lt;code&gt;Post #N&lt;/code&gt; content. It's all in&lt;br&gt;
&lt;a href="https://github.com/lazarv/ssg-bench/blob/main/bench" rel="noopener noreferrer"&gt;&lt;code&gt;bench/&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
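
&lt;p&gt;The sweep itself needs nothing exotic. A simplified sketch of&lt;br&gt;
the loop (names illustrative; the real harness lives in &lt;code&gt;bench/&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# simplified sweep (illustrative; the real harness lives in bench/)
for n in 1000 10000 100000; do
  start=$(date +%s)
  # PAGE_COUNT="$n" pnpm --filter "$APP" build   (the real per-framework build step)
  end=$(date +%s)
  echo "pages=$n wall=$((end - start))s"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;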
&lt;h2&gt;
  
  
  The contestants
&lt;/h2&gt;

&lt;p&gt;Five different bets on what static-site generation should look&lt;br&gt;
like in 2026.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js (&lt;code&gt;apps/next&lt;/code&gt;)&lt;/strong&gt; — Vercel's framework, version 16, App&lt;br&gt;
Router and Turbopack. The most-deployed React tool in the world&lt;br&gt;
and the default reference point for any tooling comparison. Its&lt;br&gt;
strengths are well documented elsewhere; what this benchmark&lt;br&gt;
exercises is one of its many output modes — &lt;code&gt;output: "export"&lt;/code&gt;,&lt;br&gt;
the fully static path with no Node runtime at request time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TanStack Start (&lt;code&gt;apps/tanstack&lt;/code&gt;)&lt;/strong&gt; — the youngest entry, from&lt;br&gt;
the team behind TanStack Router and Query. Vite plus a Nitro-&lt;br&gt;
backed prerender plugin, file-system routing, currently in the&lt;br&gt;
1.x line and rapidly evolving. Prerendering takes a materialized&lt;br&gt;
&lt;code&gt;pages&lt;/code&gt; array of paths declared inside the Vite config.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gatsby (&lt;code&gt;apps/gatsby&lt;/code&gt;)&lt;/strong&gt; — the old guard. GraphQL-driven by&lt;br&gt;
default, Redux-backed build cache, a sprawling plugin ecosystem,&lt;br&gt;
now maintained by Netlify after acquisition. It pre-dates every&lt;br&gt;
other entry here by years and has a distinct mental model:&lt;br&gt;
imperative &lt;code&gt;createPage&lt;/code&gt; calls inside a &lt;code&gt;gatsby-node.mjs&lt;/code&gt;&lt;br&gt;
lifecycle hook. People left it for Next.js partly because Gatsby&lt;br&gt;
builds were slow at scale; it's interesting to find out whether&lt;br&gt;
that's still the relevant fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Astro (&lt;code&gt;apps/astro&lt;/code&gt;)&lt;/strong&gt; — a static-first multi-framework site&lt;br&gt;
builder. Strictly speaking it isn't running React in this&lt;br&gt;
benchmark; pages are written in Astro's own &lt;code&gt;.astro&lt;/code&gt; template&lt;br&gt;
language with a fast static optimizer. It's included as the&lt;br&gt;
ceiling — the answer to "how fast can a non-React SSG go?" —&lt;br&gt;
against which the React-runtime entries can be measured fairly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.npmjs.com/package/@lazarv/react-server" rel="noopener noreferrer"&gt;@lazarv/react-server&lt;/a&gt; (&lt;code&gt;apps/react-server&lt;/code&gt;)&lt;/strong&gt; —&lt;br&gt;
an open React Server Components runtime built on Vite 8's&lt;br&gt;
Environment API with Rolldown as the production bundler.&lt;br&gt;
Disclosure: I wrote it. It's in this comparison because it's the&lt;br&gt;
only React-runtime entry whose static-export pipeline accepts a&lt;br&gt;
streaming path source — which, as the rest of this article will&lt;br&gt;
show, turns out to be the decisive design choice.&lt;/p&gt;
&lt;h2&gt;
  
  
  The headline
&lt;/h2&gt;

&lt;p&gt;At a thousand pages, every modern tool finishes in seconds and&lt;br&gt;
the table is a wash. At ten thousand, the leaders open a small&lt;br&gt;
lead. The interesting story starts at a hundred thousand. The&lt;br&gt;
decisive story starts above two hundred thousand.&lt;/p&gt;

&lt;p&gt;I'll give you the whole thing chart by chart, but here's the&lt;br&gt;
spoiler. At 100,000 pages:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;wall&lt;/th&gt;
&lt;th&gt;ttfp&lt;/th&gt;
&lt;th&gt;output bytes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Astro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;22.6s&lt;/td&gt;
&lt;td&gt;2.18s&lt;/td&gt;
&lt;td&gt;47 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;26.1s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.63s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;83 MiB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TanStack Start&lt;/td&gt;
&lt;td&gt;36.9s&lt;/td&gt;
&lt;td&gt;2.65s&lt;/td&gt;
&lt;td&gt;172 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gatsby&lt;/td&gt;
&lt;td&gt;62.1s&lt;/td&gt;
&lt;td&gt;7.91s&lt;/td&gt;
&lt;td&gt;189 MiB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Next.js&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;264.5s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;124s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.84 GiB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;At 200,000 pages, Next.js's build crashes — exit 1, no HTML.&lt;/p&gt;


&lt;h2&gt;
  
  
  The chart that broke the pattern
&lt;/h2&gt;

&lt;p&gt;Most benchmark charts are roughly parallel lines: the same&lt;br&gt;
ranking from one page count to the next, gaps roughly constant,&lt;br&gt;
nothing that asks you to stop and look. This one isn't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllrtu2lp4i13ao3vz6ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllrtu2lp4i13ao3vz6ii.png" alt="Time to first page" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;react-server's TTFP is a flat line. From a thousand pages to half&lt;br&gt;
a million, the time between "I started the build" and "the first&lt;br&gt;
HTML file appeared on disk" stays between 1.4 and 3.2 seconds.&lt;br&gt;
Astro and TanStack Start curve gently upward. Gatsby's curve&lt;br&gt;
starts mid-air at 5 seconds and climbs to over a hundred. Next.js&lt;br&gt;
sits between them within its working range, climbing from 2.9s at&lt;br&gt;
1k pages to 124s at 100k.&lt;/p&gt;

&lt;p&gt;What you're looking at is a single architectural decision, made&lt;br&gt;
once, repeated through every layer of each pipeline. One framework&lt;br&gt;
streams its work. The others batch it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Yield, don't return
&lt;/h2&gt;

&lt;p&gt;When you tell an SSG to render &lt;code&gt;/posts/[id]&lt;/code&gt; for many IDs, it has&lt;br&gt;
to ask you for the list. The shape of that question — the API your&lt;br&gt;
config file uses — turns out to determine almost everything else.&lt;/p&gt;

&lt;p&gt;Most frameworks ask you for an array.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Next.js — apps/next/app/posts/[id]/page.jsx&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dynamicParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateStaticParams&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;allIds&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
// Astro — apps/astro/src/pages/posts/[id].astro
export async function getStaticPaths() {
  return allIds().map((id) =&amp;gt; ({ params: { id: String(id) } }));
}
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// TanStack Start — apps/tanstack/vite.config.mjs&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;allIds&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/posts/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prerender&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/posts/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/index.html`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shape is identical: build an array, return an array. The&lt;br&gt;
runtime then has to materialize that array — all hundred thousand&lt;br&gt;
elements of it — before any rendering can start. The first page&lt;br&gt;
of HTML cannot be written before the last entry of the path list&lt;br&gt;
has been allocated.&lt;/p&gt;

&lt;p&gt;react-server asks the same question differently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// react-server — apps/react-server/src/pages/posts/[id].static.mjs&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;idStream&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@ssg-test/shared&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nf"&gt;idStream&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's an async generator. The router pulls one descriptor at a&lt;br&gt;
time, when a render worker is free. The path list is never in&lt;br&gt;
memory all at once; peak memory of the path source is &lt;code&gt;O(1)&lt;/code&gt;,&lt;br&gt;
regardless of N. As soon as the first descriptor is yielded, the&lt;br&gt;
first page can render. As soon as the first page renders, it lands&lt;br&gt;
on disk. The rest of the build is just keeping the workers fed.&lt;/p&gt;

&lt;p&gt;The runtime documents this contract explicitly at&lt;br&gt;
&lt;a href="https://react-server.dev/router/static#streaming-static-paths" rel="noopener noreferrer"&gt;react-server.dev/router/static#streaming-static-paths&lt;/a&gt;&lt;br&gt;
— and the detection is by &lt;strong&gt;function kind&lt;/strong&gt;: write&lt;br&gt;
&lt;code&gt;async function*&lt;/code&gt; directly as the default export, or fall back to the&lt;br&gt;
legacy array contract. There's no opt-in flag. The shape of your&lt;br&gt;
function is the shape of the build.&lt;/p&gt;

&lt;p&gt;You can chain the same idea at the config level, which is what the&lt;br&gt;
benchmark does to skip RSC payload sidecars (the other frameworks&lt;br&gt;
emit HTML only; we want the bytes column to compare like with like):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// react-server — apps/react-server/react-server.config.mjs&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;root&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;src/pages&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;rsc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two &lt;code&gt;async function*&lt;/code&gt; shapes — one in the route, one in the&lt;br&gt;
config. The whole streaming property of the build comes from&lt;br&gt;
those two declarations. Look at the TTFP chart again with this in&lt;br&gt;
mind: react-server is renderer-bound; everyone else is array-bound.&lt;/p&gt;
&lt;h2&gt;
  
  
  Things start to fall apart at a hundred thousand
&lt;/h2&gt;

&lt;p&gt;If TTFP is the early-warning signal, total wall time is where the&lt;br&gt;
architecture pays its real bill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ircrhbjop5gj1za5jeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ircrhbjop5gj1za5jeu.png" alt="Build wall time" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a thousand pages, every framework here finishes in single-digit&lt;br&gt;
seconds and you'd struggle to feel the difference in a CI log. The&lt;br&gt;
slope of the curves is what matters, and the slope diverges hard&lt;br&gt;
above ten thousand.&lt;/p&gt;

&lt;p&gt;By a hundred thousand pages, react-server has finished in &lt;strong&gt;26&lt;br&gt;
seconds&lt;/strong&gt;. Astro, the leader, in &lt;strong&gt;22.6 seconds&lt;/strong&gt;. TanStack Start&lt;br&gt;
in 37. Gatsby in just over a minute.&lt;/p&gt;

&lt;p&gt;Next.js takes &lt;strong&gt;four and a half minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For the same work. Same content, same hundred thousand pages on&lt;br&gt;
disk. Next.js's curve is steeper than linear above 50k pages, and&lt;br&gt;
by 100k the wall time is into the "go for a coffee" territory&lt;br&gt;
that distinguishes a benchmark from a real engineering decision.&lt;/p&gt;

&lt;p&gt;The other notable result at this scale: at 100,000 pages, Gatsby&lt;br&gt;
finishes faster than Next.js. 62 seconds versus 264. Gatsby&lt;br&gt;
has a long-standing reputation for slow builds at scale, and&lt;br&gt;
that reputation isn't unfair, but on this specific workload it&lt;br&gt;
crosses the line first. The framework people moved off of for&lt;br&gt;
build performance is now, on this measurement, the faster of&lt;br&gt;
the two.&lt;/p&gt;

&lt;p&gt;The same data reads sharper as throughput: pages produced per&lt;br&gt;
second, the per-page work each framework does once it's warmed&lt;br&gt;
up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab0nkqs2drpckfhl67kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab0nkqs2drpckfhl67kk.png" alt="Throughput" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The four frameworks that complete the workload all reach a&lt;br&gt;
plateau somewhere above ten thousand pages — a steady-state&lt;br&gt;
pages-per-second ceiling that holds up the rest of the way.&lt;br&gt;
&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server runs around 3,000–3,800 pages/s; Astro&lt;br&gt;
3,000–4,400; TanStack Start 1,900–2,700; Gatsby 1,500–1,700.&lt;br&gt;
The plateaus tell you how much overhead each framework has&lt;br&gt;
amortized away once the build is steady.&lt;/p&gt;

&lt;p&gt;Next.js never reaches a plateau. Its throughput peaks at 480&lt;br&gt;
pages/s at 10k, drops to 378 pages/s at 100k, and crashes before&lt;br&gt;
it can be measured at higher counts. The build is doing &lt;strong&gt;more&lt;br&gt;
work per page as the page count grows&lt;/strong&gt; — the opposite of what&lt;br&gt;
amortization should produce. That trajectory is what makes the&lt;br&gt;
next section's failure mode predictable in retrospect: a&lt;br&gt;
pipeline whose per-page cost is increasing was always going to&lt;br&gt;
hit a ceiling.&lt;/p&gt;
&lt;h2&gt;
  
  
  The wall
&lt;/h2&gt;

&lt;p&gt;Then I cranked the count to two hundred thousand.&lt;/p&gt;

&lt;p&gt;The build crashed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RangeError: Maximum call stack size exceeded
    at ignore-listed frames

&amp;gt; Build error occurred
Error: Failed to collect page data for /posts/[id]
    at ignore-listed frames {
  type: 'Error'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three seconds of CPU. No HTML. Exit code 1. Next.js's "collect&lt;br&gt;
page data" phase — the step that runs after Turbopack compiles&lt;br&gt;
your app and before the worker pool starts rendering — overflows&lt;br&gt;
V8's call stack.&lt;/p&gt;

&lt;p&gt;I bumped to 300k, 400k, 500k. Same crash, every time. The error&lt;br&gt;
itself is forthright: stack overflow, here's the phase. What the&lt;br&gt;
error can't tell you is that the input the pipeline cannot handle&lt;br&gt;
is your own page list, and that there is no flag in &lt;code&gt;next.config&lt;/code&gt;&lt;br&gt;
to ask for a different consumer of it.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;RangeError: Maximum call stack size exceeded&lt;/code&gt; is a recursion&lt;br&gt;
fingerprint. Something in Next's pipeline is walking the params&lt;br&gt;
array via naive recursion — JSON-serializing it, normalizing it,&lt;br&gt;
hashing it for the data cache, building a tree from it, take your&lt;br&gt;
pick — with recursion depth proportional to the array length&lt;br&gt;
itself, not to its log. (A balanced-tree traversal would push&lt;br&gt;
log₂(200,000) ≈ 18 frames; nowhere near a stack limit. The&lt;br&gt;
overflow only makes sense if each entry contributes a constant&lt;br&gt;
share of frames.) At 100k entries the depth still fits inside&lt;br&gt;
V8's default ~10k-frame stack. At 200k it doesn't.&lt;/p&gt;
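&lt;p&gt;The shape is easy to reproduce in isolation. The toy sketch below&lt;br&gt;
(my illustration, not Next.js's actual code) spends one stack frame&lt;br&gt;
per array entry, the way a naive recursive traversal does; an&lt;br&gt;
iterative walk of the same array costs constant stack:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Toy sketch: one stack frame per array entry, the traversal shape
// that produces a RangeError at large page counts.
function walkRecursive(items, i) {
  if (i === items.length) return 0;
  return 1 + walkRecursive(items, i + 1); // depth grows with length
}

// The same walk, iteratively: constant stack depth.
function walkIterative(items) {
  let count = 0;
  for (const item of items) count += 1;
  return count;
}

const params = Array.from({ length: 200000 }, function (ignored, id) {
  return { id: String(id) };
});

try {
  walkRecursive(params, 0);
} catch (err) {
  console.log(err.constructor.name); // RangeError on a default V8 stack
}
console.log(walkIterative(params)); // 200000, no stack pressure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;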

&lt;p&gt;This is not something &lt;code&gt;--max-old-space-size=8192&lt;/code&gt; can fix (we&lt;br&gt;
tried). It's not a memory issue at all. It's an &lt;strong&gt;algorithmic&lt;br&gt;
ceiling&lt;/strong&gt;: Next.js's page-data collection is implemented as&lt;br&gt;
recursive traversal over the materialized params array, and that&lt;br&gt;
recursion has a depth limit baked into the JavaScript engine. You&lt;br&gt;
cannot grow your way past it. There is no flag because there is&lt;br&gt;
no scalar to turn.&lt;/p&gt;

&lt;p&gt;The runtime &lt;em&gt;requires&lt;/em&gt; the array contract — &lt;code&gt;generateStaticParams&lt;/code&gt;&lt;br&gt;
must return one — and the pipeline that consumes it cannot tolerate&lt;br&gt;
arrays past a certain size. Both halves of that statement are&lt;br&gt;
architecture, not bugs.&lt;/p&gt;

&lt;p&gt;react-server, on the same hardware, with the same content, spent&lt;br&gt;
&lt;strong&gt;155 seconds&lt;/strong&gt; on five hundred thousand pages. First HTML on&lt;br&gt;
disk: 2.87 seconds. The same TTFP it has at a thousand pages.&lt;br&gt;
Nothing in its pipeline ever sees a 500,000-element array, because&lt;br&gt;
nothing in its pipeline is allowed to construct one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually in the output directory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F651z8ubr6ggoupdghkie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F651z8ubr6ggoupdghkie.png" alt="Deployable output size" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the wall time is the loud problem, the output bytes are the&lt;br&gt;
quiet one. At 100,000 pages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Astro emits &lt;strong&gt;47 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;react-server emits &lt;strong&gt;83 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;TanStack Start emits &lt;strong&gt;172 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Gatsby emits &lt;strong&gt;189 MiB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Next.js emits &lt;strong&gt;1.84 GiB&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At 100k pages, the deployable Next.js output is roughly forty&lt;br&gt;
times larger than Astro's and twenty times larger than&lt;br&gt;
react-server's. The bulk of it is per-page files: a &lt;code&gt;.txt&lt;/code&gt; RSC payload&lt;br&gt;
sidecar for every route, used to power client-router prefetch on&lt;br&gt;
navigation, plus a runtime bundle the page links to for hydration&lt;br&gt;
even on routes without &lt;code&gt;"use client"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Both files are part of the App Router's contract: the &lt;code&gt;.txt&lt;/code&gt;&lt;br&gt;
payload exists so the client router can prefetch, the runtime&lt;br&gt;
exists so client components can hydrate. They're features of the&lt;br&gt;
deployment topology Next.js is designed for. The trade-off, when&lt;br&gt;
the deployment is fully static and no client component is ever&lt;br&gt;
going to run, is that the contract still ships. There's no&lt;br&gt;
documented flag to drop either for &lt;code&gt;output: "export"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;react-server makes the equivalent choice in the opposite&lt;br&gt;
direction: emit HTML only by default for fully static export, and&lt;br&gt;
let the user opt back into RSC payload sidecars per path if they&lt;br&gt;
want them. The benchmark's config-level &lt;code&gt;export()&lt;/code&gt; hook tags every&lt;br&gt;
yielded path with &lt;code&gt;rsc: false&lt;/code&gt; to keep the bytes column comparing&lt;br&gt;
HTML to HTML.&lt;/p&gt;
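&lt;p&gt;As a sketch of the idea (the hook name and field shape here follow&lt;br&gt;
the description above; check the react-server docs for the exact&lt;br&gt;
config surface before copying this):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative config sketch, not verified against the exact API.
export default {
  export: async function* () {
    for (let id = 1; id !== 100001; id += 1) {
      // rsc: false keeps this path HTML-only: no RSC payload sidecar.
      yield { path: `/posts/${id}`, rsc: false };
    }
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;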

&lt;h2&gt;
  
  
  Memory: where Gatsby still hurts and &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server stays quiet
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv319xmw67jji8m3toqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv319xmw67jji8m3toqs.png" alt="Peak resident memory" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The memory chart is shaped a lot like the wall chart, with one&lt;br&gt;
outlier: Gatsby. Gatsby's build cache is a Redux store that&lt;br&gt;
appends every &lt;code&gt;createPage&lt;/code&gt; call into in-memory state, and it never&lt;br&gt;
sheds that state until the build finishes. At 500k pages, Gatsby's&lt;br&gt;
peak resident set hits &lt;strong&gt;9.55 GiB&lt;/strong&gt;. Long-time Gatsby users will&lt;br&gt;
be unsurprised; this is what &lt;code&gt;gatsby build&lt;/code&gt; has always done.&lt;/p&gt;

&lt;p&gt;react-server holds between &lt;strong&gt;1.2 GiB at a thousand pages and 2.6&lt;br&gt;
GiB at half a million&lt;/strong&gt; — essentially flat above 10k. TanStack&lt;br&gt;
Start ranges from &lt;strong&gt;600 MiB at 1k to 3.6 GiB at 400k&lt;/strong&gt; before&lt;br&gt;
nudging back down to 3.1 GiB at 500k. Astro is the leanest of all&lt;br&gt;
at &lt;strong&gt;0.6 to 1.8 GiB&lt;/strong&gt; across the same range.&lt;/p&gt;

&lt;p&gt;The streaming path source is one reason react-server's memory&lt;br&gt;
curve flattens. The bigger reason is what it doesn't accumulate:&lt;br&gt;
no per-route manifest, no fingerprinted asset graph for every&lt;br&gt;
page, no client-router prefetch index. Whatever doesn't exist&lt;br&gt;
doesn't take memory.&lt;/p&gt;




&lt;h2&gt;
  
  
  A note about Astro
&lt;/h2&gt;

&lt;p&gt;Astro is the fastest tool in this benchmark. It deserves the&lt;br&gt;
credit, with one important asterisk: &lt;strong&gt;Astro isn't running React&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;apps/astro/src/pages/posts/[id].astro&lt;/code&gt;, the page is written in&lt;br&gt;
Astro's own template language. There's no React reconciler, no&lt;br&gt;
hydration framework, no Server Components flight protocol — it's&lt;br&gt;
closer to JSX-flavored server-side templating with a fast static&lt;br&gt;
optimizer. Astro is the &lt;em&gt;right ceiling&lt;/em&gt; for "what can a static-&lt;br&gt;
site generator do at all," but it isn't an apples-to-apples&lt;br&gt;
comparison with React-runtime tools.&lt;/p&gt;

&lt;p&gt;Which makes the next sentence the actual story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server matches Astro.&lt;/strong&gt; Within ~15% on wall time&lt;br&gt;
at 100k (26s vs 22.6s), and &lt;strong&gt;faster on TTFP&lt;/strong&gt; (1.63s vs 2.18s). And it&lt;br&gt;
does this while running the actual React Server Components&lt;br&gt;
production server — the same one a deployment would serve at&lt;br&gt;
request time, bundled by Vite 8 and Rolldown, driven by a&lt;br&gt;
streaming path source. The HTML on disk after the export is the&lt;br&gt;
HTML the production server would have produced for a real&lt;br&gt;
request. A real React runtime moving at static-template-engine&lt;br&gt;
speed.&lt;/p&gt;

&lt;p&gt;That isn't a result you get by accident.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;A React Server Components runtime keeping pace with a static-&lt;br&gt;
template engine doesn't happen because someone optimized a hot&lt;br&gt;
loop. It happens because the architecture has fewer places for&lt;br&gt;
work to pile up. Five things contribute, and none of them are&lt;br&gt;
clever; they're all just the absence of unnecessary buffers.&lt;/p&gt;

&lt;p&gt;The build phase produces a &lt;strong&gt;real production server&lt;/strong&gt;. Vite 8 and&lt;br&gt;
Rolldown bundle the runtime exactly as it would run at request&lt;br&gt;
time; the static export then starts that bundled server and asks&lt;br&gt;
it to render each yielded path. The thing that produces your HTML&lt;br&gt;
during the export is the same thing that would serve your HTML if&lt;br&gt;
you weren't exporting. There is no separate build-only renderer,&lt;br&gt;
no compile-time-only sandbox, no special static-export pipeline&lt;br&gt;
running its own copy of half the framework. Whatever the&lt;br&gt;
production server can render at request time, the export can&lt;br&gt;
produce. Two phases — bundle, then render — but the second phase&lt;br&gt;
is the production server you'd deploy, not a parallel universe&lt;br&gt;
of build-time machinery.&lt;/p&gt;

&lt;p&gt;The static path source &lt;strong&gt;streams by contract&lt;/strong&gt;. Both&lt;br&gt;
&lt;code&gt;[id].static.mjs&lt;/code&gt; and the config-level &lt;code&gt;export()&lt;/code&gt; are &lt;code&gt;async function*&lt;/code&gt;&lt;br&gt;
shapes that the router pulls from. Memory of the path source is&lt;br&gt;
&lt;code&gt;O(1)&lt;/code&gt;. Rendering can start on the first yielded path.&lt;/p&gt;
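&lt;p&gt;In sketch form (the file name comes from the benchmark app; the&lt;br&gt;
body is illustrative), such a path source is nothing more than an&lt;br&gt;
async generator:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// [id].static.mjs (illustrative body)
// Paths are produced one at a time; the full list never exists.
export default async function* staticPaths() {
  for (let id = 1; id !== 500001; id += 1) {
    yield `/posts/${id}`;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;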

&lt;p&gt;Render workers are &lt;strong&gt;driven by the stream&lt;/strong&gt;. The&lt;br&gt;
&lt;code&gt;--export-concurrency&lt;/code&gt; flag forks N child processes; each runs its&lt;br&gt;
own RSC main thread plus an SSR worker thread; the coordinator&lt;br&gt;
dispatches one path per free worker. Output bytes never cross the&lt;br&gt;
IPC boundary — every artifact (HTML, optional &lt;code&gt;.gz&lt;/code&gt; / &lt;code&gt;.br&lt;/code&gt;&lt;br&gt;
sidecars, postponed-fragment cache) is written to disk inside the&lt;br&gt;
child. There is no central "collect page data" buffer because&lt;br&gt;
there is no central buffer.&lt;/p&gt;
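&lt;p&gt;The dispatch pattern itself is framework-independent and worth&lt;br&gt;
seeing in miniature: N consumers pulling from one shared async&lt;br&gt;
iterator, which makes backpressure automatic. In this sketch,&lt;br&gt;
&lt;code&gt;renderPath&lt;/code&gt; stands in for a child process rendering a path and&lt;br&gt;
writing its files to disk:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Generic pull-based dispatch sketch. N workers share one iterator,
// so no queue and no materialized path list ever exists.
async function exportAll(paths, concurrency, renderPath) {
  const iterator = paths[Symbol.asyncIterator]();
  async function worker() {
    while (true) {
      const { value, done } = await iterator.next(); // pull one path
      if (done) return;
      await renderPath(value); // render + write happens in the worker
    }
  }
  // One shared iterator, N concurrent pullers.
  await Promise.all(Array.from({ length: concurrency }, worker));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A worker that finishes early simply pulls the next path; nothing&lt;br&gt;
is assigned up front, so nothing has to be buffered centrally.&lt;/p&gt;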

&lt;p&gt;There is &lt;strong&gt;no per-page runtime tax&lt;/strong&gt;. Pages without &lt;code&gt;"use client"&lt;/code&gt;&lt;br&gt;
get pure HTML. The runtime doesn't inject bootstrap scripts,&lt;br&gt;
doesn't write &lt;code&gt;_buildManifest.js&lt;/code&gt;, doesn't emit per-page payload&lt;br&gt;
sidecars unless you ask. The 22× output-size delta vs. Next.js&lt;br&gt;
collapses to: emit only what the page needs.&lt;/p&gt;

&lt;p&gt;And there is &lt;strong&gt;no extra compiler in the path&lt;/strong&gt;. No Turbopack-&lt;br&gt;
style parallel compiler stack, no SWC custom plugins, no static-&lt;br&gt;
build renderer that's a different runtime from the production&lt;br&gt;
server. Vite 8, Rolldown, Node, JavaScript — and the runtime&lt;br&gt;
itself. Phase 2 is just the runtime. Fewer moving parts than its&lt;br&gt;
peers, which is precisely why fewer of them break at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means if you have to pick
&lt;/h2&gt;

&lt;p&gt;If you have a thousand pages, all of these tools work. The&lt;br&gt;
differences are noise. Pick on developer experience.&lt;/p&gt;

&lt;p&gt;If you have ten thousand, Next.js is already five times slower&lt;br&gt;
than the leaders. Worth knowing before your next pitch deck.&lt;/p&gt;

&lt;p&gt;If you have a hundred thousand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Astro&lt;/strong&gt; is the fastest if you don't need React.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server&lt;/strong&gt; is the fastest React runtime that
completes the workload, on par with Astro while running RSC
end-to-end, with the smallest HTML-only output of any React
option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TanStack Start&lt;/strong&gt; completes but loses time to the materialized
&lt;code&gt;pages&lt;/code&gt; array.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gatsby&lt;/strong&gt; completes, slowly, with high memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; completes but takes about ten times as long as the
leaders and emits roughly twenty times the bytes; both numbers
follow from defaults that aren't configurable away in the
&lt;code&gt;output: "export"&lt;/code&gt; path today.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have two hundred thousand pages or more of pre-rendered&lt;br&gt;
routes — a CMS-backed catalogue, a docs archive, a programmatically&lt;br&gt;
generated index — &lt;strong&gt;Next.js's static-export pipeline does not&lt;br&gt;
complete.&lt;/strong&gt; The build crashes with a &lt;code&gt;RangeError: Maximum call&lt;br&gt;
stack size exceeded&lt;/code&gt; during page-data collection. The failure is&lt;br&gt;
recursion depth in V8, not heap size, so it isn't fixable by&lt;br&gt;
flags or environment variables. The right framing is that&lt;br&gt;
&lt;code&gt;output: "export"&lt;/code&gt; at this scale isn't a supported topology for&lt;br&gt;
Next.js — its answer for catalogues this large is ISR, which is a&lt;br&gt;
different topology, which is the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  But what about ISR?
&lt;/h2&gt;

&lt;p&gt;Whenever the sentence "Next.js can't pre-render two hundred&lt;br&gt;
thousand pages" appears in public, someone responds: just use ISR.&lt;/p&gt;

&lt;p&gt;Incremental Static Regeneration is Next.js's answer to large&lt;br&gt;
catalogues. Don't pre-render every page at build time. Build the&lt;br&gt;
app shell, deploy it, and have the runtime generate each page on&lt;br&gt;
first request and cache the result. A &lt;code&gt;revalidate: N&lt;/code&gt; knob handles&lt;br&gt;
freshness. On Vercel it works well; on a Next.js-aware host it&lt;br&gt;
mostly works.&lt;/p&gt;

&lt;p&gt;For a strictly static deployment, it doesn't work at all.&lt;/p&gt;

&lt;p&gt;The unspoken word in "Incremental Static &lt;strong&gt;Regeneration&lt;/strong&gt;" is&lt;br&gt;
the regeneration, and regeneration requires a runtime. ISR turns&lt;br&gt;
your "static site" into an HTTP server that lazily produces HTML&lt;br&gt;
on the way to the browser. If your deployment target is a CDN&lt;br&gt;
that only serves files — GitHub Pages, S3 + CloudFront, an nginx&lt;br&gt;
in front of a directory, Cloudflare Pages without a Worker, the&lt;br&gt;
static-files product on Netlify, an air-gapped intranet, the&lt;br&gt;
classic shared-hosting plan your client insists on — there is no&lt;br&gt;
runtime for ISR to run on. The feature isn't degraded; it's&lt;br&gt;
missing.&lt;/p&gt;

&lt;p&gt;This is the case the benchmark was designed for: pure static HTML&lt;br&gt;
plus assets, no Node runtime at request time. All five tools in&lt;br&gt;
the comparison advertise themselves as supporting that mode. The&lt;br&gt;
point of measuring at 100k+ is to find out whether the advertised&lt;br&gt;
mode survives at the scale a real catalogue produces. ISR doesn't&lt;br&gt;
enter the comparison because it isn't the same product — it's a&lt;br&gt;
different deployment topology that swaps a build-time problem for&lt;br&gt;
a request-time one. Both are valid; they aren't interchangeable,&lt;br&gt;
and the trade-offs should be visible to whoever signs off on&lt;br&gt;
hosting cost, security posture, or operational surface area.&lt;/p&gt;

&lt;p&gt;Three concrete consequences of that swap, worth knowing before&lt;br&gt;
reaching for ISR as a workaround:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first visitor to every page pays the bill.&lt;/strong&gt; A hundred&lt;br&gt;
thousand product pages and a hundred thousand unique long-tail&lt;br&gt;
visits over a quarter mean each visitor is the unlucky one for&lt;br&gt;
exactly one page. Cold start plus render time plus cache write —&lt;br&gt;
typically a hundred milliseconds to a few seconds, depending on&lt;br&gt;
the page. A static export amortizes that work into one build. ISR&lt;br&gt;
amortizes it into one hundred thousand request-time renders, each&lt;br&gt;
on the critical path of someone's pageview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are now paying for compute you weren't paying for.&lt;/strong&gt; A&lt;br&gt;
static site sits on CDN edge cache and costs essentially nothing&lt;br&gt;
above bandwidth. ISR requires a serverless function (or a long-&lt;br&gt;
running process) that's billable per invocation and per millisecond&lt;br&gt;
of execution. The bigger the catalogue, the more pages enter the&lt;br&gt;
"never visited" tail and the more compute you allocate for HTML&lt;br&gt;
that nobody reads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache invalidation enters your application's design.&lt;/strong&gt; ISR's&lt;br&gt;
freshness story is &lt;code&gt;revalidate: N&lt;/code&gt; plus on-demand revalidation&lt;br&gt;
hooks. Both are reasonable, both are concepts your team now has to&lt;br&gt;
think about, and both are operational surface area that didn't&lt;br&gt;
exist when the deployment was files in a directory. For sites&lt;br&gt;
whose content really doesn't change often, this is purely added&lt;br&gt;
complexity.&lt;/p&gt;

&lt;p&gt;And there's a subtler point. &lt;strong&gt;ISR doesn't fix the underlying&lt;br&gt;
build ceiling.&lt;/strong&gt; If you mark some routes as fully pre-rendered&lt;br&gt;
via the array contract — &lt;code&gt;dynamicParams: false&lt;/code&gt;,&lt;br&gt;
&lt;code&gt;generateStaticParams&lt;/code&gt; returning the full set — you're back in the recursion-&lt;br&gt;
overflow territory from earlier in this article. ISR side-steps&lt;br&gt;
the wall by routing around it. It doesn't move the wall.&lt;/p&gt;

&lt;p&gt;None of this makes ISR a bad feature. It makes ISR an answer to a&lt;br&gt;
different question. "How do I serve a hundred thousand pages&lt;br&gt;
without paying for a build that materializes them all" is a real&lt;br&gt;
problem. "How do I generate a hundred thousand pages of pure&lt;br&gt;
static HTML to a CDN" is a different real problem. You don't&lt;br&gt;
solve the second with the answer to the first.&lt;/p&gt;

&lt;p&gt;react-server, Astro, TanStack Start, and Gatsby answer the second&lt;br&gt;
one. Next.js, in its &lt;code&gt;output: "export"&lt;/code&gt; mode, completes at&lt;br&gt;
100,000 pages, fails short of 200,000, and is designed around ISR&lt;br&gt;
for the rest.&lt;/p&gt;




&lt;h2&gt;
  
  
  The contract is the product
&lt;/h2&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server didn't win this benchmark with new&lt;br&gt;
technology. The runtime is Node. The bundler is Vite 8 with&lt;br&gt;
Rolldown. The API is &lt;code&gt;async function*&lt;/code&gt; — a primitive that's been&lt;br&gt;
in JavaScript engines since 2018. There's nothing in the build&lt;br&gt;
pipeline you couldn't have shipped seven years ago.&lt;/p&gt;

&lt;p&gt;What's novel is choosing it.&lt;/p&gt;

&lt;p&gt;Most of the React ecosystem has spent the last half-decade&lt;br&gt;
optimizing the wrong layer. The renderer is fast everywhere. The&lt;br&gt;
worker pool is fast everywhere. The compiler — Turbopack, SWC,&lt;br&gt;
take your pick — is fast everywhere. The bottleneck at scale&lt;br&gt;
turns out to be one decision made at the top of your route file:&lt;br&gt;
&lt;strong&gt;does the path source return, or does it yield?&lt;/strong&gt; And the only&lt;br&gt;
way to fix the bottleneck is to change the contract. Nobody else&lt;br&gt;
has.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;generateStaticParams&lt;/code&gt; returns an array. &lt;code&gt;getStaticPaths&lt;/code&gt; returns&lt;br&gt;
an array. TanStack Start's &lt;code&gt;pages&lt;/code&gt; is an array. Gatsby's&lt;br&gt;
&lt;code&gt;createPage&lt;/code&gt; is an array smuggled in through a loop. Every layer&lt;br&gt;
downstream of those APIs is forced to assume the worst case lives&lt;br&gt;
in memory at once. At a thousand pages the assumption costs&lt;br&gt;
nothing. At a hundred thousand it costs minutes. At two hundred&lt;br&gt;
thousand, in Next.js, it costs the build — &lt;code&gt;RangeError: Maximum&lt;br&gt;
call stack size exceeded&lt;/code&gt;, exit one, zero pages produced.&lt;/p&gt;
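&lt;p&gt;Side by side, the two contracts differ by a single keyword. The&lt;br&gt;
first shape is the familiar array API; the second is illustrative&lt;br&gt;
of the generator alternative, with a hypothetical 100,000-id&lt;br&gt;
catalogue standing in for real data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical catalogue size for illustration.
const POST_COUNT = 100000;

// The array contract: every param object exists in memory before
// the first page renders. Memory grows with POST_COUNT.
export async function generateStaticParams() {
  return Array.from({ length: POST_COUNT }, function (ignored, i) {
    return { id: String(i + 1) };
  });
}

// The streaming contract: O(1) memory in the path source, and
// rendering can start on the first yield. (Illustrative shape.)
export async function* staticPaths() {
  for (let i = 1; i !== POST_COUNT + 1; i += 1) {
    yield { id: String(i) };
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;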

&lt;p&gt;react-server's &lt;code&gt;[id].static.mjs&lt;/code&gt; doesn't return anything. It&lt;br&gt;
yields. The renderer pulls. Memory of the path source is &lt;code&gt;O(1)&lt;/code&gt;.&lt;br&gt;
N is unbounded. The build is the same shape at a thousand pages&lt;br&gt;
as it is at half a million, because the architecture has nothing&lt;br&gt;
that grows with it.&lt;/p&gt;

&lt;p&gt;If you are picking an SSG in 2026 and your roadmap has more than&lt;br&gt;
ten thousand pages in it, look at the path-list API before you&lt;br&gt;
look at anything else. The framework that lets you yield will&lt;br&gt;
scale with your content. The framework that asks for a return&lt;br&gt;
will, eventually, give you back an empty &lt;code&gt;out/&lt;/code&gt; directory and a&lt;br&gt;
stack trace.&lt;/p&gt;

&lt;p&gt;This isn't really a Next.js problem. It's a generation-of-tooling&lt;br&gt;
problem. Static-site generation at scale has been treated as a&lt;br&gt;
build-pipeline optimization for years. It isn't. It's an API&lt;br&gt;
design problem, and the API is the array.&lt;/p&gt;

&lt;p&gt;Change the API. Yield, don't return.&lt;/p&gt;

&lt;p&gt;It's time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The full benchmark is open source. &lt;a href="https://github.com/lazarv/ssg-bench/tree/main/apps" rel="noopener noreferrer"&gt;&lt;code&gt;apps/&lt;/code&gt;&lt;/a&gt; for each&lt;br&gt;
framework's setup, &lt;a href="https://github.com/lazarv/ssg-bench/tree/main/bench" rel="noopener noreferrer"&gt;&lt;code&gt;bench/&lt;/code&gt;&lt;/a&gt; for the harness, and&lt;br&gt;
&lt;a href="https://github.com/lazarv/ssg-bench/blob/main/bench/REPORT.md" rel="noopener noreferrer"&gt;&lt;code&gt;bench/REPORT.md&lt;/code&gt;&lt;/a&gt; for the complete table. To&lt;br&gt;
reproduce: &lt;code&gt;pnpm install &amp;amp;&amp;amp; pnpm bench:sweep &amp;amp;&amp;amp; pnpm report &amp;amp;&amp;amp;&lt;br&gt;
pnpm chart&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Disagreements welcome
&lt;/h2&gt;

&lt;p&gt;I wrote &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server. The disclosure is at the top of&lt;br&gt;
the article, and I built the bench to keep the comparison honest&lt;br&gt;
despite it — same content per route, fastest successful run wins&lt;br&gt;
per cell, sample HTML validated per build, failed cells reported&lt;br&gt;
as failed rather than dropped, every framework's idiomatic&lt;br&gt;
configuration used as documented. I believe the comparison is&lt;br&gt;
fair.&lt;/p&gt;

&lt;p&gt;But I'm one person reading my own benchmark. If you spot a flag&lt;br&gt;
I should have set, a version I should have tried, an inadvertent&lt;br&gt;
advantage I've handed &lt;a class="mentioned-user" href="https://dev.to/lazarv"&gt;@lazarv&lt;/a&gt;/react-server — open an issue or&lt;br&gt;
send a PR. The harness is in &lt;code&gt;bench/&lt;/code&gt;, the apps are in &lt;code&gt;apps/&lt;/code&gt;,&lt;br&gt;
and any change that produces a fairer comparison wins.&lt;/p&gt;

&lt;p&gt;If the data lands somewhere different in your read than in mine,&lt;br&gt;
that's the conversation worth having. I'd rather the article get&lt;br&gt;
the technical story right than win an argument.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>tanstack</category>
      <category>astro</category>
      <category>gatsby</category>
    </item>
    <item>
      <title>A Low Floor Is Not a Low Ceiling</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Fri, 01 May 2026 18:58:19 +0000</pubDate>
      <link>https://dev.to/lazarv/a-low-floor-is-not-a-low-ceiling-2o2f</link>
      <guid>https://dev.to/lazarv/a-low-floor-is-not-a-low-ceiling-2o2f</guid>
      <description>&lt;p&gt;There is a moment at the beginning of using a framework when the framework tells you what kind of developer it thinks you are.&lt;/p&gt;

&lt;p&gt;It rarely says this directly. It says it by what it asks of you before your own idea is allowed to appear. It says it through the scaffold it generates, the folders it names, the configuration files it creates, the conventions it assumes you already understand, and the amount of system you must accept before the smallest useful program can run.&lt;/p&gt;

&lt;p&gt;This first moment matters because it defines the emotional shape of the tool. Some systems begin with a primitive: a function, a component, a request handler, a file. They let the idea arrive first and allow structure to grow around it. Other systems begin with an institution. Before there is behavior, there is a project. Before there is a program, there is a topology.&lt;/p&gt;

&lt;p&gt;We have become used to this, especially in frontend development. A new app is expected to be born as a tree. It has routing before it has routes, build configuration before it has a build problem, lint rules before it has a team, deployment assumptions before it has users, and a package graph before it has a reason to exist. Each piece may be defensible on its own. The problem is not that any one file is absurd. The problem is that the smallest idea is asked to carry the shape of a much larger future.&lt;/p&gt;

&lt;p&gt;That is a strange bargain. It is especially strange now, because the two kinds of developers most exposed to the beginning of a system, &lt;strong&gt;beginners and AI agents&lt;/strong&gt;, are exactly the two least able to separate essential shape from accumulated ceremony.&lt;/p&gt;

&lt;h2&gt;
  
  
  What experts stop seeing
&lt;/h2&gt;

&lt;p&gt;Experienced developers have a skill we do not talk about enough: &lt;em&gt;selective blindness&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We can open a repository and immediately reduce its apparent size. We know that some files are behavior, some files are policy, some files are boilerplate, some files are generated, and some files are present only because a tool once needed a place to write down its preferences. We know when a folder name is meaningful to the framework and when it is merely organizational. We know when a config file is actively shaping the program and when it is an artifact of the scaffold.&lt;/p&gt;

&lt;p&gt;This is not the same as simplicity. It is &lt;em&gt;familiarity doing compression&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A beginner does not have that compression. When they open a scaffolded project, the entire tree arrives with equal authority. Every file might matter. Every convention might be something they are already supposed to know. Every import, suffix, folder, generated type, and default export might be part of the lesson. To an expert, the surrounding machinery is background. To a beginner, it is the room.&lt;/p&gt;

&lt;p&gt;That changes what the first lesson becomes. Instead of learning that a program is an idea made executable, the beginner learns that software begins inside a prepared environment whose rules are not yet visible. They learn that making even a small thing requires standing in the correct place, naming files correctly, accepting the correct project shape, and trusting that the framework will interpret the structure as intended.&lt;/p&gt;

&lt;p&gt;Some of that knowledge will eventually be necessary. But "eventually" is the important word. The first encounter with a tool should not require the learner to distinguish core concepts from scaffolding residue. A good beginning should bring the irreducible thing close: data becomes UI, input becomes state, a request becomes a response. Architecture should arrive as a way to preserve clarity as the program grows, not as the admission price for writing the first line.&lt;/p&gt;

&lt;h2&gt;
  
  
  The agent has the same problem
&lt;/h2&gt;

&lt;p&gt;AI agents make this problem visible in a different way. They are not beginners in the usual sense; they have absorbed patterns from more code than any human will read. But when an agent enters a particular repository, it does not bring the local memory of the team. It does not know which conventions are intentional, which are obsolete, which are inherited from the starter template, and which are workarounds nobody likes but everyone is afraid to remove.&lt;/p&gt;

&lt;p&gt;The agent has to discover the system by reading it. That sounds obvious, but it changes the economics of ceremony. What used to be a one-time human annoyance at project creation becomes a recurring cost paid on every AI-assisted change. The model must spend attention on the filesystem, the dependency graph, the framework conventions, the version-specific behavior, and the shape of the surrounding setup before it can safely reason about the user's request.&lt;/p&gt;

&lt;p&gt;It is tempting to reduce this to token count. More files mean more tokens; more tokens mean more cost. That is true, but it is the least interesting part. The deeper issue is that tokens do not all have the same semantic weight. In a real project, some text defines behavior, some configures behavior, some describes behavior that used to exist, some is framework glue, and some is simply the fossil record of how the project began. A human teammate can often point at a file and say, "ignore that." The model has to infer it.&lt;/p&gt;

&lt;p&gt;This is where bloated systems become dangerous for AI. They do not merely give the model more to read. They give it more ways to be plausibly wrong. It can follow a pattern that exists in the repository but no longer represents the intended direction. It can apply a framework rule from the wrong version. It can miss that a file path changes rendering mode, or that a cache option interacts with a parent segment, or that a wrapper exists only because a previous tool could not express the smaller thing directly.&lt;/p&gt;

&lt;p&gt;The beginner asks, "where do I put the code?" The agent asks the same question in another form: &lt;em&gt;"which of these tokens are the program?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Systems with too much ceremony answer both questions poorly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code size is a reasoning surface
&lt;/h2&gt;

&lt;p&gt;We often talk about code size as if it were a maintenance problem that appears after the fact. The project gets larger, so it becomes harder to maintain. That is true, but it misses the more immediate effect: &lt;strong&gt;code size changes the way a system can be understood&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A small program can be held in the mind. You can read it and keep the whole shape present: inputs, outputs, state, effects, and failure modes. As the program grows, understanding has to move through supports: names, tests, types, boundaries, conventions, documentation, and trust. Those supports are necessary, but they are not free. Each one helps organize the system while also becoming another surface on which a wrong assumption can land.&lt;/p&gt;

&lt;p&gt;The growth is not linear because the problem is not only the number of lines. It is the number of relationships between them. A route can interact with a layout, a cache rule, a bundling boundary, a server/client split, a deployment target, and a default inherited from somewhere the developer is not currently looking. A config file can change the meaning of a component that does not mention it. A directory name can affect runtime behavior even though it looks like organization.&lt;/p&gt;
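
&lt;p&gt;The quadratic intuition can be made concrete with a back-of-the-envelope sketch. This is deliberately crude: it treats every pair of parts as a potential interaction, which overstates real coupling, but it shows the growth shape.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;// Back-of-the-envelope only: with n parts that may interact pairwise,
// potential relationships grow as n(n - 1) / 2.
const interactions = (parts: number) =&amp;gt; (parts * (parts - 1)) / 2

interactions(10)  // 45 potential relationships
interactions(100) // 4950: parts grew 10x, the reasoning surface ~100x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;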

&lt;p&gt;At small sizes, adding code mostly adds capability. At larger sizes, adding code increasingly adds interaction. The surface the next change has to cross becomes wider, less local, and harder to see at once. That is the familiar moment when &lt;strong&gt;a small change stops being small&lt;/strong&gt; because the system around it must be understood first. You want to add a button, but first you need to know whether it belongs on the client. You want to move data fetching, but first you need to know which cache owns freshness. You want to simplify a file, but first you need to know whether the filename itself is an API.&lt;/p&gt;

&lt;p&gt;For humans, this becomes onboarding time, superstition, fatigue, and the slow accumulation of "don't touch that" knowledge. For AI agents, it becomes larger prompts, weaker locality, pattern matching where understanding should be, and edits that are syntactically reasonable but semantically misplaced.&lt;/p&gt;

&lt;p&gt;This is why "use a bigger context window" is not a complete answer. A bigger context window lets the model carry more of the maze. It does not tell us whether the maze needed to be there.&lt;/p&gt;

&lt;h2&gt;The toy path is not kindness&lt;/h2&gt;

&lt;p&gt;Once the weight of modern tooling becomes visible, the obvious solution is to give beginners something smaller. A simpler framework. A reduced mode. A teaching tool. A toy environment with fewer concepts and fewer ways to get lost.&lt;/p&gt;

&lt;p&gt;Sometimes this is useful. Teaching often requires choosing a smaller surface. But as an architectural answer, it fails if the small path is not part of the same world as the large path. If the beginner learns one model and then has to abandon it when the application becomes real, the simplicity was not a doorway. It was a waiting room.&lt;/p&gt;

&lt;p&gt;The same is true for small projects. A tiny internal tool should not have to choose between a toy framework that will be outgrown and a production framework that arrives already bloated. A prototype should be allowed to be real. A first file should be allowed to become the first file of the final system. The path from "almost nothing" to "something serious" should be continuous.&lt;/p&gt;

&lt;p&gt;This is the part that is easy to miss: &lt;strong&gt;beginners do not need worse tools&lt;/strong&gt;. They need real tools with lower entry points.&lt;/p&gt;

&lt;p&gt;If the only way to make a framework approachable is to remove its power, then the framework has not solved approachability. It has outsourced it to a different tool. A better framework shape lets the same primitive participate at multiple scales. The first component is not a demo artifact; it is a legitimate member of the system. The first route is not a special tutorial mode; it is the smallest case of the routing model. The first cache is not a global doctrine; it is a local decision next to the computation it affects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A low floor is not a low ceiling.&lt;/strong&gt; In the best systems, the low floor is evidence that the ceiling is supported by real structure rather than by ceremony.&lt;/p&gt;

&lt;h2&gt;Almost nothing should work&lt;/h2&gt;

&lt;p&gt;There is a design principle hiding here that sounds more radical than it is: &lt;em&gt;almost nothing should work&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A single file should work. A single component should work. No configuration should work. No router should work until there is more than one place to go. No cache policy should exist until freshness has become a question. No deployment adapter should change the meaning of the application before deployment is actually being discussed.&lt;/p&gt;
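
&lt;p&gt;As a sketch of what "a single file should work" means in practice (the entry point is hypothetical here, not any specific framework's API), the entire application at this stage would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;// App.tsx: the whole program. No config file, no router, no cache policy.
// A framework honoring this principle serves this file directly.
export default function App() {
  return &amp;lt;h1&amp;gt;Hello&amp;lt;/h1&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;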

&lt;p&gt;Absence should be a valid state of the system.&lt;/p&gt;

&lt;p&gt;This is not minimalism for its own sake. It is maintainability in its most practical form. A file that does not exist cannot go stale. A wrapper that was never extracted cannot become a place where names drift. A configuration key that was never introduced cannot be copied into the next project without understanding. A convention that was never required cannot become folklore. The strongest abstraction is often not the clever one, but the missing one.&lt;/p&gt;

&lt;p&gt;Frameworks are usually better at adding capabilities than at preserving absence, because capabilities are easier to demonstrate. A router can be documented. A cache layer can be benchmarked. A deployment adapter can be announced. "You do not have to think about this yet" is harder to turn into a feature page, even though it may be the most important feature for the first hour, the first week, and every AI agent session after that.&lt;/p&gt;

&lt;p&gt;The discipline is not to avoid power. The discipline is to &lt;strong&gt;delay power until the problem asks for it&lt;/strong&gt;. Configuration is good when it changes something the developer has chosen to care about. Project structure is good when the project has enough internal gravity to need one. Defaults are good when they remain defaults. They become bloat when they appear before the program has earned them and then pretend their presence is neutral.&lt;/p&gt;

&lt;h2&gt;Scaling downward&lt;/h2&gt;

&lt;p&gt;We usually use "scalable" to mean that a system can grow upward. More users, more routes, more teams, more data, more features, more deployment targets. That kind of scale matters, and a framework that cannot grow upward will eventually trap serious applications.&lt;/p&gt;

&lt;p&gt;But there is another kind of scale that is just as important: &lt;strong&gt;a system must scale downward&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It must scale down to one file, one component, one endpoint, one idea tested before lunch. It must scale down to the beginner trying to see the whole program at once. It must scale down to the AI agent trying to make a narrow change without reconstructing the entire framework context first. A system that scales upward but not downward is not truly scalable. It is only &lt;em&gt;large-capable&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This distinction changes how we judge architecture. The question is not only whether a framework can host an enormous application. The question is whether it can host a tiny one without making it pretend. Can the smallest useful program be written directly? Can it grow by adding concepts one at a time? Can each new layer explain itself by answering a pressure already present in the code?&lt;/p&gt;

&lt;p&gt;That is what a grown-up framework should feel like. At the beginning, most decisions should be &lt;em&gt;not yet&lt;/em&gt;. Not yet a routing tree. Not yet a cache hierarchy. Not yet a deployment-specific semantic. Not yet a global configuration file. Just the program. Then, when the program needs a second page, routing appears. When it needs shared structure, layout appears. When it needs data freshness control, caching appears next to the data. When it needs background isolation, a worker boundary appears around the work. When it needs deployment specificity, an adapter appears at the edge rather than changing the meaning of the center.&lt;/p&gt;

&lt;p&gt;Each new concept should feel like a door opening from the room you are already standing in.&lt;/p&gt;

&lt;h2&gt;The same world&lt;/h2&gt;

&lt;p&gt;The deepest mistake is believing that beginners, experts, and AI agents need different worlds. They do not. They need &lt;em&gt;different distances from the same center&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The beginner needs to stand close to the irreducible idea, where the relationship between code and behavior is visible. The expert needs to move outward into power, performance, specificity, and control without being trapped by the framework author's fixed menu. The AI agent needs the same locality both of them need: code whose meaning is present in the text before it is hidden in conventions that must be inferred.&lt;/p&gt;

&lt;p&gt;These are not competing requirements. They are the same architectural requirement seen from different heights.&lt;/p&gt;

&lt;p&gt;Make the primitive honest. Make the first step real. Make absence valid. Make defaults optional. Make every layer replaceable when it finally appears. &lt;strong&gt;Let the small thing belong to the same world as the large thing.&lt;/strong&gt; Then the beginner is not trapped in a toy path, the expert is not trapped in a convention path, and the agent is not trapped in a fog of scaffolding.&lt;/p&gt;

&lt;p&gt;We should stop admiring systems merely because they can host enormous applications. That is only one kind of strength. The more interesting strength is the ability to be gentle with beginnings: to let an idea exist before it has proved that it deserves architecture, and to let it grow without exile.&lt;/p&gt;

&lt;p&gt;A serious framework should be able to hold almost nothing.&lt;/p&gt;

&lt;p&gt;And if the idea grows, it should not have to leave home.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>discuss</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>A Function Should Know Where It Runs</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Thu, 30 Apr 2026 10:27:03 +0000</pubDate>
      <link>https://dev.to/lazarv/a-function-should-know-where-it-runs-3721</link>
      <guid>https://dev.to/lazarv/a-function-should-know-where-it-runs-3721</guid>
      <description>&lt;p&gt;There is an obvious appeal to a server function you can call from anywhere. The old version of the same idea was not pleasant. You wrote an endpoint, then a client helper for that endpoint, then some shared schema to keep the two sides honest, then error handling in both places, and eventually a small pile of files whose main job was to move one value from the browser to the server and another value back again.&lt;/p&gt;

&lt;p&gt;So when a framework lets you write the server part as a normal function and call it as a normal function, it feels like the right kind of progress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createServerFn&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Somewhere else:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is much nicer than wiring an endpoint by hand. The function is typed. The caller is typed. Refactors have a path through the codebase instead of disappearing into a string URL. For a lot of application code, especially small reads and mutations, this is exactly the kind of boilerplate a framework should remove.&lt;/p&gt;

&lt;p&gt;The question is not whether the API is useful. It is. The question is what gets hidden when the call becomes this smooth.&lt;/p&gt;

&lt;h2&gt;The same call is not always the same operation&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;await getUser()&lt;/code&gt; can mean slightly different things depending on where it appears. If the call happens while the application is already running on the server, it can be a direct path into server code. If it happens in the browser, it has to become a request. If it happens in a route loader, it belongs to the router's data lifecycle. If it happens after a click, it belongs to an interaction that the user is waiting on.&lt;/p&gt;

&lt;p&gt;Those cases can all share the same TypeScript signature, but they are not the same situation. The value that comes back may have the same shape; the act of getting it does not.&lt;/p&gt;

&lt;p&gt;That is the part of isomorphic server functions that makes me uneasy. The abstraction removes a lot of code nobody wanted to write, but it also makes the call site less descriptive. The line looks ordinary in places where the operation behind it may not be ordinary at all.&lt;/p&gt;

&lt;h2&gt;What TanStack makes pleasant&lt;/h2&gt;

&lt;p&gt;TanStack Start leans into this trade quite naturally. A server function is explicit when it is defined, and then the exported value is designed to be called from the places where application code tends to need it: loaders, components, hooks, event handlers, other server functions. That fits the rest of TanStack's style. The router is central, the data flow is typed, and the application is assembled out of explicit functions rather than a large menu of special filenames. If that is already the way you want to build, the server function API feels consistent.&lt;/p&gt;

&lt;p&gt;There is nothing dishonest about the definition site. &lt;code&gt;createServerFn()&lt;/code&gt; tells you that the handler is server code. It can touch a database. It can read secrets. It can do work the browser cannot do. The ambiguity appears later, when the call has been made deliberately ordinary.&lt;/p&gt;

&lt;p&gt;That ordinariness is useful while you are writing the code. You know where you are. You know whether the call is inside a loader or inside a button handler. You know what the framework is going to do. The problem shows up later, when the code is read without all of that context already loaded into someone's head.&lt;/p&gt;

&lt;h2&gt;A small refactor changes the role&lt;/h2&gt;

&lt;p&gt;Imagine a settings page that starts like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFileRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/settings&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
  &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later, someone adds a refresh button inside the page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nf"&gt;setUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both calls are reasonable. Both may be exactly what the application wants. But they are not playing the same role anymore. The first call belongs to navigation. The second call belongs to an interaction after the page is already on screen. It has a different timing, a different failure shape, probably a different loading state, and possibly a different relationship to invalidation.&lt;/p&gt;

&lt;p&gt;Nothing about &lt;code&gt;getUser()&lt;/code&gt; is wrong here. The issue is that the call is too polite to mention that its role changed. The code moved from one part of the application to another, and the most important difference is now carried by the surrounding framework context rather than by the expression itself.&lt;/p&gt;

&lt;h2&gt;Types do not carry place&lt;/h2&gt;

&lt;p&gt;Types do not really solve this. They solve an important part of it, but not this part. &lt;code&gt;Promise&amp;lt;User&amp;gt;&lt;/code&gt; tells me what value I will eventually get. It does not tell me why I am waiting. It does not tell me whether the delay is a database query in the same process or a request from the browser to the server. It does not tell me whether cookies are involved, whether middleware runs, whether a rate limit can trip, or whether the user is now staring at a disabled button.&lt;/p&gt;

&lt;p&gt;All of those things can live behind the same return type.&lt;/p&gt;
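
&lt;p&gt;One way to see the gap is to imagine a signature that did carry place. The following is purely hypothetical: no such types exist in TanStack Start or React, and the sketch exists only to show what &lt;code&gt;Promise&amp;lt;User&amp;gt;&lt;/code&gt; erases.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;// Hypothetical branded types: not a real API, only an illustration.
interface User { id: string }

type ServerCall&amp;lt;T&amp;gt; = Promise&amp;lt;T&amp;gt; &amp;amp; { readonly __place: "server" }
type LocalCall&amp;lt;T&amp;gt; = Promise&amp;lt;T&amp;gt; &amp;amp; { readonly __place: "local" }

declare function getUser(): ServerCall&amp;lt;User&amp;gt;   // waiting means a network hop
declare function readDraft(): LocalCall&amp;lt;User&amp;gt;  // waiting means local work only

// Both call sites still read as `await f()`; only the type records the place.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;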

&lt;h2&gt;What RSC keeps visible&lt;/h2&gt;

&lt;p&gt;This is where React Server Components come from a different direction. RSC does not try to make server code and client code feel like the same kind of code. It lets them participate in the same React tree, but it keeps their environments distinct. Server Components run on the server. Client Components run in the browser. Server Functions are server code that can be referenced across the boundary.&lt;/p&gt;

&lt;p&gt;The same settings page has a different shape in that model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SettingsForm&lt;/span&gt; &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;refreshUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsForm&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;refreshUser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is more ceremony here. The client piece has to be named. The server function has to be passed across the boundary. Depending on the framework, this may also mean another file. But the roles are visible in the shape of the code: the initial read belongs to the Server Component, and the later refresh is a client interaction calling a Server Function.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The split does not necessarily have to be a file split. With function-level boundaries, the same idea could live much closer to the place where it is used:&lt;/p&gt;


&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SettingsForm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findCurrent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;refresh&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;refreshUser&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SettingsForm&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;That is the argument in &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt;: the boundary should stay visible, but it should be allowed to live closer to the code it describes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The current ergonomics of that model are not perfect. Next's file-level &lt;code&gt;"use client"&lt;/code&gt; boundary creates real friction, and small interactive pieces often end up in files that exist mostly because the bundler needs a module boundary. That is not a minor annoyance; it changes how code is organized. But the underlying idea is still important: a piece of code should communicate where it belongs.&lt;/p&gt;

&lt;p&gt;When something is server code, the reader should be able to expect server capabilities. When something is client code, the reader should be able to expect browser capabilities. When a value or reference crosses from one side to the other, the model should have a visible place for that crossing. Not because visible boundaries are beautiful in themselves, but because hidden boundaries tend to come back later as surprises about latency, failure, serialization, or state.&lt;/p&gt;

&lt;p&gt;This is the difference I care about between the two approaches. With an isomorphic server function, the definition says "server", but the call site tries to feel universal. With RSC, the model keeps insisting that server and client are different places, even when they are composed together.&lt;/p&gt;

&lt;h2&gt;The boundary should be cheap, not invisible&lt;/h2&gt;

&lt;p&gt;I do not think the answer is to give up the convenience of server functions. Hand-written endpoints are not some lost paradise. A framework should make it cheap to invoke server code from the client, and TanStack's version of that idea is useful. The part I would be careful with is the framing. There is a difference between "this is server code with a convenient client invocation mechanism" and "this is just a function you can call from anywhere."&lt;/p&gt;

&lt;p&gt;The first framing keeps the boundary in the reader's mind. The second makes the boundary feel incidental until some operational detail forces it back into view.&lt;/p&gt;

&lt;p&gt;That is not just a matter of taste. It becomes a real maintenance problem.&lt;/p&gt;

&lt;p&gt;It shows up in code review, when a harmless-looking call has moved from a loader into an event handler and the diff does not make the change feel as large as it is. It shows up in debugging, when a line that reads like a function call fails like a network interaction. It shows up in refactors, when moving code across an invisible boundary changes timing, failure, and user-visible behavior without changing the expression that caused it.&lt;/p&gt;

&lt;p&gt;That is why I find the RSC direction healthier, even with its current rough edges. The goal should not be to make every server call dramatic. It should not be to reintroduce ceremony for its own sake. It should be to make the boundary cheap enough that we can keep it visible without resenting it.&lt;/p&gt;

&lt;p&gt;A function does not need to shout where it runs. But if understanding the function requires knowing whether it is local code, server code, or a request in disguise, then that fact should not live only in the reader's memory of the framework. Once the boundary is invisible at the call site, every reader has to rediscover it later.&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Cache Belongs to the Function</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Wed, 29 Apr 2026 09:38:00 +0000</pubDate>
      <link>https://dev.to/lazarv/the-cache-belongs-to-the-function-6f5</link>
      <guid>https://dev.to/lazarv/the-cache-belongs-to-the-function-6f5</guid>
      <description>&lt;p&gt;A few years ago, the question about caching in modern web frameworks was whether it should be on by default. That question is largely settled. Frameworks that defaulted to caching every fetch and rendering every page statically have walked the defaults back; frameworks that didn't, didn't have to. The argument that caching should be opt-in, and that the developer should be the one who decides where it pays, has won. Anyone arguing it today is arguing against a position the industry has already conceded.&lt;/p&gt;

&lt;p&gt;What is not settled is where the caching primitive &lt;em&gt;lives&lt;/em&gt;. The directive that marks a function as cacheable can be implemented in two structurally different ways, and the difference is not yet obvious to most of the people writing code that uses it.&lt;/p&gt;

&lt;p&gt;The first design treats &lt;code&gt;"use cache"&lt;/code&gt; as a marker on a function. The function carries its own caching contract. Wherever the function runs — on a server, on the edge, in a worker, in a browser — the directive means the same thing. The cache is a property of the function.&lt;/p&gt;

&lt;p&gt;The second design treats &lt;code&gt;"use cache"&lt;/code&gt; as a marker on a region of a rendering tree. The function exists, the directive is on it, but the cache machinery underneath is part of the framework's rendering pipeline. The cached output is a streaming shell that the framework stitches into a partially prerendered response, with dynamic holes carved out of it by &lt;code&gt;&amp;lt;Suspense&amp;gt;&lt;/code&gt; boundaries. The cache is a property of how the page is built.&lt;/p&gt;

&lt;p&gt;Both designs are coherent. Both are reasonable answers to a real engineering problem. They are not the same answer. This article is for the first one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What was settled, and what wasn't
&lt;/h2&gt;

&lt;p&gt;A short reset on the territory is worth the paragraph.&lt;/p&gt;

&lt;p&gt;The case against default caching used to be obvious to anyone who had shipped a production application: you spend more time disabling the cache than enabling it. Routes get marked &lt;code&gt;dynamic&lt;/code&gt;. Fetches get &lt;code&gt;cache: 'no-store'&lt;/code&gt;. Layout segments get tagged &lt;code&gt;force-dynamic&lt;/code&gt;. The defaults were calibrated for a population of pages where staleness is cheap and slowness is expensive, and most production applications are not that population. Every site that mattered ended up annotating its way out of the default.&lt;/p&gt;

&lt;p&gt;The frameworks that shipped this model heard the criticism and inverted the defaults. In Next.js 15, &lt;code&gt;fetch&lt;/code&gt; is no longer cached by default, and segments are no longer static. The &lt;code&gt;dynamicIO&lt;/code&gt; mode introduced &lt;code&gt;"use cache"&lt;/code&gt; as the primitive a developer reaches for when caching actually pays. Inside that mode, uncached is the baseline; cache only what you mark. This is the design the critics asked for. They got it.&lt;/p&gt;

&lt;p&gt;So when this article talks about what a caching primitive should look like, it is not arguing against caching by default. The default is gone. The argument that survives is about what the primitive in its place is &lt;em&gt;made of&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two places the cache can live
&lt;/h2&gt;

&lt;p&gt;A caching primitive has to live somewhere. The two structural choices are &lt;em&gt;with the function&lt;/em&gt; and &lt;em&gt;with the renderer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A cache that lives with the function is portable. It travels wherever the function travels. The directive is a contract between a function and a runtime; any runtime that understands the directive can honor it; any runtime that does not, ignores it and runs the function. The cache key is the function's inputs. The cache value is the function's output. Nothing about the surrounding system needs to be present for the cache to work, because the cache &lt;em&gt;is&lt;/em&gt; the function's contract with its host.&lt;/p&gt;
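&lt;p&gt;That contract is small enough to sketch. A minimal, hypothetical runtime honoring it is little more than a memoizer keyed on the function's serialized inputs; the &lt;code&gt;cached&lt;/code&gt; helper below is illustrative, not any framework's actual API:&lt;/p&gt;

```typescript
// A minimal sketch of a runtime honoring a function-level cache
// contract: the key is the serialized inputs, the value is the
// returned promise. `cached` is a hypothetical helper, not a real API.
function cached(fn: (...args: any[]) => any) {
  const store = new Map();
  return (...args: any[]) => {
    const key = JSON.stringify(args); // the key is nothing but the inputs
    if (!store.has(key)) store.set(key, fn(...args));
    return store.get(key); // the value is nothing but the output
  };
}

// Usage: an expensive lookup that should run once per distinct input.
let lookups = 0;
const getUser = cached(async (id: string) => {
  lookups += 1;
  return { id, name: "user-" + id };
});
```

&lt;p&gt;Storing the promise rather than the resolved value also dedupes concurrent calls with the same inputs, since every caller awaits the same in-flight promise.&lt;/p&gt;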

&lt;p&gt;A cache that lives with the renderer is part of a rendering pipeline. This is the design Next.js's Cache Components ships under &lt;code&gt;dynamicIO&lt;/code&gt;. It works because the pipeline is in the loop — the cached value is a stream of bytes representing a region of the rendered tree, the dynamic holes are &lt;code&gt;&amp;lt;Suspense&amp;gt;&lt;/code&gt; boundaries, the streaming response stitches the cached shell back together with the live data. The directive is a marker on a region of the tree the renderer cares about, and the cache is the part of the renderer that remembers what that region produced. Take the function out of the renderer and the cache disappears, because there is nothing to cache.&lt;/p&gt;

&lt;p&gt;The first design is small. The second is integrated. Each one has a thing it is good at and a thing it cannot do.&lt;/p&gt;

&lt;p&gt;The first cannot stitch shells around dynamic holes. It does not know about Suspense. It does not produce streaming responses with prerendered prefixes. If you want partial prerendering, the second design is the one you want.&lt;/p&gt;

&lt;p&gt;The second cannot run outside the renderer. It cannot ship as a primitive in a library. It cannot run on the edge before the framework boots. It cannot run in the browser. It cannot dedupe a database lookup that happens during a worker job that has nothing to do with rendering. If you want a caching primitive you can reach for in any program, the first design is the one you want.&lt;/p&gt;

&lt;p&gt;The two designs are not interchangeable. A cache that requires the framework to be present is a feature of the framework. A cache that requires only a function is a feature of the program. This article is for the function-level cache, and the rest of it is the structural case for that choice — four properties the cache acquires when it belongs to the function and gives up when it belongs to the renderer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic, not ambient
&lt;/h2&gt;

&lt;p&gt;A function-level cache is atomic. One function, one cache, one set of inputs, one output. The developer can assert — locally, by inspection — that the output is a function of the inputs and that nothing else in the world matters. The function's inputs are the developer's parameters; there is nowhere else for hidden state to come from.&lt;/p&gt;

&lt;p&gt;Render-coupled caches give some of this up. A region of a rendering tree closes over the request, the user, the route parameters, the surrounding component state — and the cache machinery has to chase those captures and decide what is safe to serialize. The result is a more powerful cache, but the unit of reasoning has moved. The function is no longer what the developer reasons about. The region is.&lt;/p&gt;

&lt;p&gt;The hard part of caching is not the syntax. It is the honesty — marking a function only when its output really is a function of its inputs, and not closing over state the key cannot see. A smaller unit makes that honesty checkable. A larger unit makes it a research project.&lt;/p&gt;
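&lt;p&gt;The failure mode is small enough to reproduce in a few lines. In this illustrative sketch, a hand-rolled memoizer stands in for the runtime, and the function quietly closes over a value its key never sees:&lt;/p&gt;

```typescript
// The dishonest case: a "cached" function whose output depends on
// state its key never sees. A hand-rolled memoizer stands in for any
// runtime that honors a function-level cache directive.
let taxRate = 2; // hidden input: captured by the closure, absent from the key

const memo = new Map();
function cachedTotal(items: number[]): number {
  const key = JSON.stringify(items); // keyed on the visible inputs only
  if (!memo.has(key)) {
    memo.set(key, items.reduce((sum, n) => sum + n, 0) * taxRate);
  }
  return memo.get(key);
}

const first = cachedTotal([10, 20]);  // 60, computed with taxRate = 2
taxRate = 3;                          // the world changes...
const second = cachedTotal([10, 20]); // still 60: the cache never noticed
```

&lt;p&gt;An honest version would take &lt;code&gt;taxRate&lt;/code&gt; as a parameter, making it part of the key. The smaller the function, the easier that omission is to spot in review.&lt;/p&gt;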

&lt;h2&gt;
  
  
  Caches that compose
&lt;/h2&gt;

&lt;p&gt;A function-level cache composes. Two cached functions written by two different people, in two different libraries, called from a third function that layers a cache of its own on top, all behave the way the source reads. The outer call consults its own cache; on a miss, the inner calls each consult theirs. Each layer's decision belongs to that layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getOrder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use cache&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getCustomer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;// also "use cache"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lineItems&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;// also "use cache"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three caches stacked, no coordination. The outer cache stores the assembled order. The inner caches store the customer and each product. They have different TTLs, different tags, different lifetimes. None of them know about the others. A request that asks for an order seen recently hits one cache. A request for a fresh order whose customer is well-known hits two. A request for an entirely new order misses everything and fills in three caches at once. Every path through the system is correct, because every cache was a local decision.&lt;/p&gt;

&lt;p&gt;Render-coupled caches compose under the renderer's rules. The shell's lifetime, the hole boundary, the streaming order — these are properties of the pipeline that two cached pieces inherit when they share a tree. The function-level cache carries no surrounding model. The diff is a one-line claim about the function. The blast radius is the function.&lt;/p&gt;

&lt;h2&gt;
  
  
  A function is a function
&lt;/h2&gt;

&lt;p&gt;A directive that marks a single function does not care what runtime is reading it. The contract is between a function and its caller. The caller might be a server rendering RSC, an SSR pipeline streaming HTML, a worker, an edge runtime, a browser tab running the same code on the client side of an isomorphic boundary. The directive's meaning does not change. The function says: my output is a function of my inputs. The runtime, whichever runtime that is, says: I will hold it.&lt;/p&gt;

&lt;p&gt;This is the property the render-coupled cache cannot have, by construction. It works because the renderer is in the loop. Take the same code out of that loop — run it before the framework boots, in a worker job, in a browser tab that does not go through the request lifecycle — and the cache disappears, because the cache is the renderer remembering, and the renderer is not there.&lt;/p&gt;

&lt;p&gt;A function-level cache survives the move. A library can ship cacheable utilities and rely on whatever runtime hosts them to honor the directive. There is no &lt;code&gt;if (server)&lt;/code&gt; branch, no &lt;code&gt;if (browser)&lt;/code&gt; branch, no separate cache wiring per environment. The same function, in any host that understands the directive, has the same contract. A host that does not understand it leaves the function alone.&lt;/p&gt;
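&lt;p&gt;The graceful-degradation half of that contract falls out of JavaScript itself: a directive is a bare string literal, and every engine already evaluates a string-literal expression statement as a no-op. Run the function below in plain Node and it simply executes every time; run the same source in a directive-aware runtime and it is cacheable:&lt;/p&gt;

```typescript
// A directive is just an expression statement containing a string
// literal. An engine that does not recognize it evaluates it and moves
// on, so the function degrades to an ordinary function.
async function getGreeting(name: string) {
  "use cache"; // inert here; a contract in a runtime that understands it
  return "hello, " + name;
}
```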

&lt;p&gt;This is what it means for a caching primitive to be &lt;em&gt;portable&lt;/em&gt;. Not that the framework runs in many places — that is a deployment concern, and a different one. The render-coupled cache is a property of the host. The function-level cache is a property of the source. The function carries its own contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  Locality buys you removability
&lt;/h2&gt;

&lt;p&gt;A function-level cache is removable in a one-line diff. Delete the directive. Ship. The function reverts to first principles — it runs every time it is called — and you can investigate the staleness offline, where it is cheap to be wrong.&lt;/p&gt;

&lt;p&gt;Render-coupled caches are removable too, when the unit being uncached is the unit the renderer marks. The harder cases are the surrounding ones: a cached region whose contents vary in ways the renderer's closure analysis did not capture, a tag-based revalidation that turned out to invalidate too much, a &lt;code&gt;cacheLife&lt;/code&gt; profile that turned out to be wrong for one specific function in one specific context. The diff is still small; the diagnosis is not, because the failure isn't in the function — it's in the relationship between the function and the renderer.&lt;/p&gt;

&lt;p&gt;The same property holds for code review. A &lt;code&gt;"use cache"&lt;/code&gt; directive shows up in a diff. A reviewer asks: is this function actually a function of its inputs? Is the TTL right? When the unit being marked is a function, those questions have function-shaped answers. When the unit being marked is a region, the questions also have to ask about Suspense boundaries, about what the renderer captures, about how the streaming response composes. More variables, more places to be subtly wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope is also a per-function decision
&lt;/h2&gt;

&lt;p&gt;The strongest underdeveloped property of the function-level cache is scope. The clearest way to see it is to look at a page that needs three caches.&lt;/p&gt;

&lt;p&gt;A product page calls &lt;code&gt;getProduct(id)&lt;/code&gt; from three different components in the same render. They should see the same value; the database lookup should run once. This is request-scoped dedup.&lt;/p&gt;

&lt;p&gt;The same page calls &lt;code&gt;getProductCatalog()&lt;/code&gt;, the company's full catalog — refreshed nightly, shared across every request, identical for every user. This is a long-lived in-memory cache.&lt;/p&gt;

&lt;p&gt;The same page calls &lt;code&gt;getInventoryStatus(sku)&lt;/code&gt;, which has to be synchronized across every server in the fleet, because two requests landing on different machines cannot disagree about whether an item is in stock. This is a shared store.&lt;/p&gt;

&lt;p&gt;In current Next.js, those are three primitives. &lt;code&gt;React.cache&lt;/code&gt; for the first. &lt;code&gt;"use cache"&lt;/code&gt; for the second. A custom cache provider, or an external store reached through a server function, for the third. Each has its own API, its own keying, its own invalidation model. A developer who picks the wrong one rewrites the function when they discover the choice was wrong.&lt;/p&gt;

&lt;p&gt;In a function-level design, all three are options on the same directive. &lt;code&gt;"use cache: request"&lt;/code&gt; for the first. &lt;code&gt;"use cache"&lt;/code&gt; for the second. &lt;code&gt;"use cache: shared"&lt;/code&gt; for the third. The function shape does not change. The directive carries the answer to which scope it belongs to. Picking the wrong one is a one-line fix.&lt;/p&gt;

&lt;p&gt;This is a real, structural advantage, and one of the places where the function-level design has not yet fully shipped in the largest framework that uses the syntax.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shape of the contract
&lt;/h2&gt;

&lt;p&gt;The directive is a contract between the developer and the runtime, and the mandatory part of the contract has only three lines.&lt;/p&gt;

&lt;p&gt;The developer says: this function's output is determined by its inputs.&lt;/p&gt;

&lt;p&gt;The runtime says: I will hold the output and serve it again the next time the inputs match.&lt;/p&gt;

&lt;p&gt;Both parties say: when this is no longer true, the directive comes off.&lt;/p&gt;

&lt;p&gt;That is the surface that has to be there. Everything else — a TTL, a tag, a named profile, a choice of storage, a choice of scope — is an &lt;em&gt;option&lt;/em&gt; the developer attaches to the directive when the function calls for it. Tags are useful when there is something to invalidate by group. TTLs are useful when freshness has a known half-life. Named profiles are useful when several functions share the same caching shape and the shape is worth naming once.&lt;/p&gt;

&lt;p&gt;None of these options are wrong. They are all part of the directive's optional surface, all developer-attached, all visible at the call site. The asymmetry that matters is between options the developer wrote down and options the runtime applied silently. A developer adding &lt;code&gt;ttl=60&lt;/code&gt; or &lt;code&gt;tags=todos&lt;/code&gt; to a directive is making a decision visible in the source. A framework deciding the same thing on the developer's behalf is making the same decision invisible. Only the first kind is in the diff.&lt;/p&gt;

&lt;p&gt;The same argument applies, structurally, to every directive in this family. &lt;code&gt;"use client"&lt;/code&gt; is a marker that asserts a piece of code crosses a runtime boundary; the value of the marker is that you can read the program and see where the boundaries are. I have argued elsewhere that the directive should be allowed at finer granularity than a file — see &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt; — but the underlying point is the same. A directive is the developer telling the runtime something the runtime could not have inferred. That contract scales because it is small.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two kinds of software
&lt;/h2&gt;

&lt;p&gt;Underneath the function-vs-renderer choice is a more general one about what kind of software you are writing.&lt;/p&gt;

&lt;p&gt;A framework is a packaged product. It optimizes for the page — for the user-visible artifact at the end of the rendering pipeline. Coupling caching to rendering is the breakthrough that makes partial prerendering work: a static shell streamed first, dynamic holes filled in afterward, no full server roundtrip on a navigation. That is a real win, and it is the win the render-coupled cache exists to deliver. A framework architect choosing the render-coupled design is making a coherent product decision.&lt;/p&gt;

&lt;p&gt;A runtime is a primitive. It optimizes for the cache — for the contract a developer can hold in their head and reach for in any program. The function-level cache is not better than the render-coupled cache for the page. It is better for the cache. It composes outside a render tree. It runs in any environment. It survives library packaging. It does not require a mode flag to be turned on. A runtime architect choosing the function-level design is making a coherent primitive decision.&lt;/p&gt;

&lt;p&gt;Both choices are defensible. They produce different software. A developer who wants the page wants the render-coupled cache; a developer who wants a caching primitive they can reach for unconditionally wants the function-level one. The directive looks the same in both. The systems underneath it do not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the ecosystem pays
&lt;/h2&gt;

&lt;p&gt;The decision about where a caching primitive lives looks, inside one application, like a trade-off between two designs. From above the application — at the level of the JavaScript ecosystem the application depends on — it is something else.&lt;/p&gt;

&lt;p&gt;A library author shipping a function cannot assume the consumer is in &lt;code&gt;dynamicIO&lt;/code&gt; mode. They cannot assume the consumer is using a framework at all. So a library that wants to ship a cacheable utility — a database client that should dedupe identical queries, a markdown renderer that should not re-parse the same input twice, an API client that should pool requests — has one option under the render-coupled design: do not provide the cache. The library exposes raw functions; the consumer wires them into their framework's caching themselves; everyone reinvents the same wrappers in slightly different shapes, and the bugs all live in the wiring.&lt;/p&gt;

&lt;p&gt;Under a function-level design, the library author writes &lt;code&gt;"use cache"&lt;/code&gt; at the top of the function and ships. Any consumer whose runtime understands the directive gets the cache. Any consumer whose runtime does not, gets the raw function. The library does not have to know. The consumer does not have to wrap.&lt;/p&gt;

&lt;p&gt;This is a pattern. Every time a framework absorbs a capability that could have been a primitive — caching, server functions, partial hydration, request-scoped state, routing — the ecosystem pays. The capability becomes available only inside that framework. Libraries that want it pick the framework as a hard dependency, ship their own version, or expose it as a configuration surface for the consumer to wire up. None of these are good for the developers a level removed from the framework. Each one moves complexity out of the framework and into a thousand small repositories that did not need to invent it.&lt;/p&gt;

&lt;p&gt;The render-coupled cache is not the only place this happens. It is one place where the trade is unusually clear. The capability — memoizing a function on its inputs — has a canonical shape. The shape is small. It does not need a renderer to be useful. Putting the renderer in the loop trades that universality for an integration the framework can use to power partial prerendering. That trade is fine for the framework. It is paid by everyone else.&lt;/p&gt;

&lt;p&gt;I have made a parallel version of this argument about a different misplaced primitive: a framework exposing a capability as a library API where a directive would have been smaller (&lt;a href="https://dev.to/lazarv/rsc-as-a-serializer-not-a-model-56nj"&gt;RSC as a serializer, not a model&lt;/a&gt;). Misplaced primitives look different in the small. From far enough away they look the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  The smaller point
&lt;/h2&gt;

&lt;p&gt;Where a primitive lives determines what the developer can do with it. A cache that lives in the renderer can do things a function cache cannot — partial prerendering, streamed shells, suspense-aware regions. A cache that lives with the function can do things a render-coupled cache cannot — travel between environments, compose without coordination, ship in a library that does not know what framework will host it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"use cache"&lt;/code&gt; is the same two words in either design. The choice the developer is making by writing the directive is not really a choice about caching. It is a choice about which of those two things the caching primitive should be.&lt;/p&gt;

&lt;p&gt;A cache that lives in the framework belongs to the framework. A cache that lives in the function belongs to the developer. Only one of those travels.&lt;/p&gt;

</description>
      <category>react</category>
      <category>framework</category>
      <category>cache</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The "use client" Tax</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Tue, 28 Apr 2026 17:16:59 +0000</pubDate>
      <link>https://dev.to/lazarv/the-use-client-tax-1ed0</link>
      <guid>https://dev.to/lazarv/the-use-client-tax-1ed0</guid>
      <description>&lt;p&gt;&lt;em&gt;Why React Server Components force small interactive ideas into file-sized boundaries — and why that boundary should be lexical instead.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is a moment that every developer who tries React Server Components hits, usually within their first hour. They write a server component. It fetches some data. It renders a list. Beautiful. Then they want a button that toggles a filter, and the compiler stops them: "you can't use &lt;code&gt;useState&lt;/code&gt; here." So they cut the interactive piece out, paste it into a new file, sprinkle &lt;code&gt;"use client"&lt;/code&gt; at the top, import it back into the parent, and move on.&lt;/p&gt;

&lt;p&gt;A week later their &lt;code&gt;components/&lt;/code&gt; directory looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components/
├── product-list.tsx
├── product-list-filter.tsx
├── product-list-filter-input.tsx
├── product-list-sort.tsx
├── product-list-sort-dropdown.tsx
├── product-card.tsx
├── product-card-actions.tsx
├── product-card-favorite-button.tsx
└── product-card-quantity-stepper.tsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nine files for one product list. Each one a thin wrapper. Each one with two or three lines of real logic. Each one named with an increasingly desperate suffix because the original &lt;code&gt;Filter&lt;/code&gt; already exists three directories up.&lt;/p&gt;

&lt;p&gt;This is the "use client" tax, and it is real.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the tax comes from
&lt;/h2&gt;

&lt;p&gt;The directive is not arbitrary. &lt;code&gt;"use client"&lt;/code&gt; marks a module boundary that the bundler uses to split graphs: everything reachable from a &lt;code&gt;"use client"&lt;/code&gt; entry becomes part of the client bundle; everything else stays on the server. The directive has to live at the top of a file because that is the granularity the bundler operates on. Modules in, modules out.&lt;/p&gt;

&lt;p&gt;That works fine in theory. In practice it forces a one-to-one correspondence between &lt;em&gt;interactive concerns&lt;/em&gt; and &lt;em&gt;files on disk&lt;/em&gt;, and interactive concerns are not file-sized. They are paragraph-sized. A "favorite" button that toggles state is not a module — it is two lines inside the card that displays the product. But the runtime can't see those two lines unless you lift them into their own module, give them a name, export them, import them back, and pass props across the boundary.&lt;/p&gt;

&lt;p&gt;The result is a particular kind of friction that compounds:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File sprawl.&lt;/strong&gt; Trivial widgets become trivial files. Most of the file is the import header.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming fatigue.&lt;/strong&gt; Every extracted leaf needs a name. Names that were unique in their lexical scope are no longer unique once they live in a flat directory. You end up with &lt;code&gt;ProductCardFavoriteButtonInner&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lost colocation.&lt;/strong&gt; A server function that writes to the database and the form that calls it now live in two files. The relationship between them survives only as an import statement. To understand the feature you alt-tab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indirection without abstraction.&lt;/strong&gt; Each extracted client component is a wrapper that accepts everything the parent had in scope, as props. You are manually performing closure conversion — by hand, every time, with no help from the compiler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compositions you can't write.&lt;/strong&gt; The pattern that hurts most is the one you cannot express at all: a server function that computes some data and &lt;em&gt;returns&lt;/em&gt; a small interactive component bound to that data. You cannot do this in standard RSC, because the client component has to be a separate module, which means it cannot close over server-side values. You always end up exporting the client component, exporting the data fetch, and re-assembling them at the call site. The expression you wanted to write — a factory — is not available to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shape of the pain
&lt;/h2&gt;

&lt;p&gt;Here is what a real fragment looks like under the current rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-card.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FavoriteButton&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./product-card-favorite-button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FavoriteButton&lt;/span&gt; &lt;span class="na"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;initial&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isFavorite&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-card-favorite-button.tsx&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;toggleFavorite&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./product-actions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;FavoriteButton&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;initial&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;initial&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;
      &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;★&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;☆&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// product-actions.ts&lt;/span&gt;
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three files for a star button. Two of them exist purely as plumbing for the directive system. The actual interesting code — eight lines of state and a database write — is buried under thirty lines of imports, exports, and prop-passing.&lt;/p&gt;

&lt;p&gt;This is what people mean when they say RSC is heavy. It is not the data fetching. It is not the streaming. It is this: the directive system asks you to manually re-architect every interactive idea into a multi-file module graph, and it does so for the smallest possible units of behavior.&lt;/p&gt;

&lt;p&gt;Zoom out one level and the same pressure exists at the project boundary: modern frontend frameworks force entire micro-apps to be scaffolded across directory trees, config files, and &lt;code&gt;node_modules&lt;/code&gt; for the same kind of mechanical reason — tooling that operates at a coarser unit than the developer's idea. I covered that version of the problem in &lt;a href="https://dev.to/lazarv/the-forgotten-joy-of-node-appjs-5761"&gt;The Forgotten Joy of &lt;code&gt;node app.js&lt;/code&gt;&lt;/a&gt;. The fix is structurally the same as the one proposed below: stop letting the file system be the unit of expression.&lt;/p&gt;

&lt;h2&gt;
  
  
  The constraint is in the tool, not in the model
&lt;/h2&gt;

&lt;p&gt;Here is the part that is worth saying out loud: the file-level restriction is a property of how bundlers were built, not a property of what the directive &lt;em&gt;means&lt;/em&gt;. &lt;code&gt;"use client"&lt;/code&gt; asserts that a piece of code runs on the client and that everything crossing into it must be serializable across a runtime boundary. That assertion is perfectly meaningful at any function scope. It lives at the top of a file only because the file is the smallest unit the bundler can see.&lt;/p&gt;

&lt;p&gt;A compiler that knows about RSC directives can do better. Given a server module that contains a nested function marked &lt;code&gt;"use client"&lt;/code&gt;, it can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify the nested function and the variables it captures from its lexical scope.&lt;/li&gt;
&lt;li&gt;Lift the function into a synthetic module that the bundler treats exactly like a regular &lt;code&gt;"use client"&lt;/code&gt; module.&lt;/li&gt;
&lt;li&gt;Replace the original definition with a reference to the lifted module.&lt;/li&gt;
&lt;li&gt;Inject the captured variables as props at the call site.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The developer wrote one file. The bundler sees the module graph it needs. Nothing about the underlying RSC contract changes — the same serialization rules apply, the same boundary is enforced — but the file system stops being the unit of expression. The function does.&lt;/p&gt;
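&lt;p&gt;The four steps can be sketched in plain JavaScript, stripped of any RSC machinery (every name below is hypothetical, purely to illustrate the transform):&lt;/p&gt;

```javascript
// BEFORE: a nested function captures `greeting` from its enclosing scope.
function makeCardBefore(greeting) {
  function label(name) {
    return `${greeting}, ${name}!`; // closes over `greeting`
  }
  return label;
}

// AFTER: the compiler (1) identifies the capture, (2) lifts the function to
// the top level -- in RSC terms, into a synthetic client module -- (3)
// replaces the original definition with a reference, and (4) injects the
// captured value explicitly, the way a serialized prop would travel.
function liftedLabel(greeting, name) {
  return `${greeting}, ${name}!`;
}

function makeCardAfter(greeting) {
  return (name) => liftedLabel(greeting, name); // reference + injected capture
}
```

&lt;p&gt;The two versions behave identically; only the module shape changes, which is exactly why the transform can be mechanical.&lt;/p&gt;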

&lt;h2&gt;
  
  
  What this should look like
&lt;/h2&gt;

&lt;p&gt;Imagine writing the favorite button like this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;FavoriteButton&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isFavorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggleFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;setFavorite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;favorite&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;★&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;☆&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FavoriteButton&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file. Server fetch, client interaction, server function — colocated, in the order you would read them, sharing a closure. The compiler does the lifting; the captured &lt;code&gt;product&lt;/code&gt; becomes a prop on the synthesized client module; the inner &lt;code&gt;"use server"&lt;/code&gt; function becomes a bound server function with the right scope. Server → client → server nesting works recursively because the same extraction pass runs until no nested directives remain.&lt;/p&gt;

&lt;p&gt;This is what a real RSC ergonomic story looks like. Not a new mental model — the same one — just expressed at the granularity humans actually think in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this should be a standard feature
&lt;/h2&gt;

&lt;p&gt;The technique is not exotic. It is closure conversion, a transform compilers have been doing since the seventies. The hard part is wiring it into the RSC plugin chain so that virtual modules generated for inline directives flow through the same client/server graph the rest of the system already uses. That is engineering, not research.&lt;/p&gt;

&lt;p&gt;There is no fundamental reason an RSC-capable runtime cannot support this. The directive system is already a contract between the developer and the compiler; expanding it to cover function scopes in addition to module scopes does not change serialization, bundling, streaming, or the security boundary. It only changes where the developer is allowed to write the directive.&lt;/p&gt;

&lt;p&gt;If you are building an RSC runtime: pick this up. If you are using one that does not have it: ask for it. A "use client" file is not a feature. It is a workaround for a constraint we no longer need to accept.&lt;/p&gt;

&lt;p&gt;The point of RSC was to let us put server logic and client logic next to each other. The directive system, taken at face value, does the opposite: it forces them apart, file by file, until your repository is ninety percent wrappers. We can fix this. It is time to make the fix standard.&lt;/p&gt;

</description>
      <category>rsc</category>
      <category>react</category>
      <category>bundler</category>
      <category>compiler</category>
    </item>
    <item>
      <title>The Forgotten Joy of `node app.js`</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Tue, 28 Apr 2026 17:16:40 +0000</pubDate>
      <link>https://dev.to/lazarv/the-forgotten-joy-of-node-appjs-5761</link>
      <guid>https://dev.to/lazarv/the-forgotten-joy-of-node-appjs-5761</guid>
      <description>&lt;p&gt;There used to be a moment, ten years or so ago, when you could go from "I have an idea" to "I have a running web server" in about thirty seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node app.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That was the whole thing. One file. One command. You could paste it into a Slack message. You could drop it in a Gist and someone could run it. A tiny webhook receiver, a debug dashboard, an internal tool, a stub API — the entire project lived in a single buffer in your editor.&lt;/p&gt;

&lt;p&gt;Then frontend frameworks happened, and somewhere along the way we collectively decided that "starting a new project" meant something else entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scaffold tax
&lt;/h2&gt;

&lt;p&gt;Today, the canonical first step in starting a new app is no longer writing code. It is running a command that writes code &lt;em&gt;for&lt;/em&gt; you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-next-app@latest my-app
npx create-react-app my-app
npm create vite@latest
npx create-remix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What comes back is not a file. It is a &lt;em&gt;tree&lt;/em&gt;. Configuration files for tooling you have not yet decided to use. A &lt;code&gt;pages/&lt;/code&gt; or &lt;code&gt;app/&lt;/code&gt; directory with conventions you must learn before you can write a single line. A &lt;code&gt;tsconfig.json&lt;/code&gt; you did not write. ESLint rules. Prettier rules. A &lt;code&gt;.gitignore&lt;/code&gt;. A &lt;code&gt;README.md&lt;/code&gt; describing the scaffold itself. A &lt;code&gt;package.json&lt;/code&gt; with twelve dependencies and four scripts you did not pick.&lt;/p&gt;

&lt;p&gt;And, critically, there is no path &lt;em&gt;back&lt;/em&gt; to a single file. The scaffold is the unit of starting. There is no &lt;code&gt;framework dev ./App.jsx&lt;/code&gt;. There is only &lt;code&gt;framework new my-project&lt;/code&gt;, which produces forty files, of which you will edit two.&lt;/p&gt;

&lt;p&gt;This is fine when you are starting a real product. It is absurd when you are not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we lost
&lt;/h2&gt;

&lt;p&gt;The single-file app is not a relic of a less mature ecosystem. It is a fundamentally different &lt;em&gt;mode&lt;/em&gt; of working — one the modern frontend toolchain has quietly priced out of existence.&lt;/p&gt;

&lt;p&gt;Specifically, we lost:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The throwaway.&lt;/strong&gt; The five-minute hack to verify that an idea works. The "let me just see what this looks like rendered" experiment. With a scaffold, the cost of starting is high enough that you don't bother. You either pollute an existing big project, or you open the browser DevTools console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The teaching artifact.&lt;/strong&gt; A blog post used to be able to say &lt;em&gt;here, run this file&lt;/em&gt;. Now it says &lt;em&gt;clone this repo&lt;/em&gt;. The reader is no longer reading code; they are operating a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The micro-app.&lt;/strong&gt; The three-route admin tool. The internal status page. The webhook that posts a Slack message. Things that should be one file are now twenty, because the framework demands it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The shareable Gist.&lt;/strong&gt; I cannot send you a single &lt;code&gt;.jsx&lt;/code&gt; file and have you run it. I have to send you a repository — or a CodeSandbox URL, which is its own confession that the local toolchain has gotten too heavy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The curl-and-run.&lt;/strong&gt; Plain Node lets you stream a program straight from a URL into the runtime, no file on disk: &lt;code&gt;curl https://gist.githubusercontent.com/.../app.js | node&lt;/code&gt;. No clone, no install, no project to set up. The source travels over the wire, lands in the interpreter, runs. The same pattern should work for a single-file frontend app — &lt;code&gt;curl https://.../App.jsx | npx some-framework dev -&lt;/code&gt; — and the fact that this is &lt;em&gt;unimaginable&lt;/em&gt; today is the most concrete possible measurement of how heavy "starting a frontend app" has become. We have a JavaScript-shaped hole in our shells that the language used to fit through.&lt;/p&gt;

&lt;p&gt;There is a fractal version of this same pain one level down. Even &lt;em&gt;inside&lt;/em&gt; a project, modern React's &lt;code&gt;"use client"&lt;/code&gt; directive forces single features to be sharded across multiple files for purely mechanical reasons — the same disease, at smaller scale. I wrote about that version separately in &lt;a href="https://dev.to/lazarv/the-use-client-tax-1ed0"&gt;The "use client" Tax&lt;/a&gt;. What follows here is the project-level shape of the same problem: even when the whole app &lt;em&gt;should&lt;/em&gt; be one file, you are not allowed to write it that way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shape of the fix
&lt;/h2&gt;

&lt;p&gt;Imagine, for a second, that this just worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx next dev ./App.jsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file. One command. The framework picks it up, runs it, hot-reloads it, serves it. No &lt;code&gt;next.config.js&lt;/code&gt;, no &lt;code&gt;pages/&lt;/code&gt;, no &lt;code&gt;app/&lt;/code&gt;, no &lt;code&gt;package.json&lt;/code&gt;. If you decide later that you want a real project, you make the directory, add the config, split the file. The framework grows with you instead of demanding everything upfront.&lt;/p&gt;

&lt;p&gt;The technology to do this is not hard. Frameworks already build on dev servers — Vite, esbuild, Turbopack — that can resolve and bundle a single entry point. The framework conventions (file-based routing, layouts, server components) are conventions &lt;em&gt;over&lt;/em&gt; the bundler, not &lt;em&gt;replacements&lt;/em&gt; for it. There is no fundamental reason a framework's CLI cannot accept a path to a &lt;code&gt;.jsx&lt;/code&gt; file and Just Work, with the conventions kicking in only once you opt into a directory layout.&lt;/p&gt;

&lt;p&gt;The reason it doesn't work is not technical. It's cultural. We have decided, somewhere along the way, that &lt;em&gt;the project&lt;/em&gt; is the unit of frontend code, and the file is merely an implementation detail. Backend frameworks never made that mistake. You can still write a fifteen-line &lt;code&gt;server.js&lt;/code&gt; and run it. You can still write a Flask app in one file. You can still put a Go HTTP handler in &lt;code&gt;main.go&lt;/code&gt; and ship it. Scaffolds are offered as a convenience, not enforced as a precondition.&lt;/p&gt;

&lt;p&gt;Frontend should be no different.&lt;/p&gt;

&lt;h2&gt;
  
  
  One file in, one file out
&lt;/h2&gt;

&lt;p&gt;The single-file dev story is only half of the picture. The other half is what comes out when you build.&lt;/p&gt;

&lt;p&gt;Today, building a frontend project produces another tree. A &lt;code&gt;.next/&lt;/code&gt; directory. A &lt;code&gt;dist/&lt;/code&gt; directory. A &lt;code&gt;.output/&lt;/code&gt; directory. Hundreds of chunked JavaScript files, manifests, server bundles, client bundles, route maps — and a &lt;code&gt;node_modules&lt;/code&gt; you must ship alongside it, or carefully fold into the deployment artifact. Running the result usually means another framework-specific command (&lt;code&gt;next start&lt;/code&gt;, &lt;code&gt;node .output/server/index.mjs&lt;/code&gt;) that depends on the surrounding directory structure being intact.&lt;/p&gt;

&lt;p&gt;It should be possible to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;some-framework build ./App.jsx &lt;span class="nt"&gt;-o&lt;/span&gt; app.js
node app.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file in. One file out. No &lt;code&gt;node_modules&lt;/code&gt;, no config, no manifest, no &lt;code&gt;dist/&lt;/code&gt; to preserve. A single &lt;code&gt;.js&lt;/code&gt; that boots an HTTP server, serves the assets it needs (inlined or referenced), and runs on any Node install with nothing else next to it.&lt;/p&gt;

&lt;p&gt;Backend developers have had this for years, just under different names. Go produces a static binary. Deno compiles to a single executable. esbuild can bundle a Node program into one file. The pattern is universal: take everything the program needs, fold it into one artifact, ship that. Nothing about a React app — even a server-rendered, server-component-heavy React app — fundamentally prevents the same thing.&lt;/p&gt;
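&lt;p&gt;For the plain-Node case, that fold is already one command today. A sketch using esbuild's documented CLI flags, where &lt;code&gt;server.js&lt;/code&gt; stands in for whatever your entry point is:&lt;/p&gt;

```shell
# Bundle a Node program, dependencies included, into a single file...
npx esbuild server.js --bundle --platform=node --outfile=app.js

# ...and run the artifact anywhere Node is installed, with no node_modules.
node app.js
```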

&lt;p&gt;What this unlocks is bigger than convenience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trivial deployment.&lt;/strong&gt; &lt;code&gt;scp app.js server:/srv/ &amp;amp;&amp;amp; ssh server node /srv/app.js&lt;/code&gt;. No CI artifact pipelines, no Docker images for a webhook receiver, no Kubernetes for a status page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproducibility.&lt;/strong&gt; The artifact is a file. You can hash it, version it, archive it, email it. Not a directory whose contents quietly differ depending on which &lt;code&gt;npm install&lt;/code&gt; produced it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sandboxes.&lt;/strong&gt; A single file is something a sandbox runtime — a serverless platform, a worker, a container — can swallow whole, with no need to mount a &lt;code&gt;node_modules&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution.&lt;/strong&gt; Internal tools become as easy to share as a CLI binary. "Drop this on the server and run it" is a workflow we lost the moment frontends grew a build directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deploy story for a small app should be as small as the app. Right now, even a thirty-line frontend deploys like a monorepo.&lt;/p&gt;

&lt;h2&gt;
  
  
  And then AI showed up
&lt;/h2&gt;

&lt;p&gt;The scaffold tax used to be paid mostly by humans — a one-time annoyance you absorbed at project start, then forgot about. AI coding tools have quietly turned it into a recurring tax, paid on every interaction.&lt;/p&gt;

&lt;p&gt;When you ask an AI to modify a single-file app, it can read the entire program in one shot, hold the whole behavior in its working memory, and reason about a change with confidence. The file &lt;em&gt;is&lt;/em&gt; the project. There is nothing else to discover.&lt;/p&gt;

&lt;p&gt;When you ask an AI to modify a scaffolded project, it has to do archaeology first. Where does routing live? Which &lt;code&gt;tsconfig&lt;/code&gt; paths are aliased? Is that import resolved by a framework convention or by the bundler? Is &lt;code&gt;app/&lt;/code&gt; the routing root, or a coincidentally named folder? What does the project's ESLint config forbid? Half the request gets spent loading context that wasn't actually relevant to the change.&lt;/p&gt;

&lt;p&gt;This shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Worse answers&lt;/strong&gt;, because the model is reasoning under a noisier prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slower answers&lt;/strong&gt;, because more files have to be read before it can act.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More expensive answers&lt;/strong&gt;, because tokens are not free, and a fresh agent re-discovers the same project structure on every session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More fragile answers&lt;/strong&gt;, because the model has more surface area on which to misread a convention.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A one-file app is, by accident, the ideal substrate for AI-assisted coding: the entire program fits in a single attention window, every symbol resolves locally, and the change you ask for can be made without crawling a directory tree first. The convention overhead we built up to make starting a project "easier" turns out to be overhead we now pay &lt;em&gt;every time&lt;/em&gt; we ask a tool to help us edit one.&lt;/p&gt;

&lt;p&gt;The same things that made the single-file app pleasant to write by hand — small surface, no hidden conventions, nothing to discover — make it the format AI tools handle best. We just stopped producing apps in that shape.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;There is a subtle compounding effect to all of this. When the cost of starting is high, people start fewer things. When people start fewer things, the ecosystem gets less weird, less experimental, less playful. The thirty-line idea that would have become a beloved internal tool never gets written, because the scaffolding tax was higher than the energy budget for the experiment.&lt;/p&gt;

&lt;p&gt;The modern frontend stack is extraordinarily capable. It can render server components, stream HTML, hydrate selectively, generate static pages, run on the edge, do incremental builds. None of that is at odds with also being able to do &lt;em&gt;this&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;some-framework dev ./App.jsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a small surface area. It's enormously valuable. And it is, conspicuously, missing from almost every option you'd reach for today.&lt;/p&gt;

&lt;p&gt;The good news lives in that word: &lt;em&gt;almost&lt;/em&gt;. If you look around carefully, this capability is starting to reappear in the corners of the ecosystem — runtimes that treat the single file as a first-class entry point, not as a degenerate case of a project. It's worth keeping an eye on.&lt;/p&gt;

&lt;p&gt;The thirty-second app deserves to come back.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>javascript</category>
      <category>node</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Two Joys of Coding Before AI</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Mon, 27 Apr 2026 19:51:28 +0000</pubDate>
      <link>https://dev.to/lazarv/the-two-joys-of-coding-before-ai-1pbp</link>
      <guid>https://dev.to/lazarv/the-two-joys-of-coding-before-ai-1pbp</guid>
      <description>&lt;p&gt;There is a particular kind of grief floating around right now. You see it in blog posts, in conference talks, in late-night threads: a mourning for the joy of coding before AI. People describe it as if a forest has been paved over. Something they loved is gone, and something colder has taken its place.&lt;/p&gt;

&lt;p&gt;I think most of these conversations talk past each other because they skip the only question that matters: &lt;strong&gt;what was the joy actually made of?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Coding" is not one activity. It is at least two, braided together so tightly that for decades nobody had to separate them. AI pulls on one of those strands and not the other, and whether that feels like loss or liberation depends entirely on which strand you were holding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two kinds of joy
&lt;/h2&gt;

&lt;p&gt;Strip a programming session down to its emotional core and you find two distinct rewards.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;the joy of materializing a vision&lt;/strong&gt;. You see something in your head — a tool, an interface, a system, a small clever thing that does not exist yet — and you bring it into the world. The pleasure here is in the gap closing. The thing in your imagination and the thing on the screen converge until they are the same thing. The keyboard, the language, the build system: these are friction. Necessary friction, often beautiful friction, but friction. The joy lives at the moment of arrival.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;the joy of figuring something out&lt;/strong&gt;. A problem resists you. You sit with it. You pull on threads, build mental models, get them wrong, refine them, and somewhere — sometimes in the shower, sometimes at 2 a.m., sometimes mid-sentence in a meeting — the shape of the answer clicks into place. The pleasure here is not in arrival but in the act of comprehension itself. You understand something now that you did not understand an hour ago, and your brain rewards you for it the way it rewards eating when you are hungry.&lt;/p&gt;

&lt;p&gt;These are not the same feeling. They use different muscles. They satisfy different hungers. And — this is the important part — they leave behind different kinds of memory. A vision-materializer remembers what they built. A problem-solver remembers how the world bent into a new shape inside their head.&lt;/p&gt;

&lt;p&gt;Most working programmers feel both, in different proportions, sometimes in the same hour. But if you ask honestly which one is the &lt;em&gt;core&lt;/em&gt; — the thing that made you a programmer rather than something else — almost everyone has an answer.&lt;/p&gt;

&lt;h2&gt;
  What AI actually changes
&lt;/h2&gt;

&lt;p&gt;AI coding assistants are extraordinary materializers. That is what they are built to be. You describe the thing, they produce the thing. The friction between vision and artifact collapses dramatically. What used to be an afternoon of plumbing is now a paragraph and a review pass.&lt;/p&gt;

&lt;p&gt;If your joy was in the materialization — in seeing the thing exist — AI is not stealing anything from you. It is &lt;strong&gt;giving you more of what you loved&lt;/strong&gt;. The gap closes faster, which means you can close more gaps, which means more visions per unit of life. The friction you tolerated was never the source of the joy; the arrival was. You can build the second thing now, and the third, and the weird side project you never had time for. The hands-on craft loss is real but it is a craft loss, not a joy loss. You can still write the loop by hand on a Saturday if you want to. Nothing stops you.&lt;/p&gt;

&lt;p&gt;One thing has to be said clearly here, because it is the most common bad-faith reading of the materializer position: &lt;strong&gt;materialization is not "whatever shipped, shipped."&lt;/strong&gt; A vision is not just a silhouette of a feature; it has internal coherence, a way it behaves under pressure, a quality it carries. A materializer who accepts slop because it superficially resembles the artifact in their head has not closed the gap — they have moved the goalposts to meet the output. That is not the joy of materializing a vision. That is the relief of being done. They are different feelings, and conflating them is how teams end up shipping confident-sounding garbage at unprecedented speed. The AI gives you a draft. The work — the actual materializer's work — is to keep pushing the draft until it matches the thing in your head, including the parts of the thing in your head that have to do with correctness, taste, performance, and how it will read to the next person. Acceleration is acceleration toward the &lt;em&gt;right&lt;/em&gt; artifact, not toward any artifact. A materializer who forgets this is no longer practicing their craft; they are just hitting accept.&lt;/p&gt;

&lt;p&gt;If your joy was in the figuring out, the picture is genuinely different — and the grief is genuinely earned. The AI is not just removing friction; it is removing &lt;strong&gt;the problem itself&lt;/strong&gt;. The puzzle you would have sat with for three days, turning it over in your head on the train and in the shower, building the mental model that becomes part of how you think forever — the AI hands you an answer in twelve seconds. The answer is often correct. And the comprehension that would have grown in you while you struggled does not grow, because you did not struggle.&lt;/p&gt;

&lt;p&gt;This is not a complaint about laziness or skill atrophy, though those are real concerns. It is something more specific: a category of human pleasure, the pleasure of &lt;em&gt;understanding something hard&lt;/em&gt;, requires the hardness. Remove the hardness and you remove the pleasure, even if you keep the answer. You cannot have the satisfaction of a crossword without the crossword.&lt;/p&gt;

&lt;h2&gt;
  Why the debate is so confused
&lt;/h2&gt;

&lt;p&gt;Once you see this split, the public conversation about AI and coding starts to make more sense. The two camps are not actually disagreeing about AI. They are reporting honestly on two different inner experiences.&lt;/p&gt;

&lt;p&gt;The "AI is wonderful, I ship five times faster" camp is overwhelmingly populated by materializers. They are telling the truth. Their joy is intact and amplified.&lt;/p&gt;

&lt;p&gt;The "AI is hollowing out my craft" camp is overwhelmingly populated by problem-solvers. They are also telling the truth. Their joy is, in fact, being eroded — not by malice or hype, but by the specific mechanism of having the puzzles solved before they get to play with them.&lt;/p&gt;

&lt;p&gt;When these two groups argue, they sound like they are arguing about a tool. They are actually arguing about which of two pleasures is the real one. There is no answer to that question, because there are two answers, and they are both correct for the person giving them.&lt;/p&gt;

&lt;p&gt;Notice how each camp uses the same phrase to dismiss the other's grief. To the materializer, the problem-solving was always an &lt;em&gt;implementation detail&lt;/em&gt; — a means, a tax you paid on the way to the artifact, something a sufficiently advanced tool was supposed to absorb eventually. To the problem-solver, the shipped artifact was the implementation detail — the residue, the visible echo of an internal event that had already happened in their head. Each side, in good faith, treats the other side's joy as the boring scaffolding around their own. That is why the conversation goes nowhere: both sides are correctly identifying what is, &lt;em&gt;for them&lt;/em&gt;, incidental.&lt;/p&gt;

&lt;h2&gt;
  What to do about it
&lt;/h2&gt;

&lt;p&gt;If you are mourning, the first useful move is to ask yourself the honest version of the question. Not "do I miss coding," but: &lt;em&gt;which kind of coding?&lt;/em&gt; When you replay the moments you would call joyful, are you watching yourself ship the thing, or are you watching yourself understand the thing? The answer tells you whether you are facing an opportunity or a loss.&lt;/p&gt;

&lt;p&gt;For materializers, the path is mostly forward. Use the tools. Build more. The thing you loved is more available now, not less.&lt;/p&gt;

&lt;p&gt;For problem-solvers, the answer is harder and more deliberate. The puzzles still exist; they have just stopped arriving on their own. Production code paths now route around them. To keep the joy, you have to &lt;strong&gt;choose the friction back in&lt;/strong&gt; — pick problems the AI cannot solve cleanly, work in domains where the model is weak, build from scratch on weekends, read papers, do the leetcode-equivalent that is actually interesting to you, contribute to runtimes and compilers and other places where the problem space is still deep enough that no autocomplete can shortcut it. The protected hour where you do not ask the assistant is not a Luddite stance; it is a deliberate preservation of the conditions your joy requires.&lt;/p&gt;

&lt;p&gt;Both responses are healthy. Both are grown-up. What is not healthy is conflating them — using a materializer's optimism to dismiss a problem-solver's grief, or using a problem-solver's grief to deny a materializer's genuine, earned acceleration.&lt;/p&gt;

&lt;h2&gt;
  The thing under the thing
&lt;/h2&gt;

&lt;p&gt;The deeper claim hiding inside all of this is that &lt;em&gt;coding was never one thing&lt;/em&gt;. It was a workbench where two very different human pleasures happened to use the same tools. The industry treated them as one because the workflow forced them to be one — you could not materialize a vision without solving a hundred small problems along the way, and you could not solve interesting problems without something to materialize them into.&lt;/p&gt;

&lt;p&gt;AI is the first force strong enough to pull those two pleasures apart. It is doing so cleanly and without asking permission. What we are watching is not the death of the joy of coding. It is the unbundling of two joys that were always separate, finally being forced to admit it.&lt;/p&gt;

&lt;p&gt;Which one was yours?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>RSC as a serializer, not a model</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Mon, 27 Apr 2026 17:32:02 +0000</pubDate>
      <link>https://dev.to/lazarv/rsc-as-a-serializer-not-a-model-56nj</link>
      <guid>https://dev.to/lazarv/rsc-as-a-serializer-not-a-model-56nj</guid>
      <description>&lt;p&gt;The most interesting thing about how TanStack Start integrates React Server Components is not what it does with them. It is the shape of the API.&lt;/p&gt;

&lt;p&gt;You do not write a Server Component. You do not opt into a model. You opt into a &lt;em&gt;function&lt;/em&gt; — a programmatic primitive that takes a component, runs it through the RSC renderer, and hands you back a payload you can store, transport, and rehydrate later. RSC is not the substrate of the application. It is a tool you reach for at specific call sites, when you want a specific thing.&lt;/p&gt;

&lt;p&gt;That framing is worth pausing on, because it tells you something about the design intent that the marketing copy does not.&lt;/p&gt;

&lt;h2&gt;
  What the API is really for
&lt;/h2&gt;

&lt;p&gt;The use case being served is narrow and clear: caching values that the rest of the JavaScript ecosystem cannot cache. A normal cache stores JSON. JSON cannot represent a React element. JSON cannot pass through a value that resolves later. JSON cannot carry the richer plain types — Map, Set, Date, typed arrays — that show up in a real component tree. RSC's wire format can. It was &lt;em&gt;designed&lt;/em&gt; to. That is what the protocol is.&lt;/p&gt;

&lt;p&gt;So if you take the RSC renderer, strip it of everything that makes it a composition model, and expose just the serializer, you get a primitive that turns "a tree containing UI and non-JSON props" into "a string you can put in Redis." That is a real capability. It is also, on reflection, a very small one.&lt;/p&gt;

&lt;p&gt;This is the capability TanStack Start has surfaced. Not Server Components. A serializer for component-shaped values, exposed as a programmatic API.&lt;/p&gt;

&lt;h2&gt;
  If the goal is caching, name it caching
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;"use cache"&lt;/code&gt; is the design that already exists for this exact problem. It is a directive. It tells the runtime: &lt;em&gt;this output is cacheable, key it on these inputs, store it for this long&lt;/em&gt;. The runtime handles serialization, deserialization, key derivation, invalidation, and storage. The developer writes a function and adds one line.&lt;/p&gt;

&lt;p&gt;Concretely: imagine a post card — a tree containing a server-rendered article and a client-rendered set of actions — produced once and reused across navigations. Under a directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;PostCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;postId&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use cache&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PostActions&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authorId&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The runtime knows what the cache key is (the inputs the function closes over), where the value lives, when it expires, and how to serialize it (the same wire format the rest of the system already speaks). One line of metadata and a function body. The developer writes a component.&lt;/p&gt;
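&lt;p&gt;What "the runtime knows the key" means can be sketched mechanically. The names below (&lt;code&gt;deriveKey&lt;/code&gt;, &lt;code&gt;buildId&lt;/code&gt;, &lt;code&gt;functionId&lt;/code&gt;) are illustrative, not any framework's actual implementation:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Sketch of directive-style keying: the runtime derives the key from
// things it can already see. Real implementations also fold in
// closed-over values captured at compile time.
function deriveKey(buildId: string, functionId: string, inputs: unknown[]) {
  return createHash("sha256")
    .update(buildId)                 // a redeploy invalidates everything
    .update(functionId)              // which cached function this is
    .update(JSON.stringify(inputs))  // the arguments it was called with
    .digest("hex");
}

const a = deriveKey("build-1", "PostCard", ["post-42"]);
const b = deriveKey("build-1", "PostCard", ["post-42"]); // same key: cache hit
const c = deriveKey("build-1", "PostCard", ["post-43"]); // different input, different key
```

&lt;p&gt;None of this appears in the developer's code; it is the runtime's half of the directive's contract.&lt;/p&gt;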

&lt;p&gt;Under TanStack Start's API, the same tree is produced by driving the protocol explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createServerFn&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@tanstack/react-start&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createCompositeComponent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@tanstack/react-start/rsc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getPostCard&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createServerFn&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createCompositeComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;renderActions&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;d&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ReactNode&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;footer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;renderActions&lt;/span&gt;&lt;span class="p"&gt;?.({&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authorId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;footer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;article&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFileRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/posts/$postId&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
  &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getPostCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PostPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;PostPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useLoaderData&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CompositeComponent&lt;/span&gt;
      &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;renderActions&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;authorId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PostActions&lt;/span&gt; &lt;span class="na"&gt;postId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;postId&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;authorId&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six things have to agree across this code: the server function that produces the payload, the validator that gates its inputs, the &lt;code&gt;createCompositeComponent&lt;/code&gt; call that names the seam, the loader entry on the route, the &lt;code&gt;useLoaderData&lt;/code&gt; site that retrieves the payload, and the &lt;code&gt;&amp;lt;CompositeComponent&amp;gt;&lt;/code&gt; site that rehydrates it — including the render-prop signature that has to match on both sides. Caching is implicit in the router's &lt;code&gt;staleTime&lt;/code&gt; and &lt;code&gt;loaderDeps&lt;/code&gt;. Invalidation is the router's responsibility and shaped by the route boundary, not the value boundary.&lt;/p&gt;

&lt;p&gt;For the simple case, none of these decisions has a second defensible answer. Every team using this will write the same wrapper around it, in slightly different shapes, and the bugs will all live in the relationships between the serialization site, the render-prop signature, and the rehydration site.&lt;/p&gt;

&lt;p&gt;The case where this matters most is the one the framework does not address at all: caching a component-shaped value at request scope. The docs recommend &lt;code&gt;React.cache&lt;/code&gt; for the request-scoped case, but &lt;code&gt;React.cache&lt;/code&gt; deduplicates plain function results — it does not memoize trees that contain client component references and closures over server-only data. There is no primitive shaped for that. The same computation can run three times in one request because three parts of the tree need it, and the framework's answer is the route-level cache, which is route-scoped, or &lt;code&gt;React.cache&lt;/code&gt;, which does not apply.&lt;/p&gt;

&lt;p&gt;A directive-shaped API hides those decisions because most of them only have one correct answer. A library-shaped API exposes them because the framework refuses to commit to one.&lt;/p&gt;

&lt;h2&gt;
  Boundary wrappers, protocol wrappers
&lt;/h2&gt;

&lt;p&gt;There is a more general problem under this. RSC, as a &lt;em&gt;protocol&lt;/em&gt;, is a serialization format. RSC, as a &lt;em&gt;model&lt;/em&gt;, is a composition story between two environments. When a framework lifts the protocol out of the model and exposes it as a programmable primitive, what it is really saying is: "the protocol is interesting to us, the model is not."&lt;/p&gt;

&lt;p&gt;It would be easy to read this as a complaint about wrappers in general. It is not. A wrapper-shaped API is often the right answer, and in TanStack Start specifically it is the dominant idiom. Routes are wrappers. Loaders are wrappers. Server functions are wrappers. The whole framework is built around the pattern of &lt;em&gt;pass your thing through this constructor, get back a richer thing the runtime now understands&lt;/em&gt;. A wrapper for RSC caching slots into that idiom cleanly.&lt;/p&gt;

&lt;p&gt;The argument is about what a wrapper &lt;em&gt;carries&lt;/em&gt; across the boundary it creates, because there are two kinds and they look almost identical until you start using them.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;boundary&lt;/em&gt; wrapper says: this is where the model changes shape. It takes a value, marks a seam the runtime will operate on, and hands the developer a richer reference back. The developer never touches the protocol underneath; they only see the seam, named and located. The wrapper carries the model with it — calling it is participating in the model.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;protocol&lt;/em&gt; wrapper says: here is the renderer, here is the payload, here is the deserializer. It hands the developer the wire format and asks them to drive it. The seam is gone. What is left is a set of operations on bytes that the developer has to compose themselves.&lt;/p&gt;
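&lt;p&gt;The difference is easiest to see as API shape. Neither of these functions exists under these names; they are a sketch of the two contracts:&lt;/p&gt;

```typescript
// Illustrative shapes only; none of these names are real APIs.

// A boundary wrapper: value in, richer opaque reference out. The
// caller never touches the protocol; calling the result is
// participating in the model.
function boundaryWrap(render: (props: object) => unknown) {
  return { kind: "cached-seam", render }; // the runtime owns the rest
}

// A protocol wrapper: the caller is handed the wire operations and
// must keep the produce and consume sites in agreement themselves.
function serializeTree(tree: unknown): string {
  return JSON.stringify(tree); // stand-in for the real wire format
}
function rehydrateTree(payload: string): unknown {
  return JSON.parse(payload);
}

// One call site versus two sites plus the glue between them:
const seam = boundaryWrap((props) => props);
const payload = serializeTree({ type: "article" });
const tree = rehydrateTree(payload);
```

&lt;p&gt;The outer shapes are nearly identical, which is why the two kinds are so easy to mistake for each other until you are the one holding the glue.&lt;/p&gt;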

&lt;p&gt;TanStack Start's other wrappers — the route, the loader, the server function — are boundary wrappers. The framework knows what the wrapped thing is, the runtime knows what to do with it, and the developer writes their code in the vocabulary of the model rather than the vocabulary of the protocol underneath. That is what makes those APIs feel coherent next to each other.&lt;/p&gt;

&lt;p&gt;The RSC integration is the exception. The outer shape is the same — a function you call, a value you hand it, a value you get back — but there is no model on the other side. The developer is handed the renderer instead of being handed a richer reference.&lt;/p&gt;

&lt;h2&gt;
  The defensible version of the choice
&lt;/h2&gt;

&lt;p&gt;The strongest case for what TanStack has shipped goes something like this. The framework's audience is router-driven and SPA-adjacent. Committing to the full RSC composition model would constrain the rest of the architecture in ways that conflict with how the framework already works. A library-shaped API leaves room for teams whose caching needs really do diverge. A directive forces a single shape on every caller, and that shape — &lt;code&gt;"use cache"&lt;/code&gt; as it ships today — is itself not a settled design. Its key inference, its invalidation story, and its "what is safe to close over" semantics have known warts. Why commit to it?&lt;/p&gt;

&lt;p&gt;The first half of that case is real. The second half is the one to push back on.&lt;/p&gt;

&lt;p&gt;A directive does not have to mean &lt;code&gt;"use cache"&lt;/code&gt; exactly as it ships today. The directive is a shape, not a specification. The shape is: a marker at the boundary, a runtime that infers the keying and storage, a developer who writes a function. The runtime gets to choose how aggressive the inference is, what scope the directive defaults to, and what the escape hatches look like. A request-scoped variant is &lt;em&gt;easier&lt;/em&gt; than the application-cache variant &lt;code&gt;"use cache"&lt;/code&gt; currently targets, not harder, because almost every decision has one canonical answer at the request scope.&lt;/p&gt;

&lt;p&gt;The "we left it open because needs diverge" defense applies to primitives where they really do diverge. Application caches diverge — TTLs, tags, storage. Request-scoped caches do not. The needs are: dedupe within a render, survive into hydration, evaporate at request end. That is the specification. There is no second team whose answer to those questions is meaningfully different.&lt;/p&gt;

&lt;p&gt;The honest version of the architectural choice is: "we don't want to commit to the model yet, and we don't want to ship a caching directive without committing to the model, so we shipped the renderer as a library and let users assemble what they need." That is a coherent position. It is just not the position the docs reflect.&lt;/p&gt;

&lt;h2&gt;
  What is actually being shipped
&lt;/h2&gt;

&lt;p&gt;To make this caching primitive work, the framework has to bundle the RSC renderer, the RSC serializer, the matching deserializer, the streaming format reader, and enough of the React internals to drive all of it. That is the entire RSC machinery, intact — including the parts the framework will not let the developer use.&lt;/p&gt;

&lt;p&gt;The framework does not have a &lt;code&gt;"use client"&lt;/code&gt; boundary at all. Interactivity is injected through slots on &lt;code&gt;&amp;lt;CompositeComponent&amp;gt;&lt;/code&gt; — &lt;code&gt;children&lt;/code&gt;, render props, component props — not through client component references resolved at hydration. The reference-resolution machinery RSC was designed around, the matched identifiers across server and client builds, the client manifest, the hydration-time dispatch, all of it is bundled but inert. The framework has actively engineered an alternative path (the slot pattern) that routes around the very mechanism RSC exists to provide. The user experience of writing a TanStack Start application is the user experience of writing a router-driven app that occasionally summons the RSC serializer for caching.&lt;/p&gt;

&lt;p&gt;So the framework pays the full cost of the protocol — every byte of the serializer, every transitive dependency, every line of React internals it has to remain compatible with — and uses it to power a feature that, on its own, could be expressed as a directive over a much smaller mechanism. The bundle includes the engine of a model the framework has chosen not to adopt.&lt;/p&gt;

&lt;p&gt;There is a third option the design did not take: a fit-for-purpose serializer. The capability TanStack Start actually needs — turning a rendered component tree, slot placeholders and non-JSON values included, into bytes and back — is a small one. It is not the RSC protocol. RSC is that capability &lt;em&gt;plus&lt;/em&gt; a streaming format with suspense boundaries, &lt;em&gt;plus&lt;/em&gt; server reference dispatch, &lt;em&gt;plus&lt;/em&gt; the entire client-resolution layer the framework has already chosen not to use. A protocol designed for the narrower job would be smaller, evolvable on the framework's own cadence, and decoupled from React's internals. That a project willing to ship its own router, its own loader system, its own server-function wrapper, and its own composite-component primitive nevertheless reached for the entire RSC implementation tells you the choice was not driven by what the feature needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The agent reading the code
&lt;/h2&gt;

&lt;p&gt;There is one more lens worth applying, because in 2026 it is the lens that shapes a growing share of code: the AI coding agent.&lt;/p&gt;

&lt;p&gt;The relevant claim is narrower than "agents will get directives right and wrappers wrong." It is structural: a directive compresses the model into the syntax; a protocol wrapper requires the model to be reconstructed at every call site, and that reconstruction is where coordination bugs compound.&lt;/p&gt;

&lt;p&gt;An agent producing a directive-based caching feature emits a marker, a function body, and the inputs the developer already named. The keying, scope, invalidation, and lifetime are the runtime's contract — the agent does not produce them, so it cannot produce them wrong.&lt;/p&gt;

&lt;p&gt;An agent producing a protocol-wrapper caching feature has to emit a serialization call, a storage decision, a key derivation, an invalidation hook, a deserialization call, a rehydration site, and the glue holding all of them in agreement. The shape of each piece is locally reasonable. The bug lives in the relationships — a stale key, a missed invalidation hook, a request-scoped value silently captured in the payload, a payload rehydrated at the wrong boundary. An agent reviewing its own output will not flag these, because each part on its own makes sense.&lt;/p&gt;
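&lt;p&gt;A hedged sketch makes the asymmetry concrete. None of the names below come from any real framework API; the point is how many coordinated decisions the wrapper path forces a developer (or an agent) to restate by hand, versus the single marker a directive would be:&lt;/p&gt;

```typescript
// Hypothetical illustration: these names are invented for this sketch,
// not taken from TanStack Start or React.

// Directive version (the entire developer-facing surface):
//   "use cache";
//   export async function ProductCard({ id }) { ... }

// Wrapper version: every relationship is hand-written glue.
const requestCache = new Map<string, string>(); // storage decision

function deriveKey(component: string, inputs: unknown[]): string {
  // Key derivation: forget one input here and you serve stale output.
  return component + ":" + JSON.stringify(inputs);
}

function cachedRender(
  component: string,
  inputs: unknown[],
  render: () => unknown
): unknown {
  const key = deriveKey(component, inputs);
  const hit = requestCache.get(key);
  if (hit !== undefined) {
    return JSON.parse(hit); // deserialization call (stand-in for the protocol reader)
  }
  const output = render(); // the actual work
  requestCache.set(key, JSON.stringify(output)); // serialization call (stand-in for the serializer)
  return output;
}

// The caller re-states the inputs a directive runtime would have inferred.
let renders = 0;
const render = () => {
  renders++;
  return { id: 7, name: "Widget" };
};
cachedRender("ProductCard", [7], render);
const second = cachedRender("ProductCard", [7], render); // cache hit: render runs once
```

&lt;p&gt;Each piece above is locally reasonable; the failure modes live in the relationships between them, exactly the places a directive runtime would own.&lt;/p&gt;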

&lt;p&gt;Less code is not only easier for humans. It is easier for agents — fewer tokens to generate, fewer relationships to track, fewer places to be subtly wrong. Boundary primitives compress an enormous amount of model into a tiny amount of syntax. Protocol APIs decompress the model back into surface area and ask the agent to operate it. Increasingly, an API surface that is hostile to agents is an API surface that is hostile to its own users.&lt;/p&gt;

&lt;h2&gt;
  
  
  What gets lost
&lt;/h2&gt;

&lt;p&gt;A framework gets to choose the level of abstraction it commits to. If it commits to RSC as a model, the developer writes components and the runtime handles the seams. If it commits to caching as a model, the developer writes directives and the runtime handles serialization. If it commits to neither and exposes the protocol as a library, the developer writes glue.&lt;/p&gt;

&lt;p&gt;Glue is the most expensive code in any application. It is the code that does not solve the problem; it only connects the things that do. It also ages worst, because every change to the surrounding ecosystem requires the glue to be rewritten while the parts on either side of it stay the same. The moment a request-scoped value becomes something the developer has to manually serialize, key, store, and revive, the framework has already lost the lifecycle it was supposed to protect.&lt;/p&gt;

&lt;p&gt;A framework whose RSC story is a library call is asking every team that uses it to write and maintain that glue forever. The convenience is moved out of the runtime and into a thousand small repositories that all rediscover the same patterns and the same failure modes. The framework gets to ship a thinner runtime; the ecosystem absorbs the cost.&lt;/p&gt;

&lt;p&gt;That trade is sometimes worth it — for a primitive that genuinely has no canonical shape. Caching component output is not that primitive. It has a canonical shape. The shape is a directive. The reason it is not the API is not that the shape is wrong. It is that adopting the shape would require committing to the surrounding model, and that commitment was the thing the framework was avoiding from the start.&lt;/p&gt;

&lt;h2&gt;
  
  
  The smaller point
&lt;/h2&gt;

&lt;p&gt;Strip the protocol away and the design becomes legible. The team wanted a way to cache values that JSON could not represent. They had access to a serializer that could. They exposed the serializer. They called it RSC support.&lt;/p&gt;

&lt;p&gt;It is RSC support in the sense that the renderer is in the bundle. It is not RSC support in the sense that the developer is asked to think in the RSC model, write in its idioms, or benefit from its composition story. The renderer is a load-bearing dependency for a feature that is not, itself, the model the renderer was designed to power.&lt;/p&gt;

&lt;p&gt;When a framework treats a model as a primitive instead of as the model, it is telling you that it wanted something the model happened to make possible, not the model itself. There is nothing wrong with that as a product decision — but it should be named accurately. The honest version of this feature is a caching directive. The honest version of the framing is "we ship a serializer-backed cache." Everything else is a story told in the vocabulary of a model the framework chose not to adopt.&lt;/p&gt;

</description>
      <category>rsc</category>
      <category>tanstack</category>
      <category>react</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Project, Don't Embed: Introducing Virtual Frame</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Tue, 21 Apr 2026 08:48:43 +0000</pubDate>
      <link>https://dev.to/lazarv/project-dont-embed-introducing-virtual-frame-klb</link>
      <guid>https://dev.to/lazarv/project-dont-embed-introducing-virtual-frame-klb</guid>
      <description>&lt;p&gt;&lt;em&gt;A first-time introduction to Virtual Frame — what it is, why it exists, and how it composes independently deployed web apps into a single page without a shared build.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There's a problem every large frontend eventually runs into, and it goes something like this: you have five teams, five stacks, five deployment pipelines — and one page. The design system lives in team A's repo. The checkout widget belongs to team B. The dashboard you're trying to embed was last touched by a team that no longer exists. None of these things ship together. All of them need to render on the same page, at the same time, and feel like one product.&lt;/p&gt;

&lt;p&gt;The industry has been trying to solve this for a decade. The solutions all have a tax.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Module federation&lt;/strong&gt; is great when host and remote share a build pipeline, and useless the moment they don't. The first time you need a coordinated upgrade across three repos, the "independent teams" story evaporates. &lt;strong&gt;Iframes&lt;/strong&gt; give you perfect isolation and zero composability — you get a rigid rectangle that doesn't flow with your layout, can't inherit your theme, and picks fights about scroll, focus, and accessibility. &lt;strong&gt;Edge-side includes and server fragments&lt;/strong&gt; work beautifully for static markup and fall apart the moment the remote needs its own runtime. &lt;strong&gt;Ad-hoc SPA shells&lt;/strong&gt; work until they don't; then you're debugging a shared React instance that sees two different versions of the same context.&lt;/p&gt;

&lt;p&gt;Each of these is the right answer to a different question. None of them is the right answer to the question most teams are actually asking: &lt;em&gt;how do I compose fully independent web applications — different repos, different frameworks, different deploys — into one page, with real layout flow and real interactivity, without coupling any of them at build time?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Virtual Frame is a bet that you can do that if you stop thinking about &lt;em&gt;embedding&lt;/em&gt; remote applications and start thinking about &lt;em&gt;projecting&lt;/em&gt; them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one-paragraph version
&lt;/h2&gt;

&lt;p&gt;Picture a hidden iframe loading another team's page. The page runs normally in there — same as if you'd opened it in a new tab. Virtual Frame takes what that hidden page is drawing and paints it into a slot on your own page, live. When the remote updates, your slot updates. When your users click, scroll, or type inside the slot, Virtual Frame forwards those interactions back to the hidden page so its app keeps working like it's running in a real browser tab.&lt;/p&gt;

&lt;p&gt;The result: the remote app's output is part of your page. It flows with your layout, picks up your theme, and behaves like any other piece of your UI — because, as far as the browser is concerned, it &lt;em&gt;is&lt;/em&gt; any other piece of your UI. The remote keeps running in its own world; its output lives in yours.&lt;/p&gt;

&lt;p&gt;That's the entire pitch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this mental shift matters
&lt;/h2&gt;

&lt;p&gt;The most important word in the previous section is &lt;em&gt;projecting&lt;/em&gt;, and it's worth slowing down on.&lt;/p&gt;

&lt;p&gt;When you embed something — an iframe, a web component, a third-party widget — you're accepting its frame. The embedded thing has a boundary, and things stop at that boundary. Layout stops there. Events stop there. Themes stop there. You're not composing, you're docking.&lt;/p&gt;

&lt;p&gt;When you &lt;em&gt;project&lt;/em&gt;, the boundary collapses. The remote application still runs in its own browsing context — that part is non-negotiable, it's how you keep the remote's runtime from tripping over yours — but its rendered output flows into your page the way a child component's output would. The &lt;em&gt;execution&lt;/em&gt; is isolated; the &lt;em&gt;presentation&lt;/em&gt; is composed.&lt;/p&gt;

&lt;p&gt;This is a surprisingly slippery idea the first time you encounter it, because we've been trained by twenty years of iframe behavior to think of "other-origin content" and "rigid rectangle" as the same thing. They're not. The rigidity is an implementation detail of how browsers expose iframe contents, not a law of nature.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it actually works
&lt;/h2&gt;

&lt;p&gt;Three primitives, no magic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A source iframe&lt;/strong&gt;, hidden off-screen. The remote runs as a complete, standalone application inside it — its framework boots, its router runs, its effects fire, its fonts load. Virtual Frame doesn't re-execute your app; it observes it. That's the key constraint: nothing about your remote has to change for it to be projectable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A host element&lt;/strong&gt;, which is any element you put on the page — a &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt;, a &lt;code&gt;&amp;lt;section&amp;gt;&lt;/code&gt;, a component root. Virtual Frame optionally attaches a Shadow DOM to it (open or closed) and mirrors the remote's &lt;code&gt;&amp;lt;body&amp;gt;&lt;/code&gt; subtree into that shadow root. Shadow DOM gives you CSS isolation without giving up custom-property inheritance, which is the sweet spot: the remote's styles can't bleed into your page, but your theme still crosses the boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A sync layer&lt;/strong&gt;. Same-origin, this is a MutationObserver on the source plus CSS rewriting and event re-dispatch. Cross-origin, a bridge script loaded once in the remote serializes the DOM and events over &lt;code&gt;postMessage&lt;/code&gt;, and the host reconstructs them. The cross-origin story is worth dwelling on, because this is where iframes usually give up: &lt;em&gt;there is no host-side configuration&lt;/em&gt;. Drop the bridge into the remote, and every host on every origin can project it.&lt;/p&gt;
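&lt;p&gt;The same-origin and cross-origin paths share one idea: the remote's tree becomes plain data, crosses a boundary, and is rebuilt on the host. The toy sketch below is not Virtual Frame's actual wire format, just an illustration of that shape, with a JSON string standing in for the &lt;code&gt;postMessage&lt;/code&gt; payload:&lt;/p&gt;

```typescript
// Illustrative only: a toy version of "serialize the DOM, send it over
// postMessage, rebuild it on the host". No browser APIs are used so the
// shape of the idea stays visible.
type WireNode =
  | { kind: "text"; text: string }
  | { kind: "element"; tag: string; attrs: Record<string, string>; children: WireNode[] };

// Remote side: flatten a (DOM-like) tree into JSON-safe data.
function serialize(node: WireNode): string {
  return JSON.stringify(node); // postMessage would carry this payload
}

// Host side: rebuild markup from the wire payload
// (a real host would build live DOM nodes in the shadow root).
function toMarkup(node: WireNode): string {
  if (node.kind === "text") return node.text;
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${node.tag}${attrs}>` + node.children.map(toMarkup).join("") + `</${node.tag}>`;
}

const remoteTree: WireNode = {
  kind: "element",
  tag: "button",
  attrs: { class: "buy" },
  children: [{ kind: "text", text: "Buy now" }],
};

// The "postMessage" boundary: only the string crosses it.
const payload = serialize(remoteTree);
const projected = toMarkup(JSON.parse(payload) as WireNode);
```

&lt;p&gt;Only serialized data crosses the boundary (no live objects, no shared globals), which is why the host needs no configuration beyond a place to put the result.&lt;/p&gt;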

&lt;p&gt;Everything else — selector-based projection, canvas streaming, SSR with resumption, the shared reactive store — is built on top of these three primitives. But the primitives are simple, and that's on purpose. Simple primitives compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  A taste of code
&lt;/h2&gt;

&lt;p&gt;The shortest path is the custom element. This is a complete working projection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"module"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;virtual-frame/element&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;virtual-frame&lt;/span&gt;
  &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://dashboard.example.com"&lt;/span&gt;
  &lt;span class="na"&gt;isolate=&lt;/span&gt;&lt;span class="s"&gt;"open"&lt;/span&gt;
  &lt;span class="na"&gt;style=&lt;/span&gt;&lt;span class="s"&gt;"width: 100%; height: 600px"&lt;/span&gt;
&lt;span class="nt"&gt;&amp;gt;&amp;lt;/virtual-frame&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No build plugin. No config file. No framework buy-in. When the element connects to the DOM, it creates a hidden iframe at &lt;code&gt;src&lt;/code&gt;, attaches a shadow root to the custom element, and starts projecting. When the element is removed, everything tears down.&lt;/p&gt;

&lt;p&gt;If you want to project only a piece of the remote — a chart, a panel, a sidebar — add a selector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;virtual-frame&lt;/span&gt;
  &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://dashboard.example.com"&lt;/span&gt;
  &lt;span class="na"&gt;selector=&lt;/span&gt;&lt;span class="s"&gt;"#metrics-chart"&lt;/span&gt;
  &lt;span class="na"&gt;isolate=&lt;/span&gt;&lt;span class="s"&gt;"open"&lt;/span&gt;
&lt;span class="nt"&gt;&amp;gt;&amp;lt;/virtual-frame&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full remote still runs in the background, so the selected subtree behaves exactly as it would in its native page. Its event handlers work. Its data fetches work. Its animations work. You just pulled a widget out of another team's app without negotiating an API contract.&lt;/p&gt;

&lt;p&gt;If you're in React, it's a component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VirtualFrame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useVirtualFrame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@virtual-frame/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createStore&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@virtual-frame/store&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createStore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;store&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;count&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useVirtualFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://remote.example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;store&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Count: &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VirtualFrame&lt;/span&gt; &lt;span class="na"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;frame&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#counter"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;store&lt;/code&gt; there is worth a second look. It's an event-sourced, proxy-based reactive object that synchronizes between host and remote automatically. Read and write it like a plain object on either side; mutations propagate over the same message channel the projection uses. It's the piece you reach for when "host and remote need to share state" comes up, and it means you don't have to invent a coordination protocol.&lt;/p&gt;
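&lt;p&gt;As a rough mental model (this is not the &lt;code&gt;@virtual-frame/store&lt;/code&gt; implementation, just the idea), an event-sourced proxy store records each mutation as an ordered event and replays the log on the other side of the channel:&lt;/p&gt;

```typescript
// Sketch of the idea behind an event-sourced, proxy-based shared store.
type SetEvent = { key: string; value: unknown };

function createMirroredStore(onEvent: (e: SetEvent) => void) {
  const state: Record<string, unknown> = {};
  // The Proxy intercepts plain property writes and turns them into events.
  return new Proxy(state, {
    set(target, key, value) {
      target[key as string] = value;
      onEvent({ key: key as string, value }); // would travel over postMessage
      return true;
    },
  });
}

// Simulate the two sides of the message channel.
const events: SetEvent[] = [];
const hostStore = createMirroredStore((e) => events.push(e));

const remoteState: Record<string, unknown> = {};
function drainChannel() {
  // Replaying the ordered log keeps the remote replica in sync.
  for (const e of events.splice(0)) remoteState[e.key] = e.value;
}

hostStore["count"] = 1;
hostStore["count"] = 2; // later writes win: the log is ordered
drainChannel();
```

&lt;p&gt;Replaying an ordered log instead of sharing references is what lets both sides treat the store as a plain object without ever holding the same object.&lt;/p&gt;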

&lt;p&gt;There are first-class bindings for Vue, Svelte, Solid, Angular, Next.js, Nuxt, SvelteKit, TanStack Start, SolidStart, Analog, React Router, and a few more. They all sit on the same core engine. Pick whichever is closest to how your page is built.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you actually get
&lt;/h2&gt;

&lt;p&gt;A few consequences of the projection model that are worth naming explicitly, because they're the things that tend to surprise people:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layout flow just works.&lt;/strong&gt; Projected content fills its host element's box and participates in flex and grid like any other child. No &lt;code&gt;width="100%"&lt;/code&gt;-then-cry dance. No resize observer hacks. No scroll-inside-scroll confusion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theming crosses the boundary.&lt;/strong&gt; CSS custom properties inherit into the shadow root, so your design tokens — colors, spacing, fonts, dark mode — reach the projection without any coordination with the remote team. They don't even need to know your tokens exist; they just need to use &lt;code&gt;var()&lt;/code&gt; for things they want themeable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility is inherited.&lt;/strong&gt; The projected DOM is real DOM in your tree, so focus traversal, screen readers, and keyboard navigation see it as part of the page. You're not fighting the iframe a11y model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSR with resumption.&lt;/strong&gt; The meta-framework integrations can server-fetch the remote, inline the projection inside declarative Shadow DOM, and resume on the client without a second round-trip. First paint is styled, and interactive content arrives without the iframe flash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-origin without proxy gymnastics.&lt;/strong&gt; No CORS negotiations. No server-side proxy. Ship the bridge once in the remote, and every host everywhere can project it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it isn't
&lt;/h2&gt;

&lt;p&gt;Worth being honest about the non-goals, because they come up in design reviews and getting them wrong wastes everyone's time.&lt;/p&gt;

&lt;p&gt;Virtual Frame is &lt;strong&gt;not an iframe replacement&lt;/strong&gt; — it still uses one under the hood, because that's how you give the remote its own browsing context. What's different is that you don't &lt;em&gt;see&lt;/em&gt; the iframe; its DOM is projected into your host.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;not a trust boundary from the host's side&lt;/strong&gt;. The iframe sandboxes the remote's script execution (same-origin policy still applies; the remote's JS can't touch host DOM or globals), but once you project the remote's DOM into your page, your code can read and manipulate that projected tree. If the remote is untrusted, keep the iframe visible and don't project.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;not module federation&lt;/strong&gt;. Nothing is shared at build time. No shared React instance, no shared bundle graph. If you need runtime coordination, use the shared store — a typed message channel, not a hidden dependency on a shared import.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;not a hydration framework&lt;/strong&gt;. The remote hydrates normally inside its own iframe. You don't rewrite your remote app to make it projectable; any web page will do.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to reach for it
&lt;/h2&gt;

&lt;p&gt;Virtual Frame fits well when you need to compose multiple independently deployed apps into one page without coordinating a build. When you want to embed UI from another team, tenant, or origin while keeping layout flow and interactivity native. When you want to retire a user-visible iframe that looks and feels like one. When you want to project only a slice of a remote app and let the rest keep running in the background.&lt;/p&gt;

&lt;p&gt;Skip it when host and remote already share a build — module federation or a plain component export is lighter. Skip it when you need a hard security boundary against an untrusted remote — keep the iframe visible. Skip it when the remote doesn't need interactivity and doesn't change — a server-rendered fragment is simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to go from here
&lt;/h2&gt;

&lt;p&gt;The shortest possible next step: &lt;code&gt;npm install virtual-frame&lt;/code&gt;, drop in the custom-element snippet above with a URL you own, and watch it work. Most of the mental model survives contact with five minutes of playing with it.&lt;/p&gt;

&lt;p&gt;When you're ready for more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/guide/what-is-virtual-frame" rel="noopener noreferrer"&gt;What is Virtual Frame?&lt;/a&gt;&lt;/strong&gt; — the conceptual overview in depth, including the parts of the mental model this article skipped.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/guide/getting-started" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt;&lt;/strong&gt; — installation, the three integration paths, and the first-projection checklist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/guide/getting-started#framework-components" rel="noopener noreferrer"&gt;Framework guides&lt;/a&gt;&lt;/strong&gt; — React, Vue, Svelte, Solid, Angular, Next.js, Nuxt, and the rest. Pick yours.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/guide/cross-origin" rel="noopener noreferrer"&gt;Cross-Origin&lt;/a&gt;&lt;/strong&gt; — the bridge protocol, CSP requirements, and the &lt;code&gt;proxy&lt;/code&gt; option for keeping first-party cookies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/guide/store" rel="noopener noreferrer"&gt;Store&lt;/a&gt;&lt;/strong&gt; — the reactive message channel for bidirectional state between host and remote.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://virtual-frame.level0x40.com/api/" rel="noopener noreferrer"&gt;API reference&lt;/a&gt;&lt;/strong&gt; — every option, property, and method.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The thesis, one more time: stop embedding remote applications. Project them. The boundary you've been working around doesn't have to be there.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>frontend</category>
      <category>showdev</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Illusion of Language: What Directives Really Are</title>
      <dc:creator>Viktor Lázár</dc:creator>
      <pubDate>Sun, 09 Nov 2025 11:35:27 +0000</pubDate>
      <link>https://dev.to/lazarv/the-illusion-of-language-what-directives-really-are-445</link>
      <guid>https://dev.to/lazarv/the-illusion-of-language-what-directives-really-are-445</guid>
      <description>&lt;h2&gt;
  
  
  Preface &amp;amp; Introduction — Why This Post Exists
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This post is not a rebuttal.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s a reflection inspired by the great &lt;strong&gt;“Directives and the Platform Boundary”&lt;/strong&gt; article by Tanner Linsley. I genuinely enjoyed the piece and agree with many of its points — especially around ownership, provenance, and the value of explicit APIs.&lt;br&gt;
What I wanted to add is a slightly different lens: one that comes from building directive-driven tooling and from having lived through a very similar chapter in programming history, long before JavaScript had &lt;code&gt;'use client'&lt;/code&gt; or &lt;code&gt;'use server'&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Over the past year, I’ve watched countless developers treat directives in JavaScript as if they were &lt;strong&gt;actual language features&lt;/strong&gt; — something built into the JavaScript spec or the runtime environment. And to be fair, at a glance, it &lt;em&gt;does&lt;/em&gt; look that way. A string at the top of a file that magically changes how the code behaves feels authoritative.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you haven’t been following the evolution of React Server Components and frameworks like Next.js: modern React apps now use file-level directives such as &lt;code&gt;'use client'&lt;/code&gt; and &lt;code&gt;'use server'&lt;/code&gt; to control where code runs. They look like simple string literals at the top of a file, but they decide whether a component executes on the server or the client — shaping bundling, rendering and data-flow. That’s where the confusion begins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But here’s the tension:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Directives look like language features.&lt;br&gt;
Directives feel like language features.&lt;br&gt;
Yet directives are &lt;em&gt;not&lt;/em&gt; language features.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They are &lt;strong&gt;tooling-level signals&lt;/strong&gt; — consumed by bundlers, compilers, and build pipelines.&lt;/p&gt;

&lt;p&gt;This misunderstanding is not a new phenomenon.&lt;br&gt;
We’ve been here before.&lt;/p&gt;
&lt;h3&gt;
  
  
  A Personal Flashback
&lt;/h3&gt;

&lt;p&gt;The first time I saw a &lt;code&gt;#pragma once&lt;/code&gt; in a C++ codebase as a young developer, I thought:&lt;br&gt;
“Wow, the language has a keyword for this! Why doesn’t JavaScript?”&lt;/p&gt;

&lt;p&gt;Except… it wasn’t a language keyword.&lt;br&gt;
It was a compiler instruction. A &lt;em&gt;directive&lt;/em&gt;. A hint to the &lt;strong&gt;preprocessor&lt;/strong&gt;, not to the language itself.&lt;/p&gt;

&lt;p&gt;Just like &lt;code&gt;'use strict'&lt;/code&gt; wasn’t “JavaScript syntax” at first — and just like &lt;code&gt;'use client'&lt;/code&gt; isn’t “JavaScript syntax” today.&lt;/p&gt;

&lt;p&gt;This post explores that parallel. Because understanding the &lt;strong&gt;C/C++ preprocessor&lt;/strong&gt; era is a surprisingly powerful way to understand modern JS directives — why they confuse people, and what we could learn from the past to teach them better.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Do Directives Feel Like Language Features?
&lt;/h2&gt;

&lt;p&gt;If you show a newcomer the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Click&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and ask them what &lt;code&gt;'use client'&lt;/code&gt; is, most will confidently answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It’s part of JavaScript.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn’t ignorance. It’s &lt;strong&gt;pattern recognition&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Throughout their entire programming life, file-level “magic statements” have almost always been &lt;strong&gt;language semantics&lt;/strong&gt;, not tooling semantics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"use strict"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"use asm"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They look like syntax.&lt;br&gt;
They live at the top of the file.&lt;br&gt;
They change behavior.&lt;/p&gt;

&lt;p&gt;So the brain does the obvious thing: it classifies them as language.&lt;/p&gt;

&lt;p&gt;The TanStack article captures part of this well when it says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“A directive at the top of a file looks authoritative. It gives the impression of being a language-level truth, not a framework hint.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the crux of the confusion.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Hidden Mechanism Makes It Worse
&lt;/h3&gt;

&lt;p&gt;Directives blur boundaries because the code that reacts to them is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;invisible&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;non-local&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;not traceable through imports&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;somewhere&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;→ I can follow the import. I see who owns it. I can version it. I know what documentation to search for.&lt;/p&gt;

&lt;p&gt;But with directives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there is &lt;strong&gt;no import&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;no namespace&lt;/li&gt;
&lt;li&gt;no ownership reference&lt;/li&gt;
&lt;li&gt;no callsite to inspect&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The behavior “comes from nowhere”.&lt;/p&gt;

&lt;p&gt;Which is exactly why they &lt;em&gt;feel&lt;/em&gt; like language.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Build Tools… and the Illusion Solidifies
&lt;/h3&gt;

&lt;p&gt;Unlike JavaScript keywords, directives do nothing by themselves.&lt;/p&gt;

&lt;p&gt;A runtime engine won’t throw if you mistype &lt;code&gt;'use clinet'&lt;/code&gt;.&lt;br&gt;
A bundler or transform plugin decides what to do with it. Some ignore it, some error, some treat it as a string literal.&lt;/p&gt;
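&lt;p&gt;You can see this inertness directly. To a JavaScript engine, an unrecognized (or mistyped) directive prologue is just a string in expression position, with no effect. A minimal sketch:&lt;/p&gt;

```typescript
// To the engine, an unrecognized directive prologue is an inert string
// literal in expression position: no error, no behavioral change.
function withTypo(value: number): number {
  'use clinet'; // mistyped on purpose; the runtime does not care
  return value * 2;
}

function withUnknownDirective(value: number): number {
  'use streaming-server'; // a made-up directive, equally inert at runtime
  return value * 2;
}

console.log(withTypo(21)); // 42
console.log(withUnknownDirective(21)); // 42
```

&lt;p&gt;Only a tool that scans for these strings gives them any meaning.&lt;/p&gt;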

&lt;p&gt;This leads to the second major confusion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If it requires a bundler to work, isn’t it part of the ecosystem platform, not the language?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From the developer’s perspective, &lt;strong&gt;if code changes behavior without explicit imports&lt;/strong&gt;, the mind defaults to:&lt;/p&gt;

&lt;p&gt;✅ language feature&lt;br&gt;
❌ library feature&lt;/p&gt;

&lt;p&gt;And that is the trap.&lt;/p&gt;

&lt;p&gt;Because what we call “platform” today is actually a &lt;strong&gt;stack of tools&lt;/strong&gt; — not a coherent language surface. Exactly like in the C/C++ era.&lt;/p&gt;
&lt;h2&gt;
  
  
  A Look Back: C/C++ Macros and the Preprocessor
&lt;/h2&gt;

&lt;p&gt;Long before modern JavaScript build pipelines, the C and C++ world lived through a very similar confusion. And it all started with a layer that sat before compilation: the &lt;strong&gt;preprocessor&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To understand today’s directives, we need to briefly revisit three pillars of that era — because they map surprisingly well to what we see today.&lt;/p&gt;
&lt;h3&gt;
  
  
  The C Preprocessor Was Not the Language — But It Felt Like It
&lt;/h3&gt;

&lt;p&gt;Consider this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#define PI 3.14
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Many beginners encountering this for the first time assume:&lt;br&gt;
“Oh, C has a special language keyword for defining constants.”&lt;/p&gt;

&lt;p&gt;Except… &lt;code&gt;#define&lt;/code&gt; is not part of the C language grammar.&lt;br&gt;
It’s an instruction to a separate tool that runs before the compiler.&lt;/p&gt;

&lt;p&gt;It performs text substitution — not type checking, not syntax analysis, not semantic validation.&lt;/p&gt;

&lt;p&gt;Yet, because it lived inside &lt;code&gt;.c/.h&lt;/code&gt; files and looked like “code”, generations of developers perceived it as &lt;strong&gt;the language&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;p&gt;A directive in JS behaves the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The JavaScript engine doesn’t know what that means.&lt;br&gt;
The bundler decides what to &lt;em&gt;rewrite&lt;/em&gt; before execution.&lt;/p&gt;

&lt;p&gt;Both are &lt;strong&gt;pre-language phases&lt;/strong&gt; that masquerade as language.&lt;/p&gt;
&lt;h3&gt;
  
  
  Pragmas: The Original “Framework Directives”
&lt;/h3&gt;

&lt;p&gt;If macros were like today’s codegen utilities, then &lt;code&gt;#pragma&lt;/code&gt; was the closest ancestor to modern directives:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#pragma once
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line is not part of the C++ language spec.&lt;br&gt;
It’s a compiler-specific “hint” — an instruction that affects how the build system treats this file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No import&lt;/li&gt;
&lt;li&gt;No namespace or provenance&lt;/li&gt;
&lt;li&gt;No indication of ownership&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different compilers supported different pragmas. Some ignored unknown ones, some emitted warnings, and some changed build behavior entirely.&lt;/p&gt;

&lt;p&gt;Replace “compiler” with “bundler” and you get 2025.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js reacts to &lt;code&gt;'use client'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Vite might ignore it&lt;/li&gt;
&lt;li&gt;A custom RSC bundler might interpret it differently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s the same fragmentation pattern seen 30 years ago.&lt;/p&gt;
&lt;h3&gt;
  
  
  Include Guards: The First “Invisible Build-Time Behavior”
&lt;/h3&gt;

&lt;p&gt;Before &lt;code&gt;#pragma once&lt;/code&gt; became common, C/C++ used &lt;strong&gt;include guards&lt;/strong&gt; to prevent double inclusion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#ifndef MY_HEADER_H
#define MY_HEADER_H
&lt;/span&gt;
&lt;span class="c1"&gt;// header contents&lt;/span&gt;

&lt;span class="cp"&gt;#endif
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;altered program behavior&lt;/li&gt;
&lt;li&gt;existed only for tooling&lt;/li&gt;
&lt;li&gt;had no runtime meaning&lt;/li&gt;
&lt;li&gt;required knowledge of the &lt;strong&gt;build model&lt;/strong&gt;, not just the language&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is important:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;C/C++ forced developers to learn that “code you write in a file” and “the language itself” are not the same thing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This was a painful but transformative lesson.&lt;/p&gt;

&lt;p&gt;We are now at the &lt;strong&gt;same educational moment&lt;/strong&gt; in JavaScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Stage Compilation: A Familiar Pipeline
&lt;/h3&gt;

&lt;p&gt;C/C++ had distinct layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Preprocessor → Compiler → Linker → Executable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modern JS tooling has the &lt;strong&gt;same separation&lt;/strong&gt;, just with cooler names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Directive Scanner / Loader → AST Transforms → Bundler → Output Chunks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like beginners blamed “C++ the language” for preprocessor quirks, today developers blame:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“JavaScript”&lt;/li&gt;
&lt;li&gt;“React”&lt;/li&gt;
&lt;li&gt;“the platform”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…for behaviors that actually originate from build tooling.&lt;/p&gt;

&lt;p&gt;History is repeating itself — almost line for line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modern Directives Through the Lens of the Preprocessor Era
&lt;/h2&gt;

&lt;p&gt;Once you see the C/C++ parallels, modern JavaScript directives suddenly make a lot more sense. They are not an evolution of JavaScript syntax — they are an evolution of compiler hints.&lt;/p&gt;

&lt;p&gt;Let’s make the mapping explicit:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Preprocessor Era Concept&lt;/th&gt;
&lt;th&gt;Modern JS Equivalent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;#pragma&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;'use client'&lt;/code&gt;, &lt;code&gt;'use server'&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;#define&lt;/code&gt; macros&lt;/td&gt;
&lt;td&gt;code transforms / auto-generated wrappers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;#include&lt;/code&gt; guards&lt;/td&gt;
&lt;td&gt;module boundary / hydration boundaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compiler-specific behavior&lt;/td&gt;
&lt;td&gt;bundler-specific behavior&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;multi-stage builds&lt;/td&gt;
&lt;td&gt;loader → transform → bundle pipeline&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Just like in the 90s, the surface looks deceptively simple — but the real action happens underneath.&lt;/p&gt;

&lt;h3&gt;
  
  
  Directives Don’t Execute — They Instruct a Tool
&lt;/h3&gt;

&lt;p&gt;A key property of directives is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They don’t do anything &lt;em&gt;themselves&lt;/em&gt; — they only change how something else behaves.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s a breakdown of what typically happens in a directive-driven pipeline (simplified, but accurate enough):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Read file → Check for directives → Decide environment / boundary →
Run transforms → Generate client/server bundles → Output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s compare that to the C pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Read file → Preprocessor expands macros + handles pragmas →
Compile → Link → Output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They are &lt;strong&gt;structurally identical concepts&lt;/strong&gt;, just applied to different languages and eras.&lt;/p&gt;
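&lt;p&gt;The “check for directives” step is less magical than it sounds. A minimal sketch (not any particular bundler’s implementation, which would work on the AST rather than raw text) just inspects the first statement of a module’s source:&lt;/p&gt;

```typescript
// Minimal directive-scanner sketch: inspect the directive prologue of a
// module's source text. Real bundlers do this on the parsed AST.
const KNOWN_DIRECTIVES = ['use client', 'use server'];

function scanDirective(source: string): string | null {
  for (const line of source.split('\n')) {
    const trimmed = line.trim();
    if (trimmed === '') continue;
    // Does the first non-empty line look like a directive prologue?
    const match = trimmed.match(/^['"](use [a-z-]+)['"];?$/);
    if (match !== null) {
      return KNOWN_DIRECTIVES.includes(match[1]) ? match[1] : null;
    }
    // First statement is not a directive, so the module has none.
    return null;
  }
  return null;
}

console.log(scanDirective("'use client';\nexport default function App() {}"));
// "use client"
console.log(scanDirective("export const x = 1;")); // null
```

&lt;p&gt;Everything interesting happens &lt;em&gt;after&lt;/em&gt; this check, in the transform and bundling stages, which is exactly where the preprocessor analogy applies.&lt;/p&gt;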

&lt;h3&gt;
  
  
  Why the Illusion of a Language Feature Persists
&lt;/h3&gt;

&lt;p&gt;Three psychological factors contribute to the confusion:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They look like syntax&lt;/strong&gt;&lt;br&gt;
Both &lt;code&gt;#pragma once&lt;/code&gt; and &lt;code&gt;'use client'&lt;/code&gt; feel like reserved keywords.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They live at the top of the file&lt;/strong&gt;&lt;br&gt;
Anything that shapes the &lt;em&gt;entire file&lt;/em&gt; is often assumed to be language.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They act globally and implicitly&lt;/strong&gt;&lt;br&gt;
They don’t require instantiation or import.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, directives successfully exploit a &lt;strong&gt;linguistic illusion&lt;/strong&gt;.&lt;br&gt;
They look and behave like part of the language — while not being part of it.&lt;/p&gt;
&lt;h3&gt;
  
  
  But Modern Directives Go Further Than C Pragmas
&lt;/h3&gt;

&lt;p&gt;Here’s where things get more interesting:&lt;/p&gt;

&lt;p&gt;C pragmas only affected compilation behavior.&lt;br&gt;
Modern directives often affect &lt;strong&gt;execution model, code placement, bundling, and runtime boundaries&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, &lt;code&gt;'use server'&lt;/code&gt; might:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;move code into a server-only chunk&lt;/li&gt;
&lt;li&gt;replace calls with RPC stubs&lt;/li&gt;
&lt;li&gt;add serialization wrappers&lt;/li&gt;
&lt;li&gt;enforce data-flow constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is already beyond what the C preprocessor did.&lt;/p&gt;

&lt;p&gt;It’s closer to a &lt;strong&gt;macro system + compiler pass&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Which is exactly why understanding the distinction matters:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If something affects runtime execution and code placement, we should not treat it as “just a string at the top of the file”.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And yet, ironically, most misconceptions arise because that is &lt;em&gt;exactly how it appears&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Build Tools Are the Real Interpreter of Directives
&lt;/h3&gt;

&lt;p&gt;Just like different C compilers treated pragmas differently, modern JS tooling varies, too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js interprets &lt;code&gt;'use client'&lt;/code&gt; one way&lt;/li&gt;
&lt;li&gt;Vite + RSC implementations another&lt;/li&gt;
&lt;li&gt;Third-party bundlers a third way&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If tomorrow another framework introduced:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use streaming-server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;JavaScript engines wouldn’t care.&lt;br&gt;
Tools would decide what it means.&lt;/p&gt;

&lt;p&gt;And that is the mental shift we need developers to make:&lt;/p&gt;

&lt;p&gt;Directive ≠ language&lt;br&gt;
Directive = build instruction&lt;/p&gt;

&lt;p&gt;They live in userland — not in the spec.&lt;/p&gt;

&lt;p&gt;Before we move on, it’s worth pausing for a moment. The parallels with C and C++ are useful for understanding the shape of directives, but they can still feel abstract until you’ve seen them in practice. To bring the idea into clearer focus, let’s look at a concrete example from today’s ecosystem — one that makes the theory a bit more tangible, and shows how these compiler hints manifest in real, modern code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📦 &lt;strong&gt;When Inline Server Functions Reveal the Need for Directives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I once ran into a case that perfectly exposed why some server behaviors cannot be left to runtime.&lt;/p&gt;

&lt;p&gt;Consider an inline server function defined inside a Server Component, capturing variables from its local scope:&lt;/p&gt;


&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.27&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;rate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// ← captures "rate" from the component scope&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Save&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;At first glance, this feels like a normal function call. But for this to work, the bundler must do something subtle and non-negotiable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;detect that &lt;code&gt;save&lt;/code&gt; is a server function,&lt;/li&gt;
&lt;li&gt;hoist it into a server-only module,&lt;/li&gt;
&lt;li&gt;perform AST-level closure analysis to capture &lt;code&gt;rate&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;and generate a stable, directly callable reference for the client — not a runtime-constructed wrapper.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A runtime helper cannot solve this.&lt;br&gt;
By the time the code runs, the closure is gone and the intent is already lost.&lt;br&gt;
The function must be transformed before the program exists, so that the client can call it as a direct function, not a proxy we stitched together too late.&lt;/p&gt;
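&lt;p&gt;To make the transform concrete, here is a hypothetical sketch of the kind of output a bundler might emit for the component above. The names &lt;code&gt;serverRegistry&lt;/code&gt;, &lt;code&gt;callServer&lt;/code&gt; and the reference id are invented for illustration; real RSC implementations use their own runtime helpers and wire formats:&lt;/p&gt;

```typescript
// HYPOTHETICAL bundler output sketch. "serverRegistry", "callServer" and
// the 'app/dashboard#save' id are invented names for illustration only.

// --- server chunk: "save" hoisted out of the component, with the
// captured variable "rate" rewritten into an explicit bound argument ---
const serverRegistry = new Map();

async function save$hoisted(rate: number, value: number) {
  return value * rate;
}
serverRegistry.set('app/dashboard#save', save$hoisted);

// Stand-in for the framework's RPC transport (normally an HTTP round-trip).
async function callServer(id: string, args: number[]) {
  const fn = serverRegistry.get(id);
  return fn(...args);
}

// --- client chunk: the component only receives a callable stub that
// closes over the reference id and the serialized bound value of "rate" ---
function makeSaveStub(boundRate: number) {
  return (value: number) => callServer('app/dashboard#save', [boundRate, value]);
}

const save = makeSaveStub(0.27);
save(10).then((result) => console.log(result));
```

&lt;p&gt;Every piece of this rewiring — the hoisting, the bound argument, the stable id — has to be decided before the program runs.&lt;/p&gt;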

&lt;p&gt;This is why directives fit this space so well: they tell the build tools at the moment of definition what this function truly is, giving them time to shape it accordingly.&lt;/p&gt;

&lt;p&gt;Some decisions must happen while the code is still being woven — not after it is already alive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Educational Gap — and What We Can Learn From History
&lt;/h2&gt;

&lt;p&gt;If you step back and look at the confusion around directives today, you’ll notice something striking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The problem is no longer technical — it’s educational.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We’ve repeated a historical pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A tool-level construct &lt;strong&gt;looks&lt;/strong&gt; like language&lt;/li&gt;
&lt;li&gt;Developers assume it is language&lt;/li&gt;
&lt;li&gt;The mental model becomes wrong&lt;/li&gt;
&lt;li&gt;Confusion spreads faster than documentation can correct it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This happened with the C preprocessor.&lt;br&gt;
It’s happening again with JavaScript directives.&lt;/p&gt;

&lt;h3&gt;
  
  
  We Need to Teach the Layering — Not Just the Feature
&lt;/h3&gt;

&lt;p&gt;If we teach &lt;code&gt;'use server'&lt;/code&gt; as:&lt;/p&gt;

&lt;p&gt;“This makes your function run on the server.”&lt;/p&gt;

&lt;p&gt;…we’ve already lost.&lt;/p&gt;

&lt;p&gt;Because that sentence hides four separate layers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Responsible For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Syntax&lt;/td&gt;
&lt;td&gt;writing a string literal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Loader&lt;/td&gt;
&lt;td&gt;detecting the directive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build tools&lt;/td&gt;
&lt;td&gt;transforming / splitting code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Runtime&lt;/td&gt;
&lt;td&gt;enforcing the boundary&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If developers don’t understand which layer is responsible for what, they will blame “JavaScript” for a bundler problem — just like C developers once blamed “the language” for preprocessor bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the C/C++ Community Eventually Learned
&lt;/h3&gt;

&lt;p&gt;Over time, C/C++ education evolved:&lt;/p&gt;

&lt;p&gt;Early teaching:&lt;br&gt;
“Here’s how to use &lt;code&gt;#define&lt;/code&gt; and &lt;code&gt;#pragma&lt;/code&gt;.”&lt;/p&gt;

&lt;p&gt;Mature teaching:&lt;br&gt;
“Here’s what the &lt;strong&gt;preprocessor&lt;/strong&gt; is, and why it’s separate from the language.”&lt;/p&gt;

&lt;p&gt;After that shift, confusion dropped dramatically. No serious C++ course today teaches macros without first teaching the &lt;strong&gt;mental model&lt;/strong&gt; of the compilation stages.&lt;/p&gt;

&lt;p&gt;We need the same shift for JS directives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Directives Aren’t Bad — They’re Powerful, If Understood
&lt;/h3&gt;

&lt;p&gt;This post is not an argument against directives. They serve a purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conveying intent declaratively&lt;/li&gt;
&lt;li&gt;Reducing boilerplate&lt;/li&gt;
&lt;li&gt;Helping tools optimize and separate code&lt;/li&gt;
&lt;li&gt;Giving the developer a simple switch for complex behaviors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In fact, some of the most ergonomic server/serverless features would be far more cumbersome without them.&lt;/p&gt;

&lt;p&gt;But the price of ergonomics is &lt;strong&gt;clarity debt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If we don’t teach where the magic comes from, developers misattribute the source of truth — and debugging collapses.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Simple Mental Model to Teach (Starting Tomorrow)
&lt;/h3&gt;

&lt;p&gt;I like to explain directives with a single sentence:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“A directive is a note you leave for your build tools — not for JavaScript.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That line alone fixes 70% of misunderstandings.&lt;/p&gt;

&lt;p&gt;Add one analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If &lt;code&gt;#pragma once&lt;/code&gt; was not C++ syntax, then &lt;code&gt;'use client'&lt;/code&gt; is not JavaScript syntax either.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And suddenly, people get it.&lt;/p&gt;

&lt;p&gt;This doesn’t require more docs — it requires better &lt;strong&gt;framing&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Step Toward Clarity: A TypeScript Plugin for Directives
&lt;/h2&gt;

&lt;p&gt;Before wrapping up, I want to share one more concrete step I took to reduce this confusion in real-world codebases. If part of the problem comes from directives &lt;em&gt;looking&lt;/em&gt; like language yet lacking any formal structure, then giving them &lt;strong&gt;type-level meaning&lt;/strong&gt; is one way to bridge the gap.&lt;/p&gt;

&lt;p&gt;I built a small TypeScript plugin called &lt;code&gt;typescript-plugin-directives&lt;/code&gt; that brings &lt;em&gt;type safety and IntelliSense awareness&lt;/em&gt; to directives. It allows teams to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;define their own directive vocabulary,&lt;/li&gt;
&lt;li&gt;validate them at compile time,&lt;/li&gt;
&lt;li&gt;get editor hints and autocomplete,&lt;/li&gt;
&lt;li&gt;and avoid silent typos like &lt;code&gt;'use clinet'&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal isn’t to “standardize” directives, but to &lt;strong&gt;make their intent explicit and visible to both developers and tooling&lt;/strong&gt; — without needing a bundler to interpret them first.&lt;/p&gt;

&lt;p&gt;You can try it here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: &lt;a href="https://www.npmjs.com/package/typescript-plugin-directives" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/typescript-plugin-directives&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/lazarv/typescript-plugin-directives" rel="noopener noreferrer"&gt;https://github.com/lazarv/typescript-plugin-directives&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s intentionally lightweight — just a small layer to help the mental model click earlier, and to give directives a more formal shape inside TypeScript projects.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There’s a common assumption that a TypeScript plugin could “enforce” directives — that if the plugin knows about them, the system becomes safe by default.&lt;br&gt;
But a TS plugin lives in the Language Service. It can provide awareness, warnings, and guidance — yet it still doesn’t run where code is actually transformed. It doesn’t participate in compilation or bundling.&lt;/p&gt;

&lt;p&gt;A plugin can help identify directive usage, but &lt;strong&gt;it cannot enforce their semantics&lt;/strong&gt;.&lt;br&gt;
For that, the compiler needs a signal of intent — something the type system can understand.&lt;/p&gt;

&lt;p&gt;To enable this, I expose a global &lt;code&gt;Directive&lt;/code&gt; type in my plugin. Authors can use &lt;code&gt;satisfies&lt;/code&gt; not to inform the editor, but to inform the &lt;strong&gt;TypeScript compiler&lt;/strong&gt; that a given string is intended to be a directive:&lt;/p&gt;


&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;satisfies&lt;/span&gt; &lt;span class="nx"&gt;Directive&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This doesn’t change behavior or trigger any transformation.&lt;br&gt;
What it does is far simpler and more fundamental:&lt;/p&gt;

&lt;p&gt;It tells the type system:&lt;br&gt;
&lt;strong&gt;“Treat this as a directive — and validate it as such.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It aligns intent with the compiler, not with runtime, and not only with the IDE.&lt;/p&gt;
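&lt;p&gt;As a simplified illustration of the mechanism (not the plugin’s actual type definition — here the vocabulary is assumed to be a plain literal union, where the real plugin derives it from the team’s configuration), modeling directives as a union of string literals lets &lt;code&gt;satisfies&lt;/code&gt; turn typos into compile-time errors:&lt;/p&gt;

```typescript
// Simplified stand-in for the plugin's global Directive type; the real
// plugin builds the vocabulary from the team's own configuration.
type Directive = 'use client' | 'use server';

// `satisfies` checks the literal against the union without widening it.
const ok = 'use server' satisfies Directive;

// The next line would fail to compile, catching the typo statically:
// const bad = 'use clinet' satisfies Directive;

console.log(ok); // "use server"
```

&lt;p&gt;The check costs nothing at runtime: &lt;code&gt;satisfies&lt;/code&gt; is erased during compilation, leaving only the string literal behind.&lt;/p&gt;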

&lt;p&gt;The actual separation of worlds — hoisting inline server functions out of a component, analyzing captured scope, generating a directly callable boundary — still belongs entirely to the build. The type system can acknowledge intent, but the bundler is the one who must act on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One practical caveat:&lt;/strong&gt; today, some tooling — including Next.js — expects directives to appear as &lt;em&gt;bare string literals&lt;/em&gt;. When written as &lt;code&gt;'use server' satisfies Directive&lt;/code&gt;, the directive may no longer be detected, since the literal is no longer in the exact form the framework scans for. Until this changes, this pattern won’t be picked up by Next.js.&lt;/p&gt;

&lt;p&gt;There is one more subtlety worth mentioning. This type-level intent only matters if a type-checker is actually running. Many modern toolchains — esbuild, SWC, Oxc, Bun, even most Deno and Vite setups — do not type-check at all. They simply strip types and move on. In those environments, the &lt;code&gt;Directive&lt;/code&gt; + &lt;code&gt;satisfies&lt;/code&gt; expression becomes a silent note to the compiler that never had a chance to listen.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Closing Reflection
&lt;/h3&gt;

&lt;p&gt;The article from Tanner raises a valuable conversation — one worth having early, before bad habits ossify. I don’t believe the goal should be to eliminate directives; they clearly solve real problems. But we &lt;em&gt;can&lt;/em&gt; learn from history, and avoid the confusion that entire generations of C/C++ developers had to unlearn.&lt;/p&gt;

&lt;p&gt;We’ve been here before.&lt;br&gt;
We know how this story goes.&lt;br&gt;
This time, we can skip the decade of confusion in the middle.&lt;/p&gt;

&lt;p&gt;Teach the layers.&lt;br&gt;
Teach the provenance.&lt;br&gt;
Teach the mental model.&lt;/p&gt;

&lt;p&gt;And directives will stop feeling like “secret language features” — and start feeling like the powerful, intentional compiler hints they actually are.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>nextjs</category>
      <category>react</category>
      <category>reactserver</category>
    </item>
  </channel>
</rss>
