Developers have never been shy about disliking certain React APIs. They feel awkward, restrictive, or just plain counterintuitive. But the reality is that the two most complained‑about design choices in React weren’t arbitrary at all — they were early signs of deeper constraints that every UI model eventually runs into.
As many of you know, I’ve been working on Solid 2.0 for the last couple of years. It’s been a journey. I’d already been using Signals for over a decade, and I thought I understood the entire design space. But the deeper I went, the more I found myself in unexpected territory.
And somewhere along the way, I realized something uncomfortable. React was right about those design decisions that people absolutely cannot stand. Not React’s model — I’m not here to defend that. But React did correctly identify two invariants that the rest of the ecosystem, including Solid 1.x, glossed over.
I'm talking about deferred state commits:
const [state, setState] = useState(1);
// later
setState(2);
state === 1; // not committed yet
And dependency arrays on Effects:
useEffect(() => console.log(state), [state]);
These are the two things Signals were supposed to “fix.” And in a sense, they did. But not in the way people think. Today, we’re going to look at why that isn’t the full story.
Living in an Async World
Everything we do on the web is built on asynchronicity. The entire platform is defined by a client and server separated by a network boundary. Streaming, data fetching, distributed updates, transactional mutations, optimistic UI — all of it branches from that simple truth.
Async pushes us out of our imperative comfort zone. Imperative code is about writes: “set this, then read it back.” Async is about reads: “is this value available, stale, or still in flight?” It’s the question every UI must answer before it renders anything: can I show this, or will I expose something inconsistent?
To most frameworks, async looks like ephemeral state flitting in and out of a synchronous declarative world. It feels unpredictable because we only see the moments where async intersects with our computation. But async isn’t chaos — it’s just time. And if we want to reason about it, we need the language to represent it directly.
It starts with how we represent state. If a value isn’t available yet, there is no placeholder it can safely substitute. Returning null, undefined, or a wrapper breaks determinism. Continuing anyway produces a result that never corresponds to any actual moment in time. The only way to stay consistent is to stop.
It also takes respecting the declarative model. What makes reactive systems (including React) compelling is their ability to represent UI as state at a given moment in time. All architectural clarity and execution guarantees stem from this. Determinism is the goal: the same inputs produce the same outputs, timing doesn’t alter the shape, and the UI is always consistent.
When async leaks into user space — through conditional branches or alternate value shapes — we force the user to manually manage consistency, and the declarative model collapses.
// Derived computation forced to branch on async state
const firstInitial = user.loading ? "" : user.name[0];
UI affordances for async—loading indicators, skeletons, fallbacks—are not the problem. Those are presentation concerns. The problem is when async becomes part of the value flowing through the state graph. It forces every consumer to branch. UI can show whatever it wants, but the graph must only ever see real values.
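As a sketch of that principle, here is a hypothetical read helper (the name `createAsyncValue` and its shape are assumptions for illustration, not Solid's API): instead of handing consumers a loading flag or placeholder, a read either returns the real value or signals "not ready" by throwing the pending promise, so nothing speculative ever enters the graph.

```javascript
// Hypothetical sketch: keep async status out of the value space.
// A read either yields a real value or throws the pending promise
// (the "suspend" convention), so consumers never branch on loading.
function createAsyncValue(promise) {
  let settled = false;
  let value;
  promise.then((v) => {
    settled = true;
    value = v;
  });
  return function read() {
    if (!settled) throw promise; // not ready: stop, don't substitute
    return value; // always a real value, never a placeholder
  };
}

// A derived computation can now be written against real values only:
// const firstInitial = () => read().name[0]; // no loading branch
```

The derived computation never sees `loading`; a boundary higher up catches the thrown promise and decides what the UI shows in the meantime.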
1. Async Must Be Isolated from Commits
Unlike other reactive systems, React’s tight coupling of state and rendering forced it to confront this problem early. When every state change triggers a re-render, you can’t hide inconsistencies behind synchronous derivation. Signals avoid this because everything is always up to date by the time you read it—no re-renders, no orchestration, no wasted work.
But those characteristics only hide a fundamental truth. You cannot let async work interleave with synchronous commits. If a computation is still waiting on async, any writes it performs are speculative. You can’t show the user UI based on state you don’t have yet, because if they interact with it they expect to be interacting with what they see—not some intermediate state the framework is holding.
Consider:
let count = 0;
let doubleCount = count * 2;
function increment() {
count++;
console.log(`${count} * 2 = ${doubleCount}`);
}
<button onClick={increment}>{count} * 2 = {doubleCount}</button>
I've used this example many times in the past but it captures the nature of the problem. See:
In plain JavaScript, count and doubleCount drift apart. Signals fix this by updating doubleCount on read. But that still leaves a question: when does this update reach the DOM? If you flush immediately (like Solid 1.x), consecutive updates can be expensive. If you don't, you are acknowledging that some amount of scheduling is inherent to the system.
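To make the scheduling point concrete, here is a minimal batching sketch (an assumed implementation, not Solid's actual scheduler): repeated writes in the same tick queue the flush work once, and the DOM-facing effect runs a single time per microtask.

```javascript
// Minimal batching sketch: collapse consecutive updates into one flush.
const pending = new Set();
let scheduled = false;

function scheduleEffect(fn) {
  pending.add(fn); // a Set dedupes repeat schedules of the same effect
  if (!scheduled) {
    scheduled = true;
    queueMicrotask(() => {
      scheduled = false;
      const toRun = [...pending];
      pending.clear();
      toRun.forEach((f) => f()); // one flush for the whole batch
    });
  }
}
```

Flushing immediately avoids this deferral at the cost of redundant work on consecutive updates; either way, some scheduler exists.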
React was the only system that didn’t update count immediately, and people hated it. But the motivation was sound. React wanted event handlers to see consistent state, and it had no way to update derived values until the component re-ran.
Now imagine the handler is:
function onClick(event) {
setBooks([]);
// derived value
if (booksLength) {
books[booksLength - 1]
}
}
If books updates but booksLength doesn’t, you’re reading out of bounds.
Signals keep state and derived state perfectly in sync, and that gives developers a strong sense of safety. You write the code once and it just works. But that confidence becomes a liability the moment a derived value turns async, because there is no longer any guarantee that it will stay in sync.
Return to count and doubleCount, but make doubleCount async. If you want the UI to stay consistent — to keep showing 1 * 2 = 2 until the async doubleCount resolves — then you must delay updating count as well. Otherwise you end up in a strange situation. The UI is still showing 1 * 2 = 2, but the console is already logging 2 * 2 = 2 because the underlying data has moved on to count = 2.
Once you see that mismatch — the UI waiting for consistency while the data has already advanced — the conclusion becomes unavoidable. The synchronous world made you feel safe because everything updated together, but that safety was an illusion built on the assumption that all derived values were immediately available. The moment one of them becomes async, that assumption collapses. If you want the UI to remain consistent, you have to delay the commit. And once you delay the commit in the UI, you have to delay it in the data as well, or the two drift apart in ways that violate the very guarantees you relied on. Async doesn’t just add latency; it forces a different execution model.
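A tiny sketch of that deferred commit (the names and shape here are assumptions for illustration, not Solid 2.0's API): the write stages a new count, but the snapshot that readers (the UI and event handlers) see only advances when the async doubleCount settles, so both move together.

```javascript
// Hypothetical sketch: a commit that waits for its async derivation.
// Readers only ever see a consistent { count, doubleCount } pair.
function createDeferredState(initial) {
  let committed = initial; // the consistent snapshot readers observe
  return {
    read: () => committed,
    async write(count, asyncDouble) {
      // the write is speculative until the derivation settles
      const doubleCount = await asyncDouble(count);
      committed = { count, doubleCount }; // atomic commit of both
    },
  };
}
```

Until `write` resolves, `read()` keeps returning `{ count: 1, doubleCount: 2 }`, which is exactly the behavior people disliked in React's deferred setState.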
2. Dependencies of Effects Must Be Known at Computation Time
React’s re‑render model forced it to confront another truth long before anyone else. Derivations and side effects obey different rules.
When components re-run on every change, recalculating everything every time would be wasteful. So when Hooks were introduced, dependency arrays came with them — a crude but effective form of memoization.
Compared to Signals, where dependencies are discovered dynamically and only the necessary computations re-run, this looks limited. But it had an important consequence. React knew all the dependencies of the tree before running any rendering or side effects.
That detail becomes vital the moment async enters the picture. If rendering can be interrupted at any time — paused, replayed, or aborted — then no side effects can have run yet. A side effect that fires before all dependencies are known risks running with partial or speculative state. React’s architecture exposed this immediately. Rendering was not guaranteed to complete, so effects could not be tied to rendering.
Signals, with their surgical precision, avoided this problem for years. Change propagation is synchronous and isolated, so derivations and side effects appear to run in a single, predictable flow. But that predictability evaporates the moment async enters the graph.
Because if async is only discovered during side effects, it’s already too late. And if async is interruptible — say by throwing a promise and re-executing on resolution — execution becomes completely unpredictable.
Consider:
const a = asyncSignal(fetchA());
const b = asyncSignal(fetchB());
const c = asyncSignal(fetchC());
effect(() => {
console.log(a());
console.log(b());
console.log(c());
});
What does the effect log? How many times does it run? In a purely synchronous world, these questions barely matter — derivations are stable, and effects run once per commit. But with async, they become unanswerable. Each async source may resolve at a different time. Each resolution may re-trigger the effect. And if any of them suspends or retries, the entire execution order becomes nondeterministic.
And that’s just the initial load. If these async sources can update independently over time, the unpredictability compounds. You can’t reason about side effects if you can’t reason about when the effect runs or what values it sees.
The solution is simple and unavoidable. Effects must only run after all async sources they depend on have settled. And to do that, you must know all dependencies before executing any effect. You must separate collecting the dependencies from executing the effect.
const a = asyncSignal(fetchA());
const b = asyncSignal(fetchB());
const c = asyncSignal(fetchC());
effect(
() => [a(), b(), c()], // capture deps
([a, b, c]) => { // do side effects
console.log(a);
console.log(b);
console.log(c);
}
);
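A minimal runtime for that two-phase effect could look like the sketch below, under assumed semantics (reads of `asyncSignal` here return promises, a simplification of any real Signal API): phase one collects every dependency up front, and the side-effect phase runs only once all of them have settled, with plain resolved values.

```javascript
// Sketch: split effects into a dependency-capture phase and a
// side-effect phase that runs only after every async source settles.
function asyncSignal(promise) {
  return () => promise; // simplified: reading yields the promise itself
}

function effect(capture, run) {
  // Phase 1: collect all dependencies before doing anything effectful.
  const deps = capture();
  // Phase 2: side effects fire once, with settled values only.
  Promise.all(deps).then((values) => run(values));
}

// usage mirrors the example above:
// effect(() => [a(), b(), c()], ([a, b, c]) => console.log(a, b, c));
```

Because the capture phase is pure and cheap, it can be re-run on interruption without consequence; only the second phase touches the outside world.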
What This Means for Signal‑Based Solutions
At this point the architecture forces a choice. Either confront async head‑on or continue pretending synchronous guarantees hold in an async world. Async is real. It will appear somewhere in the graph. And once it does, the guarantees you relied on in the synchronous case no longer hold unless the system acknowledges it.
Can a Compiler Solve This?
No. A compiler can’t fix a semantic problem by rearranging syntax. Early commits aren’t a mechanical limitation — they’re a correctness limitation. The moment async enters the graph, the system must know when a value is real and when it is speculative. No amount of static analysis can change that.
Could a compiler extract dependencies from a single effect function? In a shallow sense, yes — React’s compiler does exactly that. But compiler‑based extraction only sees what’s in scope. It can’t see the whole graph. If your sources are functions that call signals rather than signals themselves, the compiler has no way to know whether those functions are pure or whether they hide side effects.
This is exactly why Svelte 5 moved to Runes (Signals). Compiler‑time dependency capture hit a hard limit. It couldn’t track sources that weren’t syntactically visible.
let count = 0;
function getDoubleCount() {
return count * 2;
}
// doubled never updates when count changes, because
// count is not syntactically visible in this statement
$: doubled = getDoubleCount();
Once you hit these edges, you have to ask whether the added complexity, hidden rules, and incomplete coverage are worth it. Compiler inference can paper over the problem, but it can’t solve it. Async is a runtime phenomenon. The guarantees must be enforced at runtime.
Does This Mean We’re Doomed to Mimic React?
Not at all. This isn’t copying React. It’s acknowledging the same fundamental truth React ran into first. Async forces commit isolation. Async forces effect splitting. Vue has had this split in its watchers (effects) for years. These aren’t React‑isms. They’re invariants of any system that wants to preserve consistency in the presence of async.
Adopting these invariants doesn’t erase the advantages of Signals. Updates remain surgically fine-grained. Components never re-render. Dependencies are deeply discoverable and dynamic.
Only effects require separation. Pure computations do not. This marries the expressive power of Signals with the correctness discipline of functional programming. It acknowledges reality instead of fighting it. And it gives async the same determinism and clarity that Signals already give to synchronous computation.
Conclusion
Solid has always pushed the boundaries of frontend architecture, not by chasing novelty but by uncovering the underlying rules that make UI predictable, consistent, and fast. React encountered these rules first because its architecture forced it to. It didn’t choose these constraints — it ran into them. Calling them “design decisions” almost overstates the agency involved. They were discoveries.
Choosing to embrace those same invariants from a position of strength is something entirely different. We aren’t adopting these constraints because we’re boxed in — we’re adopting them because they are true. Async forces commit isolation. Async forces effect splitting. Async forces a consistent snapshot. These aren’t React‑isms; they’re the physics of UI.
Embracing this isn’t mimicry. It’s maturity. It’s choosing the inevitable path with eyes open, and building a system that treats async not as an edge case but as a first‑class part of the architecture. It’s the next step in making Solid not just fast, but fundamentally right.
Clarity doesn’t simplify the world, but it does make the direction unmistakable.




Top comments (34)
Honestly, the dependency array really annoys me. I always miss something, and then the whole site goes berserk.
I think one difference with Signals is it won't update if you miss. And you shouldn't be reading them in the effect itself so it will basically tell you if you miss something.
There are React-recommended eslint rules to prevent that from happening.
honestly the hooks dependency array is the one I've just made my peace with. it felt wrong at first - like you're describing side effects to the compiler instead of just writing them. but once I stopped fighting it and started treating it as documentation of what the function actually depends on, things clicked. still think the mental model is genuinely awkward to explain to people new to React though.
The uncomfortable truth you're naming: signals didn't fix the async problem, they just delayed it. React's awkwardness wasn't a mistake; it was the shape of the constraint becoming visible. The frameworks that pretend async is a special case are the ones that will break first when the graph gets complex enough.
Thankfully my work on Solid 2.0 has shown me that these don't need to be mutually exclusive things. We can consistently address Async and still keep all the advantages of Signals. We just have to be open to it.
The difference is you're building the constraint into the design instead of pretending it doesn't exist. That's the line between a framework that teaches you something and one that just gets out of the way until it doesn't. Looking forward to seeing where Solid 2.0 lands.
There’s something funny about realizing why something that didn’t make sense at the time makes perfect sense after facing the problem it already solved. Really humbles oneself. Thanks for sharing your experience. Thanks for reminding me to never stop learning.
Agreed
This is one of the most intellectually honest pieces I've read about framework design. The way you distinguish between React-isms and actual invariants that any async UI system must confront is refreshing. Most people stop at "React bad, Signals good"; you've actually done the hard work of asking why React made those choices and whether they point to deeper truths. Respect.
Interesting read, thanks Ryan!
Isn't an answer to the asynchronicity problem in types? It seems complicated because the standard Promise implementation has no state and cannot observe its resolved/unresolved status. But this can be fixed with a simple subclass: github.com/canonic-epicure/siesta/...
After that, one can simply wrap the PromiseSync value into a signal and then, on the consuming side, differentiate resolved from unresolved and render different content based on that. With this approach there's no need for asynchronicity in the reactive layer. Of course that requires a dependency on the internal "resolved" field, so it needs a special kind of signal.
It helps if sync and async resolution can be handled more smoothly. But the coloration of async does sort of contaminate everything. I also don't think it removes the need. Sure it works, but composition/derivation still needs to flow, and ultimately the hardest choice for a lot of these is setting async boundaries. This impacts things like streaming but also gives us clear loading boundaries that can discover their async via inversion of control. You don't know all the async that will be below you, but you'd like to capture it. For those mechanisms to work you need to stamp somewhere in the UI where we care.
And for Solid I think it makes sense to make that in the front half of rendering effects because that is the most leaf you can be. Whereas other systems end up using compiled awaits or something like a use to do that at a more component level. That seems very restrictive, in a similar way that I find React's state-to-component coupling restrictive.
I agree that setting proper boundaries to avoid excessive coloration / changes contamination is important. I believe this is a stylistic change on the consuming side:
This requires users to be educated of the "reactive boundary" for the values they use in the templates. Probably a matter of good docs.
I guess that's why I always disliked the "component as function" thinking in React. I actually like the primitives (HTML, CSS, and JavaScript), and understanding it as a render loop that simply displays the state as it is in the moment of the tick is actually sufficient. You just fill in the "gaps" (async resolves) when they become available. So the Svelte 5 way is much more natural for this and "more correct" to my understanding.
I mean Svelte 5 is basically Solid 1 mechanically (until they added the new async stuff). I don't think the container being a function or not matters. React's rerenders do versus Svelte/Solid's surgical reactive updates. But that doesn't completely tell the story though because the changes themselves are still subject to the same constraints as React even if more granular. Like the physics is undeniable.
Svelte 5's async solution is aware of these constraints. They discover deps early through compiler extraction of the await keyword to block at that scope. So to be fair they can solve the split effects issue without the split. But it comes at the cost of potentially higher blocking and graph coloration. But the fundamental truth is the same.
Svelte 5 doesn't defer flush, but it also doesn't carry the same consistency guarantees. It is subject to the breaking I was talking about in the article, but honestly it probably doesn't happen too often. People really shouldn't do many reads after writes. So while React's approach is undeniably more correct there, it probably won't bite you.
OK. Thank you for answering. I'm not very familiar with the underlying implementation details of Svelte 5 and/or Solid and come from a framework-user perspective. So my point was basically that React's "functions approach" felt misplaced, because I don't want to think about the problem (displaying HTML based on state) in a functional way (idempotent: the input defines the output).
I guess I can see/understand your point. But I also don't see why it has to be so restrictive. Because of the async nature of the web, I don't expect the view to be in complete sync, like a database transaction, all or nothing. Why should I expect "$derived" to be in sync / simultaneously change with its "$state"? And I also can see how you can code yourself into weird situations. But that's an inherent property of async stuff. Not so much a problem a framework can solve without making it sluggish (after 15 levels of nested $derived with async calls mixed in).
You cannot, or should not, abstract async away from the user, but maybe that's your whole point.
Yeah, as a framework author those consistency goals are a very high priority so that users don't have to worry about it. Runes (Signals) are exactly that for synchronous updates, and as you explore the new async features in Svelte you will see a similar tie-in in the UI: you won't see state update until the async values derived from it do. The part I don't cover in this article is the other side of the representation. In Svelte they have eager, and in Solid we've been building a number of helpers to basically allow showing state in both its current and speculative forms. This is all very fresh so I expect shifts over the next couple years as it all unrolls.
So the idea isn't to hide async but better adapt it to our declarative representation of state.
If I'm interpreting you right, you propose to wait for all dependent $derived (which may include async, e.g. fetch calls) to be resolved before actually updating the $state.
If so, I have the following problem with that.
The original $state is bound to all the dependents, which the component/state class doesn't know. I see you could use "eager" to actually present the (maybe) coming change, but that's extra work, and it introduces more problems, like the user interacting with the state again before the first roundtrip has finished.
Since the state change is usually coming from UI interaction, this will make things unpredictable and/or sluggish (if you wait for dependents to be resolved).
My point might actually be "why bother". Can't you just treat the $derived as generally async? Update $state immediately, call and forget all $derived, which call their $derived ... and so on. When a $derived is called before the previous call has finished, just throw the previous execution away and call it again.
You then have the problems with side-effects inside derived, bluntly speaking, don't do it.
Sorry for the Svelte wording, that's what my brain is currently wired to. And sorry again if that's naive.
Maybe you could even make the $derived communicate that they are "pending", so that you don't have the mismatch (your 2 * 2 = 2 example would become 2 * 2 = [calculating]). This information could be synchronously communicated to all dependents (in a compiled world). Actually, when I think about it, it's probably a similar pattern to the SvelteKit RemoteFunctions (where everybody acknowledges their asyncness).
Don't worry about Svelte wording. To clarify, this is how Svelte's new async works as well. The only difference is it only obeys that in the UI, and instead tears in places like event handlers, which I find inconsistent. The actual UI stuff isn't a problem. I realize I probably just need to write an article that really explains the model.
In Svelte they have $eager and I think.. effect.pending() or something like that.. In Solid 2.0 we have latest, and isPending(() => any expression). So it is very similar, except we can more granularly query pending state. There are other differences in terms of entanglement conditions and the mechanism for capturing effect dependencies early. They afford split effects by doing some compiler stuff and not being colorless. But the same underlying physics is involved that would make you want to defer commits.
It's actually pretty interesting how similarly the signal mechanics for these systems ended up developing, mostly independently. Svelte 5's lack of wrappers was definitely a place where I was inspired by their work, but otherwise these were two threads developed mostly independently over the late-2024 to mid-2025 time period. I just had other things I needed to do for Solid 2.0 or this would have been released by now.
Remote Functions, or Server Functions as we've called them in Solid for the last 4 years, push toward these things being known, but are sort of orthogonal. They likely led Svelte to exploring this direction more thoroughly (as they did us), but the same could be true of API endpoints generally. I think the custom serialization patterns (i.e. supporting more than promises, like streams, etc.) do direct the focus a bit more.
The flaw in React is the rerender of a publish/subscribe pattern. The publish/subscribe pattern should be outside of its own dependencies.
useEffect is essentially the subscribe pattern.
useState or any other data input is the publish pattern.
A few useEffects is fine, but realistically any decent-sized monorepo will exceed this. So to overcome this perpetual problem we use useRefs and useCallbacks to detach the publisher from the rerender. It looks ugly, but we accept this flaw because React's encapsulation of components has made it the easiest front-end library.
What you are describing is neither easy nor simple. At this point component encapsulation is everywhere. At the point you need to break the update flow the way you are describing, you'd be better off reversing the model, i.e. components don't rerun. People need to look outside of React. I think it would help, even if continuing to use it, to find better patterns.
Same story with dependency arrays. Teams hate writing them, but large React codebases quickly become unpredictable without them. In one of our projects we tried hiding dependencies behind custom hooks. Six months later debugging effects became painful because nobody could tell what actually triggered them. Explicit dependencies looked ugly, but they made the update graph visible.
I think side effects are where this most often manifests. Pure computations usually require all their deps and the path is pretty clear. Effects tend to close over a different world, so to speak, which is why the separation clarity is more important.
A few ways of looking at things 🤝