It has been a long time since I last wrote React. Recently, I had a final project that called for a report, and I decided to get a little fancy: I built a presentation in React. Animations, canvas, the whole thing. Trying to do that purely in React after being away from it for a while was bound to be painful. So I figured I would write a note, mostly for myself, and leave something useful for anyone else who might need it.

Let me put on some armor before I begin. I know the frontend community has a long tradition of holy wars. Even though this piece may read like I am wandering in with a lit cyber lighter, I have no intention of joining the fight. I am just someone who writes code and goes home. If you think I have gotten something wrong, you are probably right. I am an idiot, and idiots do not care. Please do not make a scene on my turf.

What React Actually Is

This is a big topic. If you go to certain frontend job interviews, the interviewer, motivated by some peculiar sense of superiority, may expect you to dissect implementation details and recite source internals, mostly as a vehicle for demonstrating their own erudition. But anyone who has actually shipped things will agree: what matters is not rote memorization. It is the mental model. And the mental model is something even very experienced React developers often fail to internalize.

React's core mental model has two parts: the internal React environment (State and the virtual DOM), and a large collection of platform-specific external things. On the web, the most common external entity is the DOM object.

What React does is bridge the two. More concretely, React binds entities inside the virtual DOM to the external environment through References. Developers use Effects (side effects) to push React's internal state out to the external environment. The external environment sends data back via asynchronous callbacks, such as event handlers and async message streams.

React is something like an octopus gripping the environment, sending and receiving information through its bindings. From this, you get UI = f(state).

That is it.

With this mental model in place, you start to notice how deeply strange a lot of common development practices are. For instance, some developers understand Effects as a tool that says "when variable X changes, run some function." That understanding contains two mistakes. First, X is not a plain variable; it is a piece of state, because values not managed by React cannot reliably trigger re-renders of the virtual DOM. Second, it is not "run some function" in the general sense; it is specifically "synchronize React's internal information to the external environment," that is, an "impure" side effect.

A lot of developers who do not quite get this will watch some Prop change and then manually update state inside the component to keep the two in sync. That almost always ends in timing bugs. The correct approach is to derive the value during render instead.
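To make that concrete, here is a minimal, React-free sketch of the "derive during render" idea. The names (`Item`, `deriveVisible`) are hypothetical; inside a real component you would call the helper directly in the render body instead of mirroring the prop into state with a `useState`/`useEffect` pair.

```typescript
type Item = { name: string; active: boolean };

// Derivation is a pure function of the current props/state. Because it is
// recomputed on every render, it can never drift out of sync the way a
// prop-to-state copy can.
function deriveVisible(items: Item[], showActiveOnly: boolean): Item[] {
  return showActiveOnly ? items.filter((i) => i.active) : items;
}

// The anti-pattern, for contrast (sketched as comments only):
//   const [visible, setVisible] = useState(items);
//   useEffect(() => { setVisible(filter(items)); }, [items]); // lags one render
```

The derived value needs no Effect at all; the Effect-based copy is always at least one render behind, which is exactly where the timing bugs come from.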

Another classic headache is manipulating Canvas from inside React. You end up with enormous Effects of unclear timing tangled around each other until you debug yourself into oblivion. The tricky part is that Canvas has its own internal state machine, and every operation you perform on it is a side effect entirely unrelated to React. The best practice is to keep all of that outside of React and expose only two interfaces for communication: an Effect that pushes from React to Canvas, and a manager stored in a Ref that holds the Canvas Reference and dispatches events.
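A skeletal version of that split might look like the following. Everything here is a hypothetical sketch: `ChartManager` owns the canvas-side state machine and knows nothing about React. In a component, you would keep one instance in a `useRef`, have a single Effect call `setData` whenever the relevant state changes, and let canvas event listeners report back through the `onPick` callback.

```typescript
type DrawFn = (points: number[]) => void;

// Lives entirely outside React. The draw function is injected so the
// manager can be exercised without a real canvas.
class ChartManager {
  private points: number[] = [];

  constructor(
    private draw: DrawFn,                    // pushes pixels to the canvas
    private onPick: (index: number) => void, // reports events back to React
  ) {}

  // Called from the single React -> Canvas Effect.
  setData(points: number[]): void {
    this.points = points;
    this.draw(this.points);
  }

  // Called from a canvas event listener (click, pointermove, ...).
  handleClick(index: number): void {
    if (index >= 0 && index < this.points.length) this.onPick(index);
  }
}
```

The Effect and the callback become the only two doors between the worlds; every other canvas mutation stays out of render entirely.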

Libraries like react-three-fiber or react-konva handle this for you. If you choose not to use them (I personally find these libraries somewhat irritating), you have to sort out the right development paradigm yourself.

On the topic of References: some developers package State and Callbacks into a Ref to expose to a parent component, which then reaches inside the child via useRef to retrieve values, achieving what they call "reverse communication." This is a clear anti-pattern, because References are not meant for this purpose. More fundamentally, "reverse data flow" has no place in the React conceptual model at all. What flows upward should only ever be event callbacks.
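For contrast, here is a sketch of the sanctioned direction of flow, with entirely hypothetical names: data travels down as props, and the only thing travelling up is an event callback. The parent owns the state; the child never exposes internals for anyone to reach into.

```typescript
// What the child conceptually receives: values down, a callback up.
interface RowProps {
  label: string;
  selected: boolean;
  onSelect: (label: string) => void; // the only upward channel
}

// Simulates the child firing its event.
function fireSelect(props: RowProps): void {
  props.onSelect(props.label);
}

// The parent owns the state; children merely report events into it.
function makeParent() {
  let selected: string | null = null; // parent-owned state
  const rowProps = (label: string): RowProps => ({
    label,
    selected: selected === label,
    onSelect: (l) => { selected = l; }, // state changes in one place only
  });
  return { rowProps, getSelected: () => selected };
}
```

Nothing here is React-specific, which is rather the point: the "callbacks up" contract is just ordinary function passing, no Ref required.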

SSR Is a Strange Idea

I did not say it was a bad idea. I said it was a strange idea.

One of SSR's selling points is that it avoids shuttling data back and forth like a ping-pong ball between the client and server. The server renders the HTML and sends it down; the client then "hydrates" the state. Done.

But if we return to our earlier discussion of what React actually is, we notice something peculiar: React rendering involves an "internal environment" and an "external environment." The internal one is React itself, written in JavaScript, which is fine. But if you want the server to produce something that looks exactly like what the browser would render, you also need to provide an equivalent "external environment" on the server. And to allow the client-side React to match DOM nodes to their corresponding "external environment" entities, you must ensure that everything in the "external environment" is serializable.

That is how things go wrong.

A common example is "isomorphic Fetch," where you simulate an identical Fetch API on the server to accomplish data fetching.

But there are many things in the world that you cannot make behave consistently between server and client:

  • Browser APIs (window, document, localStorage)
  • User device information (screen size, touch support, User Agent)
  • Time-related values (timestamps, dates, Date.now())
  • Random numbers (Math.random())
  • Canvas state, WebGL contexts
  • WebSocket connections, timers

And while you may avoid the ping-pong between client and server, those "isomorphic Fetch" implementations just move the ping-pong to happen between your SSR server and your backend server. You can make it asymmetric, of course: have the SSR server call an RPC to pull data directly. But then you are writing two versions of everything. Exciting.

And Node.js-based SSR performance is genuinely poor. Not only is it slow, but it demands hefty memory on the server side, because the Node.js runtime is extraordinarily memory-hungry.

At this point, the "experienced" developer will object: But we have Edge Workers! Why must the SSR runtime be Node.js? Is a bare V8 not enough?

The brutal reality is that every Edge Worker runner looks different. Every vendor has its own implementation, its own SDK, its own Adapter. There is TC55 trying to manage standards, but one glance at the sparse standard document tells you how much "standard" those "standardized solutions" actually contain.

And when you move rendering to edge servers, you then have to figure out how they communicate with your data center. The performance you gained from the edge gets eaten back up by the communication overhead. You will say: we can use distributed databases! Too much flour, add water; too much water, add flour. After all that frenzied effort, you look up and realize you started out trying to solve a "frontend rendering performance" problem. Even a Gundam would feel outclassed.

The Boundary Problem of Data and State Ownership

Let us climb into a time machine and go back twenty years, to the era of JSP, PHP, and ASP, when nearly all dynamic content was produced by the server. Later came fancier things: Ruby on Rails, Django, still working from the same principle. Pure Server Side Rendering. Wow, amazing.

Their workflow was beautifully simple and uniform: a request arrives, query the database, fill a template, return an HTML string, hand it to the browser for rendering.
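That whole pipeline fits in one function. A toy sketch of the request-to-HTML flow, with hypothetical names:

```typescript
type Row = Record<string, string>;

// One request, one pass: query, template, done. No hydration step exists,
// because the server's involvement ends when the string is returned.
function renderPage(
  queryDb: (sql: string) => Row[],
  fillTemplate: (rows: Row[]) => string,
): string {
  const rows = queryDb("SELECT title FROM posts"); // 1. query the database
  return fillTemplate(rows);                       // 2. fill a template, return HTML
}
```

Swap in PHP, JSP, or ERB syntax and the shape is identical; the function signature is the entire contract between server and browser.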

But why does React's SSR feel "strange," while PHP's approach felt "normal"?

The reason, as we just discussed, is that the traditional approach never tried to simulate a browser on the server. It did not need window. It did not need document. It certainly did not need some twisted "isomorphic Fetch." The server's only job was to turn data into HTML.

There was no "hydration," because in the traditional model, the moment HTML arrived at the browser, the server's mission was complete. If you wanted interactivity, you wrote a separate JavaScript snippet, grabbed DOM elements via document.querySelector, and bound events manually.

In that paradigm, React's internal environment (State and virtual DOM) and external environment (DOM) and the whole octopus metaphor simply did not exist. The server produced documents, the browser consumed them, and JavaScript patched things on top.

Between server and client, there was a clear and tacit agreement: whoever was the authoritative source of truth for a given piece of data was responsible for it.

In the traditional stack, state flowed in one direction. When you clicked a button and triggered a page reload, all client-side state was instantly wiped. All authority returned to the server. The server re-queried the database, decided what the UI should look like, and once again pushed its conclusion (HTML) to the browser.

No "state synchronization." No "isomorphism." No "hydration." Because there was only one source of truth, and that was the server.

The entirely new problem we have faced over the past decade is this: what we want now is an "application," not a "document." Document-based solutions can no longer support the complexity we are dealing with.

We want millisecond responses. We want to keep typing when the network drops. We want to drag an element without triggering a page reload. This means we must build our own state management system on the client side. And so the source of truth fractured: part of it lives on the server (the database), and part of it lives on the client (memory).

That is the moment when React SSR, this "strange idea," appeared on stage. It tries, through a fantastically complex mechanism, to simulate a client-side runtime on the server, so that the server can "rehearse" the client's state logic before sending HTML. It wants one codebase where a single piece of logic seamlessly switches between two completely different sources of authority: server and browser.

You might ask: is there a way to let the server and the client each own their respective source of truth? That is a good question, and it is worth pursuing.

Such an approach properly respects the fact that server and client live in fundamentally different internal and external environments. The server's external environment is the database; its internal data is the result of queries and computations. The client's external environment is the browser; its internal environment is the data that determines the UI. For a single request, the server's internal state generally does not change in response to external async events. It is essentially one-shot.[1] The client, on the other hand, must respond to a continuous stream of events, updating and mutating over time, so it must maintain a living, dynamic state.

[1] I know there is another approach: abandon the fantasy of mirroring state on the client, and let the server drive the DOM directly, as in HTMX or Phoenix LiveView. But I have always felt that approach is a bit unhinged.

Congratulations! You have just discovered Server Components.

Server Components Are Even Stranger

React's answer to this observation is to stitch the two worlds together, using 'use client' or 'use server' to annotate which source of truth owns a given piece of data.

And as we noted earlier, server-side data is generally one-shot, so you cannot use Hooks there, and you cannot have local state. If you genuinely understand the internal and external environment distinction, the overall system design does make sense. I think the Server Component direction is actually quite promising. What I cannot understand is why, under React's grand narrative, it has been assembled into such an incomprehensible mess.

On the surface, Server and Client Components can be nested inside each other, making a complicated system seem elegantly unified. But that peace is an illusion. The advertisements, the keynotes, the breathless articles, and the self-satisfied advocates have not told you about the enormous mental burden this creates.

The Complex Property

When you write code in a plain React application and pass a Prop from a parent to a child component, the mental model is simple: you are moving data around inside the browser. You can pass strings, objects, callback functions, or even class instances with all their internal methods. Everything feels natural.

But in a world where Server Components and Client Components intermingle, passing data becomes something else entirely. It is no longer a trivial local transfer. It is crossing a physical network boundary spanning the two ends of the internet.

When you nest a Client Component inside a Server Component and try to pass Props, those Props must travel across the network. This means they must be serializable. You cannot pass functions. You cannot pass complex object instances. You cannot pass anything that cannot be understood by JSON.stringify (or more precisely, React's own Flight protocol serializer).
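The failure mode is easy to demonstrate with plain JSON. This is a simplification of React's actual Flight serializer (React surfaces the function case as an explicit error rather than dropping it silently), but it loses information in the same places:

```typescript
// Props as you wrote them on the server side.
const props = {
  title: "ok",            // plain data: survives
  createdAt: new Date(0), // arrives as an ISO string: the Date-ness is gone
  onClick: () => {},      // functions cannot cross the wire
};

// Roughly what arrives on the client after the boundary.
const overTheWire = JSON.parse(JSON.stringify(props));
```

The constraint is the same either way: only serializable values cross, and everything else has to be restated as a 'use client' boundary or a Server Action.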

Your code appears to live in the same file system, even in the same component tree, happily nested together. But in reality, this is a schizophrenic octopus. Its upper half is on the server; its lower half is in the user's browser.

As a developer, you are now compelled to run two completely independent runtimes simultaneously in your head. Every line of code you write demands a mental judgment: am I currently in a server context or a client context? Can this component use Hooks? Can this piece of data cross the boundary?

You think you saved yourself a few API Endpoints. But what you actually did was take the clear interface contracts that used to live in your routing layer and scatter them as implicit, obscure, and highly error-prone 'use client' boundaries throughout your entire codebase. The API did not disappear. The framework just forced it into the crevices of the component tree.

Enormous Mental Overhead and the Question of Trade-offs

Let us do the math.

One of Server Components' biggest selling points is keeping heavy dependencies (say, a multi-megabyte syntax highlighting library, or a massive Markdown parser) on the server, reducing the JavaScript bundle sent to the client.

That sounds nice. But what is the cost?

First, you have forcibly split a single module graph into three: the Server module graph, the Client module graph, and the shared module graph between them. You can no longer refactor code as freely as before, because moving a pure function from one file to another might inadvertently cross that invisible serialization boundary, leaving you scratching your head at a pile of incomprehensible compiler errors.

Second, debugging becomes a nightmare. When something breaks in a traditional app, you check the browser's Network tab to see if an API call failed, or look at the Console for error messages. Now? The error might happen during server-side rendering of a Server Component. It might happen while the Flight protocol serializes the entire tree into a mixed binary-and-string blob. Or it might happen on the client side, while React tries to hydrate and reassemble the DOM from that garbled payload.

For the sake of "better user experience," the engineering team pays a geometrically multiplied complexity cost. More absurdly, the vast majority of real-world CRUD backends, dashboards, and simple content sites probably never needed any of this extreme optimization in the first place. You spend weeks wrestling with RSC boundary issues and third-party library incompatibilities. Meanwhile, with ordinary React plus a simple skeleton screen and a bit of progressive disclosure animation, you could have achieved 95% of the same result.

The Original Sin and Arrogance of JavaScript

This may be Server Components' most uncomfortable aspect: it locks your entire backend stack into JavaScript.

Purely personal opinion: I am inclined by default to oppose using Node.js as a backend, unless you can fully justify the necessity (say, you are building a simple blog, and yes I am laughing). The reason is simple: Node.js's backend ecosystem, notoriously, after years of crawling through the mud, still does not have a remotely decent ORM. If your task is simple enough to run on an Edge Runtime without Node.js, and you have no complex database work, then perhaps that is another matter (like building a blog, and again yes, I am laughing).

With the traditional "frontend React plus backend API" model, your backend can be blazingly fast Go, or Python with its seamless AI and scraping integrations, or Java with its rock-solid ecosystem. The backend team can do whatever they want; the frontend only needs to care about data structures.

But what are the prerequisites for Server Components? Your server must be capable of executing React components and generating React's proprietary Flight protocol format. This requires a Node.js or Edge V8 runtime on the server.

What if your data and core business logic live in Go microservices? You will probably need to insert a middleware layer between frontend and backend. A simple architecture where the client connected directly to the backend now has three tiers.

Not only are you maintaining an extra runtime, but you have stepped on the exact same rake we discussed earlier: "the SSR server and the backend server playing ping-pong."

And then there are the security implications. Since Flight is a serialization format for transmitting component instructions and data between server and client, it inherits all the headaches of deserialization. For evidence, see the string of perfect-score CVEs that Vercel and React conjured up together. Because Server Actions must be triggered from the client while passing complex state, the underlying protocol's payload validation is one small mistake away from letting an attacker execute arbitrary code on your Node.js server.

On one hand, I am completely unsurprised that this happened in JavaScript, because it is JavaScript. On the other hand, deserializing data is one thing; deserializing business logic is quite another. In the current implementation, while no directly executable code is transmitted, the Server Component protocol is no longer a pure data transport. It also carries component structure, reference relationships, and invocation capabilities. This makes it a rare and peculiar composite description. Such a design, in engineering terms, takes on the character of business logic, blurring the boundary between data and execution, and thereby carrying inherently greater security risk.

Following the Money: The Entanglement of Interests

Given such steep mental costs and such absurd architectural constraints, you might wonder: why are there so many people in the community, so many prominent voices, zealously promoting all of this? Why has the whole thing been dressed up to look like the salvation of frontend development?

This requires a look at the commercial logic behind the technology.

The React team conceived of these ideas inside Meta (Facebook), but Meta has its own unique and enormous problems.

We don't really use SSR at Facebook, which is why this has come last.

by Andrew Clark, May 2017

So the React team needed a brave volunteer to take this experiment into production. Can you guess which bitch got volunteered? None other than that evil black triangle, Vercel!

Do not misunderstand me. I am not peddling conspiracy theories. This is a perfectly legitimate and rational business decision. But you have to see clearly what Vercel's business model actually is: they are a cloud provider selling server and edge computing resources (Serverless and Edge Functions).

If everyone writes pure client-side SPAs or generates static sites, the build artifact is just a pile of HTML, CSS, and JavaScript files. You can throw them on GitHub Pages, Cloudflare Pages, or cheap and cheerful AWS S3, and it barely costs any server compute at all.

But how can that work? If no CPU burns, what is there to sell?

Server Components and Server Actions take things that could have been computed statically (or computed in the user's browser on the user's electricity bill) and forcibly pull them back to the server. Every interaction, every streaming page update, consumes Serverless compute time.

How about that.

Pushing SSR and RSC as the "default option" for all developers dramatically increases the demand for compute resources. The entire ecosystem is swept along by this narrative. Developers, in order to shave a few milliseconds off a first-paint problem that may not even affect them, willingly rewrite their codebases, stumble through innumerable compatibility landmines, and ultimately deploy their projects to pay-per-request cloud functions.

If this work had been done by someone other than Vercel, if the React team could have taken greater responsibility as open-source maintainers and formalized the communication protocol as an RFC-governed standard, overseen by a committee, things might have looked very different. Even from within the React codebase, one could imagine extracting Server Components into a standard template sequence, generating something like a shared Protobuf protocol for the backend to fill in, combined with SSG, all without chaining the entire ecosystem to JavaScript.

A long time ago, this kind of thing was called "granular caching" in PHP-land: partial HTML output cached in place, with the full HTML assembly reading from disk on a cache hit, skipping both the database query and the template pass. And just like Server Components, the output was stateless and non-interactive. The only difference might be whether you could embed dynamic interactive content inside. But can that be solved with something other than Server Components? Obviously yes.
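The PHP-era trick described above is small enough to sketch in a few lines (all names hypothetical): rendered HTML fragments are cached by key, and a hit skips both the query and the template pass.

```typescript
class FragmentCache {
  private store = new Map<string, string>();

  // `produce` runs the query + template only on a miss.
  render(key: string, produce: () => string): string {
    const hit = this.store.get(key);
    if (hit !== undefined) return hit; // cache hit: serve the stored HTML
    const html = produce();
    this.store.set(key, html);
    return html;
  }
}
```

As with a Server Component's output, what comes out is stateless and non-interactive; interactivity has to be layered on separately, which is precisely the boundary the paragraph argues should stay explicit.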

I have always believed that server-side things should be written honestly with server-side toolkits. A Linter rule requiring a special naming prefix for such code would be even better. When dynamic content is needed, the client side should declare explicit Slots to be filled, rather than this everything-in-one-pot approach where it all looks homogenously uniform but is anything but. Yet React has shown remarkably little backbone here:

it's not a stable format today because it's not clear if many people would use one and we want to leave room to improve the format as new optimization opportunities arise.

Sophie Alpert

I sincerely hope some community wild-card implementation eventually cleans up this mess. XHP-JS was clearly more pragmatic, though that project's grave is now thick with weeds. At least Strike for Golang and rari for Rust are still alive, even if they have not yet made a name for themselves.

After all, the approach of not needing to fully simulate a browser environment on the server looks very much like the right answer. Static information fragments have no state; they can emit static render output directly. Perhaps with a little magic from the React Compiler, one could even establish a transparent hydration-skipping mechanism for further performance gains. At the very least, it would return us to the thinking of twenty years ago: humble, but working.

Again: the Server Component concept genuinely solves difficult problems. But I am not satisfied with the answer that React and Vercel hand in hand have provided, because I see no sign of the responsible, ecosystem-steward attitude that the situation demands.

The Fragmentation of the Community

React 18 delivered a massive bloodbath to the ecosystem, with the CSS-in-JS space taking the worst of it. Before React 18, we were accustomed to solutions that injected styles dynamically at runtime. My personal favorites were Styletron and Griffel. Their logic was simple: generate styles and construct stylesheets during component rendering. This fit React's philosophy of putting everything into JavaScript, and the experience was smooth and natural.

But when the Fiber architecture and Concurrent Mode arrived, this approach collapsed entirely. Concurrent Mode's core concept is "interruptible rendering": React may begin rendering a component, pause mid-way to handle a higher-priority task, and potentially discard the render and start over.

This effectively sentenced dynamic CSS-in-JS to death. Style injection order determines CSS priority. When the rendering process becomes unpredictable, interruptible, and potentially re-executed, the order of style injection becomes completely chaotic.

To adapt to this architecture, many libraries had to undergo breaking rewrites or introduce extraordinarily complex patches. A large number of smaller solutions simply died, or became "zombie libraries": they still run, but they cannot support React's new features.

What survived were statically analyzable approaches. You think you are writing JavaScript, but you are forced into a mentally split state where you can no longer treat styles as part of JavaScript, much like the dissociation SSR and Server Components created.

Or just embrace Tailwind, my friend! I just love writing CSS inside a class list! Welcome to the new world!

Ugh.

You can imagine how this fragmentation inevitably spreads into the component library ecosystem.

In the past, a frontend developer choosing a UI library considered: how does it look? Is it compatible with my designer's work? Is the API intuitive?

Now, on top of all of that, you need an entire additional set of evaluation criteria: does this library support RSC? Can it run inside a Server Component? Will it cause hydration errors in streaming SSR?

A library may work perfectly in traditional client-side React but require layers upon layers of 'use client' wrappers to barely function in an RSC environment, or it supports streaming but not certain concurrent features.

A strange thing has risen from the east: inside React, an unofficial and fragmented "compatibility matrix" has taken shape. The browser world has caniuse.com, which tells you clearly what API is available in what version. The React world has no official, unified equivalent. You get fragments of answers from scattered blog posts, GitHub Issues, and the influential figures of the tech world.

By the way, in my experience, technically strong people tend to have terrible aesthetic taste, and they will almost invariably recommend some collection of dead-white-background components. Your designer, upon seeing these options, will feel an urge to unscrew your head.

This fragmentation can in some sense be understood as an offloading of complexity.

The React team, in order to solve the 1% of extreme-scale problems faced at Facebook, introduced Fiber, Concurrent Mode, the Flight protocol, and RSC. But these capabilities are not activated through a simple toggle. They silently reshape the operating assumptions of the entire framework.

Interestingly, remember what we mentioned earlier? The React that Facebook actually uses is not the same as the open-source React. They have their own internal solutions.

The result is that library authors building general-purpose tools had to rewrite 100% of their code to accommodate the extreme 1% use case. And developers at the end of the chain, before they can enjoy any incremental performance improvement, must first thrash through endless compatibility pitfalls.

We thought we were embracing "the future." In reality, we used the engineering stability of the entire community to pay the tab for a handful of giant companies' extreme edge cases.

This is the legacy of this "architectural revolution": a world shattered into pieces, riddled with invisible boundaries, requiring an elaborate mental framework just to get started. And all of it has been packaged and sold as the inevitable path to "modern frontend development."

Exhaustion

In summary, here are the questions I have thought about until my head hurts and still cannot answer:

  1. What tangible benefit does Concurrent Mode's interruptible rendering actually deliver in a single-threaded environment, and what are its genuine use cases?
  2. Given that Web Workers can handle compute-heavy tasks and virtualization can address DOM scale problems, is the enormous complexity cost of the Fiber architecture actually justified?
  3. Do SSR's core value propositions (SEO, performance, first-paint experience) hold up to scrutiny, or would SSG plus CSR be sufficient for the vast majority of scenarios?
  4. To what extent is the React team's push toward SSR and RSC driven by technical judgment, and to what extent is it shaped by two non-technical factors: the inability to run experiments inside Meta, and the commercial interests of Vercel?
  5. Does the growing complexity of the React ecosystem correspond to its actual benefits, or is the entire community bearing an unnecessary cost for an architecture that primarily serves a small number of extreme-scale scenarios?

For a long time, my love of React came from its simplicity. The philosophy was elegant: it abstracted complex DOM manipulation into a pure mapping relationship and a few basic principles. The most captivating thing about this philosophy was that you did not need to be a compiler expert or a low-level architect. Keep the philosophy in mind, follow best practices, and you would very likely produce high-quality code.

But this elegance has a price. Because philosophy cannot be quantified by a Linter, it requires the developer to keep their brain "always on" with every single line of code they write. Linters will catch simple things like dependency arrays, but more complex data flow management and state design are beyond the Linter's reach. Many developers with their brains in standby mode frequently produce timing bugs or find themselves needing to push data in reverse. This highlights the enormous barrier that comes with a framework whose core is a philosophy.

I used to be able to talk myself through the exhaustion. React forced me to think through my business logic, to straighten out the direction of data flow. As long as I followed the philosophy, my codebase was maintainable. In that sense, my mental overhead was being spent fighting the complexity of the business domain, not fighting the tool itself. And the React Compiler is one of the few genuinely good things I have seen: it really can free developers a little from the mental burden of the philosophy.

This is also why I once praised Create React App. It represented a rare and precious restraint: it told developers that the complexity of the build toolchain should not be your problem. I cannot see any real business problem that a custom-hacked Webpack configuration actually solves, beyond giving developers a false sense of accomplishment and a hollow feeling of superiority.

But now the exhaustion has become something different. It has become the exhaustion of being tricked.

To solve the extreme-scale problems faced by Facebook or Wix, affecting maybe 1% of the world. To make a Node.js runtime running on an edge server run just a little bit faster. To push Vercel's compute billing a few digits higher. I am being compelled to wrestle with problems I do not care about.

It forces us to confront an ecosystem shattered into pieces, to learn a patchwork of solutions designed to compensate for the "original sin of hydration." It tries to use a single unified framework to forcibly smooth over the irreconcilable difference between "server-authoritative state" and "client-persistent state." It refuses to acknowledge the boundary's existence, choosing instead to increase complexity to paper over it.

For a long time, React positioned itself as a "library" and handed off the problems a library cannot solve to the community. The community accepted this tacit agreement, and in my view, that dynamic was a vital source of React's vibrant ecosystem. But now React is stepping out of its original lane and reaching into "framework" territory. React is trying to become a god. It expects its runtime to take over everything. It expects its model to solve problems ranging from a simple blog to a massive collaborative tool. But I am human, and so are the other developers in the ecosystem.

Watching this spectacle unfold, I only feel tired. I am weary of trying to find meaning inside a narrative of "optimizing for the sake of optimizing."

So I have jumped ship to Svelte. Some friends I used to write React with have jumped to Solid.js, others to Preact, others to HTMX. All of these libraries or frameworks are more honest about their own boundaries. None of them tries to solve every problem. To me, that is a virtue. Shopify is also working on Remix V3, and as an ordinary bystander munching on melon seeds, I am curious to see what they come up with.

Astro from next door also smells enticing. It draws a clear line around content that does not need hydration, which is a smarter and more pragmatic solution. Following that thread further, issues like Bundle Size and First Contentful Paint may eventually find much lighter-weight treatments.

I am not very good at accepting whataboutery, the style of argument where, when your own religion cannot answer a question, you pivot to attacking other tools for being imperfect. Everyone knows every house has its own difficult sutra to recite. All the alternatives mentioned above have their own troubles, and may share some of the pitfalls described in this article. But at least none of them has made me "tired in such an evenly distributed and comprehensive way" the way React has.

Of course, to prevent the flames of war from spreading, we could drag out one more culprit: the WHATWG, an organization that does not appear to have been particularly busy lately. In the transition from the Web as Documents to the Web as Applications, they did contribute a fair amount of genuinely useful work. But for solving the current mess, of which the framework war is a major front, Web Components are clearly not the right answer: they fall well short, and nobody cares about them anyway.

What a sad story.

No solution lasts forever. The interesting thing about the JavaScript ecosystem is precisely its low barrier to entry, which has produced a spectacular explosion of wheels and libraries. A million new wheels appear every day. Every second, someone is attempting to define the next "standard." In this high-velocity, trigger-happy era, we do not need a "god" who tries to solve everything. We just need, whenever the next problem surfaces, some wheel-building idiot to hit the bullseye and hand us something simple, useful, and not exhausting.

For now, my only remaining hope for React is this: please do not become a piece of pure garbage like Windows 11, crushed under the weight of everyone's competing interests. (Cue the soprano voice singing praises to Copilot.)

Finally, a word for the "holy warriors" on social media who zealously promote RSC and Concurrent Mode: if what you are building is nothing but a dead-white, aesthetics-free personal blog, you have no right to talk about the supposed "benefits" of these shiny new things. Because you have never set foot on the battlefield where these weapons are actually needed. You are just swinging the most expensive heavy ordnance at a weed by the roadside, and then marveling at the scattered rubble.

I genuinely do not know what Dan was thinking when he looked at this entire mess and said: "You may not like it, but React is basically Haskell." Perhaps that is what it means to be a god.