Why React is surprisingly the best model for LLM workflows
3 min read · Mar 24, 2025
Building LLM Apps Today Still Sucks
If you’ve tried to build anything beyond a simple, single-turn chat interface with LLMs, you know the pain. The current ecosystem is a mess:
- Everything is Python-first. JavaScript and TypeScript are eating the world, powering frontends and backends alike. Yet when it comes to building AI apps and agents, the frameworks are stuck in an over-abstracted, global-state Python world: no first-class JavaScript or TypeScript, and no concept of declarative, repeatable components.
- Current workflow abstractions are wrong. The popular frameworks force you into static graph definitions that are inflexible and impossible to reason about. I’ve lost too much time standing at a whiteboard trying to understand what my own code is doing.
- Global state is a nightmare. Backend devs have built crazy Rube Goldberg machines where you have to forward all of the state to every part of the workflow (the sketch after this list shows the pattern). That doesn’t work when you’re experimenting with your agent and need to make changes quickly.
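
To make that last complaint concrete, here is a hypothetical caricature of the pattern, not taken from any particular framework: every step receives one shared state object and returns an updated copy, so the shape of that object becomes a dependency of the entire workflow.

```typescript
// Hypothetical caricature: every node takes (and returns) one big shared
// state object, whether or not it needs most of it.
type WorkflowState = {
  userInput: string;
  retrievedDocs?: string[];
  draft?: string;
  critique?: string;
  // ...and it keeps growing with every feature you add
};

async function retrieveNode(state: WorkflowState): Promise<WorkflowState> {
  return { ...state, retrievedDocs: [`doc about ${state.userInput}`] };
}

async function draftNode(state: WorkflowState): Promise<WorkflowState> {
  return { ...state, draft: `draft built from ${state.retrievedDocs?.length ?? 0} docs` };
}

// Reordering steps, or changing what one of them needs, means touching the
// shared shape that every other step depends on.
```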
It might sound surprising at first, but React is emerging as an unexpectedly great fit for LLM-based workflows — especially when you’re building tools, apps, or UIs that leverage large language models.
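
Here is a sketch of the alternative. The names below are hypothetical and `callLLM` is a stub you would wire to your own provider; the point is that each step is a small component-like function that receives only the props it needs, and a parent composes them the way a React tree composes children.

```typescript
// Hypothetical sketch: each workflow step is a component-like async function.
// Props in, output out: no shared mutable state, no static graph definition.

// Stand-in for a real model call; wire this to whichever provider you use.
async function callLLM(prompt: string): Promise<string> {
  return `LLM response to: ${prompt}`;
}

// A "component" only receives the props it actually needs.
async function Summarize({ text }: { text: string }): Promise<string> {
  return callLLM(`Summarize in two sentences:\n\n${text}`);
}

async function ExtractActionItems({ text }: { text: string }): Promise<string[]> {
  const raw = await callLLM(`List the action items, one per line:\n\n${text}`);
  return raw.split("\n").filter((line) => line.trim().length > 0);
}

// The parent composes children declaratively, the way a React tree does,
// passing each child only the props it needs.
async function MeetingNotesWorkflow({ transcript }: { transcript: string }) {
  const [summary, actionItems] = await Promise.all([
    Summarize({ text: transcript }),
    ExtractActionItems({ text: transcript }),
  ]);
  return { summary, actionItems };
}

MeetingNotesWorkflow({ transcript: "raw meeting transcript..." }).then(console.log);
```

The payoff is the same one React gave UIs: each piece is independently testable and reusable, and changing one step doesn’t ripple through a shared state object.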