If you've built an AI agent product in the last two years, you've almost certainly done this: opened a blank file, created a messages array, written a streaming fetch loop, and started building a chat UI from scratch. Again.
You're not alone. We've talked to dozens of frontend engineers at AI startups, and they all describe the same experience. The product requirement is clear — build an interface for our agent. The components that interface requires are well-understood. And somehow, every team builds every component themselves, from zero.
Look at any production AI agent interface, from a customer service bot to an internal analyst tool to a coding assistant, and you'll find the same handful of components:

- A chat thread: streaming messages, markdown rendering, tool call messages interleaved with user turns
- A tool trace showing what the agent did, with inputs and outputs
- An approval flow for high-stakes actions that need human sign-off
- Feedback capture on individual responses, without blocking the thread
- Session history for returning to past conversations

Some products add more: citation display, confidence scores. But the core is almost always the same five primitives. The variability in what agents do is enormous. The variability in how their UIs look is surprisingly small.
The obvious question: if the pattern is so consistent, why doesn't a standard component library exist that everyone uses?
A few reasons.
The problem is newer than it looks. Production AI agent interfaces at scale are a 2023–2025 phenomenon. Libraries take time to emerge after a new pattern is established. The gap is closing, but slowly.
The components feel simple until they aren't. "A chat message list" sounds like a two-hour task. Then you need streaming. Then markdown rendering with code highlighting. Then citations that link back to sources. Then proper handling of tool call messages mixed in with user messages. Then feedback capture that doesn't block the thread. Then mobile layout. Then accessibility. The estimate was two hours; the reality was two weeks.
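Even the first two requirements on that list, streaming and mixed message types, force real design decisions. Here's a minimal sketch of the data model most teams end up hand-rolling; the names (`Message`, `applyDelta`) are illustrative, not from any particular library:

```typescript
// A thread is a mix of user turns, assistant turns, and tool call
// results -- a discriminated union makes rendering each kind explicit.
type Message =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string }
  | { role: "tool"; name: string; output: string };

// Streaming responses arrive as text deltas. Each delta either extends
// the in-progress assistant message or starts a new one -- and the
// update must be immutable so the UI framework can detect the change.
function applyDelta(thread: Message[], delta: string): Message[] {
  const last = thread[thread.length - 1];
  if (last && last.role === "assistant") {
    return [
      ...thread.slice(0, -1),
      { ...last, content: last.content + delta },
    ];
  }
  return [...thread, { role: "assistant", content: delta }];
}
```

That's maybe twenty lines, and it still doesn't touch markdown rendering, code highlighting, citations, mobile layout, or accessibility. The two-hour estimate dies somewhere around here.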
Everyone thinks their requirements are unique. They rarely are. "We need to display tool call outputs" isn't a unique requirement — it's a universal one. But teams convince themselves their specific format or interaction model is too bespoke to be handled by a generic component, so they build it themselves. Usually, they're wrong.
There was nothing good to reach for. You could bolt together shadcn components and Tailwind and get most of the way there. But "most of the way" still meant two weeks of integration work, and the result looked like a hastily assembled UI, because it was.
"We rebuilt the same chat thread component three times across three different products in one year. Each time we thought we'd do it properly. Each time it took six weeks and still had bugs."
Teams that reinvent these components every time pay a real cost. It's not just the engineering hours (though two to six weeks per component adds up fast). It's the opportunity cost: every week spent on UI plumbing is a week not spent on the model integration, the product logic, the things that actually differentiate your agent.
There's also a quality tax. Custom-built chat threads typically have more bugs, worse accessibility, and worse mobile layout than they would if they'd been built by someone whose entire focus was that component. The component isn't the product's core competency, so it never gets the attention it deserves.
And there's a design consistency cost. Every team that builds their own agent UI produces something that looks slightly different from every other agent UI — not because the design should be different, but because the wheel was reinvented in a slightly different shape each time. This fragments user mental models across products and makes the whole AI agent space feel less mature than it should.
The cycle breaks when someone builds the components well enough, openly enough, and with enough community investment that teams default to reaching for them rather than building from scratch.
It happened with form libraries (React Hook Form, Formik). It happened with table libraries (TanStack Table). It happened with date pickers (react-datepicker, then Radix). In each case, there was a period where everyone built their own, followed by a period where everyone used the good open source option that emerged.
Agent UI components are in the "everyone builds their own" phase. That phase will end. The faster it ends, the faster the whole industry can focus on the agents themselves rather than the wrappers around them.
That's what we're building with Agent Interface.
Agent Interface is a component library for AI agent UIs. ChatThread, ToolTrace, ApprovalFlow, FeedbackCapture, and SessionHistory — MIT-licensed, production-ready. Get early access →
Continue reading
A breakdown of the UI primitives that appear in every production agent interface. Read →

Cognitive load, trust calibration, and the UX decisions that matter. Read →

A taxonomy of interruption patterns for AI agents that need human oversight. Read →