The six UI components every AI agent product needs

Look at dozens of production AI agent interfaces — from internal enterprise tools to consumer-facing products — and a clear pattern emerges. The same six UI components appear in virtually every one. The form they take varies. The need for them doesn't.

Here's a breakdown of each, what they do, and why they're harder to build correctly than most teams expect.

1. ChatThread

The message thread is the foundation of most agent interfaces. It looks simple: a list of messages, alternating between user and assistant. In practice, it's a minefield.

Streaming. Modern LLMs stream their output token by token. Your thread needs to handle partial responses, update in real time, and avoid layout thrash as content arrives. Getting smooth streaming right — without flickering, without scroll jumps — is genuinely hard.

Mixed message types. A real agent conversation includes user messages, assistant messages, tool call messages (with structured input/output), system messages, and error states. Each has different visual treatment. Handling them in a single unified thread requires careful data modeling.
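One way to handle both concerns above is a discriminated union over message kinds, with streaming reduced to appending deltas onto the in-flight assistant message. This is a minimal sketch; the names (`ThreadMessage`, `appendDelta`) are illustrative, not from any particular library:

```typescript
// One unified model for every message type in the thread.
type ThreadMessage =
  | { kind: "user"; text: string }
  | { kind: "assistant"; text: string; streaming: boolean }
  | { kind: "tool"; name: string; input: unknown; output?: unknown }
  | { kind: "system"; text: string }
  | { kind: "error"; text: string };

// Apply one streamed token delta: extend the in-flight assistant
// message, or start a new one if the last message isn't streaming.
function appendDelta(thread: ThreadMessage[], delta: string): ThreadMessage[] {
  const last = thread[thread.length - 1];
  if (last?.kind === "assistant" && last.streaming) {
    return [...thread.slice(0, -1), { ...last, text: last.text + delta }];
  }
  return [...thread, { kind: "assistant", text: delta, streaming: true }];
}

// Mark all in-flight assistant messages as complete.
function finishStreaming(thread: ThreadMessage[]): ThreadMessage[] {
  return thread.map((m) =>
    m.kind === "assistant" && m.streaming ? { ...m, streaming: false } : m
  );
}
```

Keeping every message kind in one array, with rendering switched on `kind`, is what makes the unified thread tractable; the reducer shape also makes it easy to batch UI updates per animation frame instead of per token.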

Markdown and code. Agents frequently respond with markdown — headers, bold, lists, code blocks with syntax highlighting. Rendering this correctly and safely (without XSS) requires a proper markdown library and sanitization pipeline.

Auto-scroll behavior. Users expect the thread to scroll to the bottom as new content arrives — unless they've scrolled up to read older content. This seemingly simple behavior requires careful event handling to feel right.
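The core of that behavior is a single decision: is the user currently pinned to the bottom? A sketch, using the scroll container's standard DOM measurements (the 40px tolerance is an assumption to tune, not a standard value):

```typescript
// How close to the bottom still counts as "pinned". Arbitrary; tune it.
const STICK_THRESHOLD_PX = 40;

// True when the viewport is at (or near) the bottom of the scroll area.
function isPinnedToBottom(
  scrollTop: number,
  scrollHeight: number,
  clientHeight: number
): boolean {
  return scrollHeight - clientHeight - scrollTop <= STICK_THRESHOLD_PX;
}
```

Record the result on every user scroll event, and when new content arrives, scroll to the bottom only if the user was pinned before the content grew. Checking after growth instead is the classic bug: the new content unpins the user and the thread stops following.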

Estimated first-pass build time: 1–2 weeks. Time to production-quality with edge cases handled: 3–5 weeks.

2. ToolTrace

When an agent uses tools — querying a database, calling an API, reading a file — users benefit from seeing what happened. A ToolTrace component shows the tree of tool calls: what was called, with what arguments, and what came back.

The design challenge is layered. Engineers want full fidelity — all inputs, all outputs, timing data. End users want enough transparency to understand and trust what the agent did, without being overwhelmed by raw JSON.

Good ToolTrace components are expandable: collapsed by default (showing just the tool name and outcome), expandable to show inputs and outputs. They handle nested tool calls, error states, and partial completion gracefully.

The component also needs to handle large outputs without breaking the layout — a database query that returns 10,000 rows needs to be truncated and signaled as truncated.
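Truncation only works if the UI can tell the user it happened. A minimal sketch, where the limit and return shape are assumptions for illustration:

```typescript
interface TruncatedOutput {
  text: string;        // what to render
  truncated: boolean;  // drives the "output truncated" badge
  omittedChars: number; // lets the UI say how much was cut
}

// Cap tool output for display, signaling how much was omitted.
function truncateOutput(raw: string, maxChars = 2000): TruncatedOutput {
  if (raw.length <= maxChars) {
    return { text: raw, truncated: false, omittedChars: 0 };
  }
  return {
    text: raw.slice(0, maxChars),
    truncated: true,
    omittedChars: raw.length - maxChars,
  };
}
```

Character counts are a crude proxy; a production version would likely also cap row counts for tabular results and offer a "view full output" escape hatch.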

3. ApprovalFlow

For agents that take consequential actions — sending emails, making purchases, modifying records — human approval is often required before execution. The ApprovalFlow component presents these decision points in a way that's clear, complete, and actionable.

The UX requirements here are demanding. The component must convey: what action is being proposed, what context triggered it, what the consequences of approving or rejecting are, and what will happen next. It must do this in seconds of reading time, not minutes.

Approval components also need to handle state clearly: pending, approved, rejected, expired. They need audit trail support. And they should feel safe — like approving a meaningful decision — not alarming.
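The four states above form a small state machine, and making it explicit prevents the most common bug: a late approval overwriting an expired or rejected decision. A sketch, with assumed state and event names:

```typescript
type ApprovalState = "pending" | "approved" | "rejected" | "expired";
type ApprovalEvent = "approve" | "reject" | "expire";

// Only "pending" accepts events; the other three states are terminal,
// so a stale click or timer can never overwrite a recorded decision.
function transition(state: ApprovalState, event: ApprovalEvent): ApprovalState {
  if (state !== "pending") return state;
  switch (event) {
    case "approve":
      return "approved";
    case "reject":
      return "rejected";
    case "expire":
      return "expired";
  }
}
```

Because every transition flows through one function, it is also the natural place to append audit-trail entries (who, what, when) before returning the new state.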

4. FeedbackCapture

Collecting feedback on agent responses is how you improve the model. Thumbs up/down is the minimum; star ratings and freeform comments are common additions. The component seems trivial, and mechanically it is.

The implementation is not the challenge. The design of the capture UX — how visible, how prominent, when to show it — is where most teams make mistakes.

5. SessionHistory

If your agent interface is used repeatedly, users need a way to navigate back to past conversations. SessionHistory provides a searchable, organized list of sessions with metadata (title, timestamp, message count, status).

The implementation involves persistence (local storage or a backend), session naming (auto-generated or user-editable), search, and potentially branching (starting a new session from a historical one). Mobile layout for session browsing is consistently under-designed.
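The metadata and search pieces can be sketched as a plain filter over session records, newest first. Field names here are assumptions, not a prescribed schema:

```typescript
interface SessionMeta {
  id: string;
  title: string;      // auto-generated or user-edited
  updatedAt: number;  // epoch milliseconds
  messageCount: number;
}

// Case-insensitive title search; empty query returns everything.
// Results are ordered most-recently-updated first.
function searchSessions(
  sessions: SessionMeta[],
  query: string
): SessionMeta[] {
  const q = query.trim().toLowerCase();
  return sessions
    .filter((s) => q === "" || s.title.toLowerCase().includes(q))
    .sort((a, b) => b.updatedAt - a.updatedAt);
}
```

Title-only matching is the simplest version; searching message bodies means indexing conversation content, which usually pushes persistence from local storage to a backend.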

6. AgentShell

The container that holds everything together. Layout, navigation, auth state, responsive breakpoints, keyboard shortcuts, loading states. Not a single component — more a set of structural primitives that every agent interface needs and no one enjoys building.

AgentShell is the component most teams build last and regret building themselves. It's pure infrastructure: it needs to work reliably, it's not differentiating, and it's surprisingly complex to get right across screen sizes and auth states.

The compounding cost

Each of these components takes one to six weeks to build at production quality. Building all six represents two to six months of frontend engineering time, depending on team size and experience. That's time not spent on the agent logic, the model integration, or the product features that actually differentiate your offering.

The irony: none of these components are the hard part of building an AI agent product. They're the wrapper around the hard part. And they consume a disproportionate amount of the total build time.

All six of these components are available in Agent Interface, MIT-licensed. Get early access →
