Stop Streaming Plain Text: Unlock Interactive UIs with React Server Components

Published: February 3, 2026, 3:00 PM EST
6 min read
Source: Dev.to

The Evolution: From News Ticker to Live Broadcast

The Old Way (Text‑Only)

Imagine a live news ticker. Information flows continuously, but it is static. The client is a passive recipient.
If a user asks, “Show me a chart of Q3 sales,” the AI streams back text describing the chart, or perhaps a JSON blob. The user must wait for the stream to finish before the UI can render.

The New Way (Streaming UI)

Now, imagine a live broadcast where the correspondent can dynamically insert interactive dashboards, charts, and forms into the feed. This is the streamable‑ui pattern.
Instead of sending `{"chart": "data…"}`, the server sends a serialized React component:

```tsx
// Example placeholder – the actual component is streamed from the server
```

The client receives this, hydrates it immediately, and the user can hover, zoom, and click while the AI is still generating the rest of the response.
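The contrast can be sketched in a few lines of TypeScript. This is a hypothetical simulation – `fakeStream` stands in for an LLM response stream – but it shows why a text-only client must wait for the stream to finish, while a streaming-UI client can update after every chunk:

```typescript
// Hypothetical simulation: `fakeStream` stands in for an LLM response stream.
async function* fakeStream(): AsyncGenerator<string> {
  yield 'Q3 sales: ';
  yield '$1.2M';
}

// Text-only: the client buffers the whole stream before rendering anything.
async function textOnly(): Promise<string> {
  let buffer = '';
  for await (const chunk of fakeStream()) {
    buffer += chunk; // nothing is shown until the loop ends
  }
  return buffer; // rendered only now
}

// Streaming UI: every chunk is handed to the renderer immediately.
async function streamingUI(render: (partial: string) => void): Promise<void> {
  let partial = '';
  for await (const chunk of fakeStream()) {
    partial += chunk;
    render(partial); // the interactive UI updates mid-stream
  }
}
```

With two chunks the difference is trivial; with a multi-second generation, it is the difference between a frozen page and a live one.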

The Architecture: RSCs and Server Actions

How does this work under the hood? It relies on two pillars:

  1. React Server Components (RSCs) – The server renders the component tree and serializes it into a special payload (using React’s Flight protocol). This payload is streamed to the client incrementally over a chunked HTTP response, rather than buffered and sent whole.
  2. Server Actions – The streamed components aren’t just static HTML. They can contain interactive elements (buttons, forms) that trigger secure functions on the server via Server Actions.

This creates a bi‑directional flow:

| Direction | Description |
| --- | --- |
| Server → Client | Streams a UI component. |
| Client → Server | User clicks a button inside that component. |
| Server → Client | The Server Action executes, potentially triggering more AI generation and streaming new components. |
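That round trip can be modeled as a pair of message types. The names below are illustrative – they are not SDK types – but they capture the key property: a single user action can fan out into more streamed UI:

```typescript
// Illustrative message shapes for the bi-directional flow (not SDK types).
type ServerToClient =
  | { kind: 'ui-chunk'; componentId: string; payload: string } // a streamed component
  | { kind: 'done' };

type ClientToServer = { kind: 'action'; componentId: string; action: string };

// A Server Action handler: an interaction may trigger more generation,
// which streams new components back down to the client.
function handleAction(msg: ClientToServer): ServerToClient[] {
  if (msg.action === 'refresh') {
    return [
      { kind: 'ui-chunk', componentId: msg.componentId, payload: 'updated report' },
      { kind: 'done' },
    ];
  }
  return [{ kind: 'done' }];
}
```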

The “Live Shopping Cart” Analogy

Think of a live streamer selling a product.

| Mode | Experience |
| --- | --- |
| Text‑Only | The streamer describes the item. You type in chat to ask questions. |
| Streaming UI | The streamer overlays an “Add to Cart” button and a size selector directly onto the video feed. You click it right now, without stopping the video. |

That is the experience we are building.

Code Example: Streaming an AI Dashboard

We’ll build a SaaS feature where an AI generates a summary report and streams it as an interactive React component.

1️⃣ Server‑Side Implementation

File: app/api/generate-report/route.ts

```tsx
// app/api/generate-report/route.ts
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';

// The component to be streamed.
// Note: the onClick handler makes this interactive, so in a real app this
// element must be hydrated as a Client Component (see “Hydration Errors” below).
const ReportComponent = ({ data }: { data: string }) => {
  return (
    <div>
      <h3>AI Generated Report</h3>
      <p>{data}</p>
      <button
        onClick={() => alert('Report acknowledged!')}
        className="mt-3 px-3 py-1 text-xs bg-blue-600 text-white rounded hover:bg-blue-700"
      >
        Acknowledge
      </button>
    </div>
  );
};

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamUI({
    // streamUI expects a model instance, not a model-name string
    model: openai('gpt-4-turbo'),
    system: 'You are a helpful assistant that generates concise reports.',
    prompt: `Generate a summary report for: ${prompt}`,

    // The Magic Mapping:
    // When the AI generates text, we wrap it in our React Component
    text: ({ content }) => {
      return <ReportComponent data={content} />;
    },

    // `initial` is a React node shown before the first chunk arrives
    initial: <p>Generating report...</p>,
  });

  return result.toAIStreamResponse();
}
```

2️⃣ Client‑Side Implementation

File: app/page.tsx

```tsx
// app/page.tsx
'use client';

import { useCompletion } from 'ai/react';

export default function DashboardPage() {
  const {
    completion,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
  } = useCompletion({
    api: '/api/generate-report',
  });

  return (
    <div>
      <h2>SaaS Dashboard</h2>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Enter prompt"
        />
        <button type="submit" disabled={isLoading}>
          Generate
        </button>
      </form>

      {isLoading && <div>● Streaming component...</div>}

      <h3>Output:</h3>
      {/* The SDK handles deserialization of the streamed payload */}
      {completion ? (
        // Render the streamed output as it arrives
        <>{completion}</>
      ) : (
        <p>No report generated yet.</p>
      )}
    </div>
  );
}
```

Note: useCompletion exposes the streamed response incrementally as completion. When you use the SDK’s RSC helpers in ai/rsc, the streamed payload is reconstructed into a real React element tree on the client. Either way, no dangerouslySetInnerHTML is required.

Recap

  • Streaming UI lets the server push interactive React components to the client while the AI is still generating.
  • React Server Components + Server Actions give us a secure, bi‑directional, real‑time UI pipeline.
  • The pattern works with the Vercel AI SDK (streamUI on the server, useCompletion on the client) and can be adapted to any LLM‑driven workflow.

Now you can go beyond static text and build truly live AI‑augmented experiences! 🚀


Advanced Patterns: LangGraph and Max‑Iteration Policies

When you combine streaming UI with AI agents, you enter a cyclical workflow:

  1. The AI generates a UI.
  2. The user interacts with it.
  3. That interaction feeds back into the AI to generate the next step.

This is powerful, but dangerous. Without guardrails, an AI can get stuck in an infinite loop of generating components.

The Solution: Max‑Iteration Policies

Using LangGraph, we can structure our AI logic as a stateful graph.
Add a conditional edge (a “Policy”) that checks the iteration count. If the count exceeds a limit (e.g., 5 steps), the graph forces a transition to the END node, terminating the process gracefully.

This ensures your application remains stable even if the AI logic gets confused.
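Stripped of the LangGraph machinery, the policy itself is just a counter check on a conditional edge. A framework-free sketch – the `step` callback is a stand-in for one generate → interact cycle:

```typescript
// Framework-free sketch of a max-iteration policy.
// `step` stands in for one AI-generate → user-interact cycle of the graph.
type StepResult = { done: boolean };

function runWithPolicy(
  step: (iteration: number) => StepResult,
  maxIterations = 5,
): { iterations: number; terminated: 'done' | 'policy' } {
  for (let i = 1; i <= maxIterations; i++) {
    // Normal exit: the workflow finished on its own.
    if (step(i).done) return { iterations: i, terminated: 'done' };
  }
  // Conditional edge: limit exceeded, force a transition to END.
  return { iterations: maxIterations, terminated: 'policy' };
}
```

Running `runWithPolicy(() => ({ done: false }))` hits the cap and returns `terminated: 'policy'` after 5 iterations – the forced transition to END described above.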

Common Pitfalls to Avoid

  • Hallucinated JSON – Don’t ask the LLM to generate the React component structure (JSON/JSX). It will fail. Instead, ask it to generate content and map that content to a pre‑defined component on the server (as shown in the code example).
  • Vercel Timeouts – Serverless functions have timeouts (10 s–15 s). If your AI generation is slow, the stream might cut off. Always use streamUI (which keeps the connection alive efficiently) and optimise your prompts.
  • Hydration Errors – Server Components cannot access browser APIs (window, document). If you need client‑side interactivity (like the onClick in our example), ensure the event‑handling logic is handled by the client hydration process or wrapped in a Client Component.
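The first pitfall has a simple structural fix worth sketching: the model chooses only a component name and its content, and the server maps that name onto a pre-defined registry. The names and string renderers below are illustrative – in a real app the registry values would be React components, as in the earlier example:

```typescript
// Illustrative server-side registry: the LLM picks a name + content,
// never raw JSX. Unknown names fall back to a safe default component.
type ComponentRenderer = (content: string) => string;

const registry: Record<string, ComponentRenderer> = {
  report: (content) => `<ReportComponent data="${content}" />`,
  chart: (content) => `<ChartComponent data="${content}" />`,
};

function renderFromModel(name: string, content: string): string {
  const renderer = registry[name] ?? registry['report']; // safe fallback
  return renderer(content);
}
```

Because the component structure never leaves the server, a hallucinated name degrades gracefully instead of producing broken UI.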

Conclusion

Streaming React components moves us from “Generative Text” to “Generative UI.”

It changes the user experience from a passive read‑and‑wait cycle to an active, iterative collaboration. By leveraging the Vercel AI SDK and React Server Components, you can build applications that feel instantaneous and deeply interactive. The AI isn’t just telling you what to do; it’s building the tools for you to do it, right in front of your eyes.

The concepts and code demonstrated here are drawn directly from the comprehensive roadmap laid out in the book The Modern Stack: Building Generative UI with Next.js, Vercel AI SDK, and React Server Components (Amazon Link) – part of the AI with JavaScript & TypeScript Series (Amazon Link).

Check also all the other programming e‑books on Leanpub.
