
# Stream Text with Chat Prompt

Chat completions can sometimes take a long time to finish, especially when the response is long. In such cases, it is useful to stream the chat completion to the client in real time. This allows the client to display the new message as the model generates it, rather than making users wait for it to finish.

<Browser>
  <ChatGeneration
    stream
    history={[
      { role: 'User', content: 'How is it going?' },
      { role: 'Assistant', content: 'All good, how may I help you?' },
    ]}
    inputMessage={{ role: 'User', content: 'Why is the sky blue?' }}
    outputMessage={{
      role: 'Assistant',
      content: 'The sky is blue because of Rayleigh scattering.',
    }}
  />
</Browser>

## Client

Let's create a React component that imports the `useChat` hook from the `@ai-sdk/react` module. The `useChat` hook will call the `/api/chat` endpoint when the user sends a message. The endpoint will generate the assistant's response based on the conversation history and stream it to the client.

```tsx filename='app/page.tsx'
'use client';

import { useChat } from '@ai-sdk/react';

export default function Page() {
  const { messages, input, setInput, append } = useChat();

  return (
    <div>
      <input
        value={input}
        onChange={event => {
          setInput(event.target.value);
        }}
        onKeyDown={async event => {
          if (event.key === 'Enter') {
            // Send the user's message and clear the input field.
            append({ content: input, role: 'user' });
            setInput('');
          }
        }}
      />

      {messages.map((message, index) => (
        <div key={index}>{message.content}</div>
      ))}
    </div>
  );
}
```
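By default, `useChat` sends the accumulated conversation to the endpoint as a JSON body containing a `messages` array. A minimal sketch of that shape (the types below are simplified assumptions for this example, not the SDK's full `UIMessage` definition):

```typescript
// Simplified sketch of the request body useChat POSTs to /api/chat.
// Real messages carry additional fields (e.g. an id); this shows only
// the parts the server route above relies on.
type Role = 'user' | 'assistant';

interface ChatMessage {
  role: Role;
  content: string;
}

const body: { messages: ChatMessage[] } = {
  messages: [
    { role: 'user', content: 'How is it going?' },
    { role: 'assistant', content: 'All good, how may I help you?' },
    { role: 'user', content: 'Why is the sky blue?' },
  ],
};

// Serialized as JSON, this is what the endpoint's req.json() receives.
console.log(JSON.stringify(body));
```

Because the full history is sent on every request, the server route stays stateless and can generate a response from the `messages` array alone.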

## Server

Next, let's create the `/api/chat` endpoint that generates the assistant's response based on the conversation history.

```typescript filename='app/api/chat/route.ts'
import { streamText, UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,
  });

  // Stream the generated text back to the client as it is produced.
  return result.toDataStreamResponse();
}
}
```
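Under the hood, the route returns a streaming HTTP response that the client reads chunk by chunk as bytes arrive. The following self-contained sketch illustrates that consumption pattern with a synthetic `ReadableStream` standing in for the network response; the SDK's actual data stream uses its own wire format, which `useChat` decodes for you:

```typescript
// Build a synthetic byte stream from a list of text chunks.
// In the real app, the stream would come from the fetch Response body
// returned by the /api/chat route.
function makeStream(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
}

// Read the stream to completion, decoding bytes to text incrementally.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } handles multi-byte characters split across chunks.
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(
  makeStream(['The sky is blue ', 'because of ', 'Rayleigh scattering.']),
).then(text => console.log(text));
// → "The sky is blue because of Rayleigh scattering."
```

In the app, each decoded chunk updates the pending assistant message, which is why the UI can render the reply progressively instead of waiting for the full completion.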

---

<GithubLink link="https://github.com/vercel/ai/blob/main/examples/next-openai-pages/pages/chat/stream-chat/index.tsx" />


## Navigation

- [Generate Text](/v4/cookbook/next/generate-text)
- [Generate Text with Chat Prompt](/v4/cookbook/next/generate-text-with-chat-prompt)
- [Generate Image with Chat Prompt](/v4/cookbook/next/generate-image-with-chat-prompt)
- [Stream Assistant Response](/v4/cookbook/next/stream-assistant-response)
- [Stream Assistant Response with Tools](/v4/cookbook/next/stream-assistant-response-with-tools)
- [Caching Middleware](/v4/cookbook/next/caching-middleware)
- [Stream Text](/v4/cookbook/next/stream-text)
- [Stream Text with Chat Prompt](/v4/cookbook/next/stream-text-with-chat-prompt)
- [Stream Text with Image Prompt](/v4/cookbook/next/stream-text-with-image-prompt)
- [Chat with PDFs](/v4/cookbook/next/chat-with-pdf)
- [streamText Multi-Step Cookbook](/v4/cookbook/next/stream-text-multistep)
- [Markdown Chatbot with Memoization](/v4/cookbook/next/markdown-chatbot-with-memoization)
- [Generate Object](/v4/cookbook/next/generate-object)
- [Generate Object with File Prompt through Form Submission](/v4/cookbook/next/generate-object-with-file-prompt)
- [Stream Object](/v4/cookbook/next/stream-object)
- [Call Tools](/v4/cookbook/next/call-tools)
- [Call Tools in Parallel](/v4/cookbook/next/call-tools-in-parallel)
- [Call Tools in Multiple Steps](/v4/cookbook/next/call-tools-multiple-steps)
- [Model Context Protocol (MCP) Tools](/v4/cookbook/next/mcp-tools)
- [Human-in-the-Loop with Next.js](/v4/cookbook/next/human-in-the-loop)
- [Send Custom Body from useChat](/v4/cookbook/next/send-custom-body-from-use-chat)
- [Render Visual Interface in Chat](/v4/cookbook/next/render-visual-interface-in-chat)


