pipeAgentUIStreamToResponse

The pipeAgentUIStreamToResponse function runs an Agent and streams the resulting UI message output directly to a Node.js ServerResponse object. This is ideal for building real-time streaming API endpoints (for chat, tool use, etc.) in Node.js environments such as Express, Hono (via its Node adapter), or a custom Node HTTP server.

Import

import { pipeAgentUIStreamToResponse } from 'ai';

Usage

import { pipeAgentUIStreamToResponse } from 'ai';
import { MyAgent } from './agent';

export async function handler(req, res) {
  // Assumes the raw request body has already been read into a JSON string
  const { messages } = JSON.parse(req.body);

  await pipeAgentUIStreamToResponse({
    response: res, // Node.js ServerResponse
    agent: MyAgent,
    uiMessages: messages, // Required: array of input UI messages
    // abortSignal: optional AbortSignal for cancellation
    // status: 200,
    // headers: { ... },
    // ...other optional UI message stream options
  });
}
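
The handler above is framework-agnostic; the same call can also be mounted on Node's built-in node:http server. This is a minimal sketch, assuming the request body is a JSON payload of the form { messages: [...] } and that MyAgent is the same placeholder agent as above:

import { createServer } from 'node:http';
import { pipeAgentUIStreamToResponse } from 'ai';
import { MyAgent } from './agent';

createServer((req, res) => {
  // Collect the raw request body before parsing it as JSON
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', async () => {
    const { messages } = JSON.parse(body);
    await pipeAgentUIStreamToResponse({
      response: res,
      agent: MyAgent,
      uiMessages: messages,
    });
  });
}).listen(3000);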

Parameters

response: ServerResponse
The Node.js ServerResponse object to pipe UI message stream output into.

agent: Agent
An agent instance implementing `.stream({ prompt, ... })` and defining a `tools` property.

uiMessages: unknown[]
Array of input UI messages sent to the agent (such as user/assistant message objects).

abortSignal: AbortSignal
Optional abort signal to cancel streaming (e.g., on client disconnect). Supply an [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal), for example from an `AbortController`.

options: CALL_OPTIONS
Optional agent call options, for agents configured with the generic parameter `CALL_OPTIONS`.

experimental_transform: StreamTextTransform | StreamTextTransform[]
Optional stream text transformation(s) applied to the agent output.

...UIMessageStreamResponseInit & UIMessageStreamOptions: object
Additional options for response status, headers, SSE stream configuration, and other UI message stream controls.
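
As an illustration of the optional parameters, here is a hedged sketch combining experimental_transform with response status and headers. It reuses the placeholder handler and MyAgent from the Usage section, and assumes that smoothStream (the stream-smoothing transform exported from the ai package) is suitable as a StreamTextTransform here:

import { pipeAgentUIStreamToResponse, smoothStream } from 'ai';
import { MyAgent } from './agent';

export async function handler(req, res) {
  const { messages } = JSON.parse(req.body);

  await pipeAgentUIStreamToResponse({
    response: res,
    agent: MyAgent,
    uiMessages: messages,
    experimental_transform: smoothStream(), // smooth out token-by-token chunking
    status: 200,
    headers: { 'X-Accel-Buffering': 'no' }, // example: ask nginx-style proxies not to buffer the stream
  });
}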

Returns

A `Promise<void>` that resolves once the UI message stream has been fully written to the provided `ServerResponse`.

Example: Express Route Handler

import express from 'express';
import { pipeAgentUIStreamToResponse } from 'ai';
import { openaiWebSearchAgent } from './openai-web-search-agent';

const app = express();
app.use(express.json()); // so req.body.messages is available

app.post('/chat', async (req, res) => {
  // Use req.body.messages as the input UI messages
  await pipeAgentUIStreamToResponse({
    response: res,
    agent: openaiWebSearchAgent,
    uiMessages: req.body.messages,
    // abortSignal: yourController.signal,
    // status: 200,
    // headers: { ... },
    // ...more options
  });
});

app.listen(3000);

How It Works

  1. Runs the Agent: Calls the agent’s .stream method with the provided UI messages and options, converting them into model messages as needed.
  2. Streams UI Message Output: Pipes the agent output as a UI message stream to the ServerResponse, sending data via streaming HTTP responses (including appropriate headers).
  3. Abort Signal Handling: If abortSignal is supplied, streaming is cancelled as soon as the signal is triggered (such as on client disconnect); see the sketch after this list.
  4. No Response Return: Unlike Edge/serverless APIs that return a Response, this function writes bytes directly to the ServerResponse and does not return a response object.
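
Step 3 can be wired to a client disconnect (see also the Abort Handling note below) by aborting when the response stream closes. This is a minimal sketch against the Express route above, reusing the same imports:

app.post('/chat', async (req, res) => {
  const controller = new AbortController();

  // 'close' fires when the response finishes or the underlying connection drops early;
  // aborting after a normal finish should be a no-op, so no extra bookkeeping is needed.
  res.on('close', () => controller.abort());

  await pipeAgentUIStreamToResponse({
    response: res,
    agent: openaiWebSearchAgent,
    uiMessages: req.body.messages,
    abortSignal: controller.signal,
  });
});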

Notes

  • Abort Handling: For best robustness, use an AbortSignal (for example, wired to Express/Hono client disconnects) to ensure quick cancellation of agent computation and streaming.
  • Node.js Only: Only works with Node.js ServerResponse objects (e.g., in Express, Hono’s node adapter, etc.), not Edge/serverless/web Response APIs.
  • Streaming Support: Make sure your client, and any proxies in between, support streaming HTTP responses so that output arrives incrementally; a minimal client-side example follows these notes.
  • Parameter Names: The property for input messages is uiMessages (not messages) for consistency with SDK agent utilities.
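
For the Streaming Support note above, one way to verify end-to-end streaming is to consume the endpoint from a browser with plain fetch and a stream reader. No SDK client is assumed, and the message array below is a placeholder:

// Browser-side consumption of the streamed endpoint
const messages = [/* your input UI messages; exact shape depends on your client setup */];

const res = await fetch('/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages }),
});

const reader = res.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value, { stream: true })); // raw UI message stream chunks as they arrive
}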

See Also