Announcing AI SDK 5 Alpha

This is an early preview — AI SDK 5 is under active development. APIs may change without notice. Pin to specific versions as breaking changes may occur even in patch releases.

Alpha Version Guidance

The AI SDK 5 Alpha is intended for:

  • Exploration and early prototypes
  • Green-field projects where you can experiment freely
  • Development environments where you can tolerate breaking changes

This Alpha release is not recommended for:

  • Production applications
  • Projects that require stable APIs
  • Existing applications that would need migration paths

During this Alpha phase, we expect to make significant, potentially breaking changes to the API surface. We're sharing early to gather feedback and improve the SDK before stabilization. Your input is invaluable—please share your experiences through GitHub issues or discussions to help shape the final release.

We expect bugs in this Alpha release. To help us improve the SDK, please file bug reports on GitHub. Your reports directly contribute to making the final release more stable and reliable.

Installation

To install the AI SDK 5 Alpha, run the following command:

# replace with your provider and framework
npm install ai@alpha @ai-sdk/[your-provider]@alpha @ai-sdk/[your-framework]@alpha

What's new in AI SDK 5?

AI SDK 5 is a complete redesign of the AI SDK's protocol and architecture based on everything we've learned over the last two years of real-world usage. We've also modernized the UI and protocols that have remained largely unchanged since AI SDK v2/3, creating a strong foundation for the future.

Why AI SDK 5?

When we originally designed the v1 protocol over a year ago, the standard interaction pattern with language models was simple: text in, text or tool call out. But today's LLMs go far beyond text and tool calls, generating reasoning, sources, images, and more. Additionally, new use cases like computer-using agents introduce a fundamentally new approach to interacting with language models, one that was near-impossible to support in a unified way with our original architecture.

We needed a protocol designed for this new reality. While this is a breaking change that we don't take lightly, it's provided an opportunity to rebuild the foundation and add powerful new features.

While we've designed AI SDK 5 to be a substantial improvement over previous versions, we're still in active development. You might encounter bugs or unexpected behavior. We'd greatly appreciate your feedback and bug reports—they're essential to making this release better. Please share your experiences and suggestions with us through GitHub issues or GitHub discussions.

New Features

LanguageModelV2

LanguageModelV2 represents a complete redesign of how the AI SDK communicates with language models, adapting to the increasingly complex outputs modern AI systems generate. The new LanguageModelV2 treats all LLM outputs as content parts, enabling more consistent handling of text, images, reasoning, sources, and other response types. Key improvements:

  • Content-First Design - Rather than separating text, reasoning, and tool calls, everything is now represented as ordered content parts in a unified array (see the sketch after this list)
  • Improved Type Safety - The new LanguageModelV2 provides better TypeScript type guarantees, making it easier to work with different content types
  • Simplified Extensibility - Adding support for new model capabilities no longer requires changes to the core structure
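
To make the content-first design concrete, here is a rough sketch of what a unified content array can look like. These part shapes are illustrative, not the SDK's actual type definitions:

// Illustrative only: not the SDK's actual type definitions.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string }
  | { type: 'source'; sourceType: 'url'; id: string; url: string; title?: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; args: unknown };

// A response is an ordered array, so text can be interleaved with
// reasoning, sources, and tool calls without special-casing any of them:
const content: ContentPart[] = [
  { type: 'reasoning', text: 'The user asked for weather, so call the weather tool.' },
  { type: 'tool-call', toolCallId: 'call-1', toolName: 'weather', args: { city: 'Berlin' } },
  { type: 'text', text: 'Checking the weather in Berlin.' },
];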

Message Overhaul

AI SDK 5 introduces a completely redesigned message system with two message types that address the dual needs of what you render in your UI and what you send to the model. Context is crucial for effective language model generations, and these two message types serve distinct purposes:

  • UIMessage represents the complete conversation history for your interface, preserving all message parts (text, images, data), metadata (creation timestamps, generation times), and UI state—regardless of length.

  • ModelMessage is optimized for sending to language models, considering token input constraints. It strips away UI-specific metadata and irrelevant content.

With this change, you must explicitly convert your UIMessages to ModelMessages before sending them to the model:

import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

This separation is essential as you cannot use a single message format for both purposes. The state you save should always be the UIMessage format to prevent information loss, with explicit conversion to ModelMessage when communicating with language models.
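
For example, a persistence layer might store the full UIMessage array and convert only at request time. A sketch, where loadChat and saveChat are hypothetical helpers for your own storage:

import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, UIMessage } from 'ai';
// loadChat and saveChat are hypothetical helpers for your own storage layer.
import { loadChat, saveChat } from './chat-storage';

export async function POST(req: Request) {
  const { chatId, message }: { chatId: string; message: UIMessage } =
    await req.json();

  // Persist and restore the lossless UIMessage format...
  const messages: UIMessage[] = [...(await loadChat(chatId)), message];
  await saveChat(chatId, messages);

  // ...and convert to ModelMessages only when calling the model.
  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}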

The new message system has made possible several highly requested features:

  • Type-safe Message Metadata - add structured information per message
  • New Stream Writer - stream any part type (reasoning, sources, etc.) retaining proper order
  • Data Parts - stream type-safe arbitrary data parts for dynamic UI components

Message metadata

Metadata allows you to attach structured information to individual messages, making it easier to track important details like response time, token usage, or model specifications. This information can enhance your UI with contextual data without embedding it in the message content itself.

To add metadata to a message, first define the metadata schema:

app/api/chat/example-metadata-schema.ts
import { z } from 'zod';

export const exampleMetadataSchema = z.object({
  duration: z.number().optional(),
  model: z.string().optional(),
  totalTokens: z.number().optional(),
});

export type ExampleMetadata = z.infer<typeof exampleMetadataSchema>;

Then add the metadata using the messageMetadata option of the toUIMessageStreamResponse() utility:

app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, UIMessage } from 'ai';
import { ExampleMetadata } from './example-metadata-schema';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const startTime = Date.now();

  const result = streamText({
    model: openai('gpt-4o'),
    prompt: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse({
    messageMetadata: ({ part }): ExampleMetadata | undefined => {
      // send custom information to the client on start:
      if (part.type === 'start') {
        return {
          model: 'gpt-4o', // initial model id
        };
      }
      // send additional model information on finish-step:
      if (part.type === 'finish-step') {
        return {
          model: part.response.modelId, // update with the actual model id
          duration: Date.now() - startTime,
        };
      }
      // when the message is finished, send additional information:
      if (part.type === 'finish') {
        return {
          totalTokens: part.totalUsage.totalTokens,
        };
      }
    },
  });
}

Finally, specify the message metadata schema on the client and then render the (type-safe) metadata in your UI:

app/page.tsx
import { zodSchema } from '@ai-sdk/provider-utils';
import { useChat } from '@ai-sdk/react';
import { defaultChatStore } from 'ai';
import { exampleMetadataSchema } from '@/api/chat/example-metadata-schema';

export default function Chat() {
  const { messages } = useChat({
    chatStore: defaultChatStore({
      api: '/api/chat',
      messageMetadataSchema: zodSchema(exampleMetadataSchema),
    }),
  });

  return (
    <div>
      {messages.map(message => {
        const { metadata } = message;
        return (
          <div key={message.id} className="whitespace-pre-wrap">
            {metadata?.duration && <div>Duration: {metadata.duration}ms</div>}
            {metadata?.model && <div>Model: {metadata.model}</div>}
            {metadata?.totalTokens && (
              <div>Total tokens: {metadata.totalTokens}</div>
            )}
          </div>
        );
      })}
    </div>
  );
}

UI Message Stream

The UI Message Stream enables streaming any content parts from the server to the client. With this stream, you can send structured data like custom sources from your RAG pipeline directly to your UI. The stream writer is simply a utility that makes it easy to write to this message stream.

import { createUIMessageStream } from 'ai';

const stream = createUIMessageStream({
  execute: writer => {
    // stream custom sources
    writer.write({
      type: 'source',
      value: {
        type: 'source',
        sourceType: 'url',
        id: 'source-1',
        url: 'https://example.com',
        title: 'Example Source',
      },
    });
  },
});

On the client, these will be added to the ordered message.parts array.
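
For instance, you might render them alongside the other parts. This sketch assumes the part surfaces the same fields written on the server; the exact client-side shape may differ in the Alpha:

{message.parts
  .filter(part => part.type === 'source')
  .map(part => (
    <a key={part.id} href={part.url} target="_blank" rel="noreferrer">
      {part.title ?? part.url}
    </a>
  ))}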

Data Parts

The new stream writer also enables a type-safe way to stream arbitrary data from the server to the client and display it in your UI.

You can create and stream custom data parts on the server:

// On the server
const stream = createUIMessageStream({
  execute: writer => {
    // Initial update
    writer.write({
      type: 'data-weather', // Custom type
      id: toolCallId, // ID for updates
      data: { city, status: 'loading' }, // Your data
    });

    // Later, update the same part
    writer.write({
      type: 'data-weather',
      id: toolCallId,
      data: { city, weather, status: 'success' },
    });
  },
});

On the client, you can render these parts with full type safety:

{message.parts
  .filter(part => part.type === 'data-weather') // type-safe
  .map((part, index) => (
    <Weather
      key={index}
      city={part.data.city} // type-safe
      weather={part.data.weather} // type-safe
      status={part.data.status} // type-safe
    />
  ))}

Data parts appear in the message.parts array along with other content, maintaining the proper ordering of the conversation. You can update parts by referencing the same ID, enabling dynamic experiences like collaborative artifacts.

ChatStore

AI SDK 5 introduces a new useChat architecture with ChatStore and ChatTransport components. These two core building blocks make state management and API integration more flexible, allowing you to compose reactive UI bindings, share chat state across multiple instances, and swap out your backend protocol without rewriting application logic.

The ChatStore is responsible for:

  • Managing multiple chats – access and switch between conversations seamlessly.
  • Processing response streams – handle streams from the server and synchronize state (e.g. when there are concurrent client-side tool results).
  • Caching and synchronizing – share state (messages, status, errors) between useChat hooks.

You can create a basic ChatStore with the helper function:

import { defaultChatStore } from 'ai';
import { useChat } from '@ai-sdk/react';

const chatStore = defaultChatStore({
  api: '/api/chat', // your chat endpoint
  maxSteps: 5, // optional: limit LLM calls in tool chains
  chats: {}, // optional: preload previous chat sessions
});

const { messages, input, handleSubmit } = useChat({ chatStore });
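
Because the store is the single source of truth, passing the same chatStore instance to multiple useChat hooks is one way to keep several components in sync. A sketch, assuming './chat-store' is your own module exporting the instance created above:

import { useChat } from '@ai-sdk/react';
import { chatStore } from './chat-store'; // the shared instance created above

// Both components read from the same store, so they stay in sync.
function MessageList() {
  const { messages } = useChat({ chatStore });
  return (
    <ul>
      {messages.map(m => (
        <li key={m.id}>{m.role}</li>
      ))}
    </ul>
  );
}

function ChatStatus() {
  const { status } = useChat({ chatStore });
  return <span>{status}</span>;
}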

Server-Sent Events (SSE)

AI SDK 5 now uses Server-Sent Events (SSE) instead of a custom streaming protocol. SSE is a common web standard for sending data from servers to browsers. This switch has several advantages:

  • Works everywhere - Uses technology that works in all major browsers and platforms
  • Easier to troubleshoot - You can see the data stream in browser developer tools (see the sketch after this list)
  • Simple to build upon - Adding new features is more straightforward
  • More stable - Built on proven technology that many developers already use
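
Because the wire format is standard SSE, you can also inspect it with ordinary tooling. A minimal sketch that logs the raw event frames with fetch; the endpoint path and request body are assumptions, and the actual event payloads depend on your route:

// '/api/chat' and the request body are assumptions; adjust for your route.
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: [] }),
});

// Log the raw `data: ...` SSE frames as they arrive.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}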

Agentic Control

AI SDK 5 introduces new features for building agents that help you control model behavior more precisely.

prepareStep

The prepareStep function gives you fine-grained control over each step in a multi-step agent. It's called before a step starts and allows you to:

  • Dynamically change the model used for specific steps
  • Force specific tool selections for particular steps
  • Limit which tools are available during specific steps
  • Examine the context of previous steps before proceeding

const result = await generateText({
  // ...
  experimental_prepareStep: async ({ model, stepNumber, maxSteps, steps }) => {
    if (stepNumber === 0) {
      return {
        // use a different model for this step:
        model: modelForThisParticularStep,
        // force a tool choice for this step:
        toolChoice: { type: 'tool', toolName: 'tool1' },
        // limit the tools that are available for this step:
        experimental_activeTools: ['tool1'],
      };
    }
    // when nothing is returned, the default settings are used
  },
});

This makes it easier to build AI systems that adapt their capabilities based on the current context and task requirements.

continueUntil

The continueUntil parameter lets you define stopping conditions for your agent. Instead of running indefinitely, you can specify exactly when the agent should terminate based on various conditions:

  • Reaching a maximum number of steps
  • Calling a specific tool
  • Satisfying any custom condition you define (sketched after the examples below)

const result = generateText({
  // ...
  // stop loop at 5 steps
  continueUntil: maxSteps(5),
});

const result = generateText({
  // ...
  // stop loop when weather tool called
  continueUntil: hasToolCall('weather'),
});

const result = generateText({
  // ...
  // stop loop at your own custom condition
  continueUntil: maxTotalTokens(20000),
});
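
The custom-condition case presumably boils down to a predicate over the steps taken so far. As a sketch, maxTotalTokens above could be implemented along these lines, assuming a condition returns true when the loop should stop (the Alpha's actual signature may differ):

// Hypothetical sketch: the Alpha's actual condition signature may differ.
// The idea: a condition is a predicate over the steps taken so far that
// returns true once the loop should stop.
const maxTotalTokens =
  (limit: number) =>
  ({ steps }: { steps: Array<{ usage: { totalTokens: number } }> }) => {
    const total = steps.reduce((sum, step) => sum + step.usage.totalTokens, 0);
    return total >= limit; // stop once the token budget is exhausted
  };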

These agentic controls form the foundation for building more reliable, controllable AI systems that can tackle complex problems while remaining within well-defined constraints.