Migrate AI SDK 4.0 to 5.0

  1. Backup your project. If you use a version control system, make sure all previous versions are committed.
  2. Upgrade to AI SDK 5.0.
  3. Automatically migrate your code using codemods.

    If you don't want to use codemods, we recommend resolving all deprecation warnings in your 4.x codebase before upgrading to AI SDK 5.0.

  4. Follow the breaking changes guide below.
  5. Verify your project is working as expected.
  6. Commit your changes.

AI SDK 5.0 Package Versions

You need to update the following packages to the following versions in your package.json file(s):

  • ai package: 5.0.0
  • @ai-sdk/provider package: 2.0.0
  • @ai-sdk/provider-utils package: 3.0.0
  • all other @ai-sdk/* packages (providers, framework integrations): 2.0.0

Additionally, you need to update the following peer dependencies:

  • zod package: 3.25.0 or later

An example upgrade command would be:

npm install ai @ai-sdk/react @ai-sdk/openai zod@3.25.0
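After the upgrade, the dependency entries in your package.json should look roughly like the following (versions and provider packages are illustrative; your project may use different @ai-sdk providers):

```json
{
  "dependencies": {
    "ai": "^5.0.0",
    "@ai-sdk/openai": "^2.0.0",
    "@ai-sdk/react": "^2.0.0",
    "zod": "^3.25.0"
  }
}
```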

Codemods

The AI SDK provides Codemod transformations to help upgrade your codebase when a feature is deprecated, removed, or otherwise changed.

Codemods are transformations that run on your codebase automatically. They allow you to easily apply many changes without having to manually go through every file.

Codemods are intended as a tool to help you with the upgrade process. They may not cover all of the changes you need to make. You may need to make additional changes manually.

You can run all codemods provided as part of the 5.0 upgrade process by running the following command from the root of your project:

npx @ai-sdk/codemod upgrade

To run only the v5 codemods (v4 → v5 migration):

npx @ai-sdk/codemod v5

Individual codemods can be run by specifying the name of the codemod:

npx @ai-sdk/codemod <codemod-name> <path>

For example, to run a specific v5 codemod:

npx @ai-sdk/codemod v5/rename-format-stream-part src/

See also the table of codemods. In addition, the latest set of codemods can be found in the @ai-sdk/codemod repository.

AI SDK Core Changes

generateText and streamText Changes

Maximum Output Tokens

The maxTokens parameter has been renamed to maxOutputTokens for clarity.

AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4.1'),
  maxTokens: 1024,
  prompt: 'Hello, world!',
});
AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4.1'),
  maxOutputTokens: 1024,
  prompt: 'Hello, world!',
});

Message and Type System Changes

Core Type Renames

CoreMessage → ModelMessage
AI SDK 4.0
import { CoreMessage } from 'ai';
AI SDK 5.0
import { ModelMessage } from 'ai';
Message → UIMessage
AI SDK 4.0
import { Message, CreateMessage } from 'ai';
AI SDK 5.0
import { UIMessage, CreateUIMessage } from 'ai';
convertToCoreMessages → convertToModelMessages
AI SDK 4.0
import { convertToCoreMessages, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToCoreMessages(messages),
});
AI SDK 5.0
import { convertToModelMessages, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(messages),
});

For more information about model messages, see the Model Message reference.

UIMessage Changes

Content → Parts Array

For UIMessages (previously called Message), the .content property has been replaced with a parts array structure.

AI SDK 4.0
import { type Message } from 'ai'; // v4 Message type
// Messages (useChat) - had content property
const message: Message = {
  id: '1',
  role: 'user',
  content: 'Bonjour!',
};
AI SDK 5.0
import { type UIMessage } from 'ai';
// UIMessages (useChat) - now use parts array
const uiMessage: UIMessage = {
  id: '1',
  role: 'user',
  parts: [{ type: 'text', text: 'Bonjour!' }],
};
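If you have v4 messages persisted with a content string, they must be mapped into the parts shape before use with v5 UI components. A minimal sketch of such a conversion (the types and helper below are hypothetical illustrations, not SDK exports):

```typescript
// Hypothetical shapes mirroring the v4 Message and v5 UIMessage structures
type V4Message = { id: string; role: 'user' | 'assistant' | 'system'; content: string };
type V5UIMessage = {
  id: string;
  role: 'user' | 'assistant' | 'system';
  parts: Array<{ type: 'text'; text: string }>;
};

// Wrap the old content string in a single text part
function toUIMessage(message: V4Message): V5UIMessage {
  return {
    id: message.id,
    role: message.role,
    parts: [{ type: 'text', text: message.content }],
  };
}

const migrated = toUIMessage({ id: '1', role: 'user', content: 'Bonjour!' });
console.log(migrated.parts[0].text); // 'Bonjour!'
```

Messages containing non-text content (tool invocations, files) need additional parts and cannot be recovered from a plain content string alone.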

Data Role Removed

The data role has been removed from UI messages.

AI SDK 4.0
const message = {
  role: 'data',
  content: 'Some content',
  data: { customField: 'value' },
};
AI SDK 5.0
// V5: Use UI message streams with custom data parts
const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data instead of message annotations
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: { customField: 'value' },
    });
  },
});

UIMessage Reasoning Structure

The reasoning property on UI messages has been moved to parts.

AI SDK 4.0
const message: Message = {
  role: 'assistant',
  content: 'Hello',
  reasoning: 'I will greet the user',
};
AI SDK 5.0
const message: UIMessage = {
  role: 'assistant',
  parts: [
    {
      type: 'reasoning',
      text: 'I will greet the user',
    },
    {
      type: 'text',
      text: 'Hello',
    },
  ],
};

Reasoning Part Property Rename

The reasoning property on reasoning UI parts has been renamed to text.

AI SDK 4.0
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.reasoning}
        </div>
      );
    }
  });
}
AI SDK 5.0
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.text}
        </div>
      );
    }
  });
}

File Part Changes

File parts now use .url instead of .data and .mimeType.

AI SDK 4.0
{
  messages.map(message => (
    <div key={message.id}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        } else if (part.type === 'file' && part.mimeType.startsWith('image/')) {
          return (
            <img
              key={index}
              src={`data:${part.mimeType};base64,${part.data}`}
            />
          );
        }
      })}
    </div>
  ));
}
AI SDK 5.0
{
  messages.map(message => (
    <div key={message.id}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        } else if (
          part.type === 'file' &&
          part.mediaType.startsWith('image/')
        ) {
          return <img key={index} src={part.url} />;
        }
      })}
    </div>
  ));
}

Stream Data Removal

The StreamData class has been completely removed and replaced with UI message streams for custom data.

AI SDK 4.0
import { StreamData } from 'ai';
const streamData = new StreamData();
streamData.append('custom-data');
streamData.close();
AI SDK 5.0
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';
const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data parts
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: 'custom-data',
    });
    // Can merge with LLM streams
    const result = streamText({
      model: openai('gpt-4.1'),
      messages,
    });
    writer.merge(result.toUIMessageStream());
  },
});
return createUIMessageStreamResponse({ stream });

Custom Data Streaming: writeMessageAnnotation/writeData Removed

The writeMessageAnnotation and writeData methods from DataStreamWriter have been removed. Instead, use custom data parts with the new UIMessage stream architecture.

AI SDK 4.0
import { openai } from '@ai-sdk/openai';
import { createDataStreamResponse, streamText } from 'ai';
export async function POST(req: Request) {
  const { messages } = await req.json();
  return createDataStreamResponse({
    execute: dataStream => {
      // Write general data
      dataStream.writeData('call started');
      const result = streamText({
        model: openai('gpt-4o'),
        messages,
        onChunk() {
          // Write message annotations
          dataStream.writeMessageAnnotation({
            status: 'streaming',
            timestamp: Date.now(),
          });
        },
        onFinish() {
          // Write final annotations
          dataStream.writeMessageAnnotation({
            id: generateId(),
            completed: true,
          });
          dataStream.writeData('call completed');
        },
      });
      result.mergeIntoDataStream(dataStream);
    },
  });
}
AI SDK 5.0
import { openai } from '@ai-sdk/openai';
import {
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
  generateId,
} from 'ai';
export async function POST(req: Request) {
  const { messages } = await req.json();
  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      // Shared ID so later writes update the same data part
      const statusId = generateId();
      // Write general data (transient - not added to message history)
      writer.write({
        type: 'data-status',
        id: statusId,
        data: { status: 'call started' },
      });
      const result = streamText({
        model: openai('gpt-4o'),
        messages,
        onChunk() {
          // Write data parts that update during streaming
          writer.write({
            type: 'data-status',
            id: statusId,
            data: {
              status: 'streaming',
              timestamp: Date.now(),
            },
          });
        },
        onFinish() {
          // Write final data parts
          writer.write({
            type: 'data-status',
            id: statusId,
            data: {
              status: 'completed',
            },
          });
        },
      });
      writer.merge(result.toUIMessageStream());
    },
  });
  return createUIMessageStreamResponse({ stream });
}

For more detailed information about streaming custom data in v5, see the Streaming Data guide.

Provider Metadata → Provider Options

The providerMetadata input parameter has been renamed to providerOptions. Note that the returned metadata in results is still called providerMetadata.

AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerMetadata: {
    openai: { store: false },
  },
});
AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerOptions: {
    // Input parameter renamed
    openai: { store: false },
  },
});
// Returned metadata still uses providerMetadata:
console.log(result.providerMetadata?.openai);

Tool Definition Changes (parameters → inputSchema)

Tool definitions have been updated to use inputSchema instead of parameters and error classes have been renamed.

AI SDK 4.0
import { tool } from 'ai';
import { z } from 'zod';
const weatherTool = tool({
  description: 'Get the weather for a city',
  parameters: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});
AI SDK 5.0
import { tool } from 'ai';
import { z } from 'zod';
const weatherTool = tool({
  description: 'Get the weather for a city',
  inputSchema: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});

Tool Result Content: experimental_toToolResultContent → toModelOutput

The experimental_toToolResultContent option has been renamed to toModelOutput and is no longer experimental.

AI SDK 4.0
const screenshotTool = tool({
  description: 'Take a screenshot',
  parameters: z.object({}),
  execute: async () => {
    const imageData = await takeScreenshot();
    return imageData; // base64 string
  },
  experimental_toToolResultContent: result => [{ type: 'image', data: result }],
});
AI SDK 5.0
const screenshotTool = tool({
  description: 'Take a screenshot',
  inputSchema: z.object({}),
  execute: async () => {
    const imageData = await takeScreenshot();
    return imageData;
  },
  toModelOutput: result => ({
    type: 'content',
    value: [{ type: 'media', mediaType: 'image/png', data: result }],
  }),
});

Tool Property Changes (args/result → input/output)

Tool call and result properties have been renamed for better consistency with schemas.

AI SDK 4.0
// Tool calls used "args" and "result"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool args:', part.args);
      break;
    case 'tool-result':
      console.log('Tool result:', part.result);
      break;
  }
}
AI SDK 5.0
// Tool calls now use "input" and "output"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool input:', part.input);
      break;
    case 'tool-result':
      console.log('Tool output:', part.output);
      break;
  }
}

Tool Call Streaming Now Default (toolCallStreaming Removed)

The toolCallStreaming option has been removed in AI SDK 5.0. Tool call streaming is now always enabled by default.

AI SDK 4.0
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  toolCallStreaming: true, // Optional parameter to enable streaming
  tools: {
    weatherTool,
    searchTool,
  },
});
AI SDK 5.0
const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
  // toolCallStreaming removed - streaming is always enabled
  tools: {
    weatherTool,
    searchTool,
  },
});

Tool Part Type Changes (UIMessage)

In v5, UI tool parts use typed naming: tool-${toolName} instead of generic types.

AI SDK 4.0
// Generic tool-invocation type
{
  message.parts.map(part => {
    if (part.type === 'tool-invocation') {
      return <div>{part.toolInvocation.toolName}</div>;
    }
  });
}
AI SDK 5.0
// Type-safe tool parts with specific names
{
  message.parts.map(part => {
    switch (part.type) {
      case 'tool-getWeatherInformation':
        return <div>Getting weather...</div>;
      case 'tool-askForConfirmation':
        return <div>Asking for confirmation...</div>;
    }
  });
}

Dynamic Tools Support

AI SDK 5.0 introduces dynamic tools for handling tools with unknown types at development time, such as MCP tools without schemas or user-defined functions at runtime.

New dynamicTool Helper

The new dynamicTool helper function allows you to define tools where the input and output types are not known at compile time.

AI SDK 5.0
import { dynamicTool } from 'ai';
import { z } from 'zod';
// Define a dynamic tool
const runtimeTool = dynamicTool({
  description: 'A tool defined at runtime',
  inputSchema: z.object({}),
  execute: async input => {
    // Input and output are typed as 'unknown'; cast or validate before use
    const { query } = input as { query: string };
    return { result: `Processed: ${query}` };
  },
});

MCP Tools Without Schemas

MCP tools that don't provide schemas are now automatically treated as dynamic tools:

AI SDK 5.0
import { MCPClient } from 'ai';
const client = new MCPClient({
  /* ... */
});
const tools = await client.getTools();
// Tools without schemas are now 'dynamic' type
// and won't break type inference when mixed with static tools

Type-Safe Handling with Mixed Tools

When using both static and dynamic tools together, use the dynamic flag for type narrowing:

AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    // Static tool with known types
    weather: weatherTool,
    // Dynamic tool with unknown types
    customDynamicTool: dynamicTool({
      /* ... */
    }),
  },
  onStepFinish: step => {
    // Handle tool calls with type safety
    for (const toolCall of step.toolCalls) {
      if (toolCall.dynamic) {
        // Dynamic tool: input/output are 'unknown'
        console.log('Dynamic tool called:', toolCall.toolName);
        continue;
      }
      // Static tools have full type inference
      switch (toolCall.toolName) {
        case 'weather':
          // TypeScript knows the exact types
          console.log(toolCall.input.location); // string
          break;
      }
    }
  },
});

New dynamic-tool UI Part

UI messages now include a dynamic-tool part type for rendering dynamic tool invocations:

AI SDK 5.0
{
  message.parts.map((part, index) => {
    switch (part.type) {
      // Static tools use specific types
      case 'tool-weather':
        return <div>Weather: {part.input.city}</div>;
      // Dynamic tools use the generic dynamic-tool type
      case 'dynamic-tool':
        return (
          <div>
            Dynamic tool: {part.toolName}
            <pre>{JSON.stringify(part.input, null, 2)}</pre>
          </div>
        );
    }
  });
}

Breaking Change: Type Narrowing Required for Tool Calls and Results

When iterating over toolCalls and toolResults, you now need to check the dynamic flag first for proper type narrowing:

AI SDK 4.0
// Direct type checking worked without dynamic flag
onStepFinish: step => {
  for (const toolCall of step.toolCalls) {
    switch (toolCall.toolName) {
      case 'weather':
        console.log(toolCall.input.location); // typed as string
        break;
      case 'search':
        console.log(toolCall.input.query); // typed as string
        break;
    }
  }
};
AI SDK 5.0
// Must check dynamic flag first for type narrowing
onStepFinish: step => {
  for (const toolCall of step.toolCalls) {
    // Check if it's a dynamic tool first
    if (toolCall.dynamic) {
      console.log('Dynamic tool:', toolCall.toolName);
      console.log('Input:', toolCall.input); // typed as unknown
      continue;
    }
    // Now TypeScript knows it's a static tool
    switch (toolCall.toolName) {
      case 'weather':
        console.log(toolCall.input.location); // typed as string
        break;
      case 'search':
        console.log(toolCall.input.query); // typed as string
        break;
    }
  }
};

Tool UI Part State Changes

Tool UI parts now use more granular states that better represent the streaming lifecycle and error handling.

AI SDK 4.0
// Old states
{
  message.parts.map(part => {
    if (part.type === 'tool-invocation') {
      switch (part.toolInvocation.state) {
        case 'partial-call':
          return <div>Loading...</div>;
        case 'call':
          return (
            <div>
              Tool called with {JSON.stringify(part.toolInvocation.args)}
            </div>
          );
        case 'result':
          return <div>Result: {part.toolInvocation.result}</div>;
      }
    }
  });
}
AI SDK 5.0
// New granular states
{
  message.parts.map(part => {
    switch (part.type) {
      case 'tool-getWeatherInformation':
        switch (part.state) {
          case 'input-streaming':
            return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
          case 'input-available':
            return <div>Getting weather for {part.input.city}...</div>;
          case 'output-available':
            return <div>Weather: {part.output}</div>;
          case 'output-error':
            return <div>Error: {part.errorText}</div>;
        }
    }
  });
}

State Changes:

  • partial-call → input-streaming (tool input being streamed)
  • call → input-available (tool input complete, ready to execute)
  • result → output-available (tool execution successful)
  • New: output-error (tool execution failed)

Media Type Standardization

mimeType has been renamed to mediaType for consistency. Both image and file types are supported in model messages.

AI SDK 4.0
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mimeType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});
AI SDK 5.0
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mediaType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});

Reasoning Support

Reasoning Text Property Rename

The .reasoning property has been renamed to .reasoningText for multi-step generations.

AI SDK 4.0
for (const step of steps) {
  console.log(step.reasoning);
}
AI SDK 5.0
for (const step of steps) {
  console.log(step.reasoningText);
}

Generate Text Reasoning Property Changes

In generateText() and streamText() results, reasoning properties have been renamed.

AI SDK 4.0
const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: 'Explain your reasoning',
});
console.log(result.reasoning); // String reasoning text
console.log(result.reasoningDetails); // Array of reasoning details
AI SDK 5.0
const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: 'Explain your reasoning',
});
console.log(result.reasoningText); // String reasoning text
console.log(result.reasoning); // Array of reasoning details

Continuation Steps Removal

The experimental_continueSteps option has been removed from generateText().

AI SDK 4.0
const result = await generateText({
  experimental_continueSteps: true,
  // ...
});
AI SDK 5.0
const result = await generateText({
  // experimental_continueSteps has been removed
  // Use newer models with higher output token limits instead
  // ...
});

Image Generation Changes

Image model settings have been moved to providerOptions.

AI SDK 4.0
await generateImage({
  model: luma.image('photon-flash-1', {
    maxImagesPerCall: 5,
    pollIntervalMillis: 500,
  }),
  prompt,
  n: 10,
});
AI SDK 5.0
await generateImage({
  model: luma.image('photon-flash-1'),
  prompt,
  n: 10,
  maxImagesPerCall: 5,
  providerOptions: {
    luma: { pollIntervalMillis: 500 },
  },
});

Step Result Changes

Step Type Removal

The stepType property has been removed from step results.

AI SDK 4.0
steps.forEach(step => {
  switch (step.stepType) {
    case 'initial':
      console.log('Initial step');
      break;
    case 'tool-result':
      console.log('Tool result step');
      break;
    case 'done':
      console.log('Final step');
      break;
  }
});
AI SDK 5.0
steps.forEach((step, index) => {
  if (index === 0) {
    console.log('Initial step');
  } else if (step.toolResults.length > 0) {
    console.log('Tool result step');
  } else {
    console.log('Final step');
  }
});

Step Control: maxSteps → stopWhen

For core functions like generateText and streamText, the maxSteps parameter has been replaced with stopWhen, which provides more flexible control over multi-step execution. The stopWhen parameter defines conditions for stopping the generation when the last step contains tool results. When multiple conditions are provided as an array, the generation stops if any condition is met.

AI SDK 4.0
// V4: Simple numeric limit
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  maxSteps: 5, // Stop after a maximum of 5 steps
});
// useChat with maxSteps
const { messages } = useChat({
  maxSteps: 3, // Stop after a maximum of 3 steps
});
AI SDK 5.0
import { stepCountIs, hasToolCall } from 'ai';
// V5: Server-side - flexible stopping conditions with stopWhen
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  // Only triggers when last step has tool results
  stopWhen: stepCountIs(5), // Stop at step 5 if tools were called
});
// Server-side - stop when specific tool is called
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  stopWhen: hasToolCall('finalizeTask'), // Stop when finalizeTask tool is called
});

Common stopping patterns:

AI SDK 5.0
// Stop after N steps (equivalent to old maxSteps)
// Note: Only applies when the last step has tool results
stopWhen: stepCountIs(5);
// Stop when specific tool is called
stopWhen: hasToolCall('finalizeTask');
// Multiple conditions (stops if ANY condition is met)
stopWhen: [
  stepCountIs(10), // Maximum 10 steps
  hasToolCall('submitOrder'), // Or when order is submitted
];
// Custom condition based on step content
stopWhen: ({ steps }) => {
  const lastStep = steps[steps.length - 1];
  // Custom logic - only triggers if last step has tool results
  return lastStep?.text?.includes('COMPLETE');
};

Important: The stopWhen conditions are only evaluated when the last step contains tool results.
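Conceptually, an array of stopWhen conditions acts as a logical OR over the steps generated so far. A simplified model of the evaluation (hypothetical code, not the SDK's implementation — real conditions receive richer step data):

```typescript
// Minimal step shape for illustration; the SDK's steps carry much more
type Step = { text?: string; toolCalls: Array<{ toolName: string }> };
type StopCondition = (ctx: { steps: Step[] }) => boolean;

// Sketches of the two built-in condition factories
const stepCountIs = (n: number): StopCondition => ({ steps }) => steps.length >= n;
const hasToolCall = (name: string): StopCondition => ({ steps }) =>
  steps[steps.length - 1]?.toolCalls.some(c => c.toolName === name) ?? false;

// Generation stops when ANY condition in the array is met
function shouldStop(conditions: StopCondition[], steps: Step[]): boolean {
  return conditions.some(condition => condition({ steps }));
}

const steps: Step[] = [
  { toolCalls: [{ toolName: 'search' }] },
  { toolCalls: [{ toolName: 'submitOrder' }] },
];
console.log(shouldStop([stepCountIs(10), hasToolCall('submitOrder')], steps)); // true
```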

Usage vs Total Usage

Usage properties now distinguish between single step and total usage.

AI SDK 4.0
// usage contained total token usage across all steps
console.log(result.usage);
AI SDK 5.0
// usage contains token usage from the final step only
console.log(result.usage);
// totalUsage contains total token usage across all steps
console.log(result.totalUsage);
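Conceptually, totalUsage is the sum of the per-step usage, while usage reflects only the final step. A small sketch with a hypothetical Usage shape (the SDK's usage objects carry more fields):

```typescript
// Hypothetical minimal usage shape for illustration
type Usage = { inputTokens: number; outputTokens: number };

// totalUsage aggregates across all steps of a multi-step generation
function totalUsage(stepUsages: Usage[]): Usage {
  return stepUsages.reduce(
    (acc, u) => ({
      inputTokens: acc.inputTokens + u.inputTokens,
      outputTokens: acc.outputTokens + u.outputTokens,
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
}

const stepUsages: Usage[] = [
  { inputTokens: 100, outputTokens: 50 },
  { inputTokens: 180, outputTokens: 40 },
];
const usage = stepUsages[stepUsages.length - 1]; // v5 usage: final step only
console.log(usage.outputTokens); // 40
console.log(totalUsage(stepUsages).outputTokens); // 90
```

If you previously billed or logged against result.usage, switch those call sites to result.totalUsage to keep the old aggregate behavior.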

AI SDK UI Changes

Package Structure Changes

@ai-sdk/rsc Package Extraction

The ai/rsc export has been extracted to a separate package @ai-sdk/rsc.

AI SDK 4.0
import { createStreamableValue } from 'ai/rsc';
AI SDK 5.0
import { createStreamableValue } from '@ai-sdk/rsc';
Don't forget to install the new package: npm install @ai-sdk/rsc

React UI Hooks Moved to @ai-sdk/react

The deprecated ai/react export has been removed in favor of @ai-sdk/react.

AI SDK 4.0
import { useChat } from 'ai/react';
AI SDK 5.0
import { useChat } from '@ai-sdk/react';

Don't forget to install the new package: npm install @ai-sdk/react

useChat Changes

The useChat hook has undergone significant changes in v5, with new transport architecture, removal of managed input state, and more.

maxSteps Removal

The maxSteps parameter has been removed from useChat. You should now use server-side stopWhen conditions for multi-step tool execution control, and manually submit tool results and trigger new messages for client-side tool calls.

AI SDK 4.0
const { messages, sendMessage } = useChat({
  maxSteps: 5, // Automatic tool result submission
});
AI SDK 5.0
// Server-side: Use stopWhen for multi-step control
import { streamText, convertToModelMessages, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(messages),
  stopWhen: stepCountIs(5), // Stop after 5 steps with tool calls
});
// Client-side: Configure automatic submission
import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';
const { messages, sendMessage, addToolResult } = useChat({
  // Automatically submit when all tool results are available
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
  async onToolCall({ toolCall }) {
    const result = await executeToolCall(toolCall);
    // Important: Don't await addToolResult inside onToolCall to avoid deadlocks
    addToolResult({
      tool: toolCall.toolName,
      toolCallId: toolCall.toolCallId,
      output: result,
    });
  },
});

Important: When using sendAutomaticallyWhen, don't use await with addToolResult inside onToolCall as it can cause deadlocks. The await is useful when you're not using automatic submission and need to ensure the messages are updated before manually calling sendMessage().

This change provides more flexibility for handling tool calls and aligns client behavior with server-side multi-step execution patterns.

For more details on the new tool submission approach, see the Tool Result Submission Changes section below.

Initial Messages Renamed

The initialMessages option has been renamed to messages.

AI SDK 4.0
import { useChat, type Message } from '@ai-sdk/react';
function ChatComponent({ initialMessages }: { initialMessages: Message[] }) {
  const { messages } = useChat({
    initialMessages: initialMessages,
    // ...
  });
  // your component
}
AI SDK 5.0
import { useChat, type UIMessage } from '@ai-sdk/react';
function ChatComponent({ initialMessages }: { initialMessages: UIMessage[] }) {
  const { messages } = useChat({
    messages: initialMessages,
    // ...
  });
  // your component
}

Chat Transport Architecture

Configuration is now handled through transport objects instead of direct API options.

AI SDK 4.0
import { useChat } from '@ai-sdk/react';
const { messages } = useChat({
  api: '/api/chat',
  credentials: 'include',
  headers: { 'Custom-Header': 'value' },
});
AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    credentials: 'include',
    headers: { 'Custom-Header': 'value' },
  }),
});

Removed Managed Input State

The useChat hook no longer manages input state internally. You must now manage input state manually.

AI SDK 4.0
import { useChat } from '@ai-sdk/react';
export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });
  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <button type="submit">Send</button>
    </form>
  );
}
AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });
  const handleSubmit = e => {
    e.preventDefault();
    sendMessage({ text: input });
    setInput('');
  };
  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={e => setInput(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}

Message Sending: append → sendMessage

The append function has been replaced with sendMessage and requires structured message format.

AI SDK 4.0
const { append } = useChat();
// Simple text message
append({ role: 'user', content: 'Hello' });
// With custom body
append(
  {
    role: 'user',
    content: 'Hello',
  },
  { body: { imageUrl: 'https://...' } },
);
AI SDK 5.0
const { sendMessage } = useChat();
// Simple text message (most common usage)
sendMessage({ text: 'Hello' });
// Or with explicit parts array
sendMessage({
  parts: [{ type: 'text', text: 'Hello' }],
});
// With custom body (via request options)
sendMessage(
  { role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
  { body: { imageUrl: 'https://...' } },
);

Message Regeneration: reload → regenerate

The reload function has been renamed to regenerate with enhanced functionality.

AI SDK 4.0
const { reload } = useChat();
// Regenerate last message
reload();
AI SDK 5.0
const { regenerate } = useChat();
// Regenerate last message
regenerate();
// Regenerate specific message
regenerate({ messageId: 'message-123' });

onResponse Removal

The onResponse callback has been removed from useChat and useCompletion.

AI SDK 4.0
const { messages } = useChat({
  onResponse(response) {
    // handle response
  },
});
AI SDK 5.0
const { messages } = useChat({
  // onResponse is no longer available
});

Send Extra Message Fields Default

The sendExtraMessageFields option has been removed and is now the default behavior.

AI SDK 4.0
const { messages } = useChat({
  sendExtraMessageFields: true,
});
AI SDK 5.0
const { messages } = useChat({
  // sendExtraMessageFields is now the default
});

Keep Last Message on Error Removal

The keepLastMessageOnError option has been removed as it's no longer needed.

AI SDK 4.0
const { messages } = useChat({
  keepLastMessageOnError: true,
});
AI SDK 5.0
const { messages } = useChat({
  // keepLastMessageOnError is no longer needed
});

Chat Request Options Changes

The data and allowEmptySubmit options have been removed from ChatRequestOptions.

AI SDK 4.0
handleSubmit(e, {
  data: { imageUrl: 'https://...' },
  body: { custom: 'value' },
  allowEmptySubmit: true,
});
AI SDK 5.0
sendMessage(
  {
    /* yourMessage */
  },
  {
    body: {
      custom: 'value',
      imageUrl: 'https://...', // Move data to body
    },
  },
);

Request Options Type Rename

RequestOptions has been renamed to CompletionRequestOptions.

AI SDK 4.0
import type { RequestOptions } from 'ai';
AI SDK 5.0
import type { CompletionRequestOptions } from 'ai';

addToolResult Changes

In the addToolResult function, the result parameter has been renamed to output for consistency with other tool-related APIs.

AI SDK 4.0
const { addToolResult } = useChat();
// Add tool result with 'result' parameter
addToolResult({
  toolCallId: 'tool-call-123',
  result: 'Weather: 72°F, sunny',
});
AI SDK 5.0
const { addToolResult } = useChat();
// Add tool result with 'output' parameter and 'tool' name for type safety
addToolResult({
  tool: 'getWeather',
  toolCallId: 'tool-call-123',
  output: 'Weather: 72°F, sunny',
});

Tool Result Submission Changes

The automatic tool result submission behavior has been updated in useChat and the Chat component. You now have more control and flexibility over when tool results are submitted.

  • onToolCall no longer supports returning values to automatically submit tool results
  • You must explicitly call addToolResult to provide tool results
  • Use sendAutomaticallyWhen with lastAssistantMessageIsCompleteWithToolCalls helper for automatic submission
  • Important: Don't use await with addToolResult inside onToolCall to avoid deadlocks
  • The maxSteps parameter has been removed from the Chat component and useChat hook
  • For multi-step tool execution, use server-side stopWhen conditions instead (see maxSteps Removal)
AI SDK 4.0
const { messages, sendMessage, addToolResult } = useChat({
  maxSteps: 5, // Removed in v5
  // Automatic submission by returning a value
  async onToolCall({ toolCall }) {
    if (toolCall.toolName === 'getLocation') {
      const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];
      return cities[Math.floor(Math.random() * cities.length)];
    }
  },
});
AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import {
  DefaultChatTransport,
  lastAssistantMessageIsCompleteWithToolCalls,
} from 'ai';
const { messages, sendMessage, addToolResult } = useChat({
  // Automatic submission with helper
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
  async onToolCall({ toolCall }) {
    if (toolCall.toolName === 'getLocation') {
      const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];
      // Important: Don't await inside onToolCall to avoid deadlocks
      addToolResult({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: cities[Math.floor(Math.random() * cities.length)],
      });
    }
  },
});

Loading State Changes

The deprecated isLoading helper has been removed in favor of status.

AI SDK 4.0
const { isLoading } = useChat();
AI SDK 5.0
const { status } = useChat();
// Use status instead of isLoading for more granular control

Resume Stream Support

The resume functionality has been moved from experimental_resume to resumeStream.

AI SDK 4.0
// Resume was experimental
const { messages } = useChat({
  experimental_resume: true,
});
AI SDK 5.0
const { messages } = useChat({
  resumeStream: true, // Resume interrupted streams
});

Dynamic Body Values

In v4, the body option in useChat configuration would dynamically update with component state changes. In v5, the body value is only captured at the first render and remains static throughout the component lifecycle.

AI SDK 4.0
const [temperature, setTemperature] = useState(0.7);

const { messages } = useChat({
  api: '/api/chat',
  body: {
    temperature, // This would update dynamically in v4
  },
});
AI SDK 5.0
const [temperature, setTemperature] = useState(0.7);

// Option 1: Use request-level configuration (recommended)
const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
});

// Pass dynamic values at request time
sendMessage(
  { text: input },
  {
    body: {
      temperature, // Current temperature value at request time
    },
  },
);

// Option 2: Use function configuration with useRef
const temperatureRef = useRef(temperature);
temperatureRef.current = temperature;

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    body: () => ({
      temperature: temperatureRef.current,
    }),
  }),
});

For more details on request configuration, see the Chatbot guide.

@ai-sdk/vue Changes

The Vue.js integration has been completely restructured, replacing the useChat composable with a Chat class.

useChat Replaced with Chat Class

@ai-sdk/vue v1
<script setup>
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat({
  api: '/api/chat',
});
</script>
@ai-sdk/vue v2
<script setup lang="ts">
import { Chat } from '@ai-sdk/vue';
import { DefaultChatTransport } from 'ai';
import { ref } from 'vue';

const input = ref('');
const chat = new Chat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
});

const handleSubmit = (e: Event) => {
  e.preventDefault();
  chat.sendMessage({ text: input.value });
  input.value = '';
};
</script>

Message Structure Changes

Messages now use a parts array instead of a content string.

@ai-sdk/vue v1
<template>
  <div v-for="message in messages" :key="message.id">
    <div>{{ message.role }}: {{ message.content }}</div>
  </div>
</template>
@ai-sdk/vue v2
<template>
  <div v-for="message in chat.messages" :key="message.id">
    <div>{{ message.role }}:</div>
    <div v-for="part in message.parts" :key="part.type">
      <span v-if="part.type === 'text'">{{ part.text }}</span>
    </div>
  </div>
</template>

@ai-sdk/svelte Changes

The Svelte integration has also been updated with new constructor patterns and readonly properties.

Constructor API Changes

@ai-sdk/svelte v1
import { Chat } from '@ai-sdk/svelte';

const chatInstance = new Chat({
  api: '/api/chat',
});
@ai-sdk/svelte v2
import { Chat } from '@ai-sdk/svelte';
import { DefaultChatTransport } from 'ai';

const chatInstance = new Chat(() => ({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
}));
Properties Made Readonly

Properties are now readonly and must be updated using setter methods.

@ai-sdk/svelte v1
// Direct property mutation was allowed
chatInstance.messages = [...chatInstance.messages, newMessage];
@ai-sdk/svelte v2
// Must use setter methods
chatInstance.setMessages([...chatInstance.messages, newMessage]);
Removed Managed Input

Like React and Vue, input management has been removed from the Svelte integration.

@ai-sdk/svelte v1
// Input was managed internally
const { messages, input, handleSubmit } = chatInstance;
@ai-sdk/svelte v2
// Must manage input state manually
let input = '';
const { messages, sendMessage } = chatInstance;

const handleSubmit = () => {
  sendMessage({ text: input });
  input = '';
};

@ai-sdk/ui-utils Package Removal

The @ai-sdk/ui-utils package has been removed and its exports moved to the main ai package.

AI SDK 4.0
import { getTextFromDataUrl } from '@ai-sdk/ui-utils';
AI SDK 5.0
import { getTextFromDataUrl } from 'ai';

useCompletion Changes

The data property has been removed from the useCompletion hook.

AI SDK 4.0
const {
  completion,
  handleSubmit,
  data, // No longer available
} = useCompletion();
AI SDK 5.0
const {
  completion,
  handleSubmit,
  // data property removed entirely
} = useCompletion();

useAssistant Removal

The useAssistant hook has been removed.

AI SDK 4.0
import { useAssistant } from '@ai-sdk/react';
AI SDK 5.0
// useAssistant has been removed
// Use useChat with appropriate configuration instead

For an implementation of the assistant functionality with AI SDK v5, see this example repository.

Attachments → File Parts

The experimental_attachments property has been replaced with the parts array.

AI SDK 4.0
{messages.map(message => (
  <div className="flex flex-col gap-2">
    {message.content}
    <div className="flex flex-row gap-2">
      {message.experimental_attachments?.map((attachment, index) =>
        attachment.contentType?.includes('image/') ? (
          <img src={attachment.url} alt={attachment.name} />
        ) : attachment.contentType?.includes('text/') ? (
          <div className="w-32 h-24 p-2 overflow-hidden text-xs border rounded-md ellipsis text-zinc-500">
            {getTextFromDataUrl(attachment.url)}
          </div>
        ) : null,
      )}
    </div>
  </div>
))}
AI SDK 5.0
{messages.map(message => (
  <div>
    {message.parts.map((part, index) => {
      if (part.type === 'text') {
        return <div key={index}>{part.text}</div>;
      }
      if (part.type === 'file' && part.mediaType?.startsWith('image/')) {
        return (
          <div key={index}>
            <img src={part.url} />
          </div>
        );
      }
    })}
  </div>
))}

Embedding Changes

Provider Options for Embeddings

Embedding model settings now use provider options instead of model parameters.

AI SDK 4.0
const { embedding } = await embed({
  model: openai('text-embedding-3-small', {
    dimensions: 10,
  }),
  value: 'sunny day at the beach',
});
AI SDK 5.0
const { embedding } = await embed({
  model: openai('text-embedding-3-small'),
  value: 'sunny day at the beach',
  providerOptions: {
    openai: {
      dimensions: 10,
    },
  },
});

Raw Response → Response

The rawResponse property has been renamed to response.

AI SDK 4.0
const { rawResponse } = await embed(/* */);
AI SDK 5.0
const { response } = await embed(/* */);

Parallel Requests in embedMany

embedMany now makes parallel requests with a configurable maxParallelCalls option.

AI SDK 5.0
const { embeddings, usage } = await embedMany({
  maxParallelCalls: 2, // Limit parallel requests
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});

LangChain Adapter Moved to @ai-sdk/langchain

The LangChainAdapter has been moved to @ai-sdk/langchain and the API has been updated to use UI message streams.

AI SDK 4.0
import { LangChainAdapter } from 'ai';
const response = LangChainAdapter.toDataStreamResponse(stream);
AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';
const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});

Don't forget to install the new package: npm install @ai-sdk/langchain

LlamaIndex Adapter Moved to @ai-sdk/llamaindex

The LlamaIndexAdapter has been extracted to a separate package @ai-sdk/llamaindex and follows the same UI message stream pattern.

AI SDK 4.0
import { LlamaIndexAdapter } from 'ai';
const response = LlamaIndexAdapter.toDataStreamResponse(stream);
AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/llamaindex';
import { createUIMessageStreamResponse } from 'ai';
const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});

Don't forget to install the new package: npm install @ai-sdk/llamaindex

Streaming Architecture

The streaming architecture has been completely redesigned in v5 to support better content differentiation, concurrent streaming of multiple parts, and improved real-time UX.

Stream Protocol Changes

Stream Protocol: Single Chunks → Start/Delta/End Pattern

The fundamental streaming pattern has changed from single chunks to a three-phase pattern with unique IDs for each content block.

AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-delta': {
      process.stdout.write(chunk.textDelta);
      break;
    }
  }
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-start': {
      // New: Initialize a text block with unique ID
      console.log(`Starting text block: ${chunk.id}`);
      break;
    }
    case 'text-delta': {
      // Changed: Now includes ID and uses 'delta' property
      process.stdout.write(chunk.delta); // Changed from 'textDelta'
      break;
    }
    case 'text-end': {
      // New: Finalize the text block
      console.log(`Completed text block: ${chunk.id}`);
      break;
    }
  }
}
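Because multiple blocks can stream concurrently, consumers that previously concatenated every delta into one string now need to accumulate per block ID. The following is a minimal sketch of that bookkeeping in plain TypeScript; the simplified `Chunk` type here is a stand-in for illustration, not the SDK's actual stream part types:

```typescript
// Simplified, hypothetical chunk shape mirroring the start/delta/end pattern.
type Chunk =
  | { type: 'text-start'; id: string }
  | { type: 'text-delta'; id: string; delta: string }
  | { type: 'text-end'; id: string };

// Accumulate text blocks from a stream of chunks, keyed by block ID.
function assembleTextBlocks(chunks: Chunk[]): Map<string, string> {
  const result = new Map<string, string>();
  for (const chunk of chunks) {
    if (chunk.type === 'text-start') {
      result.set(chunk.id, '');
    } else if (chunk.type === 'text-delta') {
      result.set(chunk.id, (result.get(chunk.id) ?? '') + chunk.delta);
    }
    // 'text-end' marks the block as complete; nothing to accumulate here.
  }
  return result;
}

const blocks = assembleTextBlocks([
  { type: 'text-start', id: 'a' },
  { type: 'text-delta', id: 'a', delta: 'Hello, ' },
  { type: 'text-delta', id: 'a', delta: 'world!' },
  { type: 'text-end', id: 'a' },
]);
// blocks.get('a') === 'Hello, world!'
```

The per-chunk `id` exists precisely so this keyed accumulation stays correct when several blocks interleave.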

Reasoning Streaming Pattern

Reasoning content now follows the same start/delta/end pattern:

AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning': {
      // Single chunk with full reasoning text
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning-start': {
      console.log(`Starting reasoning block: ${chunk.id}`);
      break;
    }
    case 'reasoning-delta': {
      process.stdout.write(chunk.delta);
      break;
    }
    case 'reasoning-end': {
      console.log(`Completed reasoning block: ${chunk.id}`);
      break;
    }
  }
}

Tool Input Streaming

Tool inputs can now be streamed as they're being generated:

AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'tool-input-start': {
      console.log(`Starting tool input for ${chunk.toolName}: ${chunk.id}`);
      break;
    }
    case 'tool-input-delta': {
      // Stream the JSON input as it's being generated
      process.stdout.write(chunk.delta);
      break;
    }
    case 'tool-input-end': {
      console.log(`Completed tool input: ${chunk.id}`);
      break;
    }
    case 'tool-call': {
      // Final tool call with complete input
      console.log('Tool call:', chunk.toolName, chunk.input);
      break;
    }
  }
}

onChunk Callback Changes

The onChunk callback now receives the new streaming chunk types with IDs and the start/delta/end pattern.

AI SDK 4.0
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text-delta': {
        // Single property with text content
        console.log('Text delta:', chunk.textDelta);
        break;
      }
    }
  },
});
AI SDK 5.0
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text': {
        // Text chunks now use single 'text' type
        console.log('Text chunk:', chunk.text);
        break;
      }
      case 'reasoning': {
        // Reasoning chunks use single 'reasoning' type
        console.log('Reasoning chunk:', chunk.text);
        break;
      }
      case 'source': {
        console.log('Source chunk:', chunk);
        break;
      }
      case 'tool-call': {
        console.log('Tool call:', chunk.toolName, chunk.input);
        break;
      }
      case 'tool-input-start': {
        console.log(
          `Tool input started for ${chunk.toolName}:`,
          chunk.toolCallId,
        );
        break;
      }
      case 'tool-input-delta': {
        console.log(`Tool input delta for ${chunk.toolCallId}:`, chunk.delta);
        break;
      }
      case 'tool-result': {
        console.log('Tool result:', chunk.output);
        break;
      }
      case 'raw': {
        console.log('Raw chunk:', chunk);
        break;
      }
    }
  },
});

File Stream Parts Restructure

File parts in streams have been flattened.

AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.file.mediaType);
      console.log('File data:', chunk.file.data);
      break;
    }
  }
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.mediaType);
      console.log('File data:', chunk.data);
      break;
    }
  }
}

Source Stream Parts Restructure

Source stream parts have been flattened.

AI SDK 4.0
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.source.sourceType === 'url') {
    console.log('ID:', part.source.id);
    console.log('Title:', part.source.title);
    console.log('URL:', part.source.url);
  }
}
AI SDK 5.0
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.sourceType === 'url') {
    console.log('ID:', part.id);
    console.log('Title:', part.title);
    console.log('URL:', part.url);
  }
}
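If you persisted v4-shaped source parts, the flattening can be done with a one-line adapter. This is an illustrative sketch in plain TypeScript; the `V4SourcePart` type is a simplified stand-in, not the SDK's exported type:

```typescript
// v4 nested source part shape (simplified for illustration).
type V4SourcePart = {
  type: 'source';
  source: { sourceType: 'url'; id: string; title?: string; url: string };
};

// Hoist the nested `source` fields onto the part itself, matching the v5 shape.
function flattenSourcePart(part: V4SourcePart) {
  const { source, ...rest } = part;
  return { ...rest, ...source };
}

const flat = flattenSourcePart({
  type: 'source',
  source: {
    sourceType: 'url',
    id: 'source-1',
    title: 'Example Source',
    url: 'https://example.com',
  },
});
// flat.url === 'https://example.com' and flat.sourceType === 'url'
```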

Finish Event Changes

Stream finish events have been renamed for consistency.

AI SDK 4.0
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'step-finish': {
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Usage:', part.usage);
      break;
    }
  }
}
AI SDK 5.0
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'finish-step': {
      // Renamed from 'step-finish'
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Total Usage:', part.totalUsage); // Changed from 'usage'
      break;
    }
  }
}

Stream Protocol Changes

Proprietary Protocol → Server-Sent Events

The data stream protocol has been updated to use Server-Sent Events.

AI SDK 4.0
import { createDataStream, formatDataStreamPart } from 'ai';

const dataStream = createDataStream({
  execute: writer => {
    writer.writeData('initialized call');
    writer.write(formatDataStreamPart('text', 'Hello'));
    writer.writeSource({
      type: 'source',
      sourceType: 'url',
      id: 'source-1',
      url: 'https://example.com',
      title: 'Example Source',
    });
  },
});
AI SDK 5.0
import { createUIMessageStream } from 'ai';
const stream = createUIMessageStream({
  execute: ({ writer }) => {
    writer.write({ type: 'data', value: ['initialized call'] });
    writer.write({ type: 'text', value: 'Hello' });
    writer.write({
      type: 'source-url',
      value: {
        type: 'source',
        id: 'source-1',
        url: 'https://example.com',
        title: 'Example Source',
      },
    });
  },
});
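Conceptually, each chunk now travels as a JSON payload inside a Server-Sent Events `data:` line. The encoder below is a simplified illustration of that idea only; the SDK's actual wire format (event names, done markers, headers) may differ in detail:

```typescript
// Encode one chunk object as a Server-Sent Events frame.
// Illustrative sketch, not the SDK's exact framing.
function toSseFrame(chunk: object): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

const frame = toSseFrame({ type: 'text', value: 'Hello' });
// 'data: {"type":"text","value":"Hello"}\n\n'
```

Using standard SSE framing means the stream can be consumed with ordinary `EventSource`-style tooling instead of a proprietary parser.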

Data Stream Response Helper Functions Renamed

The streaming API has been completely restructured from data streams to UI message streams.

AI SDK 4.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });
  result.pipeDataStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});
return result.toDataStreamResponse();
AI SDK 5.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });
  result.pipeUIMessageStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});
return result.toUIMessageStreamResponse();

Stream Transform Function Renaming

Various stream-related functions have been renamed for consistency.

AI SDK 4.0
import { DataStreamToSSETransformStream } from 'ai';
AI SDK 5.0
import { JsonToSseTransformStream } from 'ai';

Error Handling: getErrorMessage → onError

The getErrorMessage option in toDataStreamResponse has been replaced with onError in toUIMessageStreamResponse, providing more control over error forwarding to the client.

By default, error messages are NOT sent to the client to prevent leaking sensitive information. The onError callback allows you to explicitly control what error information is forwarded to the client.

AI SDK 4.0
return result.toDataStreamResponse({
  getErrorMessage: error => {
    // Return sanitized error data to send to client
    // Only return what you want the client to see!
    return {
      errorCode: 'STREAM_ERROR',
      message: 'An error occurred while processing your request',
      // In production, avoid sending error.message directly to prevent information leakage
    };
  },
});
AI SDK 5.0
return result.toUIMessageStreamResponse({
  onError: error => {
    // Return sanitized error data to send to client
    // Only return what you want the client to see!
    return {
      errorCode: 'STREAM_ERROR',
      message: 'An error occurred while processing your request',
      // In production, avoid sending error.message directly to prevent information leakage
    };
  },
});
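A common way to structure such a callback is a whitelist-based sanitizer: only explicitly recognized error categories expose any detail, everything else collapses to a generic message. The sketch below is framework-agnostic; `RangeError` is a placeholder for whatever error classes your application considers safe to surface:

```typescript
// Map an unknown thrown value to a client-safe message.
// Only whitelisted error classes leak any detail.
function sanitizeError(error: unknown): string {
  if (error instanceof RangeError) {
    // Placeholder for a known, safe-to-expose category.
    return `Invalid input: ${error.message}`;
  }
  // Default: generic message, no internals leaked.
  return 'An error occurred while processing your request';
}
```

Keeping the whitelist small ensures stack traces and internal details stay on the server.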

Utility Changes

ID Generation Changes

The createIdGenerator() function now requires a size argument.

AI SDK 4.0
const generator = createIdGenerator({ prefix: 'msg' });
const id = generator(16); // Custom size at call time
AI SDK 5.0
const generator = createIdGenerator({ prefix: 'msg', size: 16 });
const id = generator(); // Fixed size from creation
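The v5 behavior can be pictured as a closure that bakes the prefix and size in at creation time. The following is a local stand-in sketch to illustrate the pattern, not the SDK's implementation:

```typescript
// Minimal stand-in for createIdGenerator: prefix and size are fixed
// when the generator is created, not at call time.
function makeIdGenerator({ prefix, size }: { prefix: string; size: number }) {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  return () => {
    let suffix = '';
    for (let i = 0; i < size; i++) {
      suffix += alphabet[Math.floor(Math.random() * alphabet.length)];
    }
    return `${prefix}-${suffix}`;
  };
}

const generator = makeIdGenerator({ prefix: 'msg', size: 16 });
const id = generator(); // e.g. 'msg-k2f9…': always prefix + 16 random chars
```

Fixing the size at creation means every ID from one generator has a uniform, predictable shape.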

IDGenerator → IdGenerator

The type name has been updated.

AI SDK 4.0
import { IDGenerator } from 'ai';
AI SDK 5.0
import { IdGenerator } from 'ai';

Provider Interface Changes

Language Model V2 Import

LanguageModelV2 must now be imported from @ai-sdk/provider.

AI SDK 4.0
import { LanguageModelV2 } from 'ai';
AI SDK 5.0
import { LanguageModelV2 } from '@ai-sdk/provider';

Middleware Rename

LanguageModelV1Middleware has been renamed and moved.

AI SDK 4.0
import { LanguageModelV1Middleware } from 'ai';
AI SDK 5.0
import { LanguageModelV2Middleware } from '@ai-sdk/provider';

Usage Token Properties

Token usage properties have been renamed for consistency.

AI SDK 4.0
// In language model implementations
{
  usage: {
    promptTokens: 10,
    completionTokens: 20
  }
}
AI SDK 5.0
// In language model implementations
{
  usage: {
    inputTokens: 10,
    outputTokens: 20,
    totalTokens: 30 // Now required
  }
}
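When migrating stored telemetry or custom middleware, a small adapter can rename the old fields and derive the now-required total. This helper is illustrative, not part of the SDK:

```typescript
type V4Usage = { promptTokens: number; completionTokens: number };
type V5Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

// Rename v4 usage fields and compute the required totalTokens.
function toV5Usage(usage: V4Usage): V5Usage {
  return {
    inputTokens: usage.promptTokens,
    outputTokens: usage.completionTokens,
    totalTokens: usage.promptTokens + usage.completionTokens,
  };
}

const migrated = toV5Usage({ promptTokens: 10, completionTokens: 20 });
// { inputTokens: 10, outputTokens: 20, totalTokens: 30 }
```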

Stream Part Type Changes

The LanguageModelV2StreamPart type has been expanded to support the new streaming architecture with start/delta/end patterns and IDs.

AI SDK 4.0
// V4: Simple stream parts
type LanguageModelV2StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'reasoning'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; input: string };
AI SDK 5.0
// V5: Enhanced stream parts with IDs and lifecycle events
type LanguageModelV2StreamPart =
  // Text blocks with start/delta/end pattern
  | {
      type: 'text-start';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'text-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'text-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Reasoning blocks with start/delta/end pattern
  | {
      type: 'reasoning-start';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'reasoning-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'reasoning-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Tool input streaming
  | {
      type: 'tool-input-start';
      id: string;
      toolName: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'tool-input-delta';
      id: string;
      delta: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  | {
      type: 'tool-input-end';
      id: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Enhanced tool calls
  | {
      type: 'tool-call';
      toolCallId: string;
      toolName: string;
      input: string;
      providerMetadata?: SharedV2ProviderMetadata;
    }
  // Stream lifecycle events
  | { type: 'stream-start'; warnings: Array<LanguageModelV2CallWarning> }
  | {
      type: 'finish';
      usage: LanguageModelV2Usage;
      finishReason: LanguageModelV2FinishReason;
      providerMetadata?: SharedV2ProviderMetadata;
    };

Raw Response → Response

Provider response objects have been updated.

AI SDK 4.0
// In language model implementations
{
  rawResponse: {
    /* ... */
  }
}
AI SDK 5.0
// In language model implementations
{
  response: {
    /* ... */
  }
}

wrapLanguageModel now stable

AI SDK 4.0
import { experimental_wrapLanguageModel } from 'ai';
AI SDK 5.0
import { wrapLanguageModel } from 'ai';

activeTools No Longer Experimental

AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_activeTools: ['weatherTool'],
});
AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  activeTools: ['weatherTool'], // No longer experimental
});

prepareStep No Longer Experimental

The experimental_prepareStep option has been promoted and no longer requires the experimental prefix.

AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
    };
  },
});
AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
      // Can also configure toolChoice, model, etc.
    };
  },
});

The prepareStep function receives { steps, stepNumber, model } and can return:

  • model: Different model for this step
  • activeTools: Which tools to make available
  • toolChoice: Tool selection strategy
  • system: System message for this step
  • undefined: Use default settings
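These return values can encode a step-dependent policy. Here is a standalone sketch of such a function using the same `{ stepNumber }` argument shape; `'weatherTool'` is a hypothetical tool name:

```typescript
// Decide per-step settings: restrict tools on the first step,
// fall back to defaults afterwards.
function choosePrepareStep({ stepNumber }: { stepNumber: number }) {
  if (stepNumber === 0) {
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
    };
  }
  return undefined; // Use default settings for later steps.
}
```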

Temperature Default Removal

Temperature is no longer set to 0 by default.

AI SDK 4.0
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  // Implicitly temperature: 0
});
AI SDK 5.0
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  temperature: 0, // Must explicitly set
});

Message Persistence Changes

In v4, you would typically use helper functions like appendResponseMessages or appendClientMessage to format messages in the onFinish callback of streamText:

AI SDK 4.0
import {
  streamText,
  appendClientMessage,
  appendResponseMessages,
} from 'ai';
import { openai } from '@ai-sdk/openai';

const updatedMessages = appendClientMessage({
  messages,
  message: lastUserMessage,
});

const result = streamText({
  model: openai('gpt-4o'),
  messages: updatedMessages,
  experimental_generateMessageId: () => generateId(), // ID generation on streamText
  onFinish: async ({ responseMessages, usage }) => {
    // Use helper functions to format messages
    const finalMessages = appendResponseMessages({
      messages: updatedMessages,
      responseMessages,
    });
    // Save formatted messages to database
    await saveMessages(finalMessages);
  },
});

In v5, message persistence is now handled through the toUIMessageStreamResponse method, which automatically formats response messages in the UIMessage format:

AI SDK 5.0
import { streamText, convertToModelMessages, UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

const messages: UIMessage[] = [
  // Your existing messages in UIMessage format
];

const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
  // experimental_generateMessageId removed from here
});

return result.toUIMessageStreamResponse({
  originalMessages: messages, // Pass original messages for context
  generateMessageId: () => generateId(), // ID generation moved here for UI messages
  onFinish: ({ messages, responseMessage }) => {
    // messages contains all messages (original + response) in UIMessage format
    saveChat({ chatId, messages });
    // responseMessage contains just the generated message in UIMessage format
    saveMessage({ chatId, message: responseMessage });
  },
});

Message ID Generation

The experimental_generateMessageId option has been moved from streamText configuration to toUIMessageStreamResponse, as it's designed for use with UIMessages rather than ModelMessages.

AI SDK 4.0
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  experimental_generateMessageId: () => generateId(),
});
AI SDK 5.0
const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages),
});

return result.toUIMessageStreamResponse({
  generateMessageId: () => generateId(), // No longer experimental
  // ...
});

For more details on message IDs and persistence, see the Chatbot Message Persistence guide.

Using createUIMessageStream

For more complex scenarios, especially when working with data parts, you can use createUIMessageStream:

AI SDK 5.0 - Advanced
import {
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
  convertToModelMessages,
  UIMessage,
} from 'ai';
import { openai } from '@ai-sdk/openai';

const stream = createUIMessageStream({
  originalMessages: messages,
  execute: ({ writer }) => {
    // Write custom data parts
    writer.write({
      type: 'data',
      data: { status: 'processing', timestamp: Date.now() },
    });

    // Stream the AI response
    const result = streamText({
      model: openai('gpt-4o'),
      messages: convertToModelMessages(messages),
    });

    writer.merge(result.toUIMessageStream());
  },
  onFinish: ({ messages }) => {
    // messages contains all messages (original + response + data parts) in UIMessage format
    saveChat({ chatId, messages });
  },
});

return createUIMessageStreamResponse({ stream });

Provider & Model Changes

OpenAI

Structured Outputs Default

Structured outputs are now enabled by default for supported OpenAI models.

AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4.1-2024-08-06', { structuredOutputs: true }),
});
AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4.1-2024-08-06'),
  // structuredOutputs: true is now the default
});

Compatibility Option Removal

The compatibility option has been removed; strict mode is now the default.

AI SDK 4.0
const openai = createOpenAI({
  compatibility: 'strict',
});
AI SDK 5.0
const openai = createOpenAI({
  // strict compatibility is now the default
});

Legacy Function Calls Removal

The useLegacyFunctionCalls option has been removed.

AI SDK 4.0
const result = streamText({
  model: openai('gpt-4.1', { useLegacyFunctionCalls: true }),
});
AI SDK 5.0
const result = streamText({
  model: openai('gpt-4.1'),
});

Simulate Streaming

The simulateStreaming model option has been replaced with middleware.

AI SDK 4.0
const result = generateText({
  model: openai('gpt-4.1', { simulateStreaming: true }),
  prompt: 'Hello, world!',
});
AI SDK 5.0
import { simulateStreamingMiddleware, wrapLanguageModel } from 'ai';

const model = wrapLanguageModel({
  model: openai('gpt-4.1'),
  middleware: simulateStreamingMiddleware(),
});

const result = generateText({
  model,
  prompt: 'Hello, world!',
});

Google

Search Grounding is now a provider defined tool

Search Grounding is now called "Google Search" and is now a provider defined tool.

AI SDK 4.0
const { text, providerMetadata } = await generateText({
  model: google('gemini-1.5-pro', {
    useSearchGrounding: true,
  }),
  prompt: 'List the top 5 San Francisco news from the past week.',
});
AI SDK 5.0
import { google } from '@ai-sdk/google';

const { text, sources, providerMetadata } = await generateText({
  model: google('gemini-1.5-pro'),
  prompt: 'List the top 5 San Francisco news from the past week.',
  tools: {
    google_search: google.tools.googleSearch({}),
  },
});

Amazon Bedrock

Snake Case → Camel Case

Provider options have been updated to use camelCase.

AI SDK 4.0
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoning_config: {
        /* ... */
      },
    },
  },
});
AI SDK 5.0
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoningConfig: {
        /* ... */
      },
    },
  },
});

Provider-Utils Changes

Deprecated CoreTool* types have been removed.

AI SDK 4.0
import {
  CoreToolCall,
  CoreToolResult,
  CoreToolResultUnion,
  CoreToolCallUnion,
  CoreToolChoice,
} from '@ai-sdk/provider-utils';
AI SDK 5.0
import {
  ToolCall,
  ToolResult,
  TypedToolResult,
  TypedToolCall,
  ToolChoice,
} from '@ai-sdk/provider-utils';

Codemod Table

The following table lists available codemods for the AI SDK 5.0 upgrade process. For more information, see the Codemods section.

AI SDK Core Changes

  • Flatten streamText file properties: v5/flatten-streamtext-file-properties
  • ID Generation Changes: v5/require-createIdGenerator-size-argument
  • IDGenerator → IdGenerator: v5/rename-IDGenerator-to-IdGenerator
  • Import LanguageModelV2 from provider package: v5/import-LanguageModelV2-from-provider-package
  • Migrate to data stream protocol v2: v5/migrate-to-data-stream-protocol-v2
  • Move image model maxImagesPerCall: v5/move-image-model-maxImagesPerCall
  • Move LangChain adapter: v5/move-langchain-adapter
  • Move provider options: v5/move-provider-options
  • Move React to AI SDK: v5/move-react-to-ai-sdk
  • Move UI utils to AI: v5/move-ui-utils-to-ai
  • Remove experimental wrap language model: v5/remove-experimental-wrap-language-model
  • Remove experimental activeTools: v5/remove-experimental-activetools
  • Remove experimental prepareStep: v5/remove-experimental-preparestep
  • Remove experimental continueSteps: v5/remove-experimental-continuesteps
  • Remove experimental temperature: v5/remove-experimental-temperature
  • Remove experimental truncate: v5/remove-experimental-truncate
  • Remove experimental OpenAI compatibility: v5/remove-experimental-openai-compatibility
  • Remove experimental OpenAI legacy function calls: v5/remove-experimental-openai-legacy-function-calls
  • Remove experimental OpenAI structured outputs: v5/remove-experimental-openai-structured-outputs
  • Remove experimental OpenAI store: v5/remove-experimental-openai-store
  • Remove experimental OpenAI user: v5/remove-experimental-openai-user
  • Remove experimental OpenAI parallel tool calls: v5/remove-experimental-openai-parallel-tool-calls
  • Remove experimental OpenAI response format: v5/remove-experimental-openai-response-format
  • Remove experimental OpenAI logit bias: v5/remove-experimental-openai-logit-bias
  • Remove experimental OpenAI logprobs: v5/remove-experimental-openai-logprobs
  • Remove experimental OpenAI seed: v5/remove-experimental-openai-seed
  • Remove experimental OpenAI service tier: v5/remove-experimental-openai-service-tier
  • Remove experimental OpenAI top logprobs: v5/remove-experimental-openai-top-logprobs
  • Remove experimental OpenAI transform: v5/remove-experimental-openai-transform
  • Remove experimental OpenAI stream options: v5/remove-experimental-openai-stream-options
  • Remove experimental OpenAI prediction: v5/remove-experimental-openai-prediction
  • Remove experimental Anthropic caching: v5/remove-experimental-anthropic-caching
  • Remove experimental Anthropic computer use: v5/remove-experimental-anthropic-computer-use
  • Remove experimental Anthropic PDF support: v5/remove-experimental-anthropic-pdf-support
  • Remove experimental Anthropic prompt caching: v5/remove-experimental-anthropic-prompt-caching
  • Remove experimental Google search grounding: v5/remove-experimental-google-search-grounding
  • Remove experimental Google code execution: v5/remove-experimental-google-code-execution
  • Remove experimental Google cached content: v5/remove-experimental-google-cached-content
  • Remove experimental Google custom headers: v5/remove-experimental-google-custom-headers
  • Rename format stream part: v5/rename-format-stream-part
  • Rename parse stream part: v5/rename-parse-stream-part
  • Replace image type with file type: v5/replace-image-type-with-file-type
  • Replace LlamaIndex adapter: v5/replace-llamaindex-adapter
  • Replace onCompletion with onFinal: v5/replace-oncompletion-with-onfinal
  • Replace provider metadata with provider options: v5/replace-provider-metadata-with-provider-options
  • Replace rawResponse with response: v5/replace-rawresponse-with-response
  • Replace redacted reasoning type: v5/replace-redacted-reasoning-type
  • Replace simulate streaming: v5/replace-simulate-streaming
  • Replace textDelta with text: v5/replace-textdelta-with-text
  • Replace usage token properties: v5/replace-usage-token-properties
  • Restructure file stream parts: v5/restructure-file-stream-parts
  • Restructure source stream parts: v5/restructure-source-stream-parts
  • RSC package: v5/rsc-package

Changes Between v5 Beta Versions

This section documents breaking changes between different beta versions of AI SDK 5.0. If you're upgrading from an earlier v5 beta version to a later one, check this section for any changes that might affect your code.

fullStream Type Rename: text/reasoning → text-delta/reasoning-delta

The chunk types in fullStream have been renamed for consistency with UI streams and language model streams.

AI SDK 5.0 (before beta.26)
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text': {
      process.stdout.write(chunk.text);
      break;
    }
    case 'reasoning': {
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
AI SDK 5.0 (beta.26 and later)
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-delta': {
      process.stdout.write(chunk.text);
      break;
    }
    case 'reasoning-delta': {
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}