
# Announcing AI SDK 6 Beta

<Note type="warning">
  AI SDK 6 is in beta — while more stable than alpha, AI SDK 6 is still in
  active development and APIs may still change. Pin to specific versions as
  breaking changes may occur in patch releases.
</Note>

## Why AI SDK 6?

AI SDK 6 is a **major version** due to the introduction of the **v3 Language Model Specification** that powers new capabilities like agents and tool approval. However, unlike AI SDK 5, **this release is not expected to have major breaking changes** for most users.

The version bump reflects improvements to the specification, not a complete redesign of the SDK. If you're using AI SDK 5, migrating to AI SDK 6 should be straightforward, with minimal code changes.

## Beta Version Guidance

The AI SDK 6 Beta is intended for:

- **Trying out new features** and giving us feedback on the developer experience
- **Experimenting with agents** and tool approval workflows

Your feedback during this beta phase directly shapes the final stable release. Share your experiences through [GitHub issues](https://github.com/vercel/ai/issues/new/choose).

## Installation

To install the AI SDK 6 Beta, run the following command:

```bash
npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
```

<Note type="warning">
  APIs may still change during beta. Pin to specific versions as breaking
  changes may occur in patch releases.
</Note>

## What's New in AI SDK 6?

AI SDK 6 introduces several features (with more to come soon!):

### Agent Abstraction

A new unified interface for building agents with full control over execution flow, tool loops, and state management.

### Tool Execution Approval

Request user confirmation before executing tools, enabling native human-in-the-loop patterns.

### Structured Output (Stable)

Generate structured data alongside tool calling with `generateText` and `streamText`, now stable and production-ready.

### Reranking Support

Improve search relevance by reordering documents based on their relationship to a query using specialized reranking models.

### Image Editing Support

Native support for image editing (coming soon).

## Agent Abstraction

AI SDK 6 introduces a powerful new `Agent` interface that provides a standardized way to build agents.

### Default Implementation: ToolLoopAgent

The `ToolLoopAgent` class provides a default implementation out of the box:

```typescript
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;
import { weatherTool } from '@/tool/weather';

export const weatherAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'You are a helpful weather assistant.',
  tools: {
    weather: weatherTool,
  },
});

// Use the agent
const result = await weatherAgent.generate({
  prompt: 'What is the weather in San Francisco?',
});
```

The agent automatically handles the tool execution loop:

1. Calls the LLM with your prompt
2. Executes any requested tool calls
3. Adds results back to the conversation
4. Repeats until complete (default `stopWhen: stepCountIs(20)`)
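The four steps above can be sketched as a self-contained loop. This is purely illustrative: the types, the stubbed model signature, and the local `stepCountIs` stand-in below are assumptions for the sketch, not the SDK's actual internals.

```typescript
// Illustrative, self-contained sketch of a tool-execution loop.
// All types and names here are stand-ins, not SDK APIs.
type ToolCall = { tool: string; input: unknown };
type ModelReply = { text?: string; toolCalls: ToolCall[] };
type Message = { role: 'user' | 'assistant' | 'tool'; content: string };

// Local stand-in for the SDK's stepCountIs stop condition.
const stepCountIs = (max: number) => (steps: number) => steps >= max;

async function runToolLoop(
  prompt: string,
  callModel: (messages: Message[]) => Promise<ModelReply>,
  tools: Record<string, (input: unknown) => Promise<string>>,
  stopWhen = stepCountIs(20),
): Promise<string> {
  const messages: Message[] = [{ role: 'user', content: prompt }];
  let steps = 0;
  while (true) {
    const reply = await callModel(messages); // 1. call the LLM
    steps++;
    if (reply.toolCalls.length === 0 || stopWhen(steps)) {
      return reply.text ?? ''; // 4. stop when complete
    }
    for (const call of reply.toolCalls) {
      const result = await tools[call.tool](call.input); // 2. execute tool calls
      messages.push({ role: 'tool', content: result }); // 3. feed results back
    }
  }
}
```

`ToolLoopAgent` handles all of this for you; the sketch only shows why a stop condition such as `stepCountIs` is needed to bound the loop.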

### Configuring Call Options

Call options let you pass type-safe runtime inputs to dynamically configure your agents. Use them to inject retrieved documents for RAG, select models based on request complexity, customize tool behavior per request, or adjust any agent setting based on context.

Without call options, you'd need to create multiple agents or handle configuration logic outside the agent. With call options, you define a schema once and modify agent behavior at runtime:

```typescript
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

const supportAgent = new ToolLoopAgent({
  model: __MODEL__,
  callOptionsSchema: z.object({
    userId: z.string(),
    accountType: z.enum(['free', 'pro', 'enterprise']),
  }),
  instructions: 'You are a helpful customer support agent.',
  prepareCall: ({ options, ...settings }) => ({
    ...settings,
    instructions:
      settings.instructions +
      `\nUser context:
- Account type: ${options.accountType}
- User ID: ${options.userId}

Adjust your response based on the user's account level.`,
  }),
});

// Pass options when calling the agent
const result = await supportAgent.generate({
  prompt: 'How do I upgrade my account?',
  options: {
    userId: 'user_123',
    accountType: 'free',
  },
});
```

The `options` parameter is fully type-safe: TypeScript reports an error if you omit it or pass values that don't match the schema.

Call options enable dynamic agent configuration for several scenarios:

- **RAG**: Fetch relevant documents and inject them into prompts at runtime
- **Dynamic model selection**: Choose faster or more capable models based on request complexity
- **Tool configuration**: Adjust tools per request
- **Provider options**: Set reasoning effort, temperature, or other provider-specific settings dynamically

Learn more in the [Configuring Call Options](/docs/agents/configuring-call-options) documentation.
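To make the dynamic-model-selection and RAG scenarios concrete, here is a self-contained sketch of the kind of logic you might put in `prepareCall`. The settings shape and the model identifiers are stand-ins for illustration, not SDK types:

```typescript
// Illustrative prepareCall-style logic; the shapes and model names
// below are stand-ins, not SDK APIs.
type CallSettings = { model: string; instructions: string };
type CallOptions = { complexity: 'low' | 'high'; context?: string };

function prepareSettings(
  settings: CallSettings,
  options: CallOptions,
): CallSettings {
  return {
    ...settings,
    // Pick a faster model for simple requests, a stronger one otherwise.
    model: options.complexity === 'high' ? 'strong-model' : 'fast-model',
    // Inject retrieved context (e.g. RAG documents) into the instructions.
    instructions: options.context
      ? `${settings.instructions}\n\nContext:\n${options.context}`
      : settings.instructions,
  };
}
```

In a real agent, the same decisions would live inside `prepareCall`, with `options` validated against your `callOptionsSchema`.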

### UI Integration

Agents integrate seamlessly with React and other UI frameworks:

```typescript
// Server-side API route
import { createAgentUIStreamResponse } from 'ai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: weatherAgent,
    messages,
  });
}
```

```typescript
// Client-side with type safety
import { useChat } from '@ai-sdk/react';
import { InferAgentUIMessage } from 'ai';
import { weatherAgent } from '@/agent/weather-agent';

type WeatherAgentUIMessage = InferAgentUIMessage<typeof weatherAgent>;

const { messages, sendMessage } = useChat<WeatherAgentUIMessage>();
```

### Custom Agent Implementations

In AI SDK 6, `Agent` is an interface rather than a concrete class. While `ToolLoopAgent` provides a solid default implementation for most use cases, you can implement the `Agent` interface to build custom agent architectures:

```typescript
import { Agent } from 'ai';

// Build your own multi-agent orchestrator that delegates to specialists
class Orchestrator implements Agent {
  constructor(private subAgents: Record<string, Agent>) {
    /* Implementation */
  }
}

const orchestrator = new Orchestrator({
  subAgents: {
    // your subagents
  },
});
```

This approach enables you to experiment with orchestrators, memory layers, custom stop conditions, and agent patterns tailored to your specific use case.
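As a self-contained illustration of the delegation idea, the sketch below routes prompts to specialists by keyword. It uses a simplified `MiniAgent` stand-in rather than the SDK's actual `Agent` interface (which defines more than `generate`), so treat it as the shape of the pattern, not a drop-in implementation:

```typescript
// Simplified stand-in for the SDK's Agent interface, for illustration only.
interface MiniAgent {
  generate(args: { prompt: string }): Promise<{ text: string }>;
}

// Routes each prompt to the specialist whose key appears in the prompt,
// falling back to a default sub-agent.
class KeywordOrchestrator implements MiniAgent {
  constructor(
    private subAgents: Record<string, MiniAgent>,
    private fallback: MiniAgent,
  ) {}

  async generate(args: { prompt: string }): Promise<{ text: string }> {
    const key = Object.keys(this.subAgents).find((k) =>
      args.prompt.toLowerCase().includes(k),
    );
    const target = key ? this.subAgents[key] : this.fallback;
    return target.generate(args);
  }
}
```

A production orchestrator would more likely use an LLM call to classify the request, but the delegation structure is the same.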

## Tool Execution Approval

AI SDK 6 introduces a tool approval system that gives you control over when tools are executed.

Enable approval for a tool by setting `needsApproval`:

```typescript
import { tool } from 'ai';
import { z } from 'zod';

export const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    city: z.string(),
  }),
  needsApproval: true, // Require user approval
  execute: async ({ city }) => {
    const weather = await fetchWeather(city);
    return weather;
  },
});
```

### Dynamic Approval

Make approval decisions based on tool input:

```typescript
export const paymentTool = tool({
  description: 'Process a payment',
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  // Only require approval for large transactions
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
```

### Client-Side Approval UI

Handle approval requests in your UI:

```tsx
export function WeatherToolView({ invocation, addToolApprovalResponse }) {
  if (invocation.state === 'approval-requested') {
    return (
      <div>
        <p>Can I retrieve the weather for {invocation.input.city}?</p>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: true,
            })
          }
        >
          Approve
        </button>
        <button
          onClick={() =>
            addToolApprovalResponse({
              id: invocation.approval.id,
              approved: false,
            })
          }
        >
          Deny
        </button>
      </div>
    );
  }

  if (invocation.state === 'output-available') {
    return (
      <div>
        Weather: {invocation.output.weather}
        Temperature: {invocation.output.temperature}°F
      </div>
    );
  }

  // Handle other states...
}
```

### Auto-Submit After Approvals

Automatically continue the conversation once approvals are handled:

```typescript
import { useChat } from '@ai-sdk/react';
import { lastAssistantMessageIsCompleteWithApprovalResponses } from 'ai';

const { messages, addToolApprovalResponse } = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
});
```

## Structured Output (Stable)

AI SDK 6 stabilizes structured output support for agents, enabling you to generate structured data alongside multi-step tool calling.

Previously, you could only generate structured outputs with `generateObject` and `streamObject`, which didn't support tool calling. Now `ToolLoopAgent` (and `generateText` / `streamText`) can combine both capabilities using the `output` parameter:

```typescript
import { Output, ToolLoopAgent, tool } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: __MODEL__,
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      temperature: z.number(),
      recommendation: z.string(),
    }),
  }),
});

const { output } = await agent.generate({
  prompt: 'What is the weather in San Francisco and what should I wear?',
});
// The agent calls the weather tool AND returns structured output
console.log(output);
// {
//   summary: "It's sunny in San Francisco",
//   temperature: 72,
//   recommendation: "Wear light clothing and sunglasses"
// }
```

### Output Types

The `Output` object provides multiple strategies for structured generation:

- **`Output.object()`**: Generate structured objects with Zod schemas
- **`Output.array()`**: Generate arrays of structured objects
- **`Output.choice()`**: Select from a specific set of options
- **`Output.text()`**: Generate plain text (default behavior)

### Streaming Structured Output

Use `agent.stream()` to stream structured output as it's being generated:

```typescript
import { ToolLoopAgent, Output } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

const profileAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'Generate realistic person profiles.',
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
      occupation: z.string(),
    }),
  }),
});

const { partialOutputStream } = await profileAgent.stream({
  prompt: 'Generate a person profile.',
});

for await (const partial of partialOutputStream) {
  console.log(partial);
  // { name: "John" }
  // { name: "John", age: 30 }
  // { name: "John", age: 30, occupation: "Engineer" }
}
```

### Support in `generateText` and `streamText`

Structured outputs are also supported in `generateText` and `streamText` functions, allowing you to use this feature outside of agents when needed.

<Note>
  When using structured output with `generateText` or `streamText`, you must
  configure multiple steps with `stopWhen` because generating the structured
  output is itself a step. For example: `stopWhen: stepCountIs(2)` to allow tool
  calling and output generation.
</Note>

## Reranking Support

AI SDK 6 introduces native support for reranking, a technique that improves search relevance by reordering documents based on their relationship to a query.

Unlike embedding-based similarity search, reranking models are specifically trained to understand query-document relationships, producing more accurate relevance scores:

```typescript
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { ranking } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'talk about rain',
  topN: 2,
});

console.log(ranking);
// [
//   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
//   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
// ]
```
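Because each entry carries an `originalIndex`, you can map the ranking back onto your own data structures (for example, the database rows the plain-text documents came from) with a small helper. This helper is not part of the SDK, just a self-contained utility over the result shape shown above:

```typescript
// Reorder any source array using a rerank-style result.
// RankingEntry mirrors the entries shown in the example output above.
type RankingEntry = { originalIndex: number; score: number };

function applyRanking<T>(items: T[], ranking: RankingEntry[]): T[] {
  // ranking is already sorted by relevance, so mapping preserves that order
  // and drops anything below the topN cutoff.
  return ranking.map(({ originalIndex }) => items[originalIndex]);
}
```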

### Structured Document Reranking

Reranking also supports structured documents, making it ideal for searching through databases, emails, or other structured content:

```typescript
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  {
    from: 'Paul Doe',
    subject: 'Follow-up',
    text: 'We are happy to give you a discount of 20% on your next order.',
  },
  {
    from: 'John McGill',
    subject: 'Missing Info',
    text: 'Sorry, but here is the pricing information from Oracle: $5000/month',
  },
];

const { rerankedDocuments } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  documents,
  query: 'Which pricing did we get from Oracle?',
  topN: 1,
});

console.log(rerankedDocuments[0]);
// { from: 'John McGill', subject: 'Missing Info', text: '...' }
```

### Supported Providers

Several providers offer reranking models:

- [Cohere](/providers/ai-sdk-providers/cohere#reranking-models)
- [Amazon Bedrock](/providers/ai-sdk-providers/amazon-bedrock#reranking-models)
- [Together.ai](/providers/ai-sdk-providers/togetherai#reranking-models)

## Image Editing Support

Native support for image editing and generation workflows is coming soon. This will enable:

- Image-to-image transformations
- Multi-modal editing with text prompts

## Migration from AI SDK 5.x

AI SDK 6 is expected to have minimal breaking changes. The version bump is due to the v3 Language Model Specification, but most AI SDK 5 code will work with little or no modification.

## Timeline

**AI SDK 6 Beta**: Available now

**Stable Release**: End of 2025

