Tool Calling

As covered under Foundations, tools are objects that can be called by the model to perform a specific task. AI SDK Core tools contain three elements:

  • description: An optional description of the tool that can influence when the tool is picked.
  • inputSchema: A Zod schema or a JSON schema that defines the input parameters. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
  • execute: An optional async function that is called with the inputs from the tool call. It produces a value of type RESULT (generic type). It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.

You can use the tool helper function to infer the types of the execute parameters.

The tools parameter of generateText and streamText is an object that has the tool names as keys and the tools as values:

import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: 'openai/gpt-4o',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});

When a model uses a tool, it is called a "tool call" and the output of the tool is called a "tool result".

Tool calling is not restricted to text generation. You can also use it to render user interfaces (Generative UI).

Multi-Step Calls (using stopWhen)

With the stopWhen setting, you can enable multi-step calls in generateText and streamText. When stopWhen is set and the model generates a tool call, the AI SDK will trigger a new generation passing in the tool result until there are no further tool calls or the stopping condition is met.

The stopWhen conditions are only evaluated when the last step contains tool results.

By default, when you use generateText or streamText, it triggers a single generation. This works well for many use cases where you can rely on the model's training data to generate a response. However, when you provide tools, the model now has the choice to either generate a normal text response, or generate a tool call. If the model generates a tool call, its generation is complete and that step is finished.

You may want the model to generate text after the tool has been executed, e.g. to summarize the tool results in the context of the user's query. In many cases, you may also want the model to use multiple tools in a single response. This is where multi-step calls come in.

You can think of multi-step calls like a conversation with a human. When you ask a question, if the person does not have the requisite knowledge (analogous to a model's training data), they may need to look up information (use a tool) before they can provide you with an answer. In the same way, the model may need to call a tool to get the information it needs to answer your question, where each generation (tool call or text generation) is a step.
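
The loop itself can be sketched in plain TypeScript: generate, execute tools, feed results back, and repeat until the model produces no tool calls or the stopping condition is met. This is a simplified illustration with hypothetical names (`FakeStep`, `fakeModel`, `runSteps`), not SDK internals:

```typescript
type FakeStep = { toolCalls: string[]; text: string };

// A stand-in "model": the first call answers with a tool call, the second with text.
function fakeModel(stepNumber: number): FakeStep {
  return stepNumber === 0
    ? { toolCalls: ['weather'], text: '' }
    : { toolCalls: [], text: 'It is 72°F in San Francisco.' };
}

// A stopWhen-style condition: stop after a maximum number of steps.
const stepCountIs = (n: number) => (steps: FakeStep[]) => steps.length >= n;

function runSteps(stopWhen: (steps: FakeStep[]) => boolean): FakeStep[] {
  const steps: FakeStep[] = [];
  while (true) {
    const step = fakeModel(steps.length);
    steps.push(step);
    // finish when the model produced no tool calls; the stop condition
    // is only evaluated when the step did contain tool calls
    if (step.toolCalls.length === 0 || stopWhen(steps)) break;
  }
  return steps;
}

const steps = runSteps(stepCountIs(5));
console.log(steps.length); // 2: one tool-call step, one final text step
```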

Example

In the following example, there are two steps:

  1. Step 1
    1. The prompt 'What is the weather in San Francisco?' is sent to the model.
    2. The model generates a tool call.
    3. The tool call is executed.
  2. Step 2
    1. The tool result is sent to the model.
    2. The model generates a response considering the tool result.

import { z } from 'zod';
import { generateText, tool, stepCountIs } from 'ai';

const { text, steps } = await generateText({
  model: 'openai/gpt-4o',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5), // stop after a maximum of 5 steps if tools were called
  prompt: 'What is the weather in San Francisco?',
});
You can use streamText in a similar way.

Steps

To access intermediate tool calls and results, you can use the steps property in the result object or the streamText onFinish callback. It contains all the text, tool calls, tool results, and more from each step.

Example: Extract tool results from all steps

import { openai } from '@ai-sdk/openai';
import { generateText, stepCountIs } from 'ai';

const { steps } = await generateText({
  model: openai('gpt-4o'),
  stopWhen: stepCountIs(10),
  // ...
});

// extract all tool calls from the steps:
const allToolCalls = steps.flatMap(step => step.toolCalls);

onStepFinish callback

When using generateText or streamText, you can provide an onStepFinish callback that is triggered when a step is finished, i.e. all text deltas, tool calls, and tool results for the step are available. When you have multiple steps, the callback is triggered for each step.

import { generateText } from 'ai';

const result = await generateText({
  // ...
  onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
    // your own logic, e.g. for saving the chat history or recording usage
  },
});

prepareStep callback

The prepareStep callback is called before a step is started.

It is called with the following parameters:

  • model: The model that was passed into generateText.
  • stopWhen: The stopping condition that was passed into generateText.
  • stepNumber: The number of the step that is being executed.
  • steps: The steps that have been executed so far.
  • messages: The messages that will be sent to the model for the current step.

You can use it to provide different settings for a step, including modifying the input messages.

import { generateText } from 'ai';

const result = await generateText({
  // ...
  prepareStep: async ({ model, stepNumber, steps, messages }) => {
    if (stepNumber === 0) {
      return {
        // use a different model for this step:
        model: modelForThisParticularStep,
        // force a tool choice for this step:
        toolChoice: { type: 'tool', toolName: 'tool1' },
        // limit the tools that are available for this step:
        activeTools: ['tool1'],
      };
    }
    // when nothing is returned, the default settings are used
  },
});

Message Modification for Longer Agentic Loops

In longer agentic loops, you can use the messages parameter to modify the input messages for each step. This is particularly useful for prompt compression:

prepareStep: async ({ stepNumber, steps, messages }) => {
  // Compress conversation history for longer loops
  if (messages.length > 20) {
    return {
      messages: messages.slice(-10),
    };
  }
  return {};
},

Response Messages

Adding the generated assistant and tool messages to your conversation history is a common task, especially if you are using multi-step tool calls.

Both generateText and streamText have a response.messages property that you can use to add the assistant and tool messages to your conversation history. It is also available in the onFinish callback of streamText.

The response.messages property contains an array of ModelMessage objects that you can add to your conversation history:

import { generateText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [
  // ...
];

const { response } = await generateText({
  // ...
  messages,
});

// add the response messages to your conversation history:
messages.push(...response.messages); // streamText: ...((await response).messages)

Dynamic Tools

AI SDK Core supports dynamic tools for scenarios where tool schemas are not known at compile time. This is useful for:

  • MCP (Model Context Protocol) tools without schemas
  • User-defined functions at runtime
  • Tools loaded from external sources

Using dynamicTool

The dynamicTool helper creates tools with unknown input/output types:

import { dynamicTool } from 'ai';
import { z } from 'zod';

const customTool = dynamicTool({
  description: 'Execute a custom function',
  inputSchema: z.object({}),
  execute: async input => {
    // input is typed as 'unknown';
    // you need to validate/cast it at runtime
    const { action, parameters } = input as any;
    // execute your dynamic logic
    return { result: `Executed ${action}` };
  },
});

Type-Safe Handling

When using both static and dynamic tools, use the dynamic flag for type narrowing:

const result = await generateText({
  model: 'openai/gpt-4o',
  tools: {
    // Static tool with known types
    weather: weatherTool,
    // Dynamic tool
    custom: dynamicTool({
      /* ... */
    }),
  },
  onStepFinish: ({ toolCalls, toolResults }) => {
    // Type-safe iteration
    for (const toolCall of toolCalls) {
      if (toolCall.dynamic) {
        // Dynamic tool: input is 'unknown'
        console.log('Dynamic:', toolCall.toolName, toolCall.input);
        continue;
      }
      // Static tool: full type inference
      switch (toolCall.toolName) {
        case 'weather':
          console.log(toolCall.input.location); // typed as string
          break;
      }
    }
  },
});

Preliminary Tool Results

You can return an AsyncIterable over multiple results. In this case, the last value from the iterable is the final tool result.

This can be used in combination with generator functions to e.g. stream status information during the tool execution:

tool({
  description: 'Get the current weather.',
  inputSchema: z.object({
    location: z.string(),
  }),
  async *execute({ location }) {
    yield {
      status: 'loading' as const,
      text: `Getting weather for ${location}`,
      weather: undefined,
    };

    await new Promise(resolve => setTimeout(resolve, 3000));

    const temperature = 72 + Math.floor(Math.random() * 21) - 10;

    yield {
      status: 'success' as const,
      text: `The weather in ${location} is ${temperature}°F`,
      temperature,
    };
  },
});
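
The "last value wins" behavior can be illustrated in isolation. This is a simplified sketch of how such a generator might be consumed, not the SDK's actual consumption code; `finalToolResult` is a hypothetical helper:

```typescript
// A streaming `execute`-style generator: the last yielded value
// becomes the final tool result.
async function* execute(location: string) {
  yield { status: 'loading' as const, text: `Getting weather for ${location}` };
  yield { status: 'success' as const, text: `The weather in ${location} is 72°F` };
}

// Consume the iterable, keeping only the most recent value.
async function finalToolResult<T>(iterable: AsyncIterable<T>): Promise<T> {
  let last: T | undefined;
  for await (const value of iterable) {
    last = value; // intermediate values could be streamed to the UI here
  }
  return last!;
}

finalToolResult(execute('San Francisco')).then(result =>
  console.log(result.status), // 'success' — only the final value remains
);
```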

Tool Choice

You can use the toolChoice setting to influence when a tool is selected. It supports the following settings:

  • auto (default): the model can choose whether and which tools to call.
  • required: the model must call a tool. It can choose which tool to call.
  • none: the model must not call tools.
  • { type: 'tool', toolName: string (typed) }: the model must call the specified tool.

import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: 'openai/gpt-4o',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  toolChoice: 'required', // force the model to call a tool
  prompt: 'What is the weather in San Francisco?',
});

Tool Execution Options

When tools are called, they receive additional options as a second parameter.

Tool Call ID

The ID of the tool call is forwarded to the tool execution. You can use it e.g. when sending tool-call related information with stream data.

import {
  streamText,
  tool,
  createUIMessageStream,
  createUIMessageStreamResponse,
} from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      const result = streamText({
        // ...
        messages,
        tools: {
          myTool: tool({
            // ...
            execute: async (args, { toolCallId }) => {
              // return e.g. custom status for tool call
              writer.write({
                type: 'data-tool-status',
                id: toolCallId,
                data: {
                  name: 'myTool',
                  status: 'in-progress',
                },
              });
              // ...
            },
          }),
        },
      });

      writer.merge(result.toUIMessageStream());
    },
  });

  return createUIMessageStreamResponse({ stream });
}

Messages

The messages that were sent to the language model to initiate the response that contained the tool call are forwarded to the tool execution. You can access them in the second parameter of the execute function. In multi-step calls, the messages contain the text, tool calls, and tool results from all previous steps.

import { generateText, tool } from 'ai';

const result = await generateText({
  // ...
  tools: {
    myTool: tool({
      // ...
      execute: async (args, { messages }) => {
        // use the message history in e.g. calls to other language models
        return { ... };
      },
    }),
  },
});

Abort Signals

The abort signals from generateText and streamText are forwarded to the tool execution. You can access them in the second parameter of the execute function and e.g. abort long-running computations or forward them to fetch calls inside tools.

import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: 'openai/gpt-4.1',
  abortSignal: myAbortSignal, // signal that will be forwarded to tools
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }, { abortSignal }) => {
        return fetch(
          `https://api.weatherapi.com/v1/current.json?q=${location}`,
          { signal: abortSignal }, // forward the abort signal to fetch
        );
      },
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});

Context (experimental)

You can pass in arbitrary context from generateText or streamText via the experimental_context setting. This context is available in the experimental_context tool execution option.

const result = await generateText({
  // ...
  tools: {
    someTool: tool({
      // ...
      execute: async (input, { experimental_context: context }) => {
        const typedContext = context as { example: string }; // or use a type validation library
        // ...
      },
    }),
  },
  experimental_context: { example: '123' },
});

Types

Modularizing your code often requires defining types to ensure type safety and reusability. To enable this, the AI SDK provides several helper types for tools, tool calls, and tool results.

You can use them to strongly type your variables, function parameters, and return types in parts of the code that are not directly related to streamText or generateText.

Each tool call is typed with ToolCall<NAME extends string, ARGS>, depending on the tool that has been invoked. Similarly, the tool results are typed with ToolResult<NAME extends string, ARGS, RESULT>.

The tools in streamText and generateText are defined as a ToolSet. The type inference helpers TypedToolCall<TOOLS extends ToolSet> and TypedToolResult<TOOLS extends ToolSet> can be used to extract the tool call and tool result types from the tools.

import { openai } from '@ai-sdk/openai';
import { TypedToolCall, TypedToolResult, generateText, tool } from 'ai';
import { z } from 'zod';

const myToolSet = {
  firstTool: tool({
    description: 'Greets the user',
    inputSchema: z.object({ name: z.string() }),
    execute: async ({ name }) => `Hello, ${name}!`,
  }),
  secondTool: tool({
    description: 'Tells the user their age',
    inputSchema: z.object({ age: z.number() }),
    execute: async ({ age }) => `You are ${age} years old!`,
  }),
};

type MyToolCall = TypedToolCall<typeof myToolSet>;
type MyToolResult = TypedToolResult<typeof myToolSet>;

async function generateSomething(prompt: string): Promise<{
  text: string;
  toolCalls: Array<MyToolCall>; // typed tool calls
  toolResults: Array<MyToolResult>; // typed tool results
}> {
  return generateText({
    model: openai('gpt-4.1'),
    tools: myToolSet,
    prompt,
  });
}

Handling Errors

The AI SDK has three tool-call related errors: NoSuchToolError, InvalidToolInputError, and ToolCallRepairError.

When tool execution fails (errors thrown by your tool's execute function), the AI SDK adds them as tool-error content parts to enable automated LLM roundtrips in multi-step scenarios.

generateText

generateText throws errors for tool schema validation issues and other errors, and can be handled using a try/catch block. Tool execution errors appear as tool-error parts in the result steps:

try {
  const result = await generateText({
    //...
  });
} catch (error) {
  if (NoSuchToolError.isInstance(error)) {
    // handle the no such tool error
  } else if (InvalidToolInputError.isInstance(error)) {
    // handle the invalid tool inputs error
  } else {
    // handle other errors
  }
}

Tool execution errors are available in the result steps:

const { steps } = await generateText({
  // ...
});

// check for tool errors in the steps
const toolErrors = steps.flatMap(step =>
  step.content.filter(part => part.type === 'tool-error'),
);

toolErrors.forEach(toolError => {
  console.log('Tool error:', toolError.error);
  console.log('Tool name:', toolError.toolName);
  console.log('Tool input:', toolError.input);
});

streamText

streamText sends errors as part of the full stream. Tool execution errors appear as tool-error parts, while other errors appear as error parts.

When using toUIMessageStreamResponse, you can pass an onError function to extract the error message from the error part and forward it as part of the stream response:

const result = streamText({
  // ...
});

return result.toUIMessageStreamResponse({
  onError: error => {
    if (NoSuchToolError.isInstance(error)) {
      return 'The model tried to call an unknown tool.';
    } else if (InvalidToolInputError.isInstance(error)) {
      return 'The model called a tool with invalid inputs.';
    } else {
      return 'An unknown error occurred.';
    }
  },
});

Tool Call Repair

The tool call repair feature is experimental and may change in the future.

Language models sometimes fail to generate valid tool calls, especially when the input schema is complex or the model is smaller.

If you use multiple steps, those failed tool calls will be sent back to the LLM in the next step to give it an opportunity to fix them. However, you may want to control how invalid tool calls are repaired without requiring additional steps that pollute the message history.

You can use the experimental_repairToolCall function to attempt to repair the tool call with a custom function.

You can use different strategies to repair the tool call:

  • Use a model with structured outputs to generate the inputs.
  • Send the messages, system prompt, and tool schema to a stronger model to generate the inputs.
  • Provide more specific repair instructions based on which tool was called.

Example: Use a model with structured outputs for repair

import { openai } from '@ai-sdk/openai';
import { generateObject, generateText, NoSuchToolError, tool } from 'ai';

const result = await generateText({
  model,
  tools,
  prompt,
  experimental_repairToolCall: async ({
    toolCall,
    tools,
    inputSchema,
    error,
  }) => {
    if (NoSuchToolError.isInstance(error)) {
      return null; // do not attempt to fix invalid tool names
    }

    const tool = tools[toolCall.toolName as keyof typeof tools];

    const { object: repairedArgs } = await generateObject({
      model: openai('gpt-4.1'),
      schema: tool.inputSchema,
      prompt: [
        `The model tried to call the tool "${toolCall.toolName}"` +
          ` with the following inputs:`,
        JSON.stringify(toolCall.input),
        `The tool accepts the following schema:`,
        JSON.stringify(inputSchema(toolCall)),
        'Please fix the inputs.',
      ].join('\n'),
    });

    return { ...toolCall, input: JSON.stringify(repairedArgs) };
  },
});

Example: Use the re-ask strategy for repair

import { openai } from '@ai-sdk/openai';
import { generateObject, generateText, NoSuchToolError, tool } from 'ai';

const result = await generateText({
  model,
  tools,
  prompt,
  experimental_repairToolCall: async ({
    toolCall,
    tools,
    error,
    messages,
    system,
  }) => {
    const result = await generateText({
      model,
      system,
      messages: [
        ...messages,
        {
          role: 'assistant',
          content: [
            {
              type: 'tool-call',
              toolCallId: toolCall.toolCallId,
              toolName: toolCall.toolName,
              input: toolCall.input,
            },
          ],
        },
        {
          role: 'tool' as const,
          content: [
            {
              type: 'tool-result',
              toolCallId: toolCall.toolCallId,
              toolName: toolCall.toolName,
              output: error.message,
            },
          ],
        },
      ],
      tools,
    });

    const newToolCall = result.toolCalls.find(
      newToolCall => newToolCall.toolName === toolCall.toolName,
    );

    return newToolCall != null
      ? {
          toolCallType: 'function' as const,
          toolCallId: toolCall.toolCallId,
          toolName: toolCall.toolName,
          input: JSON.stringify(newToolCall.input),
        }
      : null;
  },
});

Active Tools

Language models can only handle a limited number of tools at a time, depending on the model. To allow for static typing over a large set of tools while limiting the tools available to the model at the same time, the AI SDK provides the activeTools property.

It is an array of tool names that are currently active. By default, the value is undefined and all tools are active.

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-4.1'),
  tools: myToolSet,
  activeTools: ['firstTool'],
});

Multi-modal Tool Results

Multi-modal tool results are experimental and only supported by Anthropic.

In order to send multi-modal tool results, e.g. screenshots, back to the model, they need to be converted into a specific format.

AI SDK Core tools have an optional toModelOutput function that converts the tool result into a content part.

Here is an example for converting a screenshot into a content part:

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  tools: {
    computer: anthropic.tools.computer_20241022({
      // ...
      async execute({ action, coordinate, text }) {
        switch (action) {
          case 'screenshot': {
            return {
              type: 'image',
              data: fs
                .readFileSync('./data/screenshot-editor.png')
                .toString('base64'),
            };
          }
          default: {
            return `executed ${action}`;
          }
        }
      },
      // map to tool result content for LLM consumption:
      toModelOutput(result) {
        return {
          type: 'content',
          value:
            typeof result === 'string'
              ? [{ type: 'text', text: result }]
              : [{ type: 'image', data: result.data, mediaType: 'image/png' }],
        };
      },
    }),
  },
  // ...
});

Extracting Tools

Once you start having many tools, you might want to extract them into separate files. The tool helper function is crucial for this, because it ensures correct type inference.

Here is an example of an extracted tool:

tools/weather-tool.ts
import { tool } from 'ai';
import { z } from 'zod';

// the `tool` helper function ensures correct type inference:
export const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => ({
    location,
    temperature: 72 + Math.floor(Math.random() * 21) - 10,
  }),
});

MCP Tools

The AI SDK supports connecting to Model Context Protocol (MCP) servers to access their tools. MCP enables your AI applications to discover and use tools across various services through a standardized interface.

For detailed information about MCP tools, including initialization, transport options, and usage patterns, see the MCP Tools documentation.

AI SDK Tools vs MCP Tools

In most cases, you should define your own AI SDK tools for production applications. They provide full control, type safety, and optimal performance. MCP tools are best suited for rapid development iteration and scenarios where users bring their own tools.

Aspect              | AI SDK Tools                                              | MCP Tools
Type Safety         | Full static typing end-to-end                             | Dynamic discovery at runtime
Execution           | Same process as your request (low latency)                | Separate server (network overhead)
Prompt Control      | Full control over descriptions and schemas                | Controlled by MCP server owner
Schema Control      | You define and optimize for your model                    | Controlled by MCP server owner
Version Management  | Full visibility over updates                              | Can update independently (version skew risk)
Authentication      | Same process, no additional auth required                 | Separate server introduces additional auth complexity
Best For            | Production applications requiring control and performance | Development iteration, user-provided tools

Examples

You can see tools in action using various frameworks in the following examples:

  • Learn to use tools in Node.js
  • Learn to use tools in Next.js with Route Handlers
  • Learn to use MCP tools in Node.js