Event Callbacks
The AI SDK provides per-call event callbacks that you can pass to generateText and streamText to observe lifecycle events. This is useful for building observability tools, logging systems, analytics, and debugging utilities.
Basic Usage
Pass callbacks directly to generateText or streamText:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in San Francisco?',
  experimental_onStart: event => {
    console.log('Generation started:', event.model.modelId);
  },
  onFinish: event => {
    console.log('Generation finished:', event.totalUsage);
  },
});
```
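The same callbacks can be passed to streamText. A minimal sketch (the prompt and the consumption loop are illustrative):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about fog in San Francisco.',
  onFinish: event => {
    // Fires once the full stream (all steps) has completed.
    console.log('Generation finished:', event.totalUsage);
  },
});

// Callbacks fire as the stream is consumed.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```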
Available Callbacks
- `experimental_onStart`: called once when the generation begins, before any LLM calls.
- `experimental_onStepStart`: called before each step (LLM call) begins.
- `experimental_onToolCallStart`: called before a tool's `execute` function runs.
- `experimental_onToolCallFinish`: called after a tool's `execute` function completes or errors.
- `onStepFinish`: called after each step (LLM call) completes.
- `onFinish`: called once when the entire generation completes.

The sketch below shows how all six fit together in a multi-step, tool-using generation.
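For orientation, here is a hedged sketch wiring up all six callbacks at once. The `getWeather` tool definition and the firing order described in the comments are illustrative, inferred from the callback descriptions in this page:

```ts
import { generateText, stepCountIs, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Hypothetical tool, defined inline for illustration.
const getWeather = tool({
  description: 'Get the weather for a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, temperatureF: 64 }),
});

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in San Francisco?',
  tools: { getWeather },
  stopWhen: stepCountIs(2), // allow a second step after the tool result
  // Fires once, before any LLM call.
  experimental_onStart: () => console.log('start'),
  // Fires before each LLM call.
  experimental_onStepStart: e => console.log('step start', e.stepNumber),
  // Fires before getWeather's execute runs.
  experimental_onToolCallStart: e => console.log('tool start', e.toolCall.toolName),
  // Fires after getWeather's execute settles.
  experimental_onToolCallFinish: e => console.log('tool finish', e.durationMs, 'ms'),
  // Fires after each LLM call completes.
  onStepFinish: e => console.log('step finish', e.finishReason),
  // Fires once, after all steps.
  onFinish: e => console.log('finish', e.totalUsage.totalTokens, 'tokens'),
});
```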
Event Reference
experimental_onStart
Called when the generation operation begins, before any LLM calls are made.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_onStart: event => {
    console.log('Model:', event.model.modelId);
    console.log('Temperature:', event.temperature);
  },
});
```

The event exposes the resolved call settings: the model and prompt inputs (`model`, `system`, `prompt`, `messages`), tool configuration (`tools`, `toolChoice`, `activeTools`), generation settings (`maxOutputTokens`, `temperature`, `topP`, `topK`, `presencePenalty`, `frequencyPenalty`, `stopSequences`, `seed`), request options (`maxRetries`, `timeout`, `headers`, `providerOptions`, `abortSignal`), flow control (`stopWhen`, `output`), and the telemetry and context fields (`include`, `functionId`, `metadata`, `experimental_context`).
experimental_onStepStart
Called before each step (LLM call) begins. Useful for tracking multi-step generations; see the timing sketch after the property list below.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_onStepStart: event => {
    console.log('Step:', event.stepNumber);
    console.log('Messages:', event.messages.length);
  },
});
```

The event includes the current `stepNumber` and the `steps` completed so far, along with the call settings (`model`, `system`, `messages`, `tools`, `toolChoice`, `activeTools`, `providerOptions`, `timeout`, `headers`, `stopWhen`, `output`, `abortSignal`) and the telemetry and context fields (`include`, `functionId`, `metadata`, `experimental_context`).
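Because both step callbacks receive `stepNumber`, you can pair `experimental_onStepStart` with `onStepFinish` to measure per-step latency. A minimal sketch (the `stepStartedAt` map is local bookkeeping, not part of the SDK):

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Records when each step began, keyed by step number.
const stepStartedAt = new Map<number, number>();

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_onStepStart: event => {
    stepStartedAt.set(event.stepNumber, Date.now());
  },
  onStepFinish: event => {
    const startedAt = stepStartedAt.get(event.stepNumber);
    if (startedAt !== undefined) {
      console.log(`Step ${event.stepNumber} took ${Date.now() - startedAt}ms`);
    }
  },
});
```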
experimental_onToolCallStart
Called before a tool's execute function runs.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather?',
  tools: { getWeather }, // assumes a getWeather tool defined elsewhere
  experimental_onToolCallStart: event => {
    console.log('Tool:', event.toolCall.toolName);
    console.log('Input:', event.toolCall.input);
  },
});
```

The event includes the `stepNumber`, `model`, and the `toolCall` object (`type`, `toolCallId`, `toolName`, `input`), plus the current `messages`, an `abortSignal`, and the telemetry and context fields (`functionId`, `metadata`, `experimental_context`).
experimental_onToolCallFinish
Called after a tool's `execute` function completes or errors. The event is a discriminated union on its `success` field: when `success` is `true` the event includes the tool's `output`; when `false` it includes the `error`. A type sketch follows the property list below.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather?',
  tools: { getWeather }, // assumes a getWeather tool defined elsewhere
  experimental_onToolCallFinish: event => {
    console.log('Tool:', event.toolCall.toolName);
    console.log('Duration:', event.durationMs, 'ms');

    if (event.success) {
      console.log('Output:', event.output);
    } else {
      console.error('Error:', event.error);
    }
  },
});
```

The event includes the `stepNumber`, `model`, the `toolCall` object (`type`, `toolCallId`, `toolName`, `input`), the current `messages`, an `abortSignal`, the execution `durationMs`, the telemetry and context fields (`functionId`, `metadata`, `experimental_context`), and the union fields: `success`, plus `output` on success or `error` on failure.
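To make the narrowing concrete, here is a rough sketch of the union's shape. The type name and field types are illustrative (built only from the property list above); the SDK's actual exported type may differ:

```ts
// Illustrative shape only; not the SDK's exported type.
type ToolCallFinishEvent =
  | { success: true; output: unknown; durationMs: number }
  | { success: false; error: unknown; durationMs: number };

function logToolResult(event: ToolCallFinishEvent) {
  if (event.success) {
    // Narrowed to the success branch: `output` is available.
    console.log('Output:', event.output, `(${event.durationMs}ms)`);
  } else {
    // Narrowed to the failure branch: `error` is available.
    console.error('Error:', event.error, `(${event.durationMs}ms)`);
  }
}
```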
onStepFinish
Called after each step (LLM call) completes. Provides the full `StepResult`.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  onStepFinish: event => {
    console.log('Step:', event.stepNumber);
    console.log('Finish reason:', event.finishReason);
    console.log('Tokens:', event.usage.totalTokens);
  },
});
```

The event exposes the full step result: `stepNumber`, `model`, `finishReason`, `usage` (`inputTokens`, `outputTokens`, `totalTokens`), the generated `text`, `toolCalls`, `toolResults`, `content`, `reasoning`, `reasoningText`, `files`, `sources`, `warnings`, the raw `request` and `response`, `providerMetadata`, and the telemetry and context fields (`functionId`, `metadata`, `experimental_context`).
onFinish
Called when the entire generation completes (all steps finished). Includes aggregated data.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  onFinish: event => {
    console.log('Total steps:', event.steps.length);
    console.log('Total tokens:', event.totalUsage.totalTokens);
    console.log('Final text:', event.text);
  },
});
```

In addition to the aggregated `steps` array and `totalUsage` (`inputTokens`, `outputTokens`, `totalTokens`), the event carries the final step's properties: `stepNumber`, `model`, `finishReason`, `usage`, `text`, `toolCalls`, `toolResults`, `content`, `reasoning`, `reasoningText`, `files`, `sources`, `warnings`, `request`, `response`, `providerMetadata`, and the telemetry and context fields (`functionId`, `metadata`, `experimental_context`).
Use Cases
Logging and Debugging
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_onStart: event => {
    console.log(`[${new Date().toISOString()}] Generation started`, {
      model: event.model.modelId,
      provider: event.model.provider,
    });
  },
  onStepFinish: event => {
    console.log(
      `[${new Date().toISOString()}] Step ${event.stepNumber} finished`,
      {
        finishReason: event.finishReason,
        tokens: event.usage.totalTokens,
      },
    );
  },
  onFinish: event => {
    console.log(`[${new Date().toISOString()}] Generation complete`, {
      totalSteps: event.steps.length,
      totalTokens: event.totalUsage.totalTokens,
    });
  },
});
```

Tool Execution Monitoring
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather?',
  tools: { getWeather }, // assumes a getWeather tool defined elsewhere
  experimental_onToolCallStart: event => {
    console.log(`Tool "${event.toolCall.toolName}" starting...`);
  },
  experimental_onToolCallFinish: event => {
    if (event.success) {
      console.log(
        `Tool "${event.toolCall.toolName}" completed in ${event.durationMs}ms`,
      );
    } else {
      console.error(`Tool "${event.toolCall.toolName}" failed:`, event.error);
    }
  },
});
```
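Usage Analytics
The introduction also mentions analytics as a use case. As a minimal sketch, you can aggregate token usage per model in onFinish; the in-memory `usageByModel` map here is illustrative, not part of the SDK, and you would substitute your own metrics backend:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Illustrative in-memory aggregate, keyed by model ID.
const usageByModel = new Map<string, number>();

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  onFinish: event => {
    const modelId = event.model.modelId;
    const total = event.totalUsage.totalTokens ?? 0;
    usageByModel.set(modelId, (usageByModel.get(modelId) ?? 0) + total);
  },
});
```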
Error Handling
Errors thrown inside callbacks are caught and do not break the generation flow. This ensures that monitoring code cannot disrupt your application:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_onStart: () => {
    // This error is caught internally; generation continues normally.
    throw new Error('This error is caught internally');
  },
});
```