xAI Grok Provider

The xAI Grok provider contains language, image, and video model support for the xAI API.

Setup

The xAI Grok provider is available via the @ai-sdk/xai module. You can install it with

pnpm add @ai-sdk/xai

Provider Instance

You can import the default provider instance xai from @ai-sdk/xai:

import { xai } from '@ai-sdk/xai';

If you need a customized setup, you can import createXai from @ai-sdk/xai and create a provider instance with your settings:

import { createXai } from '@ai-sdk/xai';
const xai = createXai({
  apiKey: 'your-api-key',
});

You can use the following optional settings to customize the xAI provider instance:

  • baseURL string

    Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://api.x.ai/v1.

  • apiKey string

    API key that is being sent using the Authorization header. It defaults to the XAI_API_KEY environment variable.

  • headers Record<string, string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation. Defaults to the global fetch function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
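As a sketch, the options above can be combined; the proxy URL and header name here are placeholders, not real endpoints:

```typescript
import { createXai } from '@ai-sdk/xai';

// Hypothetical proxy URL and extra header -- replace with your own values.
const xai = createXai({
  baseURL: 'https://my-proxy.example.com/v1',
  headers: { 'x-request-source': 'docs-example' },
  // Log every request before delegating to the global fetch.
  fetch: async (input, init) => {
    console.log('xAI request:', input);
    return fetch(input, init);
  },
});
```

A wrapper like this is useful for request logging, retries, or routing through a corporate proxy without touching call sites.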

Language Models

You can create xAI models using a provider instance. The first argument is the model id, e.g. grok-3.

const model = xai('grok-3');

By default, xai(modelId) uses the Chat API. To use the Responses API with server-side agentic tools, explicitly use xai.responses(modelId).

Example

You can use xAI language models to generate text with the generateText function:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai('grok-3'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

xAI language models can also be used in the streamText function and support structured data generation with Output (see AI SDK Core).
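For example, streaming works the same way; a minimal sketch:

```typescript
import { xai } from '@ai-sdk/xai';
import { streamText } from 'ai';

const result = streamText({
  model: xai('grok-3'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// Print the response incrementally as tokens arrive.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```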

Provider Options

xAI chat models support additional provider options that are not part of the standard call settings. You can pass them in the providerOptions argument:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const model = xai('grok-3-mini');
await generateText({
  model,
  providerOptions: {
    xai: {
      reasoningEffort: 'high',
    } satisfies XaiLanguageModelChatOptions,
  },
  // ...
});

The following optional provider options are available for xAI chat models:

  • reasoningEffort 'low' | 'high'

    Reasoning effort for reasoning models.

  • logprobs boolean

    Return log probabilities for output tokens.

  • topLogprobs number

    Number of most likely tokens to return per token position (0-8). When set, logprobs is automatically enabled.

  • parallel_function_calling boolean

    Whether to enable parallel function calling during tool use. When true, the model can call multiple functions in parallel. When false, the model will call functions sequentially. Defaults to true.
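As a sketch, requesting log probabilities uses the option names listed above; topLogprobs alone is sufficient since it enables logprobs automatically:

```typescript
import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';

const result = await generateText({
  model: xai('grok-3'),
  prompt: 'Name three primary colors.',
  providerOptions: {
    xai: {
      // Return the 3 most likely tokens at each position;
      // setting this implicitly enables logprobs as well.
      topLogprobs: 3,
    } satisfies XaiLanguageModelChatOptions,
  },
});
```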

Responses API (Agentic Tools)

You can use the xAI Responses API with the xai.responses(modelId) factory method for server-side agentic tool calling. This enables the model to autonomously orchestrate tool calls and research on xAI's servers.

const model = xai.responses('grok-4-fast-non-reasoning');

The Responses API provides server-side tools that the model can autonomously execute during its reasoning process:

  • web_search: Real-time web search and page browsing
  • x_search: Search X (Twitter) posts, users, and threads
  • code_execution: Execute Python code for calculations and data analysis
  • view_image: View and analyze images
  • view_x_video: View and analyze videos from X posts
  • mcp_server: Connect to remote MCP servers and use their tools
  • file_search: Search through documents in vector stores (collections)

Vision

The Responses API supports image input with vision models:

import fs from 'node:fs';
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai.responses('grok-3'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', image: fs.readFileSync('./image.png') },
      ],
    },
  ],
});

Web Search Tool

The web search tool enables autonomous web research with optional domain filtering and image understanding:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text, sources } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'What are the latest developments in AI?',
  tools: {
    web_search: xai.tools.webSearch({
      allowedDomains: ['arxiv.org', 'openai.com'],
      enableImageUnderstanding: true,
    }),
  },
});
console.log(text);
console.log('Citations:', sources);

Web Search Parameters

  • allowedDomains string[]

    Only search within specified domains (max 5). Cannot be used with excludedDomains.

  • excludedDomains string[]

    Exclude specified domains from search (max 5). Cannot be used with allowedDomains.

  • enableImageUnderstanding boolean

    Enable the model to view and analyze images found during search. Increases token usage.

X Search Tool

The X search tool enables searching X (Twitter) for posts, with filtering by handles and date ranges:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text, sources } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'What are people saying about AI on X this week?',
  tools: {
    x_search: xai.tools.xSearch({
      allowedXHandles: ['elonmusk', 'xai'],
      fromDate: '2025-10-23',
      toDate: '2025-10-30',
      enableImageUnderstanding: true,
      enableVideoUnderstanding: true,
    }),
  },
});

X Search Parameters

  • allowedXHandles string[]

    Only search posts from specified X handles (max 10). Cannot be used with excludedXHandles.

  • excludedXHandles string[]

    Exclude posts from specified X handles (max 10). Cannot be used with allowedXHandles.

  • fromDate string

    Start date for posts in ISO8601 format (YYYY-MM-DD).

  • toDate string

    End date for posts in ISO8601 format (YYYY-MM-DD).

  • enableImageUnderstanding boolean

    Enable the model to view and analyze images in X posts.

  • enableVideoUnderstanding boolean

    Enable the model to view and analyze videos in X posts.

Code Execution Tool

The code execution tool enables the model to write and execute Python code for calculations and data analysis:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt:
    'Calculate the compound interest for $10,000 at 5% annually for 10 years',
  tools: {
    code_execution: xai.tools.codeExecution(),
  },
});

View Image Tool

The view image tool enables the model to view and analyze images:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Describe what you see in the image',
  tools: {
    view_image: xai.tools.viewImage(),
  },
});

View X Video Tool

The view X video tool enables the model to view and analyze videos from X (Twitter) posts:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Summarize the content of this X video',
  tools: {
    view_x_video: xai.tools.viewXVideo(),
  },
});

MCP Server Tool

The MCP server tool enables the model to connect to remote Model Context Protocol (MCP) servers and use their tools:

import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Use the weather tool to check conditions in San Francisco',
  tools: {
    weather_server: xai.tools.mcpServer({
      serverUrl: 'https://example.com/mcp',
      serverLabel: 'weather-service',
      serverDescription: 'Weather data provider',
      allowedTools: ['get_weather', 'get_forecast'],
    }),
  },
});

MCP Server Parameters

  • serverUrl string (required)

    The URL of the remote MCP server.

  • serverLabel string

    A label to identify the MCP server.

  • serverDescription string

    A description of what the MCP server provides.

  • allowedTools string[]

    List of tool names that the model is allowed to use from the MCP server. If not specified, all tools are allowed.

  • headers Record<string, string>

    Custom headers to include when connecting to the MCP server.

  • authorization string

    Authorization header value for authenticating with the MCP server (e.g., 'Bearer token123').

File Search Tool

The file search tool enables searching through documents stored in xAI vector stores (collections):

import { xai, type XaiLanguageModelResponsesOptions } from '@ai-sdk/xai';
import { streamText } from 'ai';
const result = streamText({
  model: xai.responses('grok-4-1-fast-reasoning'),
  prompt: 'What documents do you have access to?',
  tools: {
    file_search: xai.tools.fileSearch({
      vectorStoreIds: ['collection_your-collection-id'],
      maxNumResults: 10,
    }),
  },
  providerOptions: {
    xai: {
      include: ['file_search_call.results'],
    } satisfies XaiLanguageModelResponsesOptions,
  },
});

File Search Parameters

  • vectorStoreIds string[] (required)

    The IDs of the vector stores (collections) to search.

  • maxNumResults number

    The maximum number of results to return from the search.

  • include Array<'file_search_call.results'>

    Include file search results in the response. When set to ['file_search_call.results'], the response will contain the actual search results with file content and scores.

File search requires grok-4 family models and the Responses API. Vector stores can be created using the xAI API.

Multiple Tools

You can combine multiple server-side tools for comprehensive research:

import { xai } from '@ai-sdk/xai';
import { streamText } from 'ai';
const { fullStream } = streamText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Research AI safety developments and calculate risk metrics',
  tools: {
    web_search: xai.tools.webSearch(),
    x_search: xai.tools.xSearch(),
    code_execution: xai.tools.codeExecution(),
    file_search: xai.tools.fileSearch({
      vectorStoreIds: ['collection_your-documents'],
    }),
    data_service: xai.tools.mcpServer({
      serverUrl: 'https://data.example.com/mcp',
      serverLabel: 'data-service',
    }),
  },
});
for await (const part of fullStream) {
  if (part.type === 'text-delta') {
    process.stdout.write(part.text);
  } else if (part.type === 'source' && part.sourceType === 'url') {
    console.log('\nSource:', part.url);
  }
}

Provider Options

The Responses API supports the following provider options:

import { xai, type XaiLanguageModelResponsesOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  providerOptions: {
    xai: {
      reasoningEffort: 'high',
    } satisfies XaiLanguageModelResponsesOptions,
  },
  // ...
});

The following provider options are available:

  • reasoningEffort 'low' | 'medium' | 'high'

    Control the reasoning effort for the model. Higher effort may produce more thorough results at the cost of increased latency and token usage.

  • logprobs boolean

    Return log probabilities for output tokens.

  • topLogprobs number

    Number of most likely tokens to return per token position (0-8). When set, logprobs is automatically enabled.

  • include Array<'file_search_call.results'>

    Specify additional output data to include in the model response. Use ['file_search_call.results'] to include file search results with scores and content.

  • store boolean

    Whether to store the input message(s) and model response for later retrieval. Defaults to true.

  • previousResponseId string

    The ID of the previous response from the model. You can use it to continue a conversation.
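As a sketch, a follow-up request can reference the stored response; this assumes the xAI response ID is surfaced on result.response.id, as with other AI SDK providers:

```typescript
import { xai, type XaiLanguageModelResponsesOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';

const first = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Pick a random city and remember it.',
});

// Continue the server-side conversation (assumes response.id carries
// the xAI response ID; responses are stored by default).
const followUp = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'),
  prompt: 'Which city did you pick?',
  providerOptions: {
    xai: {
      previousResponseId: first.response.id,
    } satisfies XaiLanguageModelResponsesOptions,
  },
});
```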

The Responses API only supports server-side tools. You cannot mix server-side tools with client-side function tools in the same request.

Live Search

xAI models support Live Search functionality, allowing them to query real-time data from various sources and include it in responses with citations.

To enable search, specify searchParameters with a search mode:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text, sources } = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'What are the latest developments in AI?',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'auto', // 'auto', 'on', or 'off'
        returnCitations: true,
        maxSearchResults: 5,
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});
console.log(text);
console.log('Sources:', sources);

Search Parameters

The following search parameters are available:

  • mode 'auto' | 'on' | 'off'

    Search mode preference:

    • 'auto' (default): Model decides whether to search
    • 'on': Always enables search
    • 'off': Disables search completely
  • returnCitations boolean

    Whether to return citations in the response. Defaults to true.

  • fromDate string

    Start date for search data in ISO8601 format (YYYY-MM-DD).

  • toDate string

    End date for search data in ISO8601 format (YYYY-MM-DD).

  • maxSearchResults number

    Maximum number of search results to consider. Defaults to 20, max 50.

  • sources Array<SearchSource>

    Data sources to search from. Defaults to ["web", "x"] if not specified.
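For example, search can be restricted to a date window; a sketch using the parameters above (the dates are illustrative):

```typescript
import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text, sources } = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'What happened in AI research last week?',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        // ISO8601 dates (YYYY-MM-DD); values here are placeholders.
        fromDate: '2025-06-01',
        toDate: '2025-06-07',
        maxSearchResults: 10,
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});
```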

Search Sources

You can specify different types of data sources for search:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'Best ski resorts in Switzerland',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        sources: [
          {
            type: 'web',
            country: 'CH', // ISO alpha-2 country code
            allowedWebsites: ['ski.com', 'snow-forecast.com'],
            safeSearch: true,
          },
        ],
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});

Web source parameters

  • country string: ISO alpha-2 country code
  • allowedWebsites string[]: Max 5 allowed websites
  • excludedWebsites string[]: Max 5 excluded websites
  • safeSearch boolean: Enable safe search (default: true)

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'Latest updates on Grok AI',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        sources: [
          {
            type: 'x',
            includedXHandles: ['grok', 'xai'],
            excludedXHandles: ['openai'],
            postFavoriteCount: 10,
            postViewCount: 100,
          },
        ],
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});

X source parameters

  • includedXHandles string[]: Array of X handles to search (without @ symbol)
  • excludedXHandles string[]: Array of X handles to exclude from search (without @ symbol)
  • postFavoriteCount number: Minimum favorite count of the X posts to consider.
  • postViewCount number: Minimum view count of the X posts to consider.

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'Recent tech industry news',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        sources: [
          {
            type: 'news',
            country: 'US',
            excludedWebsites: ['tabloid.com'],
            safeSearch: true,
          },
        ],
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});

News source parameters

  • country string: ISO alpha-2 country code
  • excludedWebsites string[]: Max 5 excluded websites
  • safeSearch boolean: Enable safe search (default: true)

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'Latest status updates',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        sources: [
          {
            type: 'rss',
            links: ['https://status.x.ai/feed.xml'],
          },
        ],
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});

RSS source parameters

  • links string[]: Array of RSS feed URLs (max 1 currently supported)

Multiple Sources

You can combine multiple data sources in a single search:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const result = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'Comprehensive overview of recent AI breakthroughs',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'on',
        returnCitations: true,
        maxSearchResults: 15,
        sources: [
          {
            type: 'web',
            allowedWebsites: ['arxiv.org', 'openai.com'],
          },
          {
            type: 'news',
            country: 'US',
          },
          {
            type: 'x',
            includedXHandles: ['openai', 'deepmind'],
          },
        ],
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});

Sources and Citations

When search is enabled with returnCitations: true, the response includes sources that were used to generate the answer:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { generateText } from 'ai';
const { text, sources } = await generateText({
  model: xai('grok-3-latest'),
  prompt: 'What are the latest developments in AI?',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'auto',
        returnCitations: true,
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});
// Access the sources used
for (const source of sources) {
  if (source.sourceType === 'url') {
    console.log('Source:', source.url);
  }
}

Live Search works with streaming responses. Citations are included when the stream completes:

import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';
import { streamText } from 'ai';
const result = streamText({
  model: xai('grok-3-latest'),
  prompt: 'What has happened in tech recently?',
  providerOptions: {
    xai: {
      searchParameters: {
        mode: 'auto',
        returnCitations: true,
      },
    } satisfies XaiLanguageModelChatOptions,
  },
});
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
console.log('Sources:', await result.sources);

Model Capabilities

Model | Image Input | Object Generation | Tool Usage | Tool Streaming | Reasoning
grok-4-1
grok-4-1-fast-reasoning
grok-4-1-fast-non-reasoning
grok-4-fast-non-reasoning
grok-4-fast-reasoning
grok-code-fast-1
grok-4
grok-4-0709
grok-4-latest
grok-3
grok-3-latest
grok-3-mini
grok-3-mini-latest

The table above lists popular models. Please see the xAI docs for a full list of available models. You can also pass any available provider model ID as a string if needed.

Image Models

You can create xAI image models using the .image() factory method. For more on image generation with the AI SDK see generateImage().

import { xai } from '@ai-sdk/xai';
import { generateImage } from 'ai';
const { image } = await generateImage({
  model: xai.image('grok-imagine-image'),
  prompt: 'A futuristic cityscape at sunset',
});

The xAI image model does not support the size parameter. Use aspectRatio instead. Supported aspect ratios: 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3, 2:1, 1:2, 19.5:9, 9:19.5, 20:9, 9:20, and auto.

Image Editing

xAI supports image editing through the grok-imagine-image model. Pass input images via prompt.images to transform or edit existing images.

xAI image editing does not support masks. Editing is prompt-driven: describe what you want to change in the text prompt.

Basic Image Editing

Transform an existing image using text prompts:

import { xai } from '@ai-sdk/xai';
import { generateImage } from 'ai';
import { readFileSync } from 'fs';
const imageBuffer = readFileSync('./input-image.png');
const { images } = await generateImage({
  model: xai.image('grok-imagine-image'),
  prompt: {
    text: 'Turn the cat into a golden retriever dog',
    images: [imageBuffer],
  },
});

Multi-Image Editing

Combine or reference multiple input images (up to 3) in the prompt:

import { xai } from '@ai-sdk/xai';
import { generateImage } from 'ai';
import { readFileSync } from 'fs';
const cat = readFileSync('./cat.png');
const dog = readFileSync('./dog.png');
const { images } = await generateImage({
  model: xai.image('grok-imagine-image'),
  prompt: {
    text: 'Combine these two animals into a group photo',
    images: [cat, dog],
  },
});

Style Transfer

Apply artistic styles to an image:

import { xai } from '@ai-sdk/xai';
import { generateImage } from 'ai';
import { readFileSync } from 'fs';
const imageBuffer = readFileSync('./input-image.png');
const { images } = await generateImage({
  model: xai.image('grok-imagine-image'),
  prompt: {
    text: 'Transform this into a watercolor painting style',
    images: [imageBuffer],
  },
  aspectRatio: '1:1',
});

Input images can be provided as Buffer, ArrayBuffer, Uint8Array, or base64-encoded strings. Up to 3 input images are supported per request.

Model-specific options

You can customize the image generation behavior with model-specific settings:

import { xai } from '@ai-sdk/xai';
import { generateImage } from 'ai';
const { images } = await generateImage({
  model: xai.image('grok-imagine-image'),
  prompt: 'A futuristic cityscape at sunset',
  aspectRatio: '16:9',
  n: 2,
});

Model Capabilities

Model | Aspect Ratios | Image Editing
grok-imagine-image | 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3, 2:1, 1:2, 19.5:9, 9:19.5, 20:9, 9:20, auto | supported

Video Models

You can create xAI video models using the .video() factory method. For more on video generation with the AI SDK see generateVideo().

This provider supports three video generation modes: text-to-video, image-to-video, and video editing.

Text-to-Video

Generate videos from text prompts:

import { xai, type XaiVideoModelOptions } from '@ai-sdk/xai';
import { experimental_generateVideo as generateVideo } from 'ai';
const { videos } = await generateVideo({
  model: xai.video('grok-imagine-video'),
  prompt: 'A chicken flying into the sunset in the style of 90s anime.',
  aspectRatio: '16:9',
  duration: 5,
  providerOptions: {
    xai: {
      pollTimeoutMs: 600000, // 10 minutes
    } satisfies XaiVideoModelOptions,
  },
});

Image-to-Video

Generate videos using an image as the starting frame with an optional text prompt:

import { xai, type XaiVideoModelOptions } from '@ai-sdk/xai';
import { experimental_generateVideo as generateVideo } from 'ai';
const { videos } = await generateVideo({
  model: xai.video('grok-imagine-video'),
  prompt: {
    image: 'https://example.com/start-frame.png',
    text: 'The cat slowly turns its head and blinks',
  },
  duration: 5,
  providerOptions: {
    xai: {
      pollTimeoutMs: 600000, // 10 minutes
    } satisfies XaiVideoModelOptions,
  },
});

Video Editing

Edit an existing video using a text prompt by providing a source video URL via provider options:

import { xai, type XaiVideoModelOptions } from '@ai-sdk/xai';
import { experimental_generateVideo as generateVideo } from 'ai';
const { videos } = await generateVideo({
  model: xai.video('grok-imagine-video'),
  prompt: 'Give the person sunglasses and a hat',
  providerOptions: {
    xai: {
      videoUrl: 'https://example.com/source-video.mp4',
      pollTimeoutMs: 600000, // 10 minutes
    } satisfies XaiVideoModelOptions,
  },
});

Video editing accepts input videos up to 8.7 seconds long. The duration, aspectRatio, and resolution parameters are not supported for editing; the output matches the input video's properties (capped at 720p).

Chaining and Concurrent Edits

The xAI-hosted video URL is available in providerMetadata.xai.videoUrl. You can use it to chain sequential edits or branch into concurrent edits using Promise.all:

import { xai, type XaiVideoModelOptions } from '@ai-sdk/xai';
import { experimental_generateVideo as generateVideo } from 'ai';
const providerOptions = {
  xai: {
    videoUrl: 'https://example.com/source-video.mp4',
    pollTimeoutMs: 600000,
  } satisfies XaiVideoModelOptions,
};
// Step 1: Apply an initial edit
const step1 = await generateVideo({
  model: xai.video('grok-imagine-video'),
  prompt: 'Add a party hat to the person',
  providerOptions,
});
// Get the xAI-hosted URL from provider metadata
const step1VideoUrl = step1.providerMetadata?.xai?.videoUrl as string;
// Step 2: Apply two more edits concurrently, building on step 1
const [withSunglasses, withScarf] = await Promise.all([
  generateVideo({
    model: xai.video('grok-imagine-video'),
    prompt: 'Add sunglasses',
    providerOptions: {
      xai: { videoUrl: step1VideoUrl, pollTimeoutMs: 600000 },
    },
  }),
  generateVideo({
    model: xai.video('grok-imagine-video'),
    prompt: 'Add a scarf',
    providerOptions: {
      xai: { videoUrl: step1VideoUrl, pollTimeoutMs: 600000 },
    },
  }),
]);

Video Provider Options

The following provider options are available via providerOptions.xai. You can validate the provider options using the XaiVideoModelOptions type.

  • pollIntervalMs number

    Polling interval in milliseconds for checking task status. Defaults to 5000.

  • pollTimeoutMs number

    Maximum wait time in milliseconds for video generation. Defaults to 600000 (10 minutes).

  • resolution '480p' | '720p'

    Video resolution. When using the SDK's standard resolution parameter, 1280x720 maps to 720p and 854x480 maps to 480p. Use this provider option to pass the native format directly.

  • videoUrl string

    URL of a source video for video editing. When provided, the prompt is used to describe the desired edits to the video.

Video generation is an asynchronous process that can take several minutes. Consider setting pollTimeoutMs to at least 10 minutes (600000ms) for reliable operation. Generated video URLs are ephemeral and should be downloaded promptly.
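Since the URLs are ephemeral, you may want to persist the result immediately. A sketch, assuming the xAI-hosted URL is available in providerMetadata.xai.videoUrl:

```typescript
import { xai } from '@ai-sdk/xai';
import { experimental_generateVideo as generateVideo } from 'ai';
import { writeFileSync } from 'node:fs';

const result = await generateVideo({
  model: xai.video('grok-imagine-video'),
  prompt: 'A paper boat drifting down a rainy street',
  providerOptions: {
    xai: { pollTimeoutMs: 600000 },
  },
});

// Download the ephemeral video before the URL expires
// (assumes providerMetadata.xai.videoUrl is set, as described above).
const videoUrl = result.providerMetadata?.xai?.videoUrl as string;
const response = await fetch(videoUrl);
writeFileSync('./output.mp4', Buffer.from(await response.arrayBuffer()));
```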

Aspect Ratio and Resolution

For text-to-video, you can specify both aspectRatio and resolution. The default aspect ratio is 16:9 and the default resolution is 480p.

For image-to-video, the output defaults to the input image's aspect ratio. If you specify aspectRatio, it will override this and stretch the image to the desired ratio.

For video editing, the output matches the input video's aspect ratio and resolution. Custom duration, aspectRatio, and resolution are not supported; the output resolution is capped at 720p (e.g., a 1080p input will be downsized to 720p).

Video Model Capabilities

Model | Duration | Aspect Ratios | Resolution | Image-to-Video | Video Editing
grok-imagine-video | 1–15s | 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3 | 480p, 720p | supported | supported

You can also pass any available provider model ID as a string if needed.