# Mistral AI Provider

The Mistral AI provider contains language model support for the Mistral chat API.

## Setup

The Mistral provider is available in the `@ai-sdk/mistral` module. You can install it with:

```bash
pnpm add @ai-sdk/mistral
```

## Provider Instance

You can import the default provider instance `mistral` from `@ai-sdk/mistral`:

```ts
import { mistral } from '@ai-sdk/mistral';
```

If you need a customized setup, you can import `createMistral` from `@ai-sdk/mistral` and create a provider instance with your settings:

```ts
import { createMistral } from '@ai-sdk/mistral';

const mistral = createMistral({
  // custom settings
});
```

You can use the following optional settings to customize the Mistral provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.mistral.ai/v1`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header. It defaults to the `MISTRAL_API_KEY` environment variable.

- **headers** _Record<string, string>_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise<Response>_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
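
For instance, here is a minimal sketch of a customized provider instance that combines these settings. The proxy URL, custom header, and logging wrapper are hypothetical, shown only for illustration:

```ts
import { createMistral } from '@ai-sdk/mistral';

const mistral = createMistral({
  // Hypothetical proxy endpoint, for illustration only:
  baseURL: 'https://my-proxy.example.com/v1',
  // Explicit key; falls back to MISTRAL_API_KEY if omitted:
  apiKey: process.env.MISTRAL_API_KEY,
  // Hypothetical custom header:
  headers: { 'x-request-source': 'docs-example' },
  // Log every outgoing request before delegating to the global fetch:
  fetch: async (input, init) => {
    console.log('Mistral API request:', input);
    return fetch(input, init);
  },
});
```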

## Language Models

You can create models that call the Mistral chat API using a provider instance.
The first argument is the model id, e.g. `mistral-large-latest`.
Some Mistral chat models support tool calls.

```ts
const model = mistral('mistral-large-latest');
```
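
For models that support tool calls, tools are passed through the standard AI SDK `tools` parameter. A minimal sketch, assuming the current AI SDK tool API; the `weather` tool, its schema, and its stubbed result are hypothetical:

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: mistral('mistral-large-latest'),
  tools: {
    // Hypothetical tool, shown only to illustrate the wiring:
    weather: tool({
      description: 'Get the weather for a city',
      inputSchema: z.object({ city: z.string() }),
      // Stubbed result instead of a real weather lookup:
      execute: async ({ city }) => ({ city, temperatureCelsius: 21 }),
    }),
  },
  prompt: 'What is the weather in Paris?',
});
```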

Mistral chat models also support additional model settings that are not part of the standard call settings.
You can pass them as provider options and use `MistralLanguageModelOptions` for typing:

```ts
import { mistral, type MistralLanguageModelOptions } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const model = mistral('mistral-large-latest');

await generateText({
  model,
  prompt: 'Hello!', // example prompt
  providerOptions: {
    mistral: {
      safePrompt: true, // optional safety prompt injection
    } satisfies MistralLanguageModelOptions,
  },
});
```

The following optional provider options are available for Mistral models:

- **safePrompt** _boolean_

  Whether to inject a safety prompt before all conversations. Defaults to `false`.

- **documentImageLimit** _number_

  Maximum number of images to process in a document.

- **documentPageLimit** _number_

  Maximum number of pages to process in a document.

- **strictJsonSchema** _boolean_

  Whether to use strict JSON schema validation for structured outputs. Only applies when a schema is provided; it additionally sets the `strict` flag on Mistral's custom structured outputs, which are used by default whenever a schema is provided. Defaults to `false`.

- **structuredOutputs** _boolean_

  Whether to use structured outputs. When enabled, tool calls and object generation will be strict and follow the provided schema. Defaults to `true`. A sketch of disabling it per call follows this list.
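
For example, a minimal sketch that turns off structured outputs for a single call; the schema and prompt are illustrative:

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: mistral('mistral-large-latest'),
  providerOptions: {
    mistral: {
      structuredOutputs: false, // fall back to non-strict JSON generation
    },
  },
  schema: z.object({ city: z.string(), country: z.string() }),
  prompt: 'Name a European capital and its country.',
});

console.log(result.object);
```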

### Document OCR

Mistral chat models support document OCR for PDF files. You can optionally set image and page limits using the provider options.

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const result = await generateText({
  model: mistral('mistral-small-latest'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'What is an embedding model according to this document?',
        },
        {
          type: 'file',
          data: new URL(
            'https://github.com/vercel/ai/blob/main/examples/ai-core/data/ai.pdf?raw=true',
          ),
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
  // optional settings:
  providerOptions: {
    mistral: {
      documentImageLimit: 8,
      documentPageLimit: 64,
    },
  },
});
```

### Reasoning Models

Mistral offers reasoning models that provide step-by-step thinking capabilities:

- `magistral-small-2506`: Smaller reasoning model for efficient step-by-step thinking
- `magistral-medium-2506`: More powerful reasoning model balancing performance and cost

These models return content that includes `<think>...</think>` tags containing the reasoning process. To extract and separate the reasoning from the final answer, use the extract reasoning middleware:

```ts
import { mistral } from '@ai-sdk/mistral';
import {
  extractReasoningMiddleware,
  generateText,
  wrapLanguageModel,
} from 'ai';

const result = await generateText({
  model: wrapLanguageModel({
    model: mistral('magistral-small-2506'),
    middleware: extractReasoningMiddleware({
      tagName: 'think',
    }),
  }),
  prompt: 'What is 15 * 24?',
});

console.log('REASONING:', result.reasoningText);
// Output: "Let me calculate this step by step..."

console.log('ANSWER:', result.text);
// Output: "360"
```

The middleware automatically parses the `<think>` tags and provides separate `reasoningText` and `text` properties in the result.

### Example

You can use Mistral language models to generate text with the `generateText` function:

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

const { text } = await generateText({
  model: mistral('mistral-large-latest'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

Mistral language models can also be used in the `streamText`, `generateObject`, and `streamObject` functions (see AI SDK Core).
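
For instance, a minimal `streamText` sketch that prints text as it arrives, assuming the current AI SDK behavior where `streamText` returns its result immediately and the stream is consumed asynchronously:

```ts
import { mistral } from '@ai-sdk/mistral';
import { streamText } from 'ai';

const result = streamText({
  model: mistral('mistral-large-latest'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// Print each text chunk as it streams in:
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```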

### Structured Outputs

Mistral chat models support structured outputs using JSON Schema. You can use `generateObject` or `streamObject` with Zod, Valibot, or raw JSON Schema. The SDK sends your schema via Mistral's `response_format: { type: 'json_schema' }`.

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: mistral('mistral-large-latest'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      instructions: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a simple pasta recipe.',
});

console.log(JSON.stringify(result.object, null, 2));
```
You can enable strict JSON Schema validation using a provider option:

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: mistral('mistral-large-latest'),
  providerOptions: {
    mistral: {
      strictJsonSchema: true, // reject outputs that don't strictly match the schema
    },
  },
  schema: z.object({
    title: z.string(),
    items: z.array(
      z.object({ id: z.string(), qty: z.number().int().min(1) }),
    ),
  }),
  prompt: 'Generate a small shopping list.',
});
```

When using structured outputs, the SDK no longer injects an extra "answer with JSON" instruction. It relies on Mistral's native `json_schema`/`json_object` response formats instead. You can customize the schema name and description via the standard structured-output APIs.
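
Structured outputs also work with streaming. A brief `streamObject` sketch that logs partial objects as they arrive; the schema and prompt are illustrative:

```ts
import { mistral } from '@ai-sdk/mistral';
import { streamObject } from 'ai';
import { z } from 'zod';

const result = streamObject({
  model: mistral('mistral-large-latest'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a simple pasta recipe.',
});

// Each iteration yields a progressively more complete object:
for await (const partialObject of result.partialObjectStream) {
  console.log(partialObject);
}
```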

### Model Capabilities

| Model                   | Image Input | Object Generation | Tool Usage | Tool Streaming |
| ----------------------- | ----------- | ----------------- | ---------- | -------------- |
| `pixtral-large-latest`  | ✓           | ✓                 | ✓          | ✓              |
| `mistral-large-latest`  | ✗           | ✓                 | ✓          | ✓              |
| `mistral-medium-latest` | ✗           | ✓                 | ✓          | ✓              |
| `mistral-medium-2505`   | ✗           | ✓                 | ✓          | ✓              |
| `mistral-small-latest`  | ✗           | ✓                 | ✓          | ✓              |
| `magistral-small-2506`  | ✗           | ✓                 | ✓          | ✓              |
| `magistral-medium-2506` | ✗           | ✓                 | ✓          | ✓              |
| `ministral-3b-latest`   | ✗           | ✓                 | ✓          | ✓              |
| `ministral-8b-latest`   | ✗           | ✓                 | ✓          | ✓              |
| `pixtral-12b-2409`      | ✓           | ✓                 | ✓          | ✓              |
| `open-mistral-7b`       | ✗           | ✓                 | ✓          | ✓              |
| `open-mixtral-8x7b`     | ✗           | ✓                 | ✓          | ✓              |
| `open-mixtral-8x22b`    | ✗           | ✓                 | ✓          | ✓              |

The table above lists popular models. Please see the Mistral docs for a full list of available models. You can also pass any available provider model ID as a string if needed.
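
For example, assuming the model id is available to your account (`open-mistral-nemo` is shown purely as an illustration):

```ts
const model = mistral('open-mistral-nemo');
```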

## Embedding Models

You can create models that call the Mistral embeddings API using the `.textEmbedding()` factory method.

```ts
const model = mistral.textEmbedding('mistral-embed');
```

You can use Mistral embedding models to generate embeddings with the `embed` function:

```ts
import { mistral } from '@ai-sdk/mistral';
import { embed } from 'ai';

const { embedding } = await embed({
  model: mistral.textEmbedding('mistral-embed'),
  value: 'sunny day at the beach',
});
```
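
If you need to embed several values in one call, the AI SDK also provides the `embedMany` function; a minimal sketch:

```ts
import { mistral } from '@ai-sdk/mistral';
import { embedMany } from 'ai';

// Embeds all values in a single batched call and
// returns the embeddings in the same order as the inputs:
const { embeddings } = await embedMany({
  model: mistral.textEmbedding('mistral-embed'),
  values: ['sunny day at the beach', 'rainy afternoon in the city'],
});
```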

### Model Capabilities

| Model           | Default Dimensions |
| --------------- | ------------------ |
| `mistral-embed` | 1024               |