Open Responses Provider
The Open Responses provider contains language model support for Open Responses-compatible APIs.
Setup
The Open Responses provider is available in the @ai-sdk/open-responses module. You can install it with:

```bash
pnpm add @ai-sdk/open-responses
```
Provider Instance
Create an Open Responses provider instance using createOpenResponses:
```ts
import { createOpenResponses } from '@ai-sdk/open-responses';

const openResponses = createOpenResponses({
  name: 'aProvider',
  url: 'http://localhost:1234/v1/responses',
});
```

The name and url options are required:
- name (string): Provider name. Used as the key for provider options and metadata.
- url (string): URL for the Open Responses API POST endpoint.
You can use the following optional settings to customize the Open Responses provider instance:
- apiKey (string): API key that is being sent using the Authorization header.
- headers (Record<string, string>): Custom headers to include in the requests.
- fetch ((input: RequestInfo, init?: RequestInit) => Promise<Response>): Custom fetch implementation. Defaults to the global fetch function.
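For example, a provider instance that combines these settings might look like the following sketch; the endpoint URL, environment variable, and custom header are placeholders, and the fetch wrapper just logs each request:

```ts
import { createOpenResponses } from '@ai-sdk/open-responses';

const openResponses = createOpenResponses({
  name: 'aProvider',
  // Placeholder endpoint; point this at your Open Responses API.
  url: 'https://api.example.com/v1/responses',
  // Sent using the Authorization header; env var name is illustrative.
  apiKey: process.env.OPEN_RESPONSES_API_KEY,
  // Extra headers included in every request.
  headers: {
    'X-Custom-Header': 'value',
  },
  // Custom fetch wrapper, e.g. for request logging.
  fetch: async (input, init) => {
    console.log('Requesting', input);
    return fetch(input, init);
  },
});
```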
Language Models
The Open Responses provider instance is a function that you can invoke to create a language model:
```ts
const model = openResponses('mistralai/ministral-3-14b-reasoning');
```

You can use Open Responses models with the generateText, streamText, generateObject, and streamObject functions (see AI SDK Core).
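For instance, a structured-output call with generateObject could look like this minimal sketch; the zod schema and prompt are illustrative, and it assumes the openResponses instance created above:

```ts
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openResponses('mistralai/ministral-3-14b-reasoning'),
  // Illustrative schema; any zod object schema works here.
  schema: z.object({
    name: z.string(),
    traditions: z.array(z.string()),
  }),
  prompt: 'Invent a new holiday.',
});

console.log(object.traditions);
```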
Example
```ts
import { createOpenResponses } from '@ai-sdk/open-responses';
import { generateText } from 'ai';

const openResponses = createOpenResponses({
  name: 'aProvider',
  url: 'http://localhost:1234/v1/responses',
});

const { text } = await generateText({
  model: openResponses('mistralai/ministral-3-14b-reasoning'),
  prompt: 'Invent a new holiday and describe its traditions.',
});
```
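Because the name passed to createOpenResponses is used as the key for provider options and metadata, provider-specific options are passed and read under that key. The following is a sketch only, assuming the instance above; reasoningEffort is a hypothetical option shown purely for illustration:

```ts
import { generateText } from 'ai';

const { text, providerMetadata } = await generateText({
  model: openResponses('mistralai/ministral-3-14b-reasoning'),
  prompt: 'Invent a new holiday and describe its traditions.',
  providerOptions: {
    // 'aProvider' matches the `name` given to createOpenResponses.
    // `reasoningEffort` is a hypothetical option for illustration only.
    aProvider: { reasoningEffort: 'low' },
  },
});

// Provider-specific response metadata is exposed under the same key.
console.log(providerMetadata?.aProvider);
```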
Notes
- Stop sequences, topK, and seed are not supported and are ignored with warnings.
- Image inputs are supported for user messages with file parts using image media types (see the sketch after this list).
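A minimal sketch of an image input as a file part; the file path and media type are placeholders, and the provider instance mirrors the example above:

```ts
import { createOpenResponses } from '@ai-sdk/open-responses';
import { generateText } from 'ai';
import { readFileSync } from 'node:fs';

const openResponses = createOpenResponses({
  name: 'aProvider',
  url: 'http://localhost:1234/v1/responses',
});

const { text } = await generateText({
  model: openResponses('mistralai/ministral-3-14b-reasoning'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' },
        {
          // File part with an image media type.
          type: 'file',
          mediaType: 'image/png',
          data: readFileSync('./image.png'),
        },
      ],
    },
  ],
});
```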