Hugging Face Provider
The Hugging Face provider offers access to thousands of language models through Hugging Face Inference Providers, including models from Meta, DeepSeek, Qwen, and more.
API keys can be obtained from Hugging Face Settings.
Setup
The Hugging Face provider is available via the @ai-sdk/huggingface module. You can install it with:
pnpm add @ai-sdk/huggingface
Provider Instance
You can import the default provider instance huggingface from @ai-sdk/huggingface:
import { huggingface } from '@ai-sdk/huggingface';
For custom configuration, you can import createHuggingFace and create a provider instance with your settings:
import { createHuggingFace } from '@ai-sdk/huggingface';
const huggingface = createHuggingFace({
  apiKey: process.env.HUGGINGFACE_API_KEY ?? '',
});
You can use the following optional settings to customize the Hugging Face provider instance:
- baseURL string
  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://router.huggingface.co/v1.
- apiKey string
  API key that is being sent using the Authorization header. It defaults to the HUGGINGFACE_API_KEY environment variable. You can get your API key from Hugging Face Settings.
- headers Record<string,string>
  Custom headers to include in the requests.
- fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>
  Custom fetch implementation.
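As a sketch, the optional settings above can be combined in a single createHuggingFace call. The proxy URL and header name below are illustrative placeholders, not values from the Hugging Face docs:

```typescript
import { createHuggingFace } from '@ai-sdk/huggingface';

// Illustrative configuration: baseURL and the custom header are placeholders.
const huggingface = createHuggingFace({
  // Defaults to https://router.huggingface.co/v1; override e.g. for a proxy.
  baseURL: 'https://my-proxy.example.com/v1',
  // Defaults to the HUGGINGFACE_API_KEY environment variable.
  apiKey: process.env.HUGGINGFACE_API_KEY ?? '',
  // Extra headers sent with every request.
  headers: { 'X-Request-Source': 'my-app' },
  // Custom fetch implementation, e.g. to log outgoing requests.
  fetch: async (input, init) => {
    console.log('Hugging Face request:', input.toString());
    return fetch(input, init);
  },
});
```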
Language Models
You can create language models using a provider instance:
import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';

const { text } = await generateText({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
You can also use the .responses() or .languageModel() factory methods:
const model = huggingface.responses('deepseek-ai/DeepSeek-V3-0324');
// or
const model = huggingface.languageModel('moonshotai/Kimi-K2-Instruct');
Hugging Face language models can be used in the streamText function (see AI SDK Core).
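As a minimal sketch (assuming a valid HUGGINGFACE_API_KEY is set), streaming a response with a Hugging Face model looks like this:

```typescript
import { huggingface } from '@ai-sdk/huggingface';
import { streamText } from 'ai';

// streamText returns immediately; tokens arrive via the textStream iterable.
const result = streamText({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// Print each chunk as it is generated instead of waiting for the full text.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```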
You can explore the latest and trending models with their capabilities, context size, throughput and pricing on the Hugging Face Inference Models page.
Model Capabilities
Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
---|---|---|---|---|
meta-llama/Llama-3.1-8B-Instruct | ||||
meta-llama/Llama-3.1-70B-Instruct | ||||
meta-llama/Llama-3.3-70B-Instruct | ||||
meta-llama/Llama-4-Scout-17B-16E-Instruct | ||||
deepseek-ai/DeepSeek-V3-0324 | ||||
deepseek-ai/DeepSeek-R1 | ||||
deepseek-ai/DeepSeek-R1-Distill-Llama-70B | ||||
Qwen/Qwen3-235B-A22B-Instruct-2507 | ||||
Qwen/Qwen3-Coder-480B-A35B-Instruct | ||||
Qwen/Qwen2.5-VL-7B-Instruct | ||||
google/gemma-3-27b-it | ||||
moonshotai/Kimi-K2-Instruct | ||||
The capabilities depend on the specific model you're using. Check the model documentation on Hugging Face Hub for detailed information about each model's features.