With the release of DeepSeek R1, there has never been a better time to start building AI applications, particularly those that require complex reasoning capabilities.
The AI SDK is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like DeepSeek R1 alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
DeepSeek R1 is a series of advanced AI models designed to tackle complex reasoning tasks in science, coding, and mathematics. These models are optimized to "think before they answer," producing detailed internal chains of thought that aid in solving challenging problems.
The series includes two primary variants: DeepSeek-R1-Zero, trained purely with large-scale reinforcement learning and no supervised fine-tuning, and DeepSeek-R1, which adds a supervised "cold start" phase before reinforcement learning to improve readability and reliability.
DeepSeek R1 models excel in reasoning tasks, delivering performance competitive with leading models across math, code, and reasoning benchmarks.
DeepSeek R1 models respond best to structured and straightforward prompts. Keep instructions clear and direct, and ask the model to separate its work explicitly, for example with `<think>` tags for reasoning and `<answer>` tags for the final result, as in the sketch below.
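A minimal sketch of such a prompt (the wording is illustrative, not an official template):

```ts
// Illustrative structured prompt for a reasoning model like DeepSeek R1.
// The tag convention mirrors the <think>/<answer> pattern described above.
const prompt = [
  'Answer the question below.',
  'Put your step-by-step reasoning inside <think> tags',
  'and your final result inside <answer> tags.',
  '',
  'Question: A train travels 120 km in 1.5 hours. What is its average speed?',
].join('\n');
```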
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.
The AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.
At the center of the AI SDK is AI SDK Core, which provides a unified API to call any LLM. The code snippet below is all you need to call DeepSeek R1 with the AI SDK:
```ts
import { deepseek } from '@ai-sdk/deepseek';
import { generateText } from 'ai';

const { reasoningText, text } = await generateText({
  model: deepseek('deepseek-reasoner'),
  prompt: 'Explain quantum entanglement.',
});
```
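Because the model returns its chain of thought separately from the final answer, you can handle the two fields independently, for instance:

```ts
// Log the chain of thought and the final answer separately.
console.log('Reasoning:\n', reasoningText);
console.log('Answer:\n', text);
```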
The unified interface also means that you can easily switch between providers by changing just two lines of code. For example, to use DeepSeek R1 via Fireworks:
```ts
import { fireworks } from '@ai-sdk/fireworks';
import {
  generateText,
  wrapLanguageModel,
  extractReasoningMiddleware,
} from 'ai';

// middleware to extract reasoning tokens
const enhancedModel = wrapLanguageModel({
  model: fireworks('accounts/fireworks/models/deepseek-r1'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

const { reasoningText, text } = await generateText({
  model: enhancedModel,
  prompt: 'Explain quantum entanglement.',
});
```
Or to use Groq's `deepseek-r1-distill-llama-70b` model:
```ts
import { groq } from '@ai-sdk/groq';
import {
  generateText,
  wrapLanguageModel,
  extractReasoningMiddleware,
} from 'ai';

// middleware to extract reasoning tokens
const enhancedModel = wrapLanguageModel({
  model: groq('deepseek-r1-distill-llama-70b'),
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
});

const { reasoningText, text } = await generateText({
  model: enhancedModel,
  prompt: 'Explain quantum entanglement.',
});
```
The AI SDK provides middleware (`extractReasoningMiddleware`) that extracts the reasoning tokens from the model's output.
When using DeepSeek R1 series models with third-party providers like Together AI, we recommend setting the `startWithReasoning` option of `extractReasoningMiddleware`, as these deployments can omit the opening `<think>` tag and jump straight into the reasoning text.
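A sketch of what that looks like with the Together AI provider (assuming the `@ai-sdk/togetherai` package and the model ID from the table below):

```ts
import { togetherai } from '@ai-sdk/togetherai';
import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

// startWithReasoning treats the response as if it already opened a
// <think> block, so reasoning is still captured when the provider
// omits the opening tag.
const model = wrapLanguageModel({
  model: togetherai('deepseek-ai/DeepSeek-R1'),
  middleware: extractReasoningMiddleware({
    tagName: 'think',
    startWithReasoning: true,
  }),
});
```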
You can use DeepSeek R1 with the AI SDK through various providers. Here's a comparison of the providers that support DeepSeek R1:
| Provider | Model ID | Reasoning Tokens |
| --- | --- | --- |
| DeepSeek | `deepseek-reasoner` | Supported natively |
| Fireworks | `accounts/fireworks/models/deepseek-r1` | Requires Middleware |
| Groq | `deepseek-r1-distill-llama-70b` | Requires Middleware |
| Azure | `DeepSeek-R1` | Requires Middleware |
| Together AI | `deepseek-ai/DeepSeek-R1` | Requires Middleware |
| FriendliAI | `deepseek-r1` | Requires Middleware |
| LangDB | `deepseek/deepseek-reasoner` | Requires Middleware |
AI SDK Core can be paired with AI SDK UI, another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, and SvelteKit.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.
With three main hooks (`useChat`, `useCompletion`, and `useObject`), you can incorporate real-time chat capabilities, text completions, and streamed structured JSON into your app. For instance, a completion UI takes only a few lines, as in the sketch below.
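A minimal sketch of `useCompletion` (assuming a matching `/api/completion` route handler exists):

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Completion() {
  // useCompletion manages the input state and streams the completion.
  const { completion, input, handleInputChange, handleSubmit } =
    useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <p>{completion}</p>
    </form>
  );
}
```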
Let's explore building a chatbot with Next.js, the AI SDK, and DeepSeek R1:
In a new Next.js application, first install the AI SDK and the DeepSeek provider:
```bash
pnpm install ai @ai-sdk/deepseek @ai-sdk/react
```
Then, create a route handler for the chat endpoint:
```ts
// app/api/chat/route.ts
import { deepseek } from '@ai-sdk/deepseek';
import { convertToModelMessages, streamText, UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: deepseek('deepseek-reasoner'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse({
    sendReasoning: true,
  });
}
```
You can forward the model's reasoning tokens to the client by passing `sendReasoning: true` to the `toUIMessageStreamResponse` method.
Finally, update the root page (`app/page.tsx`) to use the `useChat` hook:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput('');
    }
  };

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, index) => {
            if (part.type === 'reasoning') {
              return <pre key={index}>{part.text}</pre>;
            }
            if (part.type === 'text') {
              return <span key={index}>{part.text}</span>;
            }
            return null;
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          name="prompt"
          value={input}
          onChange={e => setInput(e.target.value)}
        />
        <button type="submit">Submit</button>
      </form>
    </>
  );
}
```
You can access the model's reasoning tokens through the `parts` array on the `message` object, where reasoning parts have `type: 'reasoning'`.
The `useChat` hook on your root page (`app/page.tsx`) will make a request to your chat endpoint (`app/api/chat/route.ts`) whenever the user submits a message. The messages are then displayed in the chat UI.
While DeepSeek R1 models are powerful, they have certain limitations: the extended chain of thought adds latency and token cost compared to standard chat models, and at launch the `deepseek-reasoner` API did not support features such as function calling.
Ready to dive in? Start with the snippets above: install the AI SDK, pick a provider from the comparison table, and wire up the chat UI.
DeepSeek R1 opens new opportunities for reasoning-intensive AI applications. Start building today and leverage the power of advanced reasoning in your AI projects.