In this guide, you will learn how to build a Slackbot powered by the AI SDK. The bot will be able to respond to direct messages and mentions in channels using the full context of the thread.
Before we start building, you'll need to create a Slack app and grant it the following bot token scopes (under OAuth & Permissions in your app settings):

- `app_mentions:read`
- `chat:write`
- `im:history`
- `im:write`
- `assistant:write`
This project uses the following stack:

- the AI SDK by Vercel (with the OpenAI provider)
- the Slack Web API client (`@slack/web-api`)
- Exa (web search)
- Vercel Functions
Clone the repository, check out the `starter` branch, and install the dependencies:

```bash
git clone https://github.com/vercel-labs/ai-sdk-slackbot.git
cd ai-sdk-slackbot
git checkout starter
pnpm install
```
The starter repository already includes:
- Slack utilities (`lib/slack-utils.ts`), including functions for validating incoming requests, converting Slack threads to AI SDK compatible message formats, and getting the Slackbot's user ID
- General utilities (`lib/utils.ts`), including initial Exa setup
- Event handler files (`lib/handle-messages.ts` and `lib/handle-app-mention.ts`)
- A `POST` endpoint for Slack events (`api/events.ts`)

First, let's take a look at our API route (`api/events.ts`):
```typescript
import type { SlackEvent } from '@slack/web-api';
import {
  assistantThreadMessage,
  handleNewAssistantMessage,
} from '../lib/handle-messages';
import { waitUntil } from '@vercel/functions';
import { handleNewAppMention } from '../lib/handle-app-mention';
import { verifyRequest, getBotId } from '../lib/slack-utils';

export async function POST(request: Request) {
  const rawBody = await request.text();
  const payload = JSON.parse(rawBody);
  const requestType = payload.type as 'url_verification' | 'event_callback';

  // See https://api.slack.com/events/url_verification
  if (requestType === 'url_verification') {
    return new Response(payload.challenge, { status: 200 });
  }

  await verifyRequest({ requestType, request, rawBody });

  try {
    const botUserId = await getBotId();

    const event = payload.event as SlackEvent;

    if (event.type === 'app_mention') {
      waitUntil(handleNewAppMention(event, botUserId));
    }

    if (event.type === 'assistant_thread_started') {
      waitUntil(assistantThreadMessage(event));
    }

    if (
      event.type === 'message' &&
      !event.subtype &&
      event.channel_type === 'im' &&
      !event.bot_id &&
      !event.bot_profile &&
      event.bot_id !== botUserId
    ) {
      waitUntil(handleNewAssistantMessage(event, botUserId));
    }

    return new Response('Success!', { status: 200 });
  } catch (error) {
    console.error('Error generating response', error);
    return new Response('Error generating response', { status: 500 });
  }
}
```
This file defines a `POST` function that handles incoming requests from Slack. First, you check the request type to see if it's a URL verification request. If it is, you respond with the challenge string provided by Slack. If it's an event callback, you verify the request and then have access to the event data. This is where you can implement your event handling logic.
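For reference, the two request shapes the route distinguishes look roughly like this (a simplified sketch; Slack's Events API documentation defines the full schemas):

```typescript
// Simplified sketch of the two Slack payload shapes this route handles.
type UrlVerificationPayload = {
  type: 'url_verification';
  token: string;
  challenge: string; // echo this back to prove you own the endpoint
};

type EventCallbackPayload = {
  type: 'event_callback';
  team_id: string;
  event: { type: string; [key: string]: unknown }; // e.g. app_mention, message
};
```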
You then handle three types of events: app_mention
, assistant_thread_started
, and message
:
- For `app_mention` events, you call `handleNewAppMention` with the event and the bot user ID.
- For `assistant_thread_started` events, you call `assistantThreadMessage` with the event.
- For `message` events, you call `handleNewAssistantMessage` with the event and the bot user ID.

Finally, you respond to Slack with a success message. Note that each handler function is wrapped in a `waitUntil` call. Let's take a look at what this means and why it's important.
Slack expects a response within 3 seconds to confirm the request is being handled. However, generating AI responses can take longer. If you don't respond within 3 seconds, Slack retries the request, leading to another invocation of your API route, another call to the LLM, and ultimately another response to the user. To solve this, you can use the `waitUntil` function, which allows you to run your AI logic after the response is sent, without blocking the response itself. This means your API endpoint will:

1. Immediately acknowledge Slack's request (within the 3-second limit).
2. Continue processing the event asynchronously in the background.
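To make the ordering concrete, here's a minimal, self-contained sketch of the pattern. The `waitUntil` here is a stand-in for Vercel's, and `slowAiWork` is a hypothetical placeholder for the LLM call:

```typescript
const runDemo = async (): Promise<string[]> => {
  const log: string[] = [];
  const pending: Promise<void>[] = [];

  // Stand-in for @vercel/functions' waitUntil: register work to be
  // finished after the response has been sent.
  const waitUntil = (p: Promise<void>) => {
    pending.push(p);
  };

  // Hypothetical slow AI step (stands in for the LLM call).
  const slowAiWork = async () => {
    await new Promise(resolve => setTimeout(resolve, 50));
    log.push('ai-done');
  };

  // The handler schedules the slow work and responds immediately:
  waitUntil(slowAiWork());
  log.push('responded');

  // The platform drains scheduled work after the response went out:
  await Promise.all(pending);
  return log;
};
```

Running `runDemo` logs `responded` before `ai-done`: Slack gets its acknowledgment first, and the AI work completes afterwards.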
Let's look at how each event type is currently handled.
When a user mentions your bot in a channel, the `app_mention` event is triggered. The `handleNewAppMention` function in `handle-app-mention.ts` processes these mentions:

1. It ignores mentions that come from other bots.
2. It posts an initial "is thinking..." status message to the thread.
3. It generates a response (using the `generateResponse` function which you will implement in the next section).
4. It updates the status message with the final response.

Here's the code for the `handleNewAppMention` function:
```typescript
import { AppMentionEvent } from '@slack/web-api';
import { client, getThread } from './slack-utils';
import { generateResponse } from './ai';

const updateStatusUtil = async (
  initialStatus: string,
  event: AppMentionEvent,
) => {
  const initialMessage = await client.chat.postMessage({
    channel: event.channel,
    thread_ts: event.thread_ts ?? event.ts,
    text: initialStatus,
  });

  if (!initialMessage || !initialMessage.ts)
    throw new Error('Failed to post initial message');

  const updateMessage = async (status: string) => {
    await client.chat.update({
      channel: event.channel,
      ts: initialMessage.ts as string,
      text: status,
    });
  };
  return updateMessage;
};

export async function handleNewAppMention(
  event: AppMentionEvent,
  botUserId: string,
) {
  console.log('Handling app mention');
  if (event.bot_id || event.bot_id === botUserId || event.bot_profile) {
    console.log('Skipping app mention');
    return;
  }

  const { thread_ts, channel } = event;
  const updateMessage = await updateStatusUtil('is thinking...', event);

  if (thread_ts) {
    const messages = await getThread(channel, thread_ts, botUserId);
    const result = await generateResponse(messages, updateMessage);
    await updateMessage(result);
  } else {
    const result = await generateResponse(
      [{ role: 'user', content: event.text }],
      updateMessage,
    );
    await updateMessage(result);
  }
}
```
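The `getThread` helper comes from the starter's `slack-utils.ts`. Conceptually (this is a sketch of its behavior, not its actual code), it maps Slack thread messages to AI SDK messages, attributing the bot's own messages to the assistant role:

```typescript
type ChatMessage = { role: 'user' | 'assistant'; content: string };

// Hypothetical sketch of the thread-to-messages conversion done by getThread.
const toChatMessages = (
  slackMessages: { user?: string; bot_id?: string; text: string }[],
  botUserId: string,
): ChatMessage[] =>
  slackMessages.map(m => ({
    // Messages posted by a bot (or by the bot's own user ID) become
    // assistant turns; everything else is a user turn.
    role: m.bot_id || m.user === botUserId ? 'assistant' : 'user',
    content: m.text,
  }));
```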
Now let's see how new assistant threads and messages are handled.
When a user starts a thread with your assistant, the `assistant_thread_started` event is triggered. The `assistantThreadMessage` function in `handle-messages.ts` handles this: it posts a welcome message to the thread and sets suggested prompts the user can click to get started.

Here's the code for the `assistantThreadMessage` function:
```typescript
import type { AssistantThreadStartedEvent } from '@slack/web-api';
import { client } from './slack-utils';

export async function assistantThreadMessage(
  event: AssistantThreadStartedEvent,
) {
  const { channel_id, thread_ts } = event.assistant_thread;
  console.log(`Thread started: ${channel_id} ${thread_ts}`);
  console.log(JSON.stringify(event));

  await client.chat.postMessage({
    channel: channel_id,
    thread_ts: thread_ts,
    text: "Hello, I'm an AI assistant built with the AI SDK by Vercel!",
  });

  await client.assistant.threads.setSuggestedPrompts({
    channel_id: channel_id,
    thread_ts: thread_ts,
    prompts: [
      {
        title: 'Get the weather',
        message: 'What is the current weather in London?',
      },
      {
        title: 'Get the news',
        message: 'What is the latest Premier League news from the BBC?',
      },
    ],
  });
}
```
For direct messages to your bot, the `message` event is triggered and handled by the `handleNewAssistantMessage` function in `handle-messages.ts`. It ignores messages from bots, sets a status indicator on the thread, generates a response from the full thread history, and posts the result back to the channel.

Here's the code for the `handleNewAssistantMessage` function:
```typescript
import type { GenericMessageEvent } from '@slack/web-api';
import { client, getThread } from './slack-utils';
import { generateResponse } from './ai';

// Sets the assistant thread's status indicator (e.g. "is thinking...")
// via Slack's assistant.threads.setStatus API.
const updateStatusUtil = (channel: string, thread_ts: string) => {
  return async (status: string) => {
    await client.assistant.threads.setStatus({
      channel_id: channel,
      thread_ts: thread_ts,
      status: status,
    });
  };
};

export async function handleNewAssistantMessage(
  event: GenericMessageEvent,
  botUserId: string,
) {
  if (
    event.bot_id ||
    event.bot_id === botUserId ||
    event.bot_profile ||
    !event.thread_ts
  )
    return;

  const { thread_ts, channel } = event;
  const updateStatus = updateStatusUtil(channel, thread_ts);
  await updateStatus('is thinking...');

  const messages = await getThread(channel, thread_ts, botUserId);
  const result = await generateResponse(messages, updateStatus);

  await client.chat.postMessage({
    channel: channel,
    thread_ts: thread_ts,
    text: result,
    unfurl_links: false,
    blocks: [
      {
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: result,
        },
      },
    ],
  });

  await updateStatus('');
}
```
With the event handlers in place, let's now implement the AI logic.
The core of our application is the `generateResponse` function in `lib/generate-response.ts`, which processes messages and generates responses using the AI SDK. Here's how to implement it:
```typescript
import { openai } from '@ai-sdk/openai';
import { generateText, ModelMessage } from 'ai';

export const generateResponse = async (
  messages: ModelMessage[],
  updateStatus?: (status: string) => void,
) => {
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    system: `You are a Slack bot assistant. Keep your responses concise and to the point.
    - Do not tag users.
    - Current date is: ${new Date().toISOString().split('T')[0]}`,
    messages,
  });

  // Convert markdown to Slack mrkdwn format
  return text.replace(/\[(.*?)\]\((.*?)\)/g, '<$2|$1>').replace(/\*\*/g, '*');
};
```
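The final `replace` chain converts common Markdown into Slack's mrkdwn, where links are written `<url|label>` and bold uses single asterisks. Isolated for illustration:

```typescript
// Same conversion as in generateResponse, extracted for demonstration.
const toMrkdwn = (text: string) =>
  text.replace(/\[(.*?)\]\((.*?)\)/g, '<$2|$1>').replace(/\*\*/g, '*');

// A Markdown link and bold markers become Slack mrkdwn:
const out = toMrkdwn('See **[the docs](https://example.com)** for more');
// → 'See *<https://example.com|the docs>* for more'
```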
This basic implementation:

- takes the conversation history as an array of `ModelMessage`s
- uses the `generateText` function to call OpenAI's `gpt-4o-mini` model
- converts the Markdown output to Slack's mrkdwn format

The real power of the AI SDK comes from tools that enable your bot to perform actions. Let's add two useful tools:
```typescript
import { openai } from '@ai-sdk/openai';
import { generateText, tool, ModelMessage, stepCountIs } from 'ai';
import { z } from 'zod';
import { exa } from './utils';

export const generateResponse = async (
  messages: ModelMessage[],
  updateStatus?: (status: string) => void,
) => {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    system: `You are a Slack bot assistant. Keep your responses concise and to the point.
    - Do not tag users.
    - Current date is: ${new Date().toISOString().split('T')[0]}
    - Always include sources in your final response if you use web search.`,
    messages,
    stopWhen: stepCountIs(10),
    tools: {
      getWeather: tool({
        description: 'Get the current weather at a location',
        inputSchema: z.object({
          latitude: z.number(),
          longitude: z.number(),
          city: z.string(),
        }),
        execute: async ({ latitude, longitude, city }) => {
          updateStatus?.(`is getting weather for ${city}...`);

          const response = await fetch(
            `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,weathercode,relativehumidity_2m&timezone=auto`,
          );

          const weatherData = await response.json();
          return {
            temperature: weatherData.current.temperature_2m,
            weatherCode: weatherData.current.weathercode,
            humidity: weatherData.current.relativehumidity_2m,
            city,
          };
        },
      }),
      searchWeb: tool({
        description: 'Use this to search the web for information',
        inputSchema: z.object({
          query: z.string(),
          specificDomain: z
            .string()
            .nullable()
            .describe(
              'a domain to search if the user specifies e.g. bbc.com. Should be only the domain name without the protocol',
            ),
        }),
        execute: async ({ query, specificDomain }) => {
          updateStatus?.(`is searching the web for ${query}...`);
          const { results } = await exa.searchAndContents(query, {
            livecrawl: 'always',
            numResults: 3,
            includeDomains: specificDomain ? [specificDomain] : undefined,
          });

          return {
            results: results.map(result => ({
              title: result.title,
              url: result.url,
              snippet: result.text.slice(0, 1000),
            })),
          };
        },
      }),
    },
  });

  // Convert markdown to Slack mrkdwn format
  return text.replace(/\[(.*?)\]\((.*?)\)/g, '<$2|$1>').replace(/\*\*/g, '*');
};
```
In this updated implementation:

- You added two tools:
  - `getWeather`: fetches weather data for a specified location
  - `searchWeb`: searches the web for information using the Exa API
- You set `stopWhen: stepCountIs(10)` to enable multi-step conversations. This defines your agent's stopping condition: rather than ending the moment the model generates a tool call, the AI SDK automatically sends tool results back to the LLM, which can trigger additional tool calls or a final response, for up to 10 steps. This turns your LLM call from a one-off operation into a multi-step agentic flow.
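Conceptually, the multi-step flow behaves like the loop below. This is a simplified sketch, not the AI SDK's actual implementation; `callModel` and `runTool` are hypothetical stand-ins for the model call and tool execution:

```typescript
type Step = { toolCall?: { name: string; args: unknown }; text?: string };

// Simplified agent loop: keep calling the model until it produces a final
// text answer or the step budget (mirroring stepCountIs(10)) is exhausted.
async function agentLoop(
  callModel: (history: Step[]) => Promise<Step>,
  runTool: (name: string, args: unknown) => Promise<unknown>,
  maxSteps = 10,
): Promise<string> {
  const history: Step[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = await callModel(history);
    history.push(step);
    if (step.text) return step.text; // model produced a final answer
    if (step.toolCall) {
      // run the tool; its result informs the next model call
      await runTool(step.toolCall.name, step.toolCall.args);
    }
  }
  return 'step limit reached';
}
```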
When a user interacts with your bot, the incoming messages are passed to the `generateResponse` function. The tools are automatically invoked based on the user's intent. For example, if a user asks "What's the weather in London?", the AI will:

1. Recognize it as a weather query.
2. Call the `getWeather` tool with London's coordinates (inferred by the LLM).
3. Process the weather data and generate a natural-language response.

Your bot is now ready to deploy. Install the Vercel CLI and deploy the app:

```bash
pnpm install -g vercel
vercel deploy
```
Then, add the following environment variables to your Vercel project:

```bash
SLACK_BOT_TOKEN=your_slack_bot_token
SLACK_SIGNING_SECRET=your_slack_signing_secret
OPENAI_API_KEY=your_openai_api_key
EXA_API_KEY=your_exa_api_key
```
Make sure to redeploy your app after updating environment variables.
In your Slack app settings, enable Event Subscriptions and set the Request URL to your deployment's events endpoint:

```
https://your-vercel-url.vercel.app/api/events
```

Then subscribe to the following bot events:

- `app_mention`
- `assistant_thread_started`
- `message:im`
Finally, head to Slack and test the app by sending a message to the bot.
You've built a Slack chatbot powered by the AI SDK! From here, you could extend it with more tools, persistent memory across conversations, or your own data sources.
In a production environment, it is recommended to implement a robust queueing system to ensure messages are properly handled.