# Braintrust Observability
Braintrust is an end-to-end platform for building AI applications. When building with the AI SDK, you can integrate Braintrust to log, monitor, and take action on real-world interactions.
## Setup
Braintrust natively supports OpenTelemetry and works out of the box with the AI SDK, either via Next.js or Node.js.
### Next.js
If you are using Next.js, you can use the Braintrust exporter with `@vercel/otel` for the cleanest setup:
```ts
// In your instrumentation.ts file
import { registerOTel } from '@vercel/otel';
import { BraintrustExporter } from 'braintrust';

export function register() {
  registerOTel({
    serviceName: 'my-braintrust-app',
    traceExporter: new BraintrustExporter({
      parent: 'project_name:your-project-name',
      filterAISpans: true, // Only send AI-related spans
    }),
  });
}
```
Or set the following environment variables in your app's `.env` file, with your API key and project ID:
```
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"
```
Traced LLM calls will appear under the Braintrust project or experiment provided in the `x-bt-parent` header.
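As a sketch of how those variables might be supplied at runtime rather than via a `.env` file, you can export them in the shell before starting your app (the entry-point path and the `$BRAINTRUST_API_KEY` / `$BRAINTRUST_PROJECT_ID` variable names here are illustrative assumptions, not part of the integration):

```shell
# Illustrative only: substitute your own key, project ID, and entry point.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.braintrust.dev/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer $BRAINTRUST_API_KEY, x-bt-parent=project_id:$BRAINTRUST_PROJECT_ID"
node dist/index.js
```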
When you call the AI SDK, make sure to set `experimental_telemetry`:
```ts
const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is 2 + 2?',
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      query: 'weather',
      location: 'San Francisco',
    },
  },
});
```
The integration supports streaming functions like `streamText`. Each streamed call will produce `ai.streamText` spans in Braintrust.
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });

  return result.toDataStreamResponse();
}
```
### Node.js
If you are using Node.js without a framework, you must configure the `NodeSDK` directly. In this case, it's more straightforward to use the `BraintrustSpanProcessor`.
First, install the necessary dependencies:
```sh
npm install ai @ai-sdk/openai braintrust @opentelemetry/sdk-node @opentelemetry/sdk-trace-base zod
```
Then, set up the OpenTelemetry SDK:
```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { BraintrustSpanProcessor } from 'braintrust';

const sdk = new NodeSDK({
  spanProcessors: [
    new BraintrustSpanProcessor({
      parent: 'project_name:your-project-name',
      filterAISpans: true,
    }),
  ],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    messages: [
      {
        role: 'user',
        content: 'What are my orders and where are they? My user ID is 123',
      },
    ],
    tools: {
      listOrders: tool({
        description: 'list all orders',
        parameters: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: 'view tracking information for a specific order',
        parameters: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
    maxSteps: 10,
  });

  await sdk.shutdown();
}

main().catch(console.error);
```
## Resources
To see a step-by-step example, check out the Braintrust cookbook.
Once your application is logging to Braintrust, explore other workflows like:
- Adding tools to your library and using them in experiments and the playground
- Creating custom scorers to assess the quality of your LLM calls
- Adding your logs to a dataset and running evaluations comparing models and prompts
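As a minimal sketch of the custom-scorer idea from the list above: in Braintrust's TypeScript SDK, a scorer is essentially a function that inspects a model output (optionally against an expected value) and returns a name and a score between 0 and 1. The `exactMatch` name and the `ScorerArgs` shape here are illustrative assumptions, not built-ins:

```typescript
// A custom scorer: compares the model output to the expected value.
// (exactMatch and ScorerArgs are hypothetical names for illustration.)
type ScorerArgs = { output: string; expected?: string };

function exactMatch({ output, expected }: ScorerArgs) {
  return {
    name: 'exact_match',
    // 1 if the trimmed output matches the expected value exactly, else 0.
    score: output.trim() === (expected ?? '').trim() ? 1 : 0,
  };
}

console.log(exactMatch({ output: '4', expected: '4' }).score); // 1
console.log(exactMatch({ output: '5', expected: '4' }).score); // 0
```

A function like this can then be wired into a Braintrust evaluation's list of scorers, or uploaded to your library for reuse in the playground.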