LangSmith Observability
LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
LangSmith is framework-agnostic; you can use it with or without LangChain's open-source frameworks.
A version of this guide is also available in the LangSmith documentation.
If you are using an older version of the langsmith client, see the legacy guide linked from that page.
Setup
This guide requires langsmith>=0.3.37. Install an AI SDK model provider and the LangSmith client SDK. The code snippets below use the AI SDK's OpenAI provider, but you can use any other supported provider as well.
```bash
pnpm add @ai-sdk/openai langsmith
```
You will also need to install the following OpenTelemetry packages:
```bash
pnpm add @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/context-async-hooks
```
Next, set the required environment variables.
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export OTEL_ENABLED=true

export OPENAI_API_KEY=<your-openai-api-key> # The examples use OpenAI (replace with your selected provider)
```
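Optionally, you can verify these variables at startup so a missing key fails fast rather than resulting in silently missing traces. This is a hypothetical convenience check, not part of the LangSmith SDK:

```ts
// Optional startup check (illustrative only, not part of the LangSmith SDK):
// fail fast if tracing or provider credentials are missing.
for (const name of ['LANGCHAIN_API_KEY', 'OPENAI_API_KEY']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```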
Trace Logging
To start tracing, you will need to import and call the initializeOTEL method at the start of your code:
```ts
import { initializeOTEL } from 'langsmith/experimental/otel/setup';

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();
```
Afterwards, add the experimental_telemetry argument to the AI SDK calls that you want to trace.
Do not forget to call await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown() or .forceFlush() before your application shuts down in order to flush any remaining traces to LangSmith.
```ts
import { initializeOTEL } from 'langsmith/experimental/otel/setup';

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

let result;

try {
  result = await generateText({
    model: openai('gpt-4.1-nano'),
    prompt: 'Write a vegetarian lasagna recipe for 4 people.',
    experimental_telemetry: { isEnabled: true },
  });
} finally {
  // Flush any remaining traces before the process exits.
  await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown();
}
```
You should see a trace in your LangSmith dashboard like this one.
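The try/finally pattern above fits one-shot scripts. For a long-running server, one option is to flush on shutdown signals instead; here is a minimal sketch, assuming a Node.js process that receives SIGTERM on shutdown and the DEFAULT_LANGSMITH_SPAN_PROCESSOR from initializeOTEL() above:

```ts
// Sketch: flush pending traces when a long-running process is asked to stop.
process.on('SIGTERM', async () => {
  // forceFlush() exports buffered spans without tearing the processor down.
  await DEFAULT_LANGSMITH_SPAN_PROCESSOR.forceFlush();
  process.exit(0);
});
```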
You can also trace runs with tool calls:
```ts
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

await generateText({
  model: openai('gpt-4.1-nano'),
  messages: [
    {
      role: 'user',
      content: 'What are my orders and where are they? My user ID is 123',
    },
  ],
  tools: {
    listOrders: tool({
      description: 'list all orders',
      parameters: z.object({ userId: z.string() }),
      execute: async ({ userId }) =>
        `User ${userId} has the following orders: 1`,
    }),
    viewTrackingInformation: tool({
      description: 'view tracking information for a specific order',
      parameters: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) =>
        `Here is the tracking information for ${orderId}`,
    }),
  },
  experimental_telemetry: {
    isEnabled: true,
  },
  stopWhen: stepCountIs(10),
});
```
This results in a trace like this one.
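Beyond isEnabled, the AI SDK's telemetry settings also accept a functionId and free-form metadata, which can make traces easier to find and filter in LangSmith. A brief sketch; the metadata keys shown are illustrative, not required:

```ts
// Sketch: tag a traced call with an identifier and custom metadata.
await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: 'Where is my order?',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'order-status', // groups related calls under one name
    metadata: { userId: '123', environment: 'staging' }, // illustrative keys
  },
});
```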
With traceable
You can wrap traceable calls around or within AI SDK tool calls. If you do so, we recommend you initialize a LangSmith client instance that you pass into each traceable, then call client.awaitPendingTraceBatches() to ensure all traces flush. In that case, you do not need to manually call shutdown() or forceFlush() on the DEFAULT_LANGSMITH_SPAN_PROCESSOR. Here's an example:
```ts
import { initializeOTEL } from 'langsmith/experimental/otel/setup';

initializeOTEL();

import { Client } from 'langsmith';
import { traceable } from 'langsmith/traceable';
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const client = new Client();

const wrappedText = traceable(
  async (content: string) => {
    const { text } = await generateText({
      model: openai('gpt-4.1-nano'),
      messages: [{ role: 'user', content }],
      tools: {
        listOrders: tool({
          description: 'list all orders',
          parameters: z.object({ userId: z.string() }),
          execute: async ({ userId }) => {
            // Nested traceable runs appear as children in the same trace.
            const getOrderNumber = traceable(
              async () => {
                return '1234';
              },
              { name: 'getOrderNumber' },
            );
            const orderNumber = await getOrderNumber();
            return `User ${userId} has the following order: ${orderNumber}`;
          },
        }),
      },
      experimental_telemetry: {
        isEnabled: true,
      },
      stopWhen: stepCountIs(10),
    });

    return { text };
  },
  { name: 'parentTraceable', client },
);

let result;

try {
  result = await wrappedText('What are my orders?');
} finally {
  // awaitPendingTraceBatches() flushes traces created via this client.
  await client.awaitPendingTraceBatches();
}
```
The resulting trace will look like this.
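traceable also accepts metadata and tags alongside name and client, if you want the wrapped run to carry extra context in LangSmith. A brief sketch; the values shown are illustrative:

```ts
// Sketch: attach tags and metadata to a traceable-wrapped function.
// `client` is the LangSmith Client from the example above.
const taggedText = traceable(
  async (content: string) => {
    // ... same body as wrappedText above ...
    return { text: content };
  },
  {
    name: 'parentTraceable',
    client,
    tags: ['ai-sdk', 'production'], // illustrative values
    metadata: { userId: '123' },
  },
);
```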
Further reading
For more examples and instructions for setting up tracing in specific environments, see the links below:
And once you've set up LangSmith tracing for your project, try gathering a dataset and evaluating it:
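As a preview of what that looks like with the langsmith JS client, here is a minimal, hedged sketch using evaluate from langsmith/evaluation; it assumes a dataset named 'my-dataset' already exists in LangSmith, and the target function and evaluator are illustrative:

```ts
import { evaluate } from 'langsmith/evaluation';

// Minimal sketch: run an illustrative target over an existing dataset
// ('my-dataset' is assumed to exist) and score each run.
await evaluate(
  async (input: { question: string }) => {
    // Replace with a call into your traced application.
    return { answer: `You asked: ${input.question}` };
  },
  {
    data: 'my-dataset',
    evaluators: [
      // Illustrative evaluator: did the target produce an answer at all?
      (run) => ({
        key: 'has_answer',
        score: Boolean(run.outputs?.answer),
      }),
    ],
  },
);
```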