Laminar observability
Laminar is the open-source platform for tracing and evaluating AI applications.
A version of this guide is available in Laminar's docs.
Setup
Laminar's tracing is based on OpenTelemetry and supports AI SDK telemetry.
Installation
To start with Laminar's tracing, first install the @lmnr-ai/lmnr package:

```bash
pnpm add @lmnr-ai/lmnr
```
Get your project API key and set it in the environment
Then, either sign up on Laminar or self-host an instance (GitHub) and create a new project.
In the project settings, create and copy the API key.
In your .env:

```
LMNR_PROJECT_API_KEY=...
```
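Next.js loads .env automatically. In a plain Node.js process you may need to load the file yourself before Laminar initializes; a minimal sketch using the dotenv package:

```ts
// Load .env before Laminar reads LMNR_PROJECT_API_KEY.
import 'dotenv/config';

import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize();
```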
Next.js
Initialize tracing
In Next.js, Laminar initialization should be done in instrumentation.{ts,js}:
```ts
export async function register() {
  // prevent this from running in the edge runtime
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
Add @lmnr-ai/lmnr to your next.config
In your next.config.js (.ts / .mjs), add the following lines:
```ts
const nextConfig = {
  serverExternalPackages: ['@lmnr-ai/lmnr'],
};

export default nextConfig;
```
This is because Laminar depends on OpenTelemetry, which uses some Node.js-specific functionality, and we need to inform Next.js about it. Learn more in the Next.js docs.
Tracing AI SDK calls
Then, when you call AI SDK functions in any of your API routes, add the Laminar tracer to the experimental_telemetry option:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is Laminar flow?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});
```
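For example, in an App Router route handler (the route path and request shape here are illustrative):

```ts
// app/api/generate/route.ts (illustrative path)
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt,
    experimental_telemetry: {
      isEnabled: true,
      tracer: getTracer(),
    },
  });

  return Response.json({ text });
}
```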
This will create spans for ai.generateText. Laminar collects and displays the following information:
- LLM call input and output
- Start and end time
- Duration / latency
- Provider and model used
- Input and output tokens
- Input and output price
- Additional metadata and span attributes
Older versions of Next.js
If you are using 13.4 ≤ Next.js < 15, you will also need to enable the experimental instrumentation hook. Place the following in your next.config.js:
```ts
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
```
For more information, see Laminar's Next.js guide and Next.js instrumentation docs. You can also learn how to enable all traces for Next.js in the docs.
Usage with @vercel/otel
Laminar can live alongside @vercel/otel and trace AI SDK calls. The default Laminar setup will ensure that:

- regular Next.js traces are sent via @vercel/otel to your telemetry backend configured with Vercel,
- AI SDK and other LLM or browser agent traces are sent via Laminar.
```ts
import { registerOTel } from '@vercel/otel';

export async function register() {
  registerOTel('my-service-name');
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    // Make sure to initialize Laminar **after** `registerOTel`
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
For an advanced configuration that allows you to trace all Next.js traces via Laminar, see an example repo.
Usage with @sentry/node
Laminar can live alongside @sentry/node and trace AI SDK calls. Make sure to initialize Laminar after Sentry.init.
This will ensure that:
- Whatever is instrumented by Sentry is sent to your Sentry backend,
- AI SDK and other LLM or browser agent traces are sent via Laminar.
```ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const Sentry = await import('@sentry/node');
    const { Laminar } = await import('@lmnr-ai/lmnr');

    Sentry.init({
      dsn: process.env.SENTRY_DSN,
    });

    // Make sure to initialize Laminar **after** `Sentry.init`
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
Node.js
Initialize tracing
After installing the package, initialize tracing in your application:
```ts
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize();
```
This must be done once in your application, as early as possible, but after other tracing libraries (e.g. @sentry/node) are initialized.
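With no arguments, Laminar.initialize() picks up LMNR_PROJECT_API_KEY from the environment; you can also pass the key explicitly:

```ts
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});
```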
Read more in Laminar docs.
Tracing AI SDK calls
Then, when you call AI SDK functions in any of your API routes, add the Laminar tracer to the experimental_telemetry option:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is Laminar flow?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});
```
This will create spans for ai.generateText. Laminar collects and displays the following information:
- LLM call input and output
- Start and end time
- Duration / latency
- Provider and model used
- Input and output tokens
- Input and output price
- Additional metadata and span attributes
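Streaming calls can be traced the same way; here is a minimal sketch with streamText (the span is recorded once the stream finishes):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is Laminar flow?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});

// Consume the stream; the ai.streamText span completes with it.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```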
Usage with @sentry/node
Laminar can work with @sentry/node to trace AI SDK calls. Make sure to initialize Laminar after Sentry.init:
```ts
const Sentry = await import('@sentry/node');
const { Laminar } = await import('@lmnr-ai/lmnr');

Sentry.init({
  dsn: process.env.SENTRY_DSN,
});

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});
```
This will ensure that:
- Whatever is instrumented by Sentry is sent to your Sentry backend,
- AI SDK and other LLM or browser agent traces are sent via Laminar.
The two libraries allow for additional advanced configuration, but the default setup above is recommended.
Additional configuration
Nested spans
If you want to trace not just the AI SDK calls, but also other functions in your application, you can use Laminar's observe wrapper:
```ts
import { getTracer, observe } from '@lmnr-ai/lmnr';

const result = await observe({ name: 'my-function' }, async () => {
  // ... some work
  await generateText({
    // ...
  });
  // ... some work
});
```
This will create a span with the name "my-function" and trace the function call. Inside it, you will see the nested ai.generateText spans.
To trace input arguments of the function that you wrap in observe, pass them to the wrapper as additional arguments. The return value of the function will be returned from the wrapper and traced as the span's output.
```ts
const result = await observe(
  { name: 'poem writer' },
  async (topic: string, mood: string) => {
    const { text } = await generateText({
      model: openai('gpt-4.1-nano'),
      prompt: `Write a poem about ${topic} in ${mood} mood.`,
    });
    return text;
  },
  'Laminar flow',
  'happy',
);
```
Metadata
In Laminar, metadata is set on the trace level. Metadata contains key-value pairs and can be used to filter traces.
```ts
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    metadata: {
      'my-key': 'my-value',
      'another-key': 'another-value',
    },
  },
});
```
This is converted to Laminar's metadata and stored in the trace.
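The AI SDK's telemetry settings also accept a functionId, which is recorded on the spans and helps distinguish call sites; for example:

```ts
const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    functionId: 'poem-writer', // hypothetical identifier for this call site
    metadata: {
      'my-key': 'my-value',
    },
  },
});
```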