MLflow Observability
MLflow Tracing provides automatic tracing for applications built with the Vercel AI SDK (the ai package) via OpenTelemetry, unlocking observability for TypeScript and JavaScript apps.
When enabled, MLflow records:
- Prompts/messages and generated responses
- Latencies and call hierarchy
- Token usage (when the provider returns it)
- Exceptions
Quickstart (Next.js)
If you are using Next.js, enabling MLflow tracing for the Vercel AI SDK takes only a few steps.
No app handy? Try Vercel’s demo chatbot: https://vercel.com/templates/next.js/ai-chatbot-telemetry
1. Start MLflow Tracking Server
```bash
mlflow server --backend-store-uri sqlite:///mlruns.db --port 5000
```
You can also start the server with Docker Compose; see the MLflow Setup Guide.
2. Configure Environment Variables
Add these to .env.local:
```
OTEL_EXPORTER_OTLP_ENDPOINT=<your-mlflow-tracking-server-endpoint>
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-mlflow-experiment-id=<your-experiment-id>
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
```
For local testing, use OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5000.
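These are standard OpenTelemetry environment variables and are picked up by the OTLP exporter at startup. If traces never show up, a quick sanity check can confirm the endpoint is visible to your process; this is an optional sketch, not part of the integration:

```ts
// Optional startup check (sketch): confirm the OTLP endpoint is configured.
// OTEL_EXPORTER_OTLP_ENDPOINT is the standard variable set in .env.local above.
const endpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT;
if (!endpoint) {
  console.warn('OTEL_EXPORTER_OTLP_ENDPOINT is not set; the exporter will fall back to its default endpoint.');
} else {
  console.log(`Exporting traces to ${endpoint}/v1/traces`);
}
```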
3. Enable OpenTelemetry
Install the Vercel OpenTelemetry integration:
```bash
pnpm i @opentelemetry/api @vercel/otel
```
Create instrumentation.ts in your project root:
```ts
import { registerOTel } from '@vercel/otel';

export async function register() {
  registerOTel({ serviceName: 'next-app' });
}
```
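On Next.js 13 and 14, instrumentation.ts is gated behind an experimental flag, so if no traces appear you may need to opt in via your Next.js config (Next.js 15+ loads the file by default; this snippet assumes a next.config.mjs):

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Opts in to loading instrumentation.ts on Next.js 13/14;
  // Next.js 15+ enables the instrumentation hook by default.
  experimental: { instrumentationHook: true },
};

export default nextConfig;
```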
Then enable telemetry where you call the AI SDK (for example in route.ts):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-5'),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });

  return new Response(JSON.stringify({ text }), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```
See the Vercel OpenTelemetry docs for advanced options like context propagation.
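The experimental_telemetry settings can also attach identifying information to the emitted spans, which makes traces easier to filter in the MLflow UI. A minimal sketch continuing the route above; the functionId and metadata values are illustrative:

```ts
const { text } = await generateText({
  model: openai('gpt-5'),
  prompt,
  experimental_telemetry: {
    isEnabled: true,
    // Groups related calls under one name in the trace view.
    functionId: 'chat-completion',
    // Arbitrary key-value pairs recorded as span attributes.
    metadata: { route: '/api/chat' },
  },
});
```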
4. Run the App and View Traces
Start your Next.js app and open the MLflow UI at the tracking server endpoint (e.g., http://localhost:5000). Traces for AI SDK calls appear in the configured experiment.
Other Node.js Applications
For other Node.js frameworks, wire up the OpenTelemetry Node SDK and OTLP exporter manually:

```ts
import { init } from 'mlflow-tracing';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

// Export spans to the MLflow tracking server's OTLP endpoint.
const sdk = new NodeSDK({
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: '<your-mlflow-tracking-server-endpoint>/v1/traces',
        headers: { 'x-mlflow-experiment-id': '<your-experiment-id>' },
      }),
    ),
  ],
});

sdk.start();
init();

// Make an AI SDK call with telemetry enabled
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is MLflow?',
  experimental_telemetry: { isEnabled: true },
});

console.log(result.text);

// Flush any pending spans before the process exits.
await sdk.shutdown();
```
Run the script:

```bash
npx tsx main.ts
```
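SimpleSpanProcessor exports each span as soon as it ends, which is convenient for local debugging but adds per-call overhead. For a long-running service you may prefer batched exports; a sketch of the same setup with BatchSpanProcessor (from the same @opentelemetry/sdk-trace-node package):

```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

// BatchSpanProcessor queues finished spans and exports them in batches,
// trading a little latency for lower overhead on each request.
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: '<your-mlflow-tracking-server-endpoint>/v1/traces',
        headers: { 'x-mlflow-experiment-id': '<your-experiment-id>' },
      }),
    ),
  ],
});

sdk.start();
// Remember to await sdk.shutdown() on exit so buffered spans are flushed.
```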
Streaming
Streaming is supported. As with generateText, set experimental_telemetry.isEnabled to true:
```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const stream = await streamText({
  model: openai('gpt-5'),
  prompt: 'Explain vector databases in one paragraph.',
  experimental_telemetry: { isEnabled: true },
});

for await (const part of stream.textStream) {
  process.stdout.write(part);
}
```
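After the stream has been consumed, the result object also exposes the final text and token usage as promises, which is handy for logging alongside the trace. A small sketch continuing the example above:

```ts
// Both promises resolve once the stream has completed.
const [finalText, usage] = await Promise.all([stream.text, stream.usage]);
console.log(`\nGenerated ${finalText.length} characters`);
console.log('Token usage:', usage);
```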
Disable auto-tracing
To disable tracing for the Vercel AI SDK, set experimental_telemetry: { isEnabled: false } on the AI SDK call.
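In practice you may want to drive the flag from configuration rather than hard-coding it. A sketch; the DISABLE_LLM_TRACING variable is illustrative, not an AI SDK or MLflow setting:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Illustrative toggle: tracing stays on unless explicitly disabled.
const tracingEnabled = process.env.DISABLE_LLM_TRACING !== 'true';

const { text } = await generateText({
  model: openai('gpt-5'),
  prompt: 'Hello!',
  experimental_telemetry: { isEnabled: tracingEnabled },
});
```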
Learn more
After setting up MLflow Tracing for the AI SDK, you can tap into broader MLflow GenAI capabilities:
- Evaluation: Use built-in LLM judges and dataset management to systematically measure quality and monitor GenAI apps from development through production.
- Prompt Management: Centralize prompt templates with versioning, aliases, lineage, and collaboration so teams can reuse and compare prompts safely.
- MCP Server: Connect your coding agent with MLflow MCP Server to interact with MLflow traces programmatically and improve your LLM applications.
For more information about tracing in the AI SDK, see the telemetry documentation.