Arize AX Observability
Arize AX is an enterprise-grade observability, evaluation, and experimentation platform purpose-built for agents and complex AI systems. It empowers teams to rigorously develop and improve real-world AI applications.
You can also find this guide in the Arize AX docs.
Setup
Arize AX offers first-class OpenTelemetry integration and works directly with the AI SDK in both Next.js and Node.js environments.
Arize AX provides an OpenInferenceSimpleSpanProcessor and an OpenInferenceBatchSpanProcessor. All of the examples below work with either processor. For more information on simple vs. batch span processors, see our documentation.
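For example, in production you might prefer the batch processor, which queues spans and exports them in the background rather than one at a time. A sketch of the swap, using the same exporter configuration as the examples below (it is a drop-in replacement for the simple processor in any of them):

```ts
import { OpenInferenceBatchSpanProcessor } from '@arizeai/openinference-vercel';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

// Drop-in replacement for OpenInferenceSimpleSpanProcessor: spans are
// buffered and exported in batches, reducing per-span export overhead.
const processor = new OpenInferenceBatchSpanProcessor({
  exporter: new OTLPTraceExporter({
    url: 'https://otlp.arize.com/v1/traces',
    headers: {
      space_id: process.env.ARIZE_SPACE_ID,
      api_key: process.env.ARIZE_API_KEY,
    },
  }),
});
```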
Next.js
In Next.js applications, use one of the OpenInference span processors with registerOtel from @vercel/otel.
First, install the required dependencies for the AI SDK, OpenTelemetry and OpenInference.
```shell
npm install ai @ai-sdk/openai @vercel/otel @arizeai/openinference-vercel @opentelemetry/exporter-trace-otlp-proto
```
Then, in your instrumentation.ts file, add the following:
```ts
import { registerOTel } from '@vercel/otel';
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from '@arizeai/openinference-vercel';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

export function register() {
  registerOTel({
    attributes: {
      model_id: 'my-ai-app',
      model_version: '1.0.0',
    },
    spanProcessors: [
      new OpenInferenceSimpleSpanProcessor({
        exporter: new OTLPTraceExporter({
          url: 'https://otlp.arize.com/v1/traces',
          headers: {
            space_id: process.env.ARIZE_SPACE_ID,
            api_key: process.env.ARIZE_API_KEY,
          },
        }),
        // Optionally add a span filter to only include AI-related spans
        spanFilter: isOpenInferenceSpan,
      }),
    ],
  });
}
```
Spans will show up in Arize AX under the project specified in the model_id field above.
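The processor above reads ARIZE_SPACE_ID and ARIZE_API_KEY from the environment. One way to supply them during local development (assuming a .env.local file, which Next.js loads automatically; the values here are placeholders):

```shell
# .env.local — do not commit credentials to version control
ARIZE_SPACE_ID=your-space-id
ARIZE_API_KEY=your-api-key
```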
You must enable telemetry by setting experimental_telemetry: { isEnabled: true } in every call made with the AI SDK.
```ts
const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Please write a haiku.',
  experimental_telemetry: {
    isEnabled: true,
  },
});
```
Node.js
In Node.js, you can use either the NodeSDK or the NodeTracerProvider.
NodeSDK
First, install the required dependencies for the AI SDK, OpenTelemetry and OpenInference.
```shell
npm install ai @ai-sdk/openai @opentelemetry/sdk-node @arizeai/openinference-vercel @opentelemetry/exporter-trace-otlp-proto @opentelemetry/resources
```
Then, in your instrumentation.ts file, add the following:
```ts
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from '@arizeai/openinference-vercel';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { resourceFromAttributes } from '@opentelemetry/resources';
import { NodeSDK } from '@opentelemetry/sdk-node';

const sdk = new NodeSDK({
  resource: resourceFromAttributes({
    model_id: 'my-ai-app',
    model_version: '1.0.0',
  }),
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: 'https://otlp.arize.com/v1/traces',
        headers: {
          space_id: process.env.ARIZE_SPACE_ID,
          api_key: process.env.ARIZE_API_KEY,
        },
      }),
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

sdk.start();
```
Spans will show up in Arize AX under the project specified in the model_id field above.
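In short-lived scripts, spans still in flight when the process exits can be dropped. One way to guard against this (a sketch, continuing from the sdk instance created above) is to flush pending spans with the NodeSDK's shutdown() method before exiting:

```ts
// Flush any buffered spans and release resources before the process exits.
async function main() {
  // ... your AI SDK calls here ...
  await sdk.shutdown();
}

main().catch(async (err) => {
  console.error(err);
  await sdk.shutdown();
  process.exit(1);
});
```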
You must enable telemetry by setting experimental_telemetry: { isEnabled: true } in every call made with the AI SDK.
```ts
const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Please write a haiku.',
  experimental_telemetry: {
    isEnabled: true,
  },
});
```
NodeTracerProvider
First, install the required dependencies for the AI SDK, OpenTelemetry and OpenInference.
```shell
npm install ai @ai-sdk/openai @opentelemetry/sdk-trace-node @arizeai/openinference-vercel @opentelemetry/exporter-trace-otlp-proto @opentelemetry/resources
```
Then, in your instrumentation.ts file, add the following:
```ts
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from '@arizeai/openinference-vercel';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { resourceFromAttributes } from '@opentelemetry/resources';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    model_id: 'my-ai-app',
    model_version: '1.0.0',
  }),
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: 'https://otlp.arize.com/v1/traces',
        headers: {
          space_id: process.env.ARIZE_SPACE_ID,
          api_key: process.env.ARIZE_API_KEY,
        },
      }),
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

provider.register();
```
Spans will show up in Arize AX under the project specified in the model_id field above.
You must enable telemetry by setting experimental_telemetry: { isEnabled: true } in every call made with the AI SDK.
```ts
const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Please write a haiku.',
  experimental_telemetry: {
    isEnabled: true,
  },
});
```
Resources
After sending spans to your Arize AX project, check out other features:
- Rerun spans in the prompt playground to iterate on and compare prompts and parameters
- Add spans to datasets for evaluation and development workflows
- Run online evaluations continuously on incoming spans to understand application performance
AX has a TypeScript client for managing your datasets and evaluations.