Helicone

The Helicone AI Gateway provides you with access to hundreds of AI models, as well as tracing and monitoring integrated directly through our observability platform.

  • Unified model access: Use one API key to access hundreds of models from leading providers like Anthropic, Google, Meta, and more.
  • Smart provider selection: Requests are routed to the cheapest available provider, with automatic fallbacks when a provider is down or rate-limited.
  • Simplified tracing: Monitor your LLM's performance and debug applications with Helicone observability by default, including OpenTelemetry support for logs, metrics, and traces.
  • Improved performance and cost: Cache responses to reduce costs and latency.
  • Prompt management: Handle prompt versioning and the playground directly from Helicone, so you no longer depend on engineers to make changes.

Learn more about Helicone's capabilities in the Helicone Documentation.

Setup

The Helicone provider is available in the @helicone/ai-sdk-provider package. You can install it with:

pnpm add @helicone/ai-sdk-provider

Get started

To get started with Helicone, use the createHelicone function to create a provider instance. Then query any model you like.

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('claude-4.5-haiku'),
  prompt: 'Write a haiku about artificial intelligence',
});

console.log(result.text);

You can obtain your Helicone API key from the Helicone Dashboard.

Examples

Here are examples of using Helicone with the AI SDK.

generateText

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const { text } = await generateText({
  model: helicone('gemini-2.5-flash-lite'),
  prompt: 'What is Helicone?',
});

console.log(text);

streamText

import { createHelicone } from '@helicone/ai-sdk-provider';
import { streamText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await streamText({
  model: helicone('deepseek-v3.1-terminus'),
  prompt: 'Write a short story about a robot learning to paint',
  maxOutputTokens: 300,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

console.log('\n\nStream completed!');

Advanced Features

Helicone offers several advanced features to enhance your AI applications:

  1. Model flexibility: Switch between hundreds of models without changing your code or managing multiple API keys.

  2. Cost management: Track costs per model in real time through Helicone's LLM observability dashboard.

  3. Observability: Access comprehensive analytics and logs for all your requests through Helicone's LLM observability dashboard.

  4. Prompts management: Manage prompts and versioning through the Helicone dashboard.

  5. Caching: Cache responses to reduce costs and latency.

  6. Regular updates: Automatic access to new models and features as they become available.

For more information about these features and advanced configuration options, visit the Helicone Documentation.