
# Helicone

The [Helicone AI Gateway](https://helicone.ai/) provides you with access to hundreds of AI models, as well as tracing and monitoring integrated directly through our observability platform.

- **Unified model access**: Use one API key to access hundreds of models from leading providers like Anthropic, Google, Meta, and more.
- **Smart provider selection**: Route each request to the cheapest available provider, with automatic fallbacks for provider outages and rate limits.
- **Simplified tracing**: Monitor your LLM's performance and debug applications with Helicone observability by default, including OpenTelemetry support for logs, metrics, and traces.
- **Improve performance and cost**: Cache responses to reduce costs and latency.
- **Prompt management**: Handle prompt versioning and the playground directly from Helicone, so you no longer depend on engineers to make changes.

Learn more about Helicone's capabilities in the [Helicone Documentation](https://helicone.ai/docs).

## Setup

The Helicone provider is available in the `@helicone/ai-sdk-provider` package. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @helicone/ai-sdk-provider" dark />
  </Tab>
</Tabs>

## Get started

To get started with Helicone, use the `createHelicone` function to create a provider instance. Then query any model you like.

```typescript
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('claude-4.5-haiku'),
  prompt: 'Write a haiku about artificial intelligence',
});

console.log(result.text);
```

You can obtain your Helicone API key from the [Helicone Dashboard](https://us.helicone.ai/settings/api-keys).

## Examples

Here are examples of using Helicone with the AI SDK.

### `generateText`

```javascript
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const { text } = await generateText({
  model: helicone('gemini-2.5-flash-lite'),
  prompt: 'What is Helicone?',
});

console.log(text);
```

### `streamText`

```javascript
import { createHelicone } from '@helicone/ai-sdk-provider';
import { streamText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await streamText({
  model: helicone('deepseek-v3.1-terminus'),
  prompt: 'Write a short story about a robot learning to paint',
  maxOutputTokens: 300,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

console.log('\n\nStream completed!');
```

## Advanced Features

Helicone offers several advanced features to enhance your AI applications:

1. **Model flexibility**: Switch between hundreds of models without changing your code or managing multiple API keys.

2. **Cost management**: Track spend per model in real time through Helicone's observability dashboard.

3. **Observability**: Access comprehensive analytics and logs for all your requests through Helicone's LLM observability dashboard.

4. **Prompts management**: Manage prompts and versioning through the Helicone dashboard.

5. **Caching**: Cache responses to reduce costs and latency.

6. **Regular updates**: Automatic access to new models and features as they become available.

For more information about these features and advanced configuration options, visit the [Helicone Documentation](https://docs.helicone.ai).
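Caching is typically enabled per request through Helicone headers. Below is a minimal sketch that assumes the AI SDK's per-call `headers` option is forwarded to the gateway; the header names follow Helicone's documented `Helicone-Cache-Enabled` and `Cache-Control` conventions, but you should verify the exact set against the current Helicone docs:

```typescript
// Hypothetical cache configuration: header names follow Helicone's
// documented caching convention (confirm in the Helicone docs).
const cacheHeaders: Record<string, string> = {
  'Helicone-Cache-Enabled': 'true', // opt this request into response caching
  'Cache-Control': 'max-age=3600', // cache the response for one hour
};

// Assuming per-call `headers` are forwarded to the gateway, usage
// would look like:
//
// const { text } = await generateText({
//   model: helicone('gemini-2.5-flash-lite'),
//   prompt: 'What is Helicone?',
//   headers: cacheHeaders,
// });

console.log(Object.keys(cacheHeaders).length);
```

Identical requests that hit the cache are served without a provider round trip, which is where the cost and latency savings come from.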

