
# OLLM

[OLLM](https://ollm.com/) is an enterprise router that aggregates high-security, zero-knowledge LLM providers behind a unified API gateway, with encryption applied at every layer of the stack. The OLLM provider for the AI SDK enables seamless integration with these models while offering unique advantages:

- **Verifiable Privacy**: Cryptographic proof that requests are processed inside confidential computing environments
- **Universal Model Access**: One API key for models from multiple providers
- **Confidential Computing**: Hardware-level protection via TEEs (Trusted Execution Environments) on all models
- **End-to-End Encryption**: Traffic is encrypted at every layer of the stack
- **Simple Integration**: OpenAI-compatible API across all models

Learn more about OLLM's capabilities on the [OLLM website](https://ollm.com).

## Setup

The OLLM provider is available in the `@ofoundation/ollm` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @ofoundation/ollm" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ofoundation/ollm" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ofoundation/ollm" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @ofoundation/ollm" dark />
  </Tab>
</Tabs>

## Provider Instance

To create an OLLM provider instance, use the `createOLLM` function:

```typescript
import { createOLLM } from '@ofoundation/ollm';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});
```

You can obtain your OLLM API key from the [OLLM Dashboard](https://console.ollm.com/dashboard/api-keys).
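In practice you will typically read the key from an environment variable rather than hard-coding it. A minimal sketch — the variable name `OLLM_API_KEY` is a convention chosen for this example, not one mandated by the SDK:

```typescript
// Resolve the API key from the environment. The variable name
// OLLM_API_KEY is an assumption made for this example; use whatever
// name your deployment configures.
function resolveApiKey(env: Record<string, string | undefined>): string {
  const key = env.OLLM_API_KEY;
  if (!key) {
    throw new Error('Missing OLLM_API_KEY environment variable');
  }
  return key;
}

// Usage (sketch):
// const ollm = createOLLM({ apiKey: resolveApiKey(process.env) });
```

This keeps the key out of source control and lets the same code run unchanged across environments.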

## Language Models

All OLLM models run with confidential computing by default. Use `ollm.chatModel()` for chat models:

```typescript
// Confidential-computing chat model
const confidentialModel = ollm.chatModel('near/GLM-4.7');
```

You can find the full list of available models on the [OLLM Models](https://ollm.com/models) page.

## Examples

Here are examples of using OLLM with the AI SDK:

### `generateText`

```typescript
import { createOLLM } from '@ofoundation/ollm';
import { generateText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const { text } = await generateText({
  model: ollm.chatModel('near/GLM-4.6'),
  prompt: 'What is OLLM?',
});

console.log(text);
```

### `streamText`

```typescript
import { createOLLM } from '@ofoundation/ollm';
import { streamText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const result = streamText({
  model: ollm.chatModel('near/GLM-4.6'),
  prompt: 'Write a short story about secure AI.',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```

### Using System Messages

```typescript
import { createOLLM } from '@ofoundation/ollm';
import { generateText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const { text } = await generateText({
  model: ollm.chatModel('near/GLM-4.6'),
  system: 'You are a helpful assistant that responds concisely.',
  prompt: 'What is TypeScript in one sentence?',
});

console.log(text);
```

## Advanced Features

OLLM offers several advanced features to enhance your AI applications with enterprise-grade security:

1. **Zero Data Retention (ZDR)**: Your prompts and completions are never stored or logged by providers.

2. **Confidential Computing**: Hardware-level encryption using TEE technology ensures your data is protected even during processing.

3. **Verifiable Privacy**: Cryptographic proofs that your data was processed securely.

4. **Model Flexibility**: Switch between hundreds of models without changing your code or managing multiple API keys.

5. **Cost Management**: Track usage and costs per model in real-time through the dashboard.

6. **Enterprise Support**: Available for high-volume users with custom SLAs and dedicated support.

7. **Tool Integrations**: Works seamlessly with popular AI development tools, including:
   - Cursor
   - Windsurf
   - VS Code
   - Cline
   - Roo Code
   - Replit
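Feature 4 above — switching models without code changes — works because every model goes through the same `chatModel()` call, so the model ID string is the only thing that varies. A minimal sketch; the task-to-model mapping below is illustrative, and both IDs are the ones used elsewhere on this page:

```typescript
// Map a task to an OLLM model ID. The mapping is an example only;
// swap in any ID from the OLLM model catalog without touching the
// surrounding application code.
type Task = 'chat' | 'reasoning';

function modelFor(task: Task): string {
  switch (task) {
    case 'chat':
      return 'near/GLM-4.6';
    case 'reasoning':
      return 'near/GLM-4.7';
  }
}

// Usage (sketch):
// const { text } = await generateText({
//   model: ollm.chatModel(modelFor('chat')),
//   prompt: 'Hello!',
// });
```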

For more information about these features and advanced configuration options, visit the [OLLM Documentation](https://ollm.com/docs).

## Additional Resources

- [OLLM Documentation](https://ollm.com/docs)
- [OLLM Dashboard](https://console.ollm.com/dashboard)
- [OLLM Models](https://console.ollm.com/explorer/models)

