OLLM
OLLM is the world's first enterprise router aggregating high-security, zero-knowledge LLM providers. It provides a unified API gateway to access AI models with guaranteed military-grade encryption at every layer. The OLLM provider for the AI SDK enables seamless integration with all these models while offering unique advantages:
- Verifiable Privacy: All models run with confidential computing for maximum security
- Universal Model Access: One API key for models from multiple providers
- Confidential Computing: Hardware-level encryption with TEE (Trusted Execution Environment) on all models
- Military-Grade Security: End-to-end encryption at every layer of the stack
- Simple Integration: OpenAI-compatible API across all models
Learn more about OLLM's capabilities on the OLLM website.
Setup
The OLLM provider is available in the @ofoundation/ollm module. You can install it with:
```bash
pnpm add @ofoundation/ollm
```
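If you use npm or yarn instead of pnpm, the equivalent commands `npm install @ofoundation/ollm` or `yarn add @ofoundation/ollm` should work as well.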
Provider Instance
To create an OLLM provider instance, use the createOLLM function:
```ts
import { createOLLM } from '@ofoundation/ollm';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});
```

You can obtain your OLLM API key from the OLLM Dashboard.
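In practice you will usually load the key from the environment rather than hard-coding it. Here is a minimal sketch, assuming the key is exported as an environment variable named OLLM_API_KEY (an illustrative name, not an official convention):

```ts
import { createOLLM } from '@ofoundation/ollm';

// OLLM_API_KEY is an assumed environment variable name for this example.
const ollm = createOLLM({
  apiKey: process.env.OLLM_API_KEY ?? '',
});
```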
Language Models
All OLLM models run with confidential computing by default. Use ollm.chatModel() for chat models:
```ts
// Confidential computing chat models
const confidentialModel = ollm.chatModel('near/GLM-4.7');
```

You can find the full list of available models on the OLLM Models page.
Examples
Here are examples of using OLLM with the AI SDK:
generateText
```ts
import { createOLLM } from '@ofoundation/ollm';
import { generateText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const { text } = await generateText({
  model: ollm.chatModel('near/GLM-4.6'),
  prompt: 'What is OLLM?',
});

console.log(text);
```

streamText
```ts
import { createOLLM } from '@ofoundation/ollm';
import { streamText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const result = streamText({
  model: ollm.chatModel('near/GLM-4.6'),
  prompt: 'Write a short story about secure AI.',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
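In a server route (for example a Next.js route handler), the stream can also be returned directly as an HTTP response. This is a minimal sketch, assuming your AI SDK version exposes the toTextStreamResponse() helper on the streamText result:

```ts
import { createOLLM } from '@ofoundation/ollm';
import { streamText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

// POST handler that streams the model output back to the client.
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: ollm.chatModel('near/GLM-4.6'),
    prompt,
  });

  // toTextStreamResponse() is assumed to be available in your AI SDK version.
  return result.toTextStreamResponse();
}
```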
Using System Messages

```ts
import { createOLLM } from '@ofoundation/ollm';
import { generateText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

const { text } = await generateText({
  model: ollm.chatModel('near/GLM-4.6'),
  system: 'You are a helpful assistant that responds concisely.',
  prompt: 'What is TypeScript in one sentence?',
});

console.log(text);
```

Advanced Features
OLLM offers several advanced features to enhance your AI applications with enterprise-grade security:
- Zero Data Retention (ZDR): Your prompts and completions are never stored or logged by providers.
- Confidential Computing: Hardware-level encryption using TEE technology ensures your data is protected even during processing.
- Verifiable Privacy: Cryptographic proofs that your data was processed securely.
- Model Flexibility: Switch between hundreds of models without changing your code or managing multiple API keys (see the sketch after this list).
- Cost Management: Track usage and costs per model in real time through the dashboard.
- Enterprise Support: Available for high-volume users with custom SLAs and dedicated support.
- Tool Integrations: Seamlessly works with popular AI development tools including:
  - Cursor
  - Windsurf
  - VS Code
  - Cline
  - Roo Code
  - Replit
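As a sketch of the model-flexibility point above, switching models only requires changing the model id string passed to ollm.chatModel(); the surrounding code stays the same. The OLLM_MODEL environment variable below is just an illustrative way to pick a model at runtime, not an official convention:

```ts
import { createOLLM } from '@ofoundation/ollm';
import { generateText } from 'ai';

const ollm = createOLLM({
  apiKey: 'YOUR_OLLM_API_KEY',
});

// Swapping models is a one-line change: only the model id differs.
const modelId = process.env.OLLM_MODEL ?? 'near/GLM-4.6'; // e.g. 'near/GLM-4.7'

const { text } = await generateText({
  model: ollm.chatModel(modelId),
  prompt: 'Summarize the benefits of confidential computing in two sentences.',
});

console.log(text);
```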
For more information about these features and advanced configuration options, visit the OLLM Documentation.