# Cencori
Cencori is the unified infrastructure layer for AI applications. It provides built-in security, observability, and multi-provider support through a single API.
With Cencori, you get:
- Multi-Provider Gateway: Access OpenAI, Anthropic, Google Gemini, Mistral, DeepSeek, and more through one API
- Built-in Security: Automatic PII detection, prompt injection protection, and content filtering
- Full Observability: Complete audit logs, analytics, and cost tracking for every request
- Transparent Pricing: Credits-based billing with real-time cost visibility
## Setup
The Cencori provider is available in the `cencori` module. You can install it with:

```bash
pnpm add cencori
```

## Provider Instance
To create a Cencori provider instance, use the `createCencori` function:

```ts
import { createCencori } from 'cencori/vercel';

const cencori = createCencori({
  apiKey: 'YOUR_CENCORI_API_KEY',
});
```

You can obtain your Cencori API key from the Cencori Dashboard.
### Environment Variable
Alternatively, you can set the `CENCORI_API_KEY` environment variable and use the default provider:

```ts
import { cencori } from 'cencori/vercel';

// Uses the CENCORI_API_KEY environment variable
const model = cencori('gpt-4o');
```

## Language Models
Cencori provides access to language models from multiple providers. Use the provider function directly to create a model:
```ts
// OpenAI models
const gpt4o = cencori('gpt-4o');
const gpt4oMini = cencori('gpt-4o-mini');

// Anthropic models
const claude = cencori('claude-3-5-sonnet');
const opus = cencori('claude-3-opus');

// Google Gemini models
const gemini = cencori('gemini-2.5-flash');
const geminiPro = cencori('gemini-3-pro');

// Other providers
const mistral = cencori('mistral-large');
const deepseek = cencori('deepseek-v3.2');
const grok = cencori('grok-4');
const llama = cencori('llama-3-70b');
```

You can find the full list of available models in the Cencori Documentation.
## Examples
Here are examples of using Cencori with the AI SDK:
### generateText
```ts
import { cencori } from 'cencori/vercel';
import { generateText } from 'ai';

const { text } = await generateText({
  model: cencori('gpt-4o'),
  prompt: 'What is Cencori?',
});

console.log(text);
```

### streamText
```ts
import { cencori } from 'cencori/vercel';
import { streamText } from 'ai';

const result = streamText({
  model: cencori('claude-3-5-sonnet'),
  prompt: 'Write a short story about AI safety.',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```

### Tool Calling
Cencori supports tool calling with all compatible models:
```ts
import { cencori } from 'cencori/vercel';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text, toolCalls } = await generateText({
  model: cencori('gpt-4o'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({
        location: z.string().describe('The city and state'),
      }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: 'sunny', location };
      },
    }),
  },
});

console.log(text, toolCalls);
```

### Next.js Route Handler
```ts
import { cencori } from 'cencori/vercel';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: cencori('gemini-2.5-flash'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

### useChat Hook
```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

## Advanced Features
Cencori offers several advanced features to enhance your AI applications:
- Built-in Security: Every request passes through automatic safety filters, including PII detection, prompt injection protection, and harmful content filtering. Requests that violate safety policies are blocked before reaching the model.
- Multi-Provider Routing: Switch between 15+ AI providers (OpenAI, Anthropic, Google, Mistral, DeepSeek, xAI, and more) without changing your code. Cencori handles the routing automatically.
- Cost Tracking: Real-time cost tracking and analytics for every request. View detailed breakdowns by model, provider, and project in your dashboard.
- Complete Audit Logs: Every request is logged with full payloads, token usage, costs, and safety scores for compliance and debugging.
- Bring Your Own Keys (BYOK): Use your own API keys for each provider while still benefiting from Cencori's security and observability layer.
- Provider Failover: Automatic failover to backup providers if your primary provider is unavailable.
For more information about these features and advanced configuration options, visit the Cencori Documentation.
## Provider Options
You can customize the provider with additional options:
```ts
import { createCencori } from 'cencori/vercel';

const cencori = createCencori({
  apiKey: 'YOUR_CENCORI_API_KEY',
  baseUrl: 'https://cencori.com', // Custom base URL (optional)
  headers: {
    // Custom headers (optional)
    'X-Custom-Header': 'value',
  },
});
```

### Model-Specific Options
You can also pass model-specific options:
```ts
const model = cencori('gpt-4o', {
  userId: 'user-123', // Track usage by user
});
```
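For per-request attribution, you can derive these options from the incoming request instead of hard-coding a user id. The sketch below assumes the caller sends an `x-user-id` header; that header name and the helper function are illustrative conventions for this example, and only the `userId` option itself is the Cencori option shown above.

```typescript
// Shape of the model-specific options shown above (userId tracks usage by user).
type CencoriModelOptions = { userId?: string };

// Derive model options from request headers; returns {} when no user is identified.
// The 'x-user-id' header is a hypothetical convention for this sketch.
function optionsForRequest(
  headers: Record<string, string | undefined>,
): CencoriModelOptions {
  const userId = headers['x-user-id'];
  return userId ? { userId } : {};
}
```

A route handler could then create its model with `cencori('gpt-4o', optionsForRequest(requestHeaders))`, so each user's requests show up separately in usage tracking.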