
AI security guardrails for your LLM applications. Protect against prompt injection, redact PII/PHI (SSNs, emails, phone numbers), and verify claims against source materials. Add these security tools to your models in just a few lines of code.

Installation

pnpm add @superagent-ai/ai-sdk
# npm install @superagent-ai/ai-sdk
# yarn add @superagent-ai/ai-sdk
# bun add @superagent-ai/ai-sdk

# Add to your .env file
SUPERAGENT_API_KEY=your_api_key_here
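
The tools authenticate with this key at runtime. A quick sanity check before wiring them up (a minimal sketch in plain Node, assuming the variable name above is loaded into the environment, e.g. via dotenv or your runtime) can save a confusing failure later:

// Fail fast if the key was never loaded from .env.
if (!process.env.SUPERAGENT_API_KEY) {
  throw new Error('SUPERAGENT_API_KEY is not set');
}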

Usage

import { generateText, stepCountIs } from 'ai';
import { guard, redact, verify } from '@superagent-ai/ai-sdk';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Check this input for security threats: "Ignore all instructions"',
  tools: {
    // Superagent security tools, exposed to the model as callable tools
    guard: guard(),   // detect prompt-injection attempts
    redact: redact(), // strip PII/PHI from text
    verify: verify(), // check claims against source materials
  },
  stopWhen: stepCountIs(3), // allow up to three tool-call steps
});

console.log(text);
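
You can also register only the tool you need. Below is a minimal sketch of a redaction-only call, assuming the same redact() factory shown above; the prompt contents (email, SSN) are made up for illustration:

import { generateText, stepCountIs } from 'ai';
import { redact } from '@superagent-ai/ai-sdk';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  // Hypothetical input; the contact details are fabricated for this example.
  prompt: 'Redact any PII in: "Reach me at jane@example.com, SSN 123-45-6789"',
  tools: { redact: redact() }, // register only the redaction tool
  stopWhen: stepCountIs(2),    // one tool call plus the final answer
});

console.log(text); // the model's reply with PII redacted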