A2A

The dracoblue/a2a-ai-provider is a community provider that enables the use of A2A protocol-compliant agents with the AI SDK. It lets developers send, receive, and stream text, tool calls, and artifacts over a standardized JSON-RPC interface on top of HTTP.

The a2a-ai-provider package is under active development.

Built on the official A2A JavaScript SDK (@a2a-js/sdk), the provider supports:

  • Streaming Text Responses via sendSubscribe and SSE
  • File & Artifact Uploads to the A2A server
  • Multi-modal Messaging with support for text and file parts
  • Full JSON-RPC 2.0 Compliance for A2A-compatible LLM agents

Learn more about A2A at the A2A Project Site.

Setup

Install the a2a-ai-provider from npm:

pnpm add a2a-ai-provider

Provider Instance

To create a provider instance for an A2A server:

import { a2a } from 'a2a-ai-provider';
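
Calling the default a2a provider with the URL of an A2A agent returns a language model that can be passed to any AI SDK function. A minimal sketch; the server URL is a placeholder:

import { a2a } from 'a2a-ai-provider';

// The argument is the base URL of your A2A-compliant agent server.
const model = a2a('https://your-a2a-server.example.com');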

Examples

You can now use the provider with the AI SDK like this:

generateText

import { a2a } from 'a2a-ai-provider';
import { generateText } from 'ai';
const result = await generateText({
  model: a2a('https://your-a2a-server.example.com'),
  prompt: 'What is love?',
});
console.log(result.text);

streamText

import { a2a } from 'a2a-ai-provider';
import { streamText } from 'ai';
const chatId = 'unique-chat-id'; // one id per conversation so the A2A server keeps its history
const streamResult = streamText({
  model: a2a('https://your-a2a-server.example.com'),
  prompt: 'What is love?',
  providerOptions: {
    a2a: {
      contextId: chatId,
    },
  },
});
await streamResult.consumeStream();
console.log(await streamResult.content);
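
To display tokens as they arrive instead of waiting for the final result, you can iterate over the AI SDK's standard textStream async iterable. A minimal sketch; the server URL is a placeholder:

import { a2a } from 'a2a-ai-provider';
import { streamText } from 'ai';

const streamResult = streamText({
  model: a2a('https://your-a2a-server.example.com'),
  prompt: 'What is love?',
});

// Print each text delta as soon as the A2A server streams it via SSE.
for await (const textPart of streamResult.textStream) {
  process.stdout.write(textPart);
}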

Features

  • Text Streaming: Streams token-by-token output from the A2A server
  • File Uploads: Send files as part of your prompts
  • Artifact Handling: Receives file artifacts in streamed or final results
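
Because the provider accepts the AI SDK's multi-modal message format, a file can be attached as a file part of a user message. A hedged sketch, assuming the AI SDK 5 message shape and a hypothetical local PDF; the server URL is a placeholder:

import { readFile } from 'node:fs/promises';
import { a2a } from 'a2a-ai-provider';
import { generateText } from 'ai';

// Hypothetical local file to upload alongside the prompt.
const pdf = await readFile('./report.pdf');

const result = await generateText({
  model: a2a('https://your-a2a-server.example.com'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Summarize the attached report.' },
        { type: 'file', data: pdf, mediaType: 'application/pdf' },
      ],
    },
  ],
});

console.log(result.text);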

Additional Resources