
# `LlamaIndexAdapter`

The `LlamaIndexAdapter` module provides helper functions to transform LlamaIndex output streams into data streams and data stream responses.
See the [LlamaIndex Adapter documentation](/providers/adapters/llamaindex) for more information.

It supports:

- LlamaIndex ChatEngine streams
- LlamaIndex QueryEngine streams

## Import

<Snippet text={`import { LlamaIndexAdapter } from "ai"`} prompt={false} />

## API Signature

### Methods

<PropertiesTable
  content={[
    {
      name: 'toDataStream',
      type: '(stream: AsyncIterable<EngineResponse>, callbacks?: AIStreamCallbacksAndOptions) => AIStream',
      description: 'Converts LlamaIndex output streams to a data stream.',
    },
    {
      name: 'toDataStreamResponse',
      type: '(stream: AsyncIterable<EngineResponse>, options?: {init?: ResponseInit, data?: StreamData, callbacks?: AIStreamCallbacksAndOptions}) => Response',
      description:
        'Converts LlamaIndex output streams to a data stream response.',
    },
    {
      name: 'mergeIntoDataStream',
      type: '(stream: AsyncIterable<EngineResponse>, options: { dataStream: DataStreamWriter; callbacks?: StreamCallbacks }) => void',
      description:
        'Merges LlamaIndex output streams into an existing data stream.',
    },
  ]}
/>

## Examples

### Convert LlamaIndex ChatEngine Stream

```tsx filename="app/api/completion/route.ts" highlight="15"
import { OpenAI, SimpleChatEngine } from 'llamaindex';
import { LlamaIndexAdapter } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const llm = new OpenAI({ model: 'gpt-4o' });
  const chatEngine = new SimpleChatEngine({ llm });

  const stream = await chatEngine.chat({
    message: prompt,
    stream: true,
  });

  return LlamaIndexAdapter.toDataStreamResponse(stream);
}
```
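### Convert LlamaIndex QueryEngine Stream

A QueryEngine stream can be converted the same way. The snippet below is a sketch, not a verified recipe: it assumes a `VectorStoreIndex` built from an in-memory `Document` (placeholder text), and that `queryEngine.query({ ..., stream: true })` yields an `AsyncIterable<EngineResponse>` accepted by `toDataStreamResponse`.

```tsx filename="app/api/completion/route.ts"
import { Document, VectorStoreIndex } from 'llamaindex';
import { LlamaIndexAdapter } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Build an index over your documents (placeholder content here).
  const index = await VectorStoreIndex.fromDocuments([
    new Document({ text: 'Your document text goes here.' }),
  ]);
  const queryEngine = index.asQueryEngine();

  // Request a streaming response from the query engine.
  const stream = await queryEngine.query({
    query: prompt,
    stream: true,
  });

  return LlamaIndexAdapter.toDataStreamResponse(stream);
}
```

When you need to interleave LlamaIndex output with other stream parts, `mergeIntoDataStream` writes into an existing `DataStreamWriter` instead of producing a `Response` directly. A minimal sketch, assuming the `createDataStreamResponse` helper from `ai` and a `stream` obtained as above:

```tsx filename="app/api/completion/route.ts"
import { createDataStreamResponse, LlamaIndexAdapter } from 'ai';
import { OpenAI, SimpleChatEngine } from 'llamaindex';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const llm = new OpenAI({ model: 'gpt-4o' });
  const chatEngine = new SimpleChatEngine({ llm });

  const stream = await chatEngine.chat({
    message: prompt,
    stream: true,
  });

  return createDataStreamResponse({
    execute: dataStream => {
      // Merge the LlamaIndex stream into the data stream being written.
      LlamaIndexAdapter.mergeIntoDataStream(stream, { dataStream });
    },
  });
}
```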


## Navigation

- [AIStream](/v4/docs/reference/stream-helpers/ai-stream)
- [StreamingTextResponse](/v4/docs/reference/stream-helpers/streaming-text-response)
- [streamToResponse](/v4/docs/reference/stream-helpers/stream-to-response)
- [OpenAIStream](/v4/docs/reference/stream-helpers/openai-stream)
- [AnthropicStream](/v4/docs/reference/stream-helpers/anthropic-stream)
- [AWSBedrockStream](/v4/docs/reference/stream-helpers/aws-bedrock-stream)
- [AWSBedrockAnthropicStream](/v4/docs/reference/stream-helpers/aws-bedrock-anthropic-stream)
- [AWSBedrockAnthropicMessagesStream](/v4/docs/reference/stream-helpers/aws-bedrock-messages-stream)
- [AWSBedrockCohereStream](/v4/docs/reference/stream-helpers/aws-bedrock-cohere-stream)
- [AWSBedrockLlama2Stream](/v4/docs/reference/stream-helpers/aws-bedrock-llama-2-stream)
- [CohereStream](/v4/docs/reference/stream-helpers/cohere-stream)
- [GoogleGenerativeAIStream](/v4/docs/reference/stream-helpers/google-generative-ai-stream)
- [HuggingFaceStream](/v4/docs/reference/stream-helpers/hugging-face-stream)
- [LangChainAdapter](/v4/docs/reference/stream-helpers/langchain-adapter)
- [LangChainStream](/v4/docs/reference/stream-helpers/langchain-stream)
- [LlamaIndexAdapter](/v4/docs/reference/stream-helpers/llamaindex-adapter)
- [MistralStream](/v4/docs/reference/stream-helpers/mistral-stream)
- [ReplicateStream](/v4/docs/reference/stream-helpers/replicate-stream)
- [InkeepStream](/v4/docs/reference/stream-helpers/inkeep-stream)


[Full Sitemap](/sitemap.md)
