
AI SDK

If you're already using the Vercel AI SDK directly and want to add Mastra capabilities like processors or memory without switching to the full Mastra agent API, withMastra() lets you wrap any AI SDK model with these features. This is useful when you want to keep your existing AI SDK code but add input/output processing, conversation persistence, or content filtering.

tip

If you want to use Mastra together with AI SDK UI (e.g. useChat()), visit the AI SDK UI guide.

Installation

Install @mastra/ai-sdk to begin using the withMastra() function.

npm install @mastra/ai-sdk@latest

Examples

With Processors

Processors let you transform messages before they're sent to the model (processInput) and after responses are received (processOutputResult). This example creates a logging processor that logs message counts at each stage, then wraps an OpenAI model with it.

src/example.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import type { Processor } from '@mastra/core/processors';

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages');
    return messages;
  },
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages');
    return messages;
  },
};

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
  outputProcessors: [loggingProcessor],
});

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
});
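The content-filtering use case mentioned in the introduction follows the same pattern. Here is a minimal sketch of a redaction processor; the `redactEmails` helper, the `redactionProcessor` name, and the flat `{ role, content }` message shape are our own illustrative assumptions, not part of @mastra/ai-sdk:

```typescript
// Illustrative redaction processor, following the same Processor shape as the
// logging example. The flat { role, content } message shape is a simplifying
// assumption; adapt the mapping to your actual message parts.
type SimpleMessage = { role: string; content: string };

const redactEmails = (text: string): string =>
  text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[redacted email]');

const redactionProcessor = {
  id: 'redact-pii',
  async processInput({ messages }: { messages: SimpleMessage[] }) {
    // Strip email addresses before the messages reach the model.
    return messages.map((m) => ({ ...m, content: redactEmails(m.content) }));
  },
};
```

A processor like this would then be passed in `inputProcessors` exactly like the logging processor above.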

With Memory

Memory automatically loads previous messages from storage before the LLM call and saves new messages after. This example configures a libSQL storage backend to persist conversation history, loading the last 10 messages for context.

src/memory-example.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({
  id: 'my-app',
  url: 'file:./data.db',
});
await storage.init();

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage,
    threadId: 'user-thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'What did we talk about earlier?',
});
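The `lastMessages: 10` option caps how much history is loaded on each call. Conceptually it is a recency window over the stored thread, something like the slice below (our own illustration; the actual loading is done by the storage adapter inside @mastra/ai-sdk):

```typescript
// Conceptual sketch of the lastMessages window: keep only the n most recent
// messages from a thread's stored history. Illustrative, not library code.
const lastN = <T>(history: T[], n: number): T[] => history.slice(-n);

// With lastMessages: 2, only the two newest stored messages would be
// prepended to the prompt as context on the next call.
```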

With Processors & Memory

You can combine processors and memory together. Input processors run after memory loads historical messages, and output processors run before memory saves the response.

src/combined-example.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({ id: 'my-app', url: 'file:./data.db' });
await storage.init();

// myGuardProcessor and myLoggingProcessor are user-defined Processor
// instances, like the logging processor shown earlier.
const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [myGuardProcessor],
  outputProcessors: [myLoggingProcessor],
  memory: {
    storage,
    threadId: 'thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'Hello!',
});
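The ordering described above can be sketched as a simple pipeline simulation. This is purely our own illustration of the documented order (memory load, input processors, model call, output processors, memory save), not the library's internals:

```typescript
// Illustrative simulation of the combined pipeline order described above.
// Each stage only records its name; the real work happens inside @mastra/ai-sdk.
type Stage = (log: string[]) => void;

const pipeline: Stage[] = [
  (log) => log.push('memory:load'),      // memory loads prior thread messages
  (log) => log.push('processor:input'),  // input processors see history + new input
  (log) => log.push('model:call'),       // the wrapped AI SDK model runs
  (log) => log.push('processor:output'), // output processors transform the response
  (log) => log.push('memory:save'),      // memory persists the processed response
];

const runOrder = (): string[] => {
  const log: string[] = [];
  for (const stage of pipeline) stage(log);
  return log;
};
```

This ordering matters in practice: a guard processor sees the loaded history, and anything an output processor redacts is also what gets persisted.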

Related

  • withMastra() - API reference for withMastra()
  • Processors - Learn about input and output processors
  • Memory - Overview of the Mastra memory system
  • AI SDK UI - Use AI SDK UI hooks with Mastra agents, workflows, and networks