Processor Interface
The Processor interface defines the contract for all processors in Mastra. Processors can implement one or more methods to handle different stages of the agent execution pipeline.
When processor methods run
The five processor methods run at different points in the agent execution lifecycle:
┌─────────────────────────────────────────────────────────────────┐
│ Agent Execution Flow │
├─────────────────────────────────────────────────────────────────┤
│ │
│ User Input │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ processInput │ ← Runs ONCE at start │
│ └────────┬────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Agentic Loop │ │
│ │ ┌─────────────────────┐ │ │
│ │ │ processInputStep │ ← Runs at EACH step │ │
│ │ └──────────┬──────────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ LLM Execution │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌──────────────────────┐ │ │
│ │ │ processOutputStream │ ← Runs on EACH stream chunk │ │
│ │ └──────────┬───────────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌──────────────────────┐ │ │
│ │ │ processOutputStep │ ← Runs after EACH LLM step │ │
│ │ └──────────┬───────────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ Tool Execution (if needed) │ │
│ │ │ │ │
│ │ └──────── Loop back if tools called ────────│ │
│ └─────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────┐ │
│ │ processOutputResult │ ← Runs ONCE after completion │
│ └─────────────────────┘ │
│ │ │
│ ▼ │
│ Final Response │
│ │
└─────────────────────────────────────────────────────────────────┘
| Method | When it runs | Use case |
|---|---|---|
| processInput | Once at the start, before the agentic loop | Validate/transform the initial user input, add context |
| processInputStep | At each step of the agentic loop, before every LLM call | Transform messages between steps, process tool results |
| processOutputStream | On each stream chunk of the LLM response | Filter/modify streaming content, detect patterns in real time |
| processOutputStep | After each LLM response, before tool execution | Validate output quality, implement guardrails with retries |
| processOutputResult | Once after generation completes | Post-process the final response, log results |
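As a quick orientation, here is a minimal pass-through sketch (illustrative only, not a complete implementation) that implements all five methods and logs when each hook fires:
import type { Processor } from "@mastra/core";

// Illustrative skeleton: every hook passes data through unchanged and logs when it runs.
export class LoggingProcessor implements Processor {
  id = "logging-processor";

  async processInput({ messages }) {
    console.log("processInput: once, before the agentic loop");
    return messages;
  }

  async processInputStep({ messages, stepNumber }) {
    console.log(`processInputStep: step ${stepNumber}, before the LLM call`);
    return { messages };
  }

  async processOutputStream({ part }) {
    // Called for every stream chunk; returning the chunk emits it unchanged
    return part;
  }

  async processOutputStep({ messages, stepNumber }) {
    console.log(`processOutputStep: after LLM step ${stepNumber}`);
    return messages;
  }

  async processOutputResult({ messages }) {
    console.log("processOutputResult: once, after generation completes");
    return messages;
  }
}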
Interface definition
interface Processor<TId extends string = string> {
  readonly id: TId;
  readonly name?: string;
  processInput?(args: ProcessInputArgs): Promise<ProcessInputResult> | ProcessInputResult;
  processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult;
  processOutputStream?(args: ProcessOutputStreamArgs): Promise<ChunkType | null | undefined>;
  processOutputStep?(args: ProcessOutputStepArgs): ProcessorMessageResult;
  processOutputResult?(args: ProcessOutputResultArgs): ProcessorMessageResult;
}
Properties
id: Unique identifier for the processor.
name?: Optional name for the processor.
Methods
processInput
Processes input messages before they are sent to the LLM. Runs once at the start of agent execution.
processInput?(args: ProcessInputArgs): Promise<ProcessInputResult> | ProcessInputResult;
ProcessInputArgs
messages:
systemMessages:
messageList:
abort:
retryCount?:
tracingContext?:
requestContext?:
ProcessInputResult
The method can return one of three types:
MastraDBMessage[]:
MessageList:
{ messages, systemMessages }:
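As an illustration of the third return shape, here is a hedged sketch; the system message shape ({ role, content }) and the appended instruction text are assumptions for the example:
import type { Processor } from "@mastra/core";

// Sketch: returns the { messages, systemMessages } shape from processInput.
export class ContextInjector implements Processor {
  id = "context-injector";

  async processInput({ messages, systemMessages }) {
    return {
      messages,
      systemMessages: [
        ...systemMessages,
        // Illustrative system message shape and content
        { role: "system" as const, content: "Answer concisely and cite sources when possible." },
      ],
    };
  }
}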
processInputStep
Processes input messages at each step of the agentic loop, before they are sent to the LLM. Unlike processInput, which runs once at the start, this runs at every step, including tool call continuations.
processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult;
Execution order in the agentic loop
1. processInput (runs once, at the start only)
2. processInputStep from inputProcessors (at each step, before the LLM call)
3. prepareStep callback (runs as part of the processInputStep flow, after the input processors)
4. LLM execution
5. Tool execution (if needed)
6. If tools were called, repeat from step 2
ProcessInputStepArgs
messages:
messageList:
stepNumber:
steps:
systemMessages:
model:
toolChoice?:
activeTools?:
tools?:
providerOptions?:
modelSettings?:
structuredOutput?:
abort:
tracingContext?:
requestContext?:
ProcessInputStepResult
The method can return any combination of these properties:
model?:
toolChoice?:
activeTools?:
tools?:
messages?:
messageList?:
systemMessages?:
providerOptions?:
modelSettings?:
structuredOutput?:
Processor chaining
When multiple processors implement processInputStep, they run in order, and changes chain through:
Processor 1: receives { model: 'gpt-4o' } → returns { model: 'gpt-4o-mini' }
Processor 2: receives { model: 'gpt-4o-mini' } → returns { toolChoice: 'none' }
Final: model = 'gpt-4o-mini', toolChoice = 'none'
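A minimal sketch of that chaining as code (the processor names and model IDs are illustrative; when listed in an agent's inputProcessors array, they run in this order at every step):
import type { Processor } from "@mastra/core";

// Runs first: swaps the model for a cheaper one.
export class ModelSwitcher implements Processor {
  id = "model-switcher";
  async processInputStep() {
    return { model: "openai/gpt-4o-mini" };
  }
}

// Runs second: sees the model chosen above and additionally disables tools.
export class ToolDisabler implements Processor {
  id = "tool-disabler";
  async processInputStep() {
    return { toolChoice: "none" };
  }
}

// With inputProcessors: [new ModelSwitcher(), new ToolDisabler()],
// the final step config is model = 'gpt-4o-mini', toolChoice = 'none'.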
System message isolation
System messages are reset to their original values at the start of each step. Modifications made in processInputStep only affect the current step, not subsequent steps.
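For example (a hedged sketch; the system message shape and instruction text are assumptions), a processor can append a step-specific system instruction knowing it will not leak into later steps:
import type { Processor } from "@mastra/core";

// Sketch: adds a system instruction only on later steps.
// Because system messages reset at each step, this never accumulates.
export class StepInstructionProcessor implements Processor {
  id = "step-instruction";

  async processInputStep({ stepNumber, systemMessages }) {
    if (stepNumber >= 3) {
      return {
        systemMessages: [
          ...systemMessages,
          { role: "system" as const, content: "Wrap up: summarize your findings and answer now." },
        ],
      };
    }
    return {};
  }
}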
Use cases
- Dynamic model switching based on step number or context
- Disabling tools after a certain number of steps
- Dynamically adding or replacing tools based on conversation context
- Converting message part types between providers (for example, converting reasoning to Anthropic's thinking)
- Modifying messages based on step number or accumulated context
- Adding step-specific system instructions
- Adjusting provider options per step (for example, cache control)
- Modifying the structured output schema based on step context
processOutputStream
Processes streaming output chunks with built-in state management. Allows processors to accumulate chunks and make decisions based on larger context.
processOutputStream?(args: ProcessOutputStreamArgs): Promise<ChunkType | null | undefined>;
ProcessOutputStreamArgs
part:
streamParts:
state:
abort:
messageList?:
tracingContext?:
requestContext?:
Return value
- Return a ChunkType to emit it (possibly modified)
- Return null or undefined to skip emitting the chunk
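None of the later examples returns null, so here is a short sketch of the skip path (illustrative only; it assumes text chunks carry their text in a textDelta field, as in the examples below):
import type { Processor, ChunkType } from "@mastra/core";

// Sketch: drops empty text chunks and passes everything else through.
export class EmptyChunkFilter implements Processor {
  id = "empty-chunk-filter";

  async processOutputStream({ part }): Promise<ChunkType | null> {
    if (part.type === "text-delta" && part.textDelta.trim() === "") {
      // Returning null skips emitting this chunk downstream
      return null;
    }
    return part;
  }
}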
processOutputResult
Processes the complete output result after streaming or generation is finished.
processOutputResult?(args: ProcessOutputResultArgs): ProcessorMessageResult;
ProcessOutputResultArgs
messages:
messageList:
abort:
tracingContext?:
requestContext?:
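The usage examples below do not cover this hook, so here is a minimal sketch (the message-part handling mirrors the other examples; the logging is illustrative):
import type { Processor, MastraDBMessage } from "@mastra/core";

// Sketch: logs the final assistant text once generation completes,
// then returns the messages unchanged.
export class ResponseLogger implements Processor {
  id = "response-logger";

  async processOutputResult({ messages }): Promise<MastraDBMessage[]> {
    for (const msg of messages) {
      if (msg.role === "assistant") {
        const text = msg.content.parts
          ?.filter((p) => p.type === "text")
          .map((p) => p.text)
          .join(" ");
        console.log("Final response:", text);
      }
    }
    return messages;
  }
}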
processOutputStep
Processes output after each LLM response in the agentic loop, before tool execution. Unlike processOutputResult, which runs once at the end, this runs at every step. This is the ideal method for implementing guardrails that can trigger retries.
processOutputStep?(args: ProcessOutputStepArgs): ProcessorMessageResult;
ProcessOutputStepArgs
messages:
messageList:
stepNumber:
finishReason?:
toolCalls?:
text?:
systemMessages?:
abort:
retryCount?:
tracingContext?:
requestContext?:
Use cases
- Implementing quality guardrails that can request a retry
- Validating LLM output before tool execution
- Adding per-step logging or metrics
- Implementing output moderation with retry support
Example: Quality guardrail with retry
import type { Processor } from "@mastra/core";

export class QualityGuardrail implements Processor {
  id = "quality-guardrail";

  async processOutputStep({ text, abort, retryCount = 0 }) {
    // evaluateResponseQuality is a user-provided scoring function (not shown here)
    const score = await evaluateResponseQuality(text);
    if (score < 0.7) {
      if (retryCount < 3) {
        // Request retry with feedback for the LLM
        abort("Response quality too low. Please provide more detail.", {
          retry: true,
          metadata: { qualityScore: score },
        });
      } else {
        // Max retries reached, block the response
        abort("Response quality too low after multiple attempts.");
      }
    }
    return [];
  }
}
Processor types
Mastra provides type aliases to ensure processors implement the required methods:
// Must implement processInput OR processInputStep (or both).
// Conceptual shape — see @mastra/core for the exact definitions.
type InputProcessor = Processor &
  (
    | Required<Pick<Processor, "processInput">>
    | Required<Pick<Processor, "processInputStep">>
  );

// Must implement processOutputStream, processOutputStep, OR processOutputResult (or any combination)
type OutputProcessor = Processor &
  (
    | Required<Pick<Processor, "processOutputStream">>
    | Required<Pick<Processor, "processOutputStep">>
    | Required<Pick<Processor, "processOutputResult">>
  );
Usage examples
Basic input processor
import type { Processor, MastraDBMessage } from "@mastra/core";

export class LowercaseProcessor implements Processor {
  id = "lowercase";

  async processInput({ messages }): Promise<MastraDBMessage[]> {
    return messages.map((msg) => ({
      ...msg,
      content: {
        ...msg.content,
        parts: msg.content.parts?.map((part) =>
          part.type === "text"
            ? { ...part, text: part.text.toLowerCase() }
            : part
        ),
      },
    }));
  }
}
Per-step processor with processInputStep
import type { Processor, ProcessInputStepArgs, ProcessInputStepResult } from "@mastra/core";

export class DynamicModelProcessor implements Processor {
  id = "dynamic-model";

  async processInputStep({
    stepNumber,
    steps,
    toolChoice,
  }: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
    // Use a fast model for initial response
    if (stepNumber === 0) {
      return { model: "openai/gpt-4o-mini" };
    }
    // Switch to powerful model after tool calls
    if (steps.length > 0 && steps[steps.length - 1].toolCalls?.length) {
      return { model: "openai/gpt-4o" };
    }
    // Disable tools after 5 steps to force completion
    if (stepNumber > 5) {
      return { toolChoice: "none" };
    }
    return {};
  }
}
Message transformer with processInputStep
import type { Processor, MastraDBMessage } from "@mastra/core";

export class ReasoningTransformer implements Processor {
  id = "reasoning-transformer";

  async processInputStep({ messages, messageList }) {
    // Transform reasoning parts to thinking parts at each step
    // This is useful when switching between model providers
    for (const msg of messages) {
      if (msg.role === "assistant" && msg.content.parts) {
        for (const part of msg.content.parts) {
          if (part.type === "reasoning") {
            (part as any).type = "thinking";
          }
        }
      }
    }
    return messageList;
  }
}
Hybrid processor (input and output)
import type { Processor, MastraDBMessage, ChunkType } from "@mastra/core";

export class ContentFilter implements Processor {
  id = "content-filter";
  private blockedWords: string[];

  constructor(blockedWords: string[]) {
    this.blockedWords = blockedWords;
  }

  async processInput({ messages, abort }): Promise<MastraDBMessage[]> {
    for (const msg of messages) {
      const text = msg.content.parts
        ?.filter((p) => p.type === "text")
        .map((p) => p.text)
        .join(" ");
      if (this.blockedWords.some((word) => text?.includes(word))) {
        abort("Blocked content detected in input");
      }
    }
    return messages;
  }

  async processOutputStream({ part, abort }): Promise<ChunkType | null> {
    if (part.type === "text-delta") {
      if (this.blockedWords.some((word) => part.textDelta.includes(word))) {
        abort("Blocked content detected in output");
      }
    }
    return part;
  }
}
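A hybrid processor like this can be attached on both sides of the agent. The wiring below is a hedged sketch: it assumes the agent's inputProcessors option referenced earlier, an analogous outputProcessors option, and an illustrative import path for the class defined above.
import { Agent } from "@mastra/core/agent";
// ContentFilter is the class defined above; the import path is illustrative.
import { ContentFilter } from "./content-filter";

// The same instance handles both input and output, since it implements hooks for each.
const filter = new ContentFilter(["secret", "password"]);

export const agent = new Agent({
  name: "filtered-agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-4o",
  inputProcessors: [filter],
  outputProcessors: [filter],
});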
Stream accumulator with state
import type { Processor, ChunkType } from "@mastra/core";

export class WordCounter implements Processor {
  id = "word-counter";

  async processOutputStream({ part, state }): Promise<ChunkType> {
    // Initialize state on first chunk
    if (!state.wordCount) {
      state.wordCount = 0;
    }
    // Count words in text chunks
    if (part.type === "text-delta") {
      const words = part.textDelta.split(/\s+/).filter(Boolean);
      state.wordCount += words.length;
    }
    // Log word count on finish
    if (part.type === "finish") {
      console.log(`Total words: ${state.wordCount}`);
    }
    return part;
  }
}
Related