# ModerationProcessor
The `ModerationProcessor` is a hybrid processor that can be used for both input and output processing. It provides content moderation by using an LLM to detect inappropriate content across multiple categories. The processor helps maintain content safety by evaluating messages against configurable moderation categories, with flexible strategies for handling flagged content.
## Usage example
```typescript
import { ModerationProcessor } from "@mastra/core/processors";

const processor = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  threshold: 0.7,
  strategy: "block",
  categories: ["hate", "harassment", "violence"],
});
```
## Constructor parameters
- `options`: Configuration options for the moderation processor (see below).
### Options
- `model`: Model configuration for the LLM used to evaluate content.
- `categories?`: Moderation categories to evaluate messages against; the examples on this page use `"hate"`, `"harassment"`, and `"violence"`.
- `threshold?`: Confidence threshold above which content is considered flagged (e.g. `0.7`).
- `strategy?`: How flagged content is handled; the examples on this page use `"block"` and `"filter"`.
- `instructions?`: Custom instructions for the moderation model.
- `includeScores?`: Whether to include per-category confidence scores in moderation results.
- `chunkWindow?`: Number of preceding stream chunks to include as context when moderating output.
- `providerOptions?`: Provider-specific options passed to the moderation model.
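A rough sketch of how these options compose (the values below are illustrative, recombined from the examples on this page, not recommendations):

```typescript
import { ModerationProcessor } from "@mastra/core/processors";

// Illustrative configuration: category names and numeric values are taken
// from the examples on this page, not an exhaustive or recommended set.
const strictModerator = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  categories: ["hate", "harassment", "violence"],
  threshold: 0.8, // require higher confidence before flagging
  strategy: "filter", // drop flagged content instead of blocking the request
  instructions: "Detect and flag hateful, harassing, or violent content",
  includeScores: true, // keep per-category scores for logging
});
```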
## Returns
- `id`: Identifier of the processor.
- `name?`: Optional processor name.
- `processInput`: Moderates input messages before they are sent to the model.
- `processOutputStream`: Moderates streamed output parts as they are produced.
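Because the returned processor implements both `processInput` and `processOutputStream`, a single instance can be registered for both stages. A minimal sketch, reusing the `Agent` setup shown in the examples below (the agent name here is hypothetical):

```typescript
import { Agent } from "@mastra/core/agent";
import { ModerationProcessor } from "@mastra/core/processors";

// One instance reused for both stages: the agent invokes processInput on
// incoming messages and processOutputStream on streamed output parts.
const moderation = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
});

export const agent = new Agent({
  name: "two-way-moderated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  inputProcessors: [moderation],
  outputProcessors: [moderation],
});
```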
## Extended usage example
### Input processing
```typescript
import { Agent } from "@mastra/core/agent";
import { ModerationProcessor } from "@mastra/core/processors";

export const agent = new Agent({
  name: "moderated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  inputProcessors: [
    new ModerationProcessor({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      categories: ["hate", "harassment", "violence"],
      threshold: 0.7,
      strategy: "block",
      instructions: "Detect and flag inappropriate content in user messages",
      includeScores: true,
    }),
  ],
});
```
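Once registered, moderation runs automatically on each request. A minimal call sketch (the prompt is illustrative, and how a blocked request surfaces to the caller depends on your Mastra version):

```typescript
// With strategy "block", a flagged user message is stopped before it
// reaches the underlying model.
const result = await agent.generate("A user message to check");
console.log(result.text);
```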
### Output processing with batching
When using `ModerationProcessor` as an output processor, it is recommended to combine it with `BatchPartsProcessor` to optimize performance. The `BatchPartsProcessor` batches stream chunks together before passing them to the moderator, reducing the number of LLM calls required for moderation.
```typescript
import { Agent } from "@mastra/core/agent";
import { BatchPartsProcessor, ModerationProcessor } from "@mastra/core/processors";

export const agent = new Agent({
  name: "output-moderated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  outputProcessors: [
    // Batch stream parts first to reduce LLM calls
    new BatchPartsProcessor({
      batchSize: 10,
    }),
    // Then apply moderation on batched content
    new ModerationProcessor({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      strategy: "filter",
      chunkWindow: 1,
    }),
  ],
});
```
## Related