
ModerationProcessor

The ModerationProcessor is a hybrid processor that can be used for both input and output processing, providing content moderation by using an LLM to detect inappropriate content across multiple categories. It helps maintain content safety by evaluating messages against configurable moderation categories, with flexible strategies for handling flagged content.

Usage example

import { ModerationProcessor } from "@mastra/core/processors";

const processor = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  threshold: 0.7,
  strategy: "block",
  categories: ["hate", "harassment", "violence"]
});

Constructor parameters

options: Options
Configuration options for content moderation.

Options

model: MastraModelConfig
Model configuration for the moderation agent.

categories?: string[]
Categories to check for moderation. If not specified, the default OpenAI moderation categories are used.

threshold?: number
Confidence threshold for flagging (0-1). Content is flagged if any category score exceeds this threshold.

strategy?: 'block' | 'warn' | 'filter'
Strategy when content is flagged: 'block' rejects with an error, 'warn' logs a warning but allows the content through, 'filter' removes flagged messages.

instructions?: string
Custom moderation instructions for the agent. If not provided, default instructions are derived from the configured categories.

includeScores?: boolean
Whether to include confidence scores in logs. Useful for tuning thresholds and debugging.

chunkWindow?: number
Number of previous chunks to include as context when moderating stream chunks. For example, a value of 1 includes the immediately preceding chunk.

providerOptions?: ProviderOptions
Provider-specific options passed to the internal moderation agent. Use this to control model behavior such as reasoning effort for thinking models (e.g. `{ openai: { reasoningEffort: 'low' } }`).
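
The options above can be combined freely; a configuration that exercises each of them might look like the following sketch (the category names, threshold, and provider options are illustrative values, not recommendations):

import { ModerationProcessor } from "@mastra/core/processors";

const processor = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  categories: ["hate", "harassment", "violence"], // omit to use the default OpenAI categories
  threshold: 0.7,                                 // flag when any category score exceeds 0.7
  strategy: "warn",                               // log a warning but let flagged content through
  instructions: "Flag content that is hateful, harassing, or violent.",
  includeScores: true,                            // log per-category confidence scores for tuning
  chunkWindow: 1,                                 // include the previous chunk as context when streaming
  providerOptions: { openai: { reasoningEffort: "low" } }
});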

Returns

id: string
Processor identifier, set to 'moderation'.

name?: string
Optional processor display name.

processInput: (args: { messages: MastraDBMessage[]; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise<MastraDBMessage[]>
Processes input messages to moderate content before it is sent to the LLM.

processOutputStream: (args: { part: ChunkType; streamParts: ChunkType[]; state: Record<string, any>; abort: (reason?: string) => never; tracingContext?: TracingContext }) => Promise<ChunkType | null | undefined>
Processes streaming output parts to moderate content during streaming.
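
Because processInput is a plain async function, it can also be called directly, which is useful when tuning a threshold against sample inputs. A minimal sketch, assuming messages already holds an array of MastraDBMessage values from your application, and using a trivial abort callback that simply throws:

import { ModerationProcessor } from "@mastra/core/processors";

const processor = new ModerationProcessor({
  model: "openrouter/openai/gpt-oss-safeguard-20b",
  threshold: 0.7,
  strategy: "block"
});

// 'block' calls abort() on flagged content; this minimal abort just throws
const moderated = await processor.processInput({
  messages, // assumed: MastraDBMessage[] obtained elsewhere in your application
  abort: (reason?: string): never => {
    throw new Error(reason ?? "Content flagged by moderation");
  }
});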

Extended usage example

Input processing

src/mastra/agents/moderated-agent.ts
import { Agent } from "@mastra/core/agent";
import { ModerationProcessor } from "@mastra/core/processors";

export const agent = new Agent({
  name: "moderated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  inputProcessors: [
    new ModerationProcessor({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      categories: ["hate", "harassment", "violence"],
      threshold: 0.7,
      strategy: "block",
      instructions: "Detect and flag inappropriate content in user messages",
      includeScores: true
    })
  ]
});

Output processing with batching

When using ModerationProcessor as an output processor, it's recommended to combine it with BatchPartsProcessor to optimize performance. The BatchPartsProcessor batches stream chunks together before passing them to the moderator, reducing the number of LLM calls required for moderation.

src/mastra/agents/output-moderated-agent.ts
import { Agent } from "@mastra/core/agent";
import { BatchPartsProcessor, ModerationProcessor } from "@mastra/core/processors";

export const agent = new Agent({
  name: "output-moderated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5.1",
  outputProcessors: [
    // Batch stream parts first to reduce LLM calls
    new BatchPartsProcessor({
      batchSize: 10,
    }),
    // Then apply moderation on batched content
    new ModerationProcessor({
      model: "openrouter/openai/gpt-oss-safeguard-20b",
      strategy: "filter",
      chunkWindow: 1,
    }),
  ]
});
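
For agents configured with strategy: "block" (as in the input-processing example above), flagged content aborts the run before it reaches the LLM. How the abort surfaces depends on how you call the agent; the sketch below assumes Mastra's tripwire convention, where an aborted run is reported on the result rather than thrown, so treat the exact property names as assumptions to verify against your Mastra version:

const result = await agent.generate("Some user input");

// Assumption: a run aborted by a processor is reported via a tripwire flag
if (result.tripwire) {
  console.warn(`Blocked by moderation: ${result.tripwireReason}`);
} else {
  console.log(result.text);
}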
