# Tracing
Tracing provides specialized monitoring and debugging for the AI-related operations in your application. When enabled, Mastra automatically creates traces for agent runs, LLM generations, tool calls, and workflow steps with AI-specific context and metadata.

Unlike traditional application tracing, Tracing focuses specifically on understanding your AI pipeline — capturing token usage, model parameters, tool execution details, and conversation flows. This makes it easier to debug issues, optimize performance, and understand how your AI systems behave in production.
## How It Works
Traces are created by:

- Configuring exporters → send trace data to observability platforms
- Setting sampling strategies → control which traces are collected
- Running agents and workflows → Mastra automatically instruments them with tracing
## Configuration
### Basic Config
```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
        ],
      },
    },
  }),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
});
```
This configuration includes:

- Service name: `"mastra"` identifies your service in traces
- Sampling: defaults to `"always"` (100% of traces)
- Exporters: `DefaultExporter` persists traces to your configured storage for Mastra Studio; `CloudExporter` sends traces to Mastra Cloud (requires `MASTRA_CLOUD_ACCESS_TOKEN`)
- Span output processors: `SensitiveDataFilter` redacts sensitive fields
## Exporters
Exporters determine where your trace data is sent and how it's stored. They integrate with your existing observability stack, support data residency requirements, and can be optimized for cost and performance. You can use multiple exporters simultaneously to send the same trace data to different destinations — for example, storing detailed traces locally for debugging while sending sampled data to a cloud provider for production monitoring.
### Internal Exporters
Mastra provides two built-in exporters: `DefaultExporter`, which persists traces to your configured storage for Mastra Studio, and `CloudExporter`, which sends traces to Mastra Cloud.
### External Exporters
In addition to the internal exporters, Mastra supports integration with popular observability platforms. These exporters allow you to leverage your existing monitoring infrastructure and take advantage of platform-specific features like alerting, dashboards, and correlation with other application metrics.
- Arize - Export traces to Arize Phoenix or Arize AX using OpenInference semantic conventions
- Braintrust - Export traces to Braintrust's evaluation and observability platform
- Datadog - Send traces to Datadog APM via OTLP for full-stack observability and AI tracing
- Laminar - Send traces to Laminar via OTLP/HTTP (protobuf), with support for Laminar-native span attributes and scorers
- Langfuse - Send traces to the Langfuse open-source LLM engineering platform
- LangSmith - Push traces to LangSmith's observability and evaluation toolkit
- PostHog - Send traces to PostHog for AI analytics and product insights
- Sentry - Send traces to Sentry for AI tracing and monitoring using OpenTelemetry semantic conventions
- OpenTelemetry - Send traces to any OpenTelemetry-compatible observability system
  - Supports: Dash0, MLflow, New Relic, SigNoz, Traceloop, Zipkin, and more!
## Bridges
Bridges provide bidirectional integration with external tracing systems. Unlike exporters that send trace data to external platforms, bridges create native spans in external systems and inherit context from them. This enables Mastra operations to participate in existing distributed traces.
- OpenTelemetry Bridge - Integrate with existing OpenTelemetry infrastructure
### Bridges vs Exporters
| Feature | Bridge | Exporter |
|---|---|---|
| Creates native spans in the external system | Yes | No |
| Inherits context from the external system | Yes | No |
| Sends data to the backend | Via the external SDK | Directly |
| Use case | Existing distributed traces | Standalone Mastra tracing |
You can use both together — a bridge for context propagation and exporters to send traces to additional destinations.
## Sampling Strategies
Sampling allows you to control which traces are collected, helping you balance between observability needs and resource costs. In production environments with high traffic, collecting every trace can be expensive and unnecessary. Sampling strategies let you capture a representative subset of traces while ensuring you don't miss critical information about errors or important operations.
Mastra supports four sampling strategies:
### Always Sample
Collects 100% of traces. Best for development, debugging, or low-traffic scenarios where you need complete visibility.
```typescript
sampling: {
  type: "always",
}
```
### Never Sample
Disables tracing entirely. Useful for specific environments where tracing adds no value or when you need to temporarily disable tracing without removing configuration.
```typescript
sampling: {
  type: "never",
}
```
### Ratio-Based Sampling
Randomly samples a percentage of traces. Ideal for production environments where you want statistical insights without the cost of full tracing. The probability value ranges from 0 (no traces) to 1 (all traces).
```typescript
sampling: {
  type: "ratio",
  probability: 0.1, // Sample 10% of traces
}
```
### Custom Sampling
Implements your own sampling logic based on request context, metadata, or business rules. Perfect for complex scenarios like sampling based on user tier, request type, or error conditions.
```typescript
sampling: {
  type: "custom",
  sampler: (options) => {
    // Sample premium users at a higher rate
    if (options?.metadata?.userTier === "premium") {
      return Math.random() < 0.5; // 50% sampling
    }
    // Default 1% sampling for others
    return Math.random() < 0.01;
  },
}
```
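Taken together, the four strategies amount to one yes/no decision per trace. The sketch below is illustrative rather than Mastra's internal implementation; the `SamplingConfig` shape mirrors the snippets above, and the injectable `rng` parameter is an assumption added only to make the sketch deterministic:

```typescript
type SamplingConfig =
  | { type: "always" }
  | { type: "never" }
  | { type: "ratio"; probability: number }
  | { type: "custom"; sampler: (options?: { metadata?: Record<string, unknown> }) => boolean };

// Decide whether a trace should be collected under a given sampling config.
function shouldSample(
  config: SamplingConfig,
  options?: { metadata?: Record<string, unknown> },
  rng: () => number = Math.random,
): boolean {
  switch (config.type) {
    case "always":
      return true; // 100% of traces
    case "never":
      return false; // tracing disabled
    case "ratio":
      return rng() < config.probability; // e.g. 0.1 keeps ~10% of traces
    case "custom":
      return config.sampler(options); // user-defined logic
  }
}
```

For example, `shouldSample({ type: "ratio", probability: 0.1 }, undefined, () => 0.05)` keeps the trace, while an `rng` draw of `0.5` would drop it.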
### Complete Example
```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      "10_percent": {
        serviceName: "my-service",
        // Sample 10% of traces
        sampling: {
          type: "ratio",
          probability: 0.1,
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```
## Multi-Config Setup
Complex applications often require different tracing configurations for different scenarios. You might want detailed traces with full sampling during development, sampled traces sent to external providers in production, and specialized configurations for specific features or customer segments. The `configSelector` function enables dynamic configuration selection at runtime, allowing you to route traces based on request context, environment variables, feature flags, or any custom logic.
This approach is particularly valuable when:

- A/B testing different observability requirements
- Providing enhanced debugging for specific customers or support cases
- Gradually rolling out a new tracing provider without disrupting existing monitoring
- Optimizing costs by using different sampling rates for different request types
- Maintaining separate trace streams for compliance or data residency requirements
Note that only a single config can be used for a given execution, but a single config can send data to multiple exporters simultaneously.
### Dynamic Configuration Selection
Use `configSelector` to choose the appropriate tracing configuration based on request context:
```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
// langfuseExporter, braintrustExporter, and premiumCustomers are assumed
// to be defined elsewhere in your application.

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "langfuse-service",
        exporters: [langfuseExporter],
      },
      braintrust: {
        serviceName: "braintrust-service",
        exporters: [braintrustExporter],
      },
      debug: {
        serviceName: "debug-service",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
    },
    configSelector: (context, availableTracers) => {
      // Use the debug config for support requests
      if (context.requestContext?.get("supportMode")) {
        return "debug";
      }
      // Route specific customers to different providers
      const customerId = context.requestContext?.get("customerId");
      if (customerId && premiumCustomers.includes(customerId)) {
        return "braintrust";
      }
      // Route specific requests to Langfuse
      if (context.requestContext?.get("useExternalTracing")) {
        return "langfuse";
      }
      throw new Error("no config found");
    },
  }),
});
```
### Environment-Based Configuration
A common pattern is to select configurations based on deployment environment:
```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
// langfuseExporter and cloudExporter are assumed to be defined elsewhere.

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        serviceName: "my-service-dev",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
      staging: {
        serviceName: "my-service-staging",
        sampling: { type: "ratio", probability: 0.5 },
        exporters: [langfuseExporter],
      },
      production: {
        serviceName: "my-service-prod",
        sampling: { type: "ratio", probability: 0.01 },
        exporters: [cloudExporter, langfuseExporter],
      },
    },
    configSelector: (context, availableTracers) => {
      const env = process.env.NODE_ENV || "development";
      return env;
    },
  }),
});
```
## Common Configuration Patterns & Troubleshooting
### Maintaining Studio and Cloud Access
When adding external exporters, include `DefaultExporter` and `CloudExporter` to maintain access to Studio and Mastra Cloud:
```typescript
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      production: {
        serviceName: "my-service",
        exporters: [
          new ArizeExporter({
            endpoint: process.env.PHOENIX_ENDPOINT,
            apiKey: process.env.PHOENIX_API_KEY,
          }),
          new DefaultExporter(), // Keep Studio access
          new CloudExporter(), // Keep Cloud access
        ],
        spanOutputProcessors: [new SensitiveDataFilter()],
      },
    },
  }),
});
```
This configuration sends traces to all three destinations simultaneously:

- `ArizeExporter` for external observability with Arize Phoenix/AX
- `DefaultExporter` for Studio
- `CloudExporter` for the Mastra Cloud dashboard
Remember: A single trace can be sent to multiple exporters. You don't need separate configs for each exporter unless you want different sampling rates or processors.
## Adding Custom Metadata
Custom metadata allows you to attach additional context to your traces, making it easier to debug issues and understand system behavior in production. Metadata can include business logic details, performance metrics, user context, or any information that helps you understand what happened during execution.
You can add metadata to any span using the tracing context:
```typescript
execute: async (inputData, context) => {
  const startTime = Date.now();
  const response = await fetch(inputData.endpoint);

  // Add custom metadata to the current span
  context?.tracingContext.currentSpan?.update({
    metadata: {
      apiStatusCode: response.status,
      endpoint: inputData.endpoint,
      responseTimeMs: Date.now() - startTime,
      userTier: inputData.userTier,
      region: process.env.AWS_REGION,
    },
  });

  return await response.json();
};
```
Metadata set here will be shown in all configured exporters.
## Automatic Metadata from RequestContext
Instead of manually adding metadata to each span, you can configure Mastra to automatically extract values from RequestContext and attach them as metadata to all spans in a trace. This is useful for consistently tracking user identifiers, environment information, feature flags, or any request-scoped data across your entire trace.
### Configuration-Level Extraction
Define which RequestContext keys to extract in your tracing configuration. These keys will be automatically included as metadata for all spans created with this configuration:
```typescript
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        requestContextKeys: ["userId", "environment", "tenantId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```
Now when you execute agents or workflows with a RequestContext, these values are automatically extracted:
```typescript
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("tenantId", "tenant-456");

// All spans in this trace automatically get userId, environment, and tenantId metadata
const result = await agent.generate("Hello", {
  requestContext,
});
```
### Per-Request Additions
You can add trace-specific keys using `tracingOptions.requestContextKeys`. These are merged with the configuration-level keys:
```typescript
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("experimentId", "exp-789");

const result = await agent.generate("Hello", {
  requestContext,
  tracingOptions: {
    requestContextKeys: ["experimentId"], // Adds to configured keys
  },
});
// All spans now have: userId, environment, AND experimentId
```
### Nested Value Extraction
Use dot notation to extract nested values from RequestContext:
```typescript
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        requestContextKeys: ["user.id", "session.data.experimentId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});

const requestContext = new RequestContext();
requestContext.set("user", { id: "user-456", name: "John Doe" });
requestContext.set("session", { data: { experimentId: "exp-999" } });

// Metadata will include: { user: { id: 'user-456' }, session: { data: { experimentId: 'exp-999' } } }
```
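The dot-notation behavior can be sketched as a small pure function. This is an illustration of the nested shape shown in the code comment above, not Mastra's actual extractor:

```typescript
type Json = Record<string, unknown>;

// Extract the listed keys (dot notation allowed) from a context object,
// rebuilding the nested shape in the result. Missing keys are skipped.
function extractKeys(context: Json, keys: string[]): Json {
  const result: Json = {};
  for (const key of keys) {
    const path = key.split(".");
    // Walk the source object to find the value.
    let value: unknown = context;
    for (const part of path) {
      if (value === null || typeof value !== "object") {
        value = undefined;
        break;
      }
      value = (value as Json)[part];
    }
    if (value === undefined) continue;
    // Rebuild the nested shape in the result.
    let cursor = result;
    for (const part of path.slice(0, -1)) {
      if (typeof cursor[part] !== "object" || cursor[part] === null) {
        cursor[part] = {};
      }
      cursor = cursor[part] as Json;
    }
    cursor[path[path.length - 1]] = value;
  }
  return result;
}
```

With the context above, `extractKeys(ctx, ["user.id"])` yields `{ user: { id: "user-456" } }`, dropping the sibling `name` field.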
### How It Works
- TraceState computation: At trace start (root span creation), Mastra computes which keys to extract by merging the configuration-level and per-request keys
- Automatic extraction: Root spans (agent runs, workflow executions) automatically extract metadata from the RequestContext
- Child span extraction: Child spans can also extract metadata if you pass `requestContext` when creating them
- Metadata precedence: Explicit metadata passed to span options always takes precedence over extracted metadata
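The precedence rule in the last point amounts to a spread-style merge where explicit values win. This is an illustrative sketch (the key names are hypothetical), not Mastra's internal code:

```typescript
// Explicit span metadata wins over values extracted from RequestContext.
function mergeMetadata(
  extracted: Record<string, unknown>,
  explicit: Record<string, unknown>,
): Record<string, unknown> {
  return { ...extracted, ...explicit }; // the later spread takes precedence
}
```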
## Adding Tags to Traces
Tags are string labels that help you categorize and filter traces. Unlike metadata (which contains structured key-value data), tags are simple strings designed for quick filtering and organization.
Use `tracingOptions.tags` to add tags when executing agents or workflows:
```typescript
// With agents
const result = await agent.generate("Hello", {
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});

// With workflows
const run = await mastra.getWorkflow("myWorkflow").createRun();
const result = await run.start({
  inputData: { data: "process this" },
  tracingOptions: {
    tags: ["batch-processing", "priority-high"],
  },
});
```
### How Tags Work
- Root span only: Tags are applied only to the root span of a trace (the agent run or workflow run span)
- Broad support: Most exporters support filtering and searching traces by tags:
  - Braintrust - native `tags` field
  - Langfuse - native `tags` field on traces
  - ArizeExporter - `tag.tags` OpenInference attribute
  - OtelExporter - `mastra.tags` span attribute
  - OtelBridge - `mastra.tags` span attribute
- Combinable with metadata: You can use both `tags` and `metadata` in the same `tracingOptions`
```typescript
const result = await agent.generate([{ role: "user", content: "Analyze this" }], {
  tracingOptions: {
    tags: ["production", "analytics"],
    metadata: { userId: "user-123", experimentId: "exp-456" },
  },
});
```
### Common Tag Patterns
- Environments: `"production"`, `"staging"`, `"development"`
- Feature flags: `"feature-x-enabled"`, `"beta-user"`
- Request types: `"user-request"`, `"batch-job"`, `"scheduled-task"`
- Priority: `"priority-high"`, `"priority-low"`
- Experiments: `"experiment-v1"`, `"control-group"`, `"treatment-a"`
## Hiding Sensitive Input/Output
When processing sensitive data, you may want to prevent input and output values from being logged to your observability platforms. Use `hideInput` and `hideOutput` in `tracingOptions` to exclude this data from all spans in a trace:
```typescript
// Hide input data (e.g., user credentials, PII)
const result = await agent.generate([{ role: "user", content: "Process this sensitive data" }], {
  tracingOptions: {
    hideInput: true, // Input will be hidden from all spans
  },
});

// Hide output data (e.g., generated secrets, confidential results)
const result = await agent.generate([{ role: "user", content: "Generate API keys" }], {
  tracingOptions: {
    hideOutput: true, // Output will be hidden from all spans
  },
});

// Hide both input and output
const result = await agent.generate([{ role: "user", content: "Handle confidential request" }], {
  tracingOptions: {
    hideInput: true,
    hideOutput: true,
  },
});
```
### How It Works
- Trace-wide effect: When set on the root span, these options apply to all child spans in the trace (tool calls, model generations, etc.)
- Export-time filtering: The data remains available internally during execution but is excluded when spans are exported to observability platforms
- Combinable with other options: You can use `hideInput`/`hideOutput` alongside `tags`, `metadata`, and other `tracingOptions`
```typescript
const result = await agent.generate([{ role: "user", content: "Sensitive operation" }], {
  tracingOptions: {
    hideInput: true,
    hideOutput: true,
    tags: ["sensitive-operation", "pii-handling"],
    metadata: { operationType: "credential-processing" },
  },
});
```
For more granular control over sensitive data, consider using the Sensitive Data Filter processor, which can redact specific fields (like passwords, tokens, and keys) while preserving the rest of the input/output.
## Child Spans and Metadata Extraction
When creating child spans within tools or workflow steps, you can pass the `requestContext` parameter to enable metadata extraction:
```typescript
execute: async (inputData, context) => {
  // Create child span WITH requestContext - gets metadata extraction
  const dbSpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    requestContext: context?.requestContext, // Pass to enable metadata extraction
  });
  const results = await db.query("SELECT * FROM users");
  dbSpan?.end({ output: results });

  // Or create child span WITHOUT requestContext - no metadata extraction
  const cacheSpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "cache-check",
    // No requestContext - won't extract metadata
  });

  return results;
};
```
This gives you fine-grained control over which child spans include RequestContext metadata. Root spans (agent/workflow executions) always extract metadata automatically, while child spans only extract when you explicitly pass `requestContext`.
## Creating Child Spans
Child spans allow you to track fine-grained operations within your workflow steps or tools. They provide visibility into sub-operations like database queries, API calls, file operations, or complex calculations. This hierarchical structure helps you identify performance bottlenecks and understand the exact sequence of operations.
Create child spans inside a tool call or workflow step to track specific operations:
```typescript
execute: async (inputData, context) => {
  // Create a child span for the main database operation
  const querySpan = context?.tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    input: { query: inputData.query },
    metadata: { database: "production" },
  });

  try {
    const results = await db.query(inputData.query);
    querySpan?.end({
      output: results.data,
      metadata: {
        rowsReturned: results.length,
        queryTimeMs: results.executionTime,
        cacheHit: results.fromCache,
      },
    });
    return results;
  } catch (error) {
    querySpan?.error({
      error,
      metadata: { retryable: isRetryableError(error) },
    });
    throw error;
  }
};
```
Child spans automatically inherit the trace context from their parent, maintaining the relationship hierarchy in your observability platform.
## Span Formatting
Mastra provides two ways to transform span data before it reaches your observability platform: span processors and custom span formatters. Both allow you to modify, filter, or enrich trace data, but they operate at different levels and serve different purposes.
| Feature | Span Processors | Custom Span Formatters |
|---|---|---|
| Configuration level | Observability config | Per exporter |
| Operates on | Internal span objects | Exported `ExportedSpan` data |
| Scope | All exporters | Single exporter |
| Async support | No | Yes |
| Use cases | Security, filtering, enrichment | Platform-specific formatting, async enrichment |
Use span processors for synchronous transformations that should apply to all exporters (like redacting sensitive data). Use custom span formatters when different exporters need different representations of the same data (like plain text for one platform and structured data for another), or when you need to perform asynchronous operations like fetching data from external APIs.
### Span Processors
Span processors transform, filter, or enrich trace data before it's exported. They act as a pipeline between span creation and export, enabling you to modify spans for security, compliance, or debugging purposes. Processors run once and affect all exporters.
#### Built-in Processors
- Sensitive Data Filter - Redacts sensitive information. Enabled in the default observability config.
#### Creating Custom Processors
You can create custom span processors by implementing the `SpanOutputProcessor` interface. Here's a simple example that converts all input text in spans to lowercase:
```typescript
import { Mastra } from "@mastra/core";
import type { SpanOutputProcessor, AnySpan } from "@mastra/observability";
import { Observability, DefaultExporter, SensitiveDataFilter } from "@mastra/observability";

export class LowercaseInputProcessor implements SpanOutputProcessor {
  name = "lowercase-processor";

  process(span: AnySpan): AnySpan {
    span.input = `${span.input}`.toLowerCase();
    return span;
  }

  async shutdown(): Promise<void> {
    // Cleanup if needed
  }
}

// Use the custom processor
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        spanOutputProcessors: [new LowercaseInputProcessor(), new SensitiveDataFilter()],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```
Processors are executed in the order they're defined, allowing you to chain multiple transformations. Common use cases include:
- Redacting sensitive data (passwords, tokens, API keys)
- Adding environment-specific metadata
- Filtering spans based on conditions
- Normalizing data formats
- Enriching spans with business context
### Custom Span Formatters
Custom span formatters transform how spans appear in specific observability platforms. Unlike span processors, formatters are configured per-exporter, allowing different formatting for different destinations. Formatters support both synchronous and asynchronous operations.
#### Use Cases
- Extract plain text from AI SDK messages - convert structured message arrays into readable text
- Transform input/output formats - customize how data appears on a specific platform
- Platform-specific field mapping - add or remove fields based on platform requirements
- Async data enrichment - fetch additional context from external APIs or databases
#### Configuration
Add a `customSpanFormatter` to any exporter configuration:
```typescript
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { BraintrustExporter } from "@mastra/braintrust";
import { LangfuseExporter } from "@mastra/langfuse";
import { SpanType } from "@mastra/core/observability";
import type { CustomSpanFormatter } from "@mastra/core/observability";

// Formatter that extracts plain text from AI messages
const plainTextFormatter: CustomSpanFormatter = (span) => {
  if (span.type === SpanType.AGENT_RUN && Array.isArray(span.input)) {
    const userMessage = span.input.find((m) => m.role === "user");
    return {
      ...span,
      input: userMessage?.content ?? span.input,
    };
  }
  return span;
};

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        exporters: [
          // Braintrust gets plain text formatting
          new BraintrustExporter({
            customSpanFormatter: plainTextFormatter,
          }),
          // Langfuse keeps the original structured format
          new LangfuseExporter(),
        ],
      },
    },
  }),
});
```
#### Chaining Multiple Formatters
Use `chainFormatters` to combine multiple formatters. Chains support both sync and async formatters:
```typescript
import { chainFormatters } from "@mastra/observability";

const inputFormatter: CustomSpanFormatter = (span) => ({
  ...span,
  input: extractPlainText(span.input),
});

const outputFormatter: CustomSpanFormatter = (span) => ({
  ...span,
  output: extractPlainText(span.output),
});

const exporter = new BraintrustExporter({
  customSpanFormatter: chainFormatters([inputFormatter, outputFormatter]),
});
```
#### Async Formatters
Custom span formatters support asynchronous operations, enabling use cases like fetching data from external APIs or databases to enrich your spans:
```typescript
import type { CustomSpanFormatter } from "@mastra/core/observability";

// Async formatter that enriches spans with user data
const userEnrichmentFormatter: CustomSpanFormatter = async (span) => {
  const userId = span.metadata?.userId;
  if (!userId) return span;

  // Fetch user data from your API or database
  const userData = await fetchUserData(userId);
  return {
    ...span,
    metadata: {
      ...span.metadata,
      userName: userData.name,
      userEmail: userData.email,
      department: userData.department,
    },
  };
};

// Async formatter that looks up additional context
const contextEnrichmentFormatter: CustomSpanFormatter = async (span) => {
  if (span.type !== SpanType.AGENT_RUN) return span;

  // Fetch experiment configuration
  const experimentConfig = await getExperimentConfig(span.metadata?.experimentId);
  return {
    ...span,
    metadata: {
      ...span.metadata,
      experimentVariant: experimentConfig?.variant,
      experimentGroup: experimentConfig?.group,
    },
  };
};

// Use an async formatter with an exporter
const braintrustExporter = new BraintrustExporter({
  customSpanFormatter: userEnrichmentFormatter,
});

// Or chain sync and async formatters together
const langfuseExporter = new LangfuseExporter({
  customSpanFormatter: chainFormatters([
    plainTextFormatter, // sync
    userEnrichmentFormatter, // async
    contextEnrichmentFormatter, // async
  ]),
});
```
Async formatters add latency to span export. Keep async operations fast (under 100ms) to avoid slowing down your application, and consider caching frequently accessed data.
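One way to keep async enrichment fast is a small TTL cache in front of the lookup. The sketch below is generic and not tied to any Mastra API; the `ttlMs` value, the loader signature, and the injectable `now` clock are illustrative choices (the clock exists only to make the sketch testable):

```typescript
// Minimal TTL cache: remembers loader results for `ttlMs` milliseconds,
// so repeated span enrichments for the same key skip the network call.
function cached<T>(loader: (key: string) => Promise<T>, ttlMs: number) {
  const entries = new Map<string, { value: Promise<T>; expires: number }>();
  return (key: string, now: () => number = Date.now): Promise<T> => {
    const hit = entries.get(key);
    if (hit && hit.expires > now()) return hit.value; // fresh entry: reuse it
    const value = loader(key); // miss or expired: reload
    entries.set(key, { value, expires: now() + ttlMs });
    return value;
  };
}
```

An async formatter could then call something like `const fetchUserCached = cached(fetchUserData, 60_000);` and use `fetchUserCached(userId)` in place of the raw fetch.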
## Serialization Options
Serialization options control how span data (input, output, and attributes) is truncated before export. This is useful when working with large payloads, deeply nested objects, or when you need to optimize trace storage.
### Configuration
Add `serializationOptions` to your observability configuration:
```typescript
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        serializationOptions: {
          maxStringLength: 2048, // Maximum length for string values (default: 1024)
          maxDepth: 10, // Maximum depth for nested objects (default: 6)
          maxArrayLength: 100, // Maximum number of items in arrays (default: 50)
          maxObjectKeys: 75, // Maximum number of keys in objects (default: 50)
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```
### Available Options
| Option | Default | Description |
|---|---|---|
| `maxStringLength` | 1024 | Maximum length for string values. Longer strings are truncated. |
| `maxDepth` | 6 | Maximum depth for nested objects. Deeper levels are omitted. |
| `maxArrayLength` | 50 | Maximum number of items in arrays. Extra items are omitted. |
| `maxObjectKeys` | 50 | Maximum number of keys in objects. Extra keys are omitted. |
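To make the limits concrete, here is an illustrative truncation sketch. It is not Mastra's serializer; the `"…"` suffix and `"[max depth]"` placeholder are assumptions chosen for the example:

```typescript
interface Limits {
  maxStringLength: number;
  maxDepth: number;
  maxArrayLength: number;
  maxObjectKeys: number;
}

// Recursively truncate a value according to the serialization limits.
function truncate(value: unknown, limits: Limits, depth = 0): unknown {
  if (typeof value === "string") {
    // Longer strings are cut off.
    return value.length > limits.maxStringLength
      ? value.slice(0, limits.maxStringLength) + "…"
      : value;
  }
  if (value === null || typeof value !== "object") return value;
  if (depth >= limits.maxDepth) return "[max depth]"; // deeper levels omitted
  if (Array.isArray(value)) {
    // Extra items are dropped.
    return value.slice(0, limits.maxArrayLength).map((v) => truncate(v, limits, depth + 1));
  }
  // Extra keys are dropped.
  const out: Record<string, unknown> = {};
  for (const key of Object.keys(value).slice(0, limits.maxObjectKeys)) {
    out[key] = truncate((value as Record<string, unknown>)[key], limits, depth + 1);
  }
  return out;
}
```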
### Use Cases
Increase limits for debugging: If your agents or tools process large documents, API responses, or data structures, raise the limits to capture more context in traces:

```typescript
serializationOptions: {
  maxStringLength: 8192, // Capture longer text content
  maxDepth: 12, // Handle deeply nested JSON responses
  maxArrayLength: 200, // Keep more items from large lists
}
```
Reduce trace size for production: Lower the values to cut storage costs and improve performance when full payload visibility isn't needed:

```typescript
serializationOptions: {
  maxStringLength: 256, // Truncate strings aggressively
  maxDepth: 3, // Shallow object representation
  maxArrayLength: 10, // Keep only the first few items
  maxObjectKeys: 20, // Limit object keys
}
```
All options are optional — if not specified, they fall back to the defaults shown above.
## Retrieving Trace IDs
When you execute agents or workflows with tracing enabled, the response includes a `traceId` that you can use to look up the full trace in your observability platform. This is useful for debugging, customer support, or correlating traces with other events in your system.
### Agent Trace IDs
Both `generate` and `stream` methods return the trace ID in their response:
```typescript
// Using generate
const result = await agent.generate("Hello");
console.log("Trace ID:", result.traceId);

// Using stream
const streamResult = await agent.stream("Tell me a story");
console.log("Trace ID:", streamResult.traceId);
```
### Workflow Trace IDs
Workflow executions also return trace IDs:
```typescript
// Create a workflow run
const run = await mastra.getWorkflow("myWorkflow").createRun();

// Start the workflow
const result = await run.start({
  inputData: { data: "process this" },
});
console.log("Trace ID:", result.traceId);

// Or stream the workflow
const { stream, getWorkflowState } = run.stream({
  inputData: { data: "process this" },
});

// Get the final state which includes the trace ID
const finalState = await getWorkflowState();
console.log("Trace ID:", finalState.traceId);
```
### Using Trace IDs
Once you have a trace ID, you can:

- Look up the trace in Studio: Navigate to the traces view and search by ID
- Query traces in external platforms: Use the ID in Langfuse, Braintrust, MLflow, or your observability platform
- Correlate with logs: Include trace IDs in your application logs for cross-referencing
- Share for debugging: Give the trace ID to support teams or developers investigating an issue
The trace ID is only available when tracing is enabled. If tracing is disabled or sampling excludes the request, `traceId` will be `undefined`.
## Integrating with External Tracing Systems
When running Mastra agents or workflows within applications that have existing distributed tracing (OpenTelemetry, Datadog, etc.), you can connect Mastra traces to your parent trace context. This creates a unified view of your entire request flow, making it easier to understand how Mastra operations fit into the broader system.
### Passing External Trace IDs
使用 tracingOptions 参数来指定来自父系统的追踪上下文:
🌐 Use the tracingOptions parameter to specify the trace context from your parent system:
// Get trace context from your existing tracing system
const parentTraceId = getCurrentTraceId(); // Your tracing system
const parentSpanId = getCurrentSpanId(); // Your tracing system
// Execute Mastra operations as part of the parent trace
const result = await agent.generate("Analyze this data", {
tracingOptions: {
traceId: parentTraceId,
parentSpanId: parentSpanId,
},
});
// The Mastra trace will now appear as a child in your distributed trace
OpenTelemetry Integration
Integration with OpenTelemetry allows Mastra traces to appear seamlessly in your existing observability platform:
import { trace } from "@opentelemetry/api";
// Get the current OpenTelemetry span
const currentSpan = trace.getActiveSpan();
const spanContext = currentSpan?.spanContext();
if (spanContext) {
const result = await agent.generate(userMessage, {
tracingOptions: {
traceId: spanContext.traceId,
parentSpanId: spanContext.spanId,
},
});
}
Workflow Integration
Workflows support the same pattern for trace propagation:
const workflow = mastra.getWorkflow("data-pipeline");
const run = await workflow.createRun();
const result = await run.start({
inputData: { data: "..." },
tracingOptions: {
traceId: externalTraceId,
parentSpanId: externalSpanId,
},
});
ID Format Requirements
Mastra validates trace and span IDs to ensure compatibility:
- Trace IDs: 1-32 hexadecimal characters (OpenTelemetry uses 32)
- Span IDs: 1-16 hexadecimal characters (OpenTelemetry uses 16)
Invalid IDs are handled gracefully; Mastra logs an error and continues:
- Invalid trace ID → a new trace ID is generated
- Invalid parent span ID → the parent relationship is ignored
This ensures tracing never crashes your application, even with malformed input.
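To make the format rules and fallback behavior concrete, here is a minimal sketch of the validation logic described above. These helper names are hypothetical; Mastra's internal implementation may differ.

```typescript
// Illustrative validators for the ID format rules above (hypothetical
// helpers, not Mastra's actual internals).
const isValidTraceId = (id: string): boolean => /^[0-9a-f]{1,32}$/i.test(id);
const isValidSpanId = (id: string): boolean => /^[0-9a-f]{1,16}$/i.test(id);

function generateTraceId(): string {
  // 32 hex characters, matching what OpenTelemetry uses
  return Array.from({ length: 32 }, () =>
    Math.floor(Math.random() * 16).toString(16),
  ).join("");
}

// Sketch of the graceful fallback: an invalid trace ID is replaced with a
// freshly generated one; an invalid parent span ID is simply dropped.
function resolveTraceContext(traceId?: string, parentSpanId?: string) {
  return {
    traceId: traceId && isValidTraceId(traceId) ? traceId : generateTraceId(),
    parentSpanId:
      parentSpanId && isValidSpanId(parentSpanId) ? parentSpanId : undefined,
  };
}
```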
Example: Express Middleware
Here's a complete example showing trace propagation in an Express application:
import { trace } from "@opentelemetry/api";
import express from "express";
const app = express();
app.post("/api/analyze", async (req, res) => {
// Get current OpenTelemetry context
const currentSpan = trace.getActiveSpan();
const spanContext = currentSpan?.spanContext();
const result = await agent.generate(req.body.message, {
tracingOptions: spanContext
? {
traceId: spanContext.traceId,
parentSpanId: spanContext.spanId,
}
: undefined,
});
res.json(result);
});
This creates a single distributed trace that includes both the HTTP request handling and the Mastra agent execution, viewable in your observability platform of choice.
Flushing Traces in Serverless Environments
In serverless environments like Vercel's fluid compute, AWS Lambda, or Cloudflare Workers, runtime instances can be reused across multiple requests. The flush() method allows you to ensure all buffered spans are exported before the runtime terminates, without shutting down the exporter (which would prevent future exports).
Serverless environments have ephemeral filesystems. Use external storage instead of local file storage (file:./mastra.db). See the Vercel deployment guide for a complete setup example.
Using flush()
Call flush() on the observability instance to flush all exporters:
// Get the observability instance from Mastra
const observability = mastra.getObservability();
// Flush all buffered spans to all exporters
await observability.flush();
When to Use flush()
Use flush() in these scenarios:
- At the end of a serverless function execution: ensure spans are exported before the runtime is suspended or terminated
- Before long-running operations: flush accumulated spans before operations that may take a long time
- Periodic flushing: in long-running processes, flush periodically so trace data becomes available promptly
// Example: Vercel serverless function
export async function POST(req: Request) {
const result = await agent.generate([{ role: "user", content: await req.text() }]);
// Ensure spans are exported before function completes
const observability = mastra.getObservability();
await observability.flush();
return Response.json(result);
}
flush() vs shutdown()
| Method | Behavior | Use case |
|---|---|---|
| flush() | Exports buffered spans, keeps exporters active | Serverless environments, periodic flushing |
| shutdown() | Exports buffered spans, releases resources | Application shutdown, graceful termination |
Use flush() when you need to ensure data is exported but want to keep the exporter ready for future requests. Use shutdown() only when the application is terminating.
What Gets Traced
Mastra automatically creates spans for:
Agent Operations
- Agent runs - complete executions with instructions and tools
- LLM calls - model interactions with tokens and parameters
- Tool executions - function calls with inputs and outputs
- Memory operations - thread and semantic recall
Workflow Operations
- Workflow runs - complete execution from start to finish
- Individual steps - step processing with inputs/outputs
- Control flow - conditionals, loops, parallel execution
- Wait operations - delays and event waiting
See Also
Reference Documentation
Exporters
- DefaultExporter - storage persistence
- CloudExporter - Mastra Cloud integration
- ConsoleExporter - debug output
- Arize - Arize Phoenix and Arize AX integration
- Braintrust - Braintrust integration
- Langfuse - Langfuse integration
- MLflow - MLflow OTLP endpoint setup
- OpenTelemetry - OTEL-compatible platforms
Bridges
- OpenTelemetry Bridge - OTEL context integration
Processors
- SensitiveDataFilter - data redaction