
Agent.network()

Experimental feature

This is an experimental API and may change in future versions. The network() method enables multi-agent collaboration and workflow orchestration. Use it with caution in production.

The .network() method enables multi-agent collaboration and routing. This method accepts messages and optional execution options.

Usage example

import { Agent } from "@mastra/core/agent";
import { agent1, agent2 } from "./agents";
import { workflow1 } from "./workflows";
import { tool1, tool2 } from "./tools";

const agent = new Agent({
  id: "network-agent",
  name: "Network Agent",
  instructions:
    "You are a network agent that can help users with a variety of tasks.",
  model: "openai/gpt-5.1",
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
});

await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`);

Parameters

messages: string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]
The messages to send to the agent. Can be a single string, array of strings, or structured message objects.

options?: MultiPrimitiveExecutionOptions
Optional configuration for the network process.

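For example, the same request can be passed in any of these forms. This is a sketch; the structured messages below assume the CoreMessage shape from the AI SDK ({ role, content }).

// A single string
await agent.network("Find me the weather in Tokyo.");

// An array of strings
await agent.network([
  "Find me the weather in Tokyo.",
  "Based on the weather, plan an activity for me.",
]);

// Structured message objects (CoreMessage shape assumed)
await agent.network([
  { role: "user", content: "Find me the weather in Tokyo." },
]);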

Options

maxSteps?: number
Maximum number of steps to run during execution.

memory?: object
Configuration for memory. This is the preferred way to manage memory.

  thread: string | { id: string; metadata?: Record<string, any>, title?: string }
  The conversation thread, as a string ID or an object with an `id` and optional `metadata`.

  resource: string
  Identifier for the user or resource associated with the thread.

  options?: MemoryConfig
  Configuration for memory behavior, like message history and semantic recall.

tracingContext?: TracingContext
Tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.

  currentSpan?: Span
  Current span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.

tracingOptions?: TracingOptions
Options for Tracing configuration.

  metadata?: Record<string, any>
  Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.

  requestContextKeys?: string[]
  Additional RequestContext keys to extract as metadata for this trace. Supports dot notation for nested values (e.g., 'user.id').

  traceId?: string
  Trace ID to use for this execution (1-32 hexadecimal characters). If provided, this trace will be part of the specified trace.

  parentSpanId?: string
  Parent span ID to use for this execution (1-16 hexadecimal characters). If provided, the root span will be created as a child of this span.

  tags?: string[]
  Tags to apply to this trace. String labels for categorizing and filtering traces.

telemetry?: TelemetrySettings
Settings for OTLP telemetry collection during streaming (not Tracing).

  isEnabled?: boolean
  Enable or disable telemetry. Disabled by default while experimental.

  recordInputs?: boolean
  Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.

  recordOutputs?: boolean
  Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.

  functionId?: string
  Identifier for this function. Used to group telemetry data by function.

modelSettings?: CallSettings
Model-specific settings like temperature, maxOutputTokens, topP, etc. These settings control how the language model generates responses.

  temperature?: number
  Controls randomness in generation (0-2). Higher values make output more random.

  maxOutputTokens?: number
  Maximum number of tokens to generate in the response. Note: Use maxOutputTokens (not maxTokens) as per AI SDK v5 convention.

  maxRetries?: number
  Maximum number of retry attempts for failed requests.

  topP?: number
  Nucleus sampling parameter (0-1). Controls diversity of generated text.

  topK?: number
  Top-k sampling parameter. Limits vocabulary to the k most likely tokens.

  presencePenalty?: number
  Penalty for token presence (-2 to 2). Reduces repetition.

  frequencyPenalty?: number
  Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.

  stopSequences?: string[]
  Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.

structuredOutput?: StructuredOutputOptions
Configuration for generating a typed structured output from the network result.

  schema: ZodSchema | JSONSchema7
  The schema to validate the output against. Can be a Zod schema or JSON Schema.

  model?: MastraModelConfig
  Model to use for generating the structured output. Defaults to the agent's model.

  instructions?: string
  Custom instructions for generating the structured output.

runId?: string
Unique ID for this generation run. Useful for tracking and debugging purposes.

requestContext?: RequestContext
Request Context for dependency injection and contextual information.

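As a sketch, a call that combines several of these options might look like the following; the thread and resource identifiers are placeholder values, and the agent is the one defined in the usage example above.

const stream = await agent.network("Plan a weekend trip based on the weather in Tokyo", {
  maxSteps: 10,
  memory: {
    thread: "trip-planning-thread", // placeholder thread ID
    resource: "user-123",           // placeholder user/resource ID
  },
  modelSettings: {
    temperature: 0.3,
    maxOutputTokens: 1024,
  },
});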

Returns

stream: MastraAgentNetworkStream<NetworkChunkType>
A custom stream that extends ReadableStream<NetworkChunkType> with additional network-specific properties.

status: Promise<RunStatus>
A promise that resolves to the current workflow run status.

result: Promise<WorkflowResult<TState, TOutput, TSteps>>
A promise that resolves to the final workflow result.

usage: Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>
A promise that resolves to token usage statistics.

object: Promise<OUTPUT | undefined>
A promise that resolves to the structured output object. Only available when the structuredOutput option is provided. Resolves to undefined if no schema was specified.

objectStream: ReadableStream<Partial<OUTPUT>>
A stream of partial objects during structured output generation. Useful for streaming partial results as they're being generated.

traceId?: string
The trace ID associated with this execution when Tracing is enabled. Use this to correlate logs and debug execution flow.

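A minimal sketch of consuming these return values, assuming the agent defined in the usage example above:

const stream = await agent.network("Find me the weather in Tokyo and plan an activity");

// Consume the network stream chunk by chunk
for await (const chunk of stream) {
  // Each chunk is a NetworkChunkType emitted during routing and execution
  console.log(chunk);
}

// Inspect the run once the stream has finished
console.log(await stream.status); // workflow run status
console.log(await stream.result); // final workflow result
console.log(await stream.usage);  // { promptTokens, completionTokens, totalTokens }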

Structured Output

When you need typed, validated results from your network, use the structuredOutput option. The network will generate a response matching your schema after task completion.

import { z } from "zod";

const resultSchema = z.object({
  summary: z.string().describe("A brief summary of the findings"),
  recommendations: z.array(z.string()).describe("List of recommendations"),
  confidence: z.number().min(0).max(1).describe("Confidence score"),
});

const stream = await agent.network("Research AI trends and summarize", {
  structuredOutput: {
    schema: resultSchema,
  },
});

// Consume the stream
for await (const chunk of stream) {
  // Handle streaming events
}

// Get the typed result
const result = await stream.object;
// result is typed as { summary: string; recommendations: string[]; confidence: number }
console.log(result?.summary);
console.log(result?.recommendations);

Streaming Partial Objects

You can also stream partial objects as they're being generated:

const stream = await agent.network("Analyze data", {
structuredOutput: { schema: resultSchema },
});

// Stream partial objects
for await (const partial of stream.objectStream) {
console.log("Partial result:", partial);
}

// Get final result
const final = await stream.object;

Chunk Types

When using structured output, additional chunk types are emitted:

  • network-object: emitted with partial objects during streaming
  • network-object-result: emitted with the final structured object

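For example, a consumer watching for these chunks might look like the sketch below. It reuses the stream from the structured output example above and assumes each chunk carries a type discriminator matching the names listed here; the exact payload shape may differ.

for await (const chunk of stream) {
  if (chunk.type === "network-object") {
    // Partial structured object emitted while streaming
    console.log("partial object chunk:", chunk);
  } else if (chunk.type === "network-object-result") {
    // Final structured object for the run
    console.log("final object chunk:", chunk);
  }
}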