
MastraModelOutput

The MastraModelOutput class is returned by .stream() and provides both streaming and promise-based access to model outputs. It supports structured output generation, tool calls, reasoning, and comprehensive usage tracking.

// MastraModelOutput is returned by agent.stream()
const stream = await agent.stream("Hello world");

For setup and basic usage, see the .stream() method documentation.

Streaming Properties

These properties provide real-time access to model outputs as they're generated:

fullStream:

ReadableStream<ChunkType<OUTPUT>>
Complete stream of all chunk types including text, tool calls, reasoning, metadata, and control chunks. Provides granular access to every aspect of the model's response.
ReadableStream

ChunkType:

ChunkType<OUTPUT>
All possible chunk types that can be emitted during streaming

textStream:

ReadableStream<string>
Stream of incremental text content only. Filters out all metadata, tool calls, and control chunks to provide just the text being generated.

objectStream:

ReadableStream<Partial<OUTPUT>>
Stream of progressive structured object updates when using output schemas. Emits partial objects as they're built up, allowing real-time visualization of structured data generation.
ReadableStream

PartialSchemaOutput:

Partial<OUTPUT>
Partially completed object matching the defined schema

elementStream:

ReadableStream<OUTPUT extends (infer T)[] ? T : never>
Stream of individual array elements when the output schema defines an array type. Each element is emitted as it's completed rather than waiting for the entire array.
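The consumption pattern can be sketched with a stand-in stream. The async generator below is hypothetical and replaces a real agent call so the snippet is self-contained; with a real array output schema you would iterate stream.elementStream the same way:

```typescript
// Stand-in for stream.elementStream: with an array output schema, each completed
// element is emitted individually instead of waiting for the whole array.
async function* fakeElementStream(): AsyncGenerator<{ name: string }> {
  yield { name: "Ada" };
  yield { name: "Grace" };
}

async function collect<T>(elements: AsyncIterable<T>): Promise<T[]> {
  const seen: T[] = [];
  // Same loop you would use with stream.elementStream
  for await (const element of elements) {
    seen.push(element); // handle each element as soon as it completes
  }
  return seen;
}
```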

Promise-based Properties

These properties resolve to final values after the stream completes:

text:

Promise<string>
The complete concatenated text response from the model. Resolves when text generation is finished.

object:

Promise<OUTPUT>
The complete structured object response when using output schemas. Validated against the schema before resolving. Rejects if validation fails.
Promise

InferSchemaOutput:

OUTPUT
Fully typed object matching the exact schema definition

reasoning:

Promise<string>
Complete reasoning text for models that support reasoning (like OpenAI's o1 series). Returns empty string for models without reasoning capability.

reasoningText:

Promise<string | undefined>
Alternative access to reasoning content. May be undefined for models that don't support reasoning, while 'reasoning' returns empty string.
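The distinction matters when branching on whether reasoning was produced. A hypothetical helper (describeReasoning is not part of the API) that keys off the undefined case of reasoningText:

```typescript
// Hypothetical helper: `reasoningText` resolves to undefined for models without
// reasoning, while `reasoning` resolves to "", so undefined is the cleaner signal.
function describeReasoning(reasoningText: string | undefined): string {
  if (reasoningText === undefined) {
    return "model did not produce reasoning";
  }
  return `reasoning (${reasoningText.length} chars)`;
}
```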

toolCalls:

Promise<ToolCallChunk[]>
Array of all tool call chunks made during execution. Each chunk contains tool metadata and execution details.
ToolCallChunk

type:

'tool-call'
Chunk type identifier

runId:

string
Execution run identifier

from:

ChunkFrom
Source of the chunk (AGENT, WORKFLOW, etc.)

payload:

ToolCallPayload
Tool call data including toolCallId, toolName, args, and execution details

toolResults:

Promise<ToolResultChunk[]>
Array of all tool result chunks corresponding to the tool calls. Contains execution results and error information.
ToolResultChunk

type:

'tool-result'
Chunk type identifier

runId:

string
Execution run identifier

from:

ChunkFrom
Source of the chunk (AGENT, WORKFLOW, etc.)

payload:

ToolResultPayload
Tool result data including toolCallId, toolName, result, and error status

usage:

Promise<LanguageModelUsage>
Token usage statistics including input tokens, output tokens, total tokens, and reasoning tokens (for reasoning models).
Record

inputTokens:

number
Tokens consumed by the input prompt

outputTokens:

number
Tokens generated in the response

totalTokens:

number
Sum of input and output tokens

reasoningTokens?:

number
Hidden reasoning tokens (for reasoning models)

cachedInputTokens?:

number
Number of input tokens that were a cache hit
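As a sketch of consuming these fields, a hypothetical formatter (summarizeUsage is illustrative, not part of the API) that handles the optional reasoning and cache fields:

```typescript
// Hypothetical shape mirroring the usage fields listed above; reasoningTokens and
// cachedInputTokens are only present for models that support them.
interface Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  reasoningTokens?: number;
  cachedInputTokens?: number;
}

function summarizeUsage(usage: Usage): string {
  const parts = [
    `in=${usage.inputTokens}`,
    `out=${usage.outputTokens}`,
    `total=${usage.totalTokens}`,
  ];
  if (usage.reasoningTokens !== undefined) parts.push(`reasoning=${usage.reasoningTokens}`);
  if (usage.cachedInputTokens !== undefined) parts.push(`cached=${usage.cachedInputTokens}`);
  return parts.join(" ");
}
```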

finishReason:

Promise<string | undefined>
Reason why generation stopped (e.g., 'stop', 'length', 'tool_calls', 'content_filter'). Undefined if the stream hasn't finished.
enum

stop:

'stop'
Model finished naturally

length:

'length'
Hit maximum token limit

tool_calls:

'tool_calls'
Model called tools

content_filter:

'content_filter'
Content was filtered
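The finish reasons above can be mapped to messages with a plain switch. This helper (describeFinish is hypothetical, not part of the API) also covers the undefined case for an unfinished stream:

```typescript
// Hypothetical helper mapping the documented finish reasons to log messages.
function describeFinish(reason: string | undefined): string {
  switch (reason) {
    case "stop":
      return "model finished naturally";
    case "length":
      return "hit maximum token limit";
    case "tool_calls":
      return "model called tools";
    case "content_filter":
      return "content was filtered";
    default:
      return "stream has not finished";
  }
}
```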

response:

Promise<Response>
Response metadata and messages from the model provider.
Response

id?:

string
Response ID from the model provider

timestamp?:

Date
Response timestamp

modelId?:

string
Model identifier used for this response

headers?:

Record<string, string>
Response headers from the model provider

messages?:

ResponseMessage[]
Response messages in model format

uiMessages?:

UIMessage[]
Response messages in UI format, includes any metadata added by output processors

Error Properties

error:

string | Error | { message: string; stack: string; } | undefined
Error information if the stream encountered an error. Undefined if no errors occurred. Can be a string message, Error object, or serialized error with stack trace.

Methods

getFullOutput:

() => Promise<FullOutput>
Returns a comprehensive output object containing all results: text, structured object, tool calls, usage statistics, reasoning, and metadata. Convenient single method to access all stream results.
FullOutput

text:

string
Complete text response

object?:

OUTPUT
Structured output if schema was provided

toolCalls:

ToolCallChunk[]
All tool call chunks made

toolResults:

ToolResultChunk[]
All tool result chunks

usage:

Record<string, number>
Token usage statistics

reasoning?:

string
Reasoning text if available

finishReason?:

string
Why generation finished

response:

Response
Response metadata and messages from the model provider

consumeStream:

(options?: ConsumeStreamOptions) => Promise<void>
Manually consume the entire stream without processing chunks. Useful when you only need the final promise-based results and want to trigger stream consumption.
ConsumeStreamOptions

onError?:

(error: Error) => void
Callback for handling stream errors

Usage Examples

Basic Text Streaming

const stream = await agent.stream("Write a haiku");

// Stream text as it's generated
for await (const text of stream.textStream) {
  process.stdout.write(text);
}

// Or get the complete text
const fullText = await stream.text;
console.log(fullText);

Structured Output Streaming

const stream = await agent.stream("Generate user data", {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  },
});

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log("Progress:", partial); // { name: "John" }, { name: "John", age: 30 }, ...
}

// Get final validated object
const user = await stream.object;
console.log("Final:", user); // { name: "John", age: 30, email: "john@example.com" }

Tool Calls and Results

const stream = await agent.stream("What's the weather in NYC?", {
  tools: { weather: weatherTool },
});

// Monitor tool calls
const toolCalls = await stream.toolCalls;
const toolResults = await stream.toolResults;

console.log("Tools called:", toolCalls);
console.log("Results:", toolResults);

Complete Output Access

const stream = await agent.stream("Analyze this data");

const output = await stream.getFullOutput();
console.log({
  text: output.text,
  usage: output.usage,
  reasoning: output.reasoning,
  finishReason: output.finishReason,
});

Full Stream Processing

const stream = await agent.stream("Complex task");

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case "text-delta":
      process.stdout.write(chunk.payload.text);
      break;
    case "tool-call":
      console.log(`Calling ${chunk.payload.toolName}...`);
      break;
    case "reasoning-delta":
      console.log(`Reasoning: ${chunk.payload.text}`);
      break;
    case "finish": {
      console.log(`Done! Reason: ${chunk.payload.stepResult.reason}`);
      // Access response messages with any metadata added by output processors
      const uiMessages = chunk.payload.response?.uiMessages;
      if (uiMessages) {
        console.log("Response messages:", uiMessages);
      }
      break;
    }
  }
}

Error Handling

const stream = await agent.stream("Analyze this data");

try {
  // Option 1: Handle errors in consumeStream
  await stream.consumeStream({
    onError: (error) => {
      console.error("Stream error:", error);
    },
  });

  const result = await stream.text;
} catch (error) {
  console.error("Failed to get result:", error);
}

// Option 2: Check error property
const result = await stream.getFullOutput();
if (stream.error) {
  console.error("Stream had errors:", stream.error);
}

Related Types

  • .stream() - the method that returns MastraModelOutput
  • ChunkType - all possible chunk types emitted in the stream