Deprecated: This method is deprecated and only works with V1 models. For V2 models, use the new .stream() method instead. See the migration guide for details on upgrading.
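The parameter sketches in this reference assume an agent instance along the following lines. This is a minimal sketch; the name, instructions, and model are placeholders, not part of this reference.

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Hypothetical V1-model agent reused by the examples below.
const agent = new Agent({
  name: "example-agent",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
});
```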
Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations will be terminated.
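A minimal cancellation sketch. The option name `abortSignal` is an assumption here; the description above only calls it a signal object.

```typescript
const controller = new AbortController();

// Assumed option name: abortSignal.
const stream = await agent.stream("Summarize this long document", {
  abortSignal: controller.signal,
});

// Aborting the signal terminates all ongoing operations for this run.
setTimeout(() => controller.abort(), 5_000);
```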
Additional context messages to provide to the agent.
experimental_output?:
Zod schema | JsonSchema7
Enables structured output generation alongside text generation and tool calls. The model will generate responses that conform to the provided schema.
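For example, a sketch passing a Zod schema through `experimental_output`; the schema itself is purely illustrative.

```typescript
import { z } from "zod";

// Any Zod schema (or JSON Schema) works; this one is illustrative only.
const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
});

const stream = await agent.stream("Suggest a pasta recipe", {
  // Structured output is generated alongside text and tool calls.
  experimental_output: recipeSchema,
});
```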
Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.
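A sketch of a per-call override. The option name `instructions` is an assumption; the description above does not name the key.

```typescript
const stream = await agent.stream("Review this function", {
  // Overrides the agent's default instructions for this generation only.
  instructions: "Respond as a senior TypeScript reviewer. Be terse.",
});
```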
output?:
Zod schema | JsonSchema7
Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.
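The same idea with a plain JSON Schema object instead of Zod (values illustrative):

```typescript
const stream = await agent.stream("Suggest a pasta recipe", {
  // JSON Schema variant; a Zod schema is accepted here as well.
  output: {
    type: "object",
    properties: {
      name: { type: "string" },
      ingredients: { type: "array", items: { type: "string" } },
    },
    required: ["name", "ingredients"],
  },
});
```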
Configuration for memory. This is the preferred way to manage memory.
thread:
string | { id: string; metadata?: Record<string, any>; title?: string }
The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.
Identifier for the user or resource associated with the thread.
Configuration for memory behavior, like message history and semantic recall.
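A sketch of the preferred memory shape, with hypothetical thread and resource IDs:

```typescript
const stream = await agent.stream("Where did we leave off?", {
  memory: {
    // A string ID, or an object with id plus optional metadata and title.
    thread: { id: "thread-123", metadata: { topic: "onboarding" } },
    // The user or resource associated with the thread.
    resource: "user-456",
    // Memory behavior; see MemoryConfig below.
    options: { lastMessages: 10 },
  },
});
```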
Maximum number of execution steps allowed.
Maximum number of retries. Set to 0 to disable retries.
memoryOptions?:
MemoryConfig
**Deprecated.** Use `memory.options` instead. Configuration options for memory management.
lastMessages?:
number | false
Number of recent messages to include in context, or `false` to disable.
semanticRecall?:
boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }
Enable semantic recall to find relevant past messages. Can be a boolean or detailed configuration.
workingMemory?:
WorkingMemory
Configuration for working memory functionality.
threads?:
{ generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } }
Thread-specific configuration, including automatic title generation.
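Putting these options together under `memory.options` (all values illustrative):

```typescript
const stream = await agent.stream("Recap our key decisions", {
  memory: {
    thread: "thread-123",
    resource: "user-456",
    options: {
      lastMessages: 20, // recent-history window
      semanticRecall: {
        topK: 3, // number of relevant past messages to retrieve
        messageRange: { before: 2, after: 1 },
        scope: "resource", // search across all of this user's threads
      },
      threads: {
        generateTitle: true, // auto-generate titles for new threads
      },
    },
  },
});
```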
onFinish?:
StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>
Callback function called when streaming completes. Receives the final result.
onStepFinish?:
StreamTextOnStepFinishCallback<any> | never
Callback function called after each execution step. Receives step details as a JSON string. Unavailable when using structured output.
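A sketch wiring both callbacks; the logging is illustrative. Note that, per the descriptions above, `onStepFinish` receives a JSON string and is unavailable with structured output.

```typescript
const stream = await agent.stream("Research the topic and summarize it", {
  // Fires after each execution step; the payload is a JSON string.
  onStepFinish: (step) => {
    console.log("step finished:", step);
  },
  // Fires once streaming completes, with the final result.
  onFinish: (result) => {
    console.log("stream finished:", result);
  },
});
```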
**Deprecated.** Use `memory.resource` instead. Identifier for the user or resource interacting with the agent. Must be provided if `threadId` is provided.
telemetry?:
TelemetrySettings
Settings for telemetry collection during streaming.
Enable or disable telemetry. Disabled by default while experimental.
Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.
Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.
Identifier for this function. Used to group telemetry data by function.
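A sketch that keeps telemetry on while disabling input and output recording. The field names follow the AI SDK's TelemetrySettings and match the descriptions above, but treat them as an assumption.

```typescript
const stream = await agent.stream("Summarize the patient notes", {
  telemetry: {
    isEnabled: true, // opt in; disabled by default while experimental
    recordInputs: false, // avoid recording sensitive prompt content
    recordOutputs: false, // avoid recording sensitive completions
    functionId: "notes-summarizer", // groups telemetry data by function
  },
});
```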
Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.
**Deprecated.** Use `memory.thread` instead. Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if `resourceId` is provided.
Save messages incrementally after each stream step completes (default: false).
providerOptions?:
Record<string, Record<string, JSONValue>>
Additional provider-specific options that are passed through to the underlying LLM provider. The structure is `{ providerName: { optionKey: value } }`. For example: `{ openai: { reasoningEffort: 'high' }, anthropic: { maxTokens: 1000 } }`.
openai?:
Record<string, JSONValue>
OpenAI-specific options. Example: `{ reasoningEffort: 'high' }`
anthropic?:
Record<string, JSONValue>
Anthropic-specific options. Example: `{ maxTokens: 1000 }`
google?:
Record<string, JSONValue>
Google-specific options. Example: `{ safetySettings: [...] }`
[providerName]?:
Record<string, JSONValue>
Other provider-specific options. The key is the provider name and the value is a record of provider-specific options.
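Combining the per-provider examples above into one call:

```typescript
const stream = await agent.stream("Work through this step by step", {
  // Passed through verbatim to the underlying LLM provider.
  providerOptions: {
    openai: { reasoningEffort: "high" },
    anthropic: { maxTokens: 1000 },
  },
});
```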
Unique ID for this generation run. Useful for tracking and debugging purposes.
requestContext?:
RequestContext
Request context for dependency injection and contextual information.
Maximum number of tokens to generate.
Nucleus sampling. This is a number between 0 and 1. It is recommended to set either `temperature` or `topP`, but not both.
Only sample from the top K options for each subsequent token. Used to remove 'long tail' low probability responses.
Presence penalty setting. Affects the likelihood that the model repeats information already present in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
Frequency penalty setting. Affects the likelihood that the model repeatedly uses the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
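A sketch of the sampling knobs, assuming the conventional AI SDK option names (the descriptions above do not spell them out). `temperature` is set and `topP` left unset, per the recommendation above.

```typescript
const stream = await agent.stream("Write a product tagline", {
  temperature: 0.2, // low value: focused, deterministic output
  maxTokens: 128, // cap on generated tokens
  frequencyPenalty: 0.5, // discourage repeating the same words
  stopSequences: ["\n\n"], // stop at the first blank line
  seed: 42, // deterministic sampling where the model supports it
});
```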
textStream?:
AsyncGenerator<string>
Async generator that yields text chunks as they become available.
fullStream?:
Promise<ReadableStream>
Promise that resolves to a ReadableStream for the complete response.
Promise that resolves to the complete text response.
usage?:
Promise<{ totalTokens: number; promptTokens: number; completionTokens: number }>
Promise that resolves to token usage information.
finishReason?:
Promise<string>
Promise that resolves to the reason why the stream finished.
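Consuming the returned stream, using the properties documented above:

```typescript
const stream = await agent.stream("Tell me a short story");

// Iterate text chunks as they become available.
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// The aggregate promises resolve once streaming ends.
console.log("finish reason:", await stream.finishReason);
const usage = await stream.usage;
console.log("total tokens:", usage.totalTokens);
```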
The new .stream() method offers enhanced capabilities including AI SDK v5+ compatibility, better structured output handling, and an improved callback system. See the migration guide for detailed migration instructions.