Agent.getLLM()
The .getLLM() method retrieves the language model instance configured for an agent, resolving it if it is a function. This method provides access to the underlying large language model (LLM) that powers the agent's capabilities.
Usage example
await agent.getLLM();
Parameters
options?:
{ requestContext?: RequestContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> }
= {}
Optional configuration object containing the request context and an optional model override.
Returns
llm:
MastraLLMV1 | Promise<MastraLLMV1>
The language model instance configured for the agent, either as a direct instance or a promise that resolves to the LLM.
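The dual return type above (a direct instance or a promise) falls out of the "value or function" pattern used for dynamic arguments: when the configured model is a function, resolving it may be asynchronous. A minimal self-contained sketch of that pattern, with simplified stand-in types (not Mastra's actual internals):

```typescript
// Stand-in for the agent's language model type (illustrative only).
type LanguageModel = { modelId: string };

// A dynamic argument is either a concrete value or a function that
// produces one, possibly asynchronously.
type DynamicArgument<T> =
  | T
  | ((ctx: { requestContext: unknown }) => T | Promise<T>);

// Resolve a dynamic argument: call it if it is a function, otherwise
// return it as-is. The result is normalized to a Promise.
async function resolveDynamic<T>(
  arg: DynamicArgument<T>,
  requestContext: unknown = {},
): Promise<T> {
  return typeof arg === "function"
    ? (arg as (ctx: { requestContext: unknown }) => T | Promise<T>)({
        requestContext,
      })
    : arg;
}
```

Both a plain value and an async factory resolve to the same shape, which is why callers of .getLLM() should be prepared to await the result.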
Extended usage example
await agent.getLLM({
  requestContext: new RequestContext(),
  model: "openai/gpt-5.1",
});
Options parameters
requestContext?:
RequestContext
= new RequestContext()
Request context used for dependency injection and contextual information.
model?:
MastraLanguageModel | DynamicArgument<MastraLanguageModel>
Optional model override. If provided, this model will be used instead of the agent's configured model.
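The precedence rule for the model override can be sketched as a one-line helper. This is a hypothetical illustration (the function name and model identifiers are made up), not Mastra's implementation:

```typescript
// Model identifiers as plain strings, for illustration.
type Model = string;

interface GetModelOptions {
  // Optional per-call override, mirroring options.model above.
  model?: Model;
}

// The explicit override, when provided, wins over the agent's
// configured model; otherwise the configured model is used.
function pickModel(configured: Model, options: GetModelOptions = {}): Model {
  return options.model ?? configured;
}
```

For example, pickModel("openai/gpt-5.1") returns the configured model, while passing { model: "openai/gpt-4.1" } returns the override instead.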
Related