
Model Providers

Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 2006 models from 75 providers through a single API.

Features

  • Unified API for any model - Access any model without installing and managing additional provider dependencies.
  • Access the latest AI - Use new models as soon as they're released, no matter which provider they come from. Avoid vendor lock-in with Mastra's provider-agnostic interface.
  • Mix and match models - Use different models for different tasks. For example, run GPT-4o-mini for large-context processing, then switch to Claude Opus 4.1 for reasoning tasks.
  • Model fallbacks - If a provider has an outage, Mastra can automatically switch to another at the application layer, minimizing latency compared to an API gateway.

Basic usage

Whether you're using OpenAI, Anthropic, Google, or a gateway like OpenRouter, specify the model as "provider/model-name" and Mastra handles the rest.

Mastra reads the relevant environment variable (e.g. ANTHROPIC_API_KEY) and routes requests to the provider. If an API key is missing, you'll get a clear runtime error showing exactly which variable to set.

src/mastra/agents/my-agent.ts
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-5",
});

Model directory

Browse the directory of available models using the navigation on the left, or explore below.

You can also discover models directly in your editor. Mastra provides full autocomplete for the model field - just start typing, and your IDE will show the available options.

Alternatively, browse and test models in the Studio UI.

info

In development, Mastra auto-refreshes your local model list every hour, keeping your TypeScript autocomplete and Studio in sync with the latest models. To disable this, set MASTRA_AUTO_REFRESH_PROVIDERS=false. Auto-refresh is disabled by default in production.

Mix and match models

Some models are faster but less capable, while others offer larger context windows or stronger reasoning skills. Use different models from the same provider, or mix and match across providers to fit each task.

src/mastra/agents/reasoning-agent.ts
import { Agent } from "@mastra/core/agent";

// Use a cost-effective model for document processing
// Use a cost-effective model for document processing
const documentProcessor = new Agent({
  id: "document-processor",
  name: "Document Processor",
  instructions: "Extract and summarize key information from documents",
  model: "openai/gpt-4o-mini",
});

// Use a powerful reasoning model for complex analysis
const reasoningAgent = new Agent({
  id: "reasoning-agent",
  name: "Reasoning Agent",
  instructions: "Analyze data and provide strategic recommendations",
  model: "anthropic/claude-opus-4-1",
});

Dynamic model selection

Since models are just strings, you can select them dynamically based on request context, variables, or any other logic.

src/mastra/agents/dynamic-assistant-agent.ts
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "dynamic-assistant",
  name: "Dynamic Assistant",
  model: ({ requestContext }) => {
    const provider = requestContext.get("provider-id");
    const model = requestContext.get("model-id");
    return `${provider}/${model}`;
  },
});

This enables powerful patterns:

  • A/B testing - Compare model performance in production.
  • User-selectable models - Let users pick their preferred model in your app.
  • Multi-tenant applications - Each customer can use their own API keys and model preferences.
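For instance, a user-selectable or tier-based model can be reduced to a plain function that maps request data to a "provider/model-name" string. The tier names, model choices, and the `selectModel` helper below are illustrative assumptions, not part of Mastra's API:

```typescript
// Hypothetical mapping from a subscription tier to a model string.
// Tier names and model choices are illustrative only.
const MODEL_BY_TIER: Record<string, string> = {
  free: "openai/gpt-4o-mini",
  pro: "anthropic/claude-opus-4-1",
};

// Returns a "provider/model-name" string, falling back to the free tier
// when the tier is missing or unknown.
function selectModel(tier?: string): string {
  return MODEL_BY_TIER[tier ?? "free"] ?? MODEL_BY_TIER["free"];
}
```

The return value can be passed straight from a model callback, e.g. `model: ({ requestContext }) => selectModel(requestContext.get("tier"))`.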

Provider-specific options

Different model providers expose their own configuration options. With OpenAI, you might adjust reasoningEffort. With Anthropic, you might tune cacheControl. Mastra lets you set these provider-specific providerOptions either at the agent level or per message.

src/mastra/agents/planner-agent.ts
import { Agent } from "@mastra/core/agent";

// Agent level (applies to all future messages)
const planner = new Agent({
  id: "planner",
  name: "Planner",
  instructions: {
    role: "system",
    content: "You are a helpful assistant.",
    providerOptions: {
      openai: { reasoningEffort: "low" },
    },
  },
  model: "openai/o3-pro",
});

const lowEffort = await planner.generate("Plan a simple 3 item dinner menu");

// Message level (applies only to this message)
const highEffort = await planner.generate([
  {
    role: "user",
    content: "Plan a simple 3 item dinner menu for a celiac",
    providerOptions: {
      openai: { reasoningEffort: "high" },
    },
  },
]);

Custom headers

If you need to specify custom headers, such as an organization ID or other provider-specific fields, use this syntax.

src/mastra/agents/custom-agent.ts
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "custom-agent",
  name: "Custom Agent",
  model: {
    id: "openai/gpt-4-turbo",
    apiKey: process.env.OPENAI_API_KEY,
    headers: {
      "OpenAI-Organization": "org-abc123",
    },
  },
});
info

Configuration differs by provider. See the provider pages in the left navigation for details on custom headers.

Model fallbacks

Relying on a single model creates a single point of failure for your application. Model fallbacks provide automatic failover between models and providers. If the primary model becomes unavailable, requests are retried against the next configured fallback until one succeeds.

src/mastra/agents/resilient-assistant-agent.ts
import { Agent } from '@mastra/core/agent';

const agent = new Agent({
  id: 'resilient-assistant',
  name: 'Resilient Assistant',
  instructions: 'You are a helpful assistant.',
  model: [
    {
      model: "openai/gpt-5",
      maxRetries: 3,
    },
    {
      model: "anthropic/claude-4-5-sonnet",
      maxRetries: 2,
    },
    {
      model: "google/gemini-2.5-pro",
      maxRetries: 2,
    },
  ],
});

Mastra tries your primary model first. If it encounters a 500 error, rate limit, or timeout, it automatically switches to your first fallback. If that fails too, it moves to the next. Each model gets its own retry count before moving on.

Your users never experience the disruption - the response comes back in the same format, just from a different model. Error context is preserved as the system moves through your fallback chain, ensuring clean error propagation while maintaining streaming compatibility.
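The failover loop described above can be sketched in a few lines. This is a conceptual illustration of the behavior, not Mastra's actual implementation; `callWithFallbacks` and its `attempt` parameter are hypothetical names, and `maxRetries` is treated here as the number of attempts per model:

```typescript
type FallbackEntry = { model: string; maxRetries: number };

// Conceptual sketch: each model gets its own retry budget, and the last
// error is re-thrown only if the whole chain is exhausted.
async function callWithFallbacks(
  chain: FallbackEntry[],
  attempt: (model: string) => Promise<string>,
): Promise<string> {
  let lastError: unknown;
  for (const { model, maxRetries } of chain) {
    for (let tries = 0; tries < maxRetries; tries++) {
      try {
        return await attempt(model); // success: return immediately
      } catch (err) {
        lastError = err; // keep error context while moving on
      }
    }
  }
  throw lastError; // entire chain failed
}
```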

Use local models with Mastra

Mastra also supports local models like gpt-oss, Qwen3, DeepSeek, and many more that you can run on your own hardware. The application running your local model needs to provide an OpenAI-compatible API server for Mastra to connect to. We recommend using LMStudio (see Running the LMStudio server).

For a custom provider, the id (${providerId}/${modelId}) is required, but it is only used for display purposes. The modelId needs to be the actual model you want to use. An example would be: custom/my-qwen3-model.

For the url, it's important that you use the base URL of the OpenAI-compatible endpoint with Mastra's model setting, not the individual chat endpoints.

src/mastra/agents/my-agent.ts
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: {
    id: "custom/my-qwen3-model",
    url: "http://your-custom-openai-compatible-endpoint.com/v1",
  },
});

Example: LMStudio

After starting the LMStudio server, the local server is available at http://localhost:1234 and provides endpoints like /v1/models and /v1/chat/completions. The url will be http://localhost:1234/v1. For the id, you can use lmstudio/${modelId}, which will be displayed in the LMStudio interface.

src/mastra/agents/my-agent.ts
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: {
    id: "lmstudio/qwen/qwen3-30b-a3b-2507",
    url: "http://localhost:1234/v1",
  },
});

Use AI SDK with Mastra

Mastra supports AI SDK provider modules, should you need to use them directly.

src/mastra/agents/my-agent.ts
import { groq } from '@ai-sdk/groq';
import { Agent } from "@mastra/core/agent";

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  model: groq('gemma2-9b-it'),
});

You can use an AI SDK model (e.g. groq('gemma2-9b-it')) anywhere that accepts a "provider/model" string, including within model router fallbacks and scorers.