Vercel
Vercel aggregates models from multiple providers with enhanced features such as rate limiting and failover. Access 184 models through Mastra's model router.
Learn more in the Vercel documentation.
Usage
src/mastra/agents/my-agent.ts
import { Agent } from "@mastra/core/agent";
const agent = new Agent({
id: "my-agent",
name: "My Agent",
instructions: "You are a helpful assistant",
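  // The "vercel/<provider>/<model>" string routes this agent's requests through Vercel's gateway via Mastra's model router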
model: "vercel/alibaba/qwen-3-14b"
});
info
Mastra uses the OpenAI-compatible /chat/completions endpoint. Some provider-specific features may not be available. See the Vercel documentation for details.
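Once defined, the agent is called like any other Mastra agent. A minimal sketch, assuming the standard generate API and an illustrative prompt:

// Assumes the agent defined above; the prompt is illustrative
const result = await agent.generate("Which provider handles this request?");
console.log(result.text);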
Configuration
# Use gateway API key
VERCEL_API_KEY=your-gateway-key
# Or use provider API keys directly
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=ant-...
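Which key is used depends on how the model string is prefixed. A minimal sketch for comparison (both agents are illustrative; the un-prefixed form assumes the direct provider routing mentioned in the comments above):

import { Agent } from "@mastra/core/agent";

// Routed through Vercel's gateway; authenticated with VERCEL_API_KEY
const gatewayAgent = new Agent({
  name: "Gateway Agent",
  instructions: "You are a helpful assistant",
  model: "vercel/openai/gpt-4o",
});

// Calls the provider directly; authenticated with OPENAI_API_KEY
const directAgent = new Agent({
  name: "Direct Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
});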
Available Models
| Model |
|---|
| alibaba/qwen-3-14b |
| alibaba/qwen-3-235b |
| alibaba/qwen-3-30b |
| alibaba/qwen-3-32b |
| alibaba/qwen3-235b-a22b-thinking |
| alibaba/qwen3-coder |
| alibaba/qwen3-coder-30b-a3b |
| alibaba/qwen3-coder-plus |
| alibaba/qwen3-embedding-0.6b |
| alibaba/qwen3-embedding-4b |
| alibaba/qwen3-embedding-8b |
| alibaba/qwen3-max |
| alibaba/qwen3-max-preview |
| alibaba/qwen3-next-80b-a3b-instruct |
| alibaba/qwen3-next-80b-a3b-thinking |
| alibaba/qwen3-vl-instruct |
| alibaba/qwen3-vl-thinking |
| amazon/nova-2-lite |
| amazon/nova-lite |
| amazon/nova-micro |
| amazon/nova-pro |
| amazon/titan-embed-text-v2 |
| anthropic/claude-3-haiku |
| anthropic/claude-3-opus |
| anthropic/claude-3.5-haiku |
| anthropic/claude-3.5-sonnet |
| anthropic/claude-3.5-sonnet-20240620 |
| anthropic/claude-3.7-sonnet |
| anthropic/claude-haiku-4.5 |
| anthropic/claude-opus-4 |
| anthropic/claude-opus-4.1 |
| anthropic/claude-opus-4.5 |
| anthropic/claude-sonnet-4 |
| anthropic/claude-sonnet-4.5 |
| arcee-ai/trinity-mini |
| bfl/flux-kontext-max |
| bfl/flux-kontext-pro |
| bfl/flux-pro-1.0-fill |
| bfl/flux-pro-1.1 |
| bfl/flux-pro-1.1-ultra |
| bytedance/seed-1.6 |
| bytedance/seed-1.8 |
| cohere/command-a |
| cohere/embed-v4.0 |
| deepseek/deepseek-r1 |
| deepseek/deepseek-v3 |
| deepseek/deepseek-v3.1 |
| deepseek/deepseek-v3.1-terminus |
| deepseek/deepseek-v3.2 |
| deepseek/deepseek-v3.2-exp |
| deepseek/deepseek-v3.2-thinking |
| google/gemini-2.0-flash |
| google/gemini-2.0-flash-lite |
| google/gemini-2.5-flash |
| google/gemini-2.5-flash-image |
| google/gemini-2.5-flash-image-preview |
| google/gemini-2.5-flash-lite |
| google/gemini-2.5-flash-lite-preview-09-2025 |
| google/gemini-2.5-flash-preview-09-2025 |
| google/gemini-2.5-pro |
| google/gemini-3-flash |
| google/gemini-3-pro-image |
| google/gemini-3-pro-preview |
| google/gemini-embedding-001 |
| google/imagen-4.0-fast-generate-001 |
| google/imagen-4.0-generate-001 |
| google/imagen-4.0-ultra-generate-001 |
| google/text-embedding-005 |
| google/text-multilingual-embedding-002 |
| inception/mercury-coder-small |
| kwaipilot/kat-coder-pro-v1 |
| meituan/longcat-flash-chat |
| meituan/longcat-flash-thinking |
| meta/llama-3.1-70b |
| meta/llama-3.1-8b |
| meta/llama-3.2-11b |
| meta/llama-3.2-1b |
| meta/llama-3.2-3b |
| meta/llama-3.2-90b |
| meta/llama-3.3-70b |
| meta/llama-4-maverick |
| meta/llama-4-scout |
| minimax/minimax-m2 |
| minimax/minimax-m2.1 |
| minimax/minimax-m2.1-lightning |
| mistral/codestral |
| mistral/codestral-embed |
| mistral/devstral-2 |
| mistral/devstral-small |
| mistral/devstral-small-2 |
| mistral/magistral-medium |
| mistral/magistral-small |
| mistral/ministral-14b |
| mistral/ministral-3b |
| mistral/ministral-8b |
| mistral/mistral-embed |
| mistral/mistral-large-3 |
| mistral/mistral-medium |
| mistral/mistral-nemo |
| mistral/mistral-small |
| mistral/mixtral-8x22b-instruct |
| mistral/pixtral-12b |
| mistral/pixtral-large |
| moonshotai/kimi-k2-0905 |
| moonshotai/kimi-k2-thinking |
| moonshotai/kimi-k2-thinking-turbo |
| moonshotai/kimi-k2-turbo |
| moonshotai/kimi-k2.5 |
| morph/morph-v3-fast |
| morph/morph-v3-large |
| nvidia/nemotron-3-nano-30b-a3b |
| nvidia/nemotron-nano-12b-v2-vl |
| nvidia/nemotron-nano-9b-v2 |
| openai/codex-mini |
| openai/gpt-3.5-turbo |
| openai/gpt-3.5-turbo-instruct |
| openai/gpt-4-turbo |
| openai/gpt-4.1 |
| openai/gpt-4.1-mini |
| openai/gpt-4.1-nano |
| openai/gpt-4o |
| openai/gpt-4o-mini |
| openai/gpt-5 |
| openai/gpt-5-chat |
| openai/gpt-5-codex |
| openai/gpt-5-mini |
| openai/gpt-5-nano |
| openai/gpt-5-pro |
| openai/gpt-5.1-codex |
| openai/gpt-5.1-codex-max |
| openai/gpt-5.1-codex-mini |
| openai/gpt-5.1-instant |
| openai/gpt-5.1-thinking |
| openai/gpt-5.2 |
| openai/gpt-5.2-chat |
| openai/gpt-5.2-codex |
| openai/gpt-5.2-pro |
| openai/gpt-oss-120b |
| openai/gpt-oss-20b |
| openai/gpt-oss-safeguard-20b |
| openai/o1 |
| openai/o3 |
| openai/o3-deep-research |
| openai/o3-mini |
| openai/o3-pro |
| openai/o4-mini |
| openai/text-embedding-3-large |
| openai/text-embedding-3-small |
| openai/text-embedding-ada-002 |
| perplexity/sonar |
| perplexity/sonar-pro |
| perplexity/sonar-reasoning |
| perplexity/sonar-reasoning-pro |
| prime-intellect/intellect-3 |
| recraft/recraft-v2 |
| recraft/recraft-v3 |
| vercel/v0-1.0-md |
| vercel/v0-1.5-md |
| voyage/voyage-3-large |
| voyage/voyage-3.5 |
| voyage/voyage-3.5-lite |
| voyage/voyage-code-2 |
| voyage/voyage-code-3 |
| voyage/voyage-finance-2 |
| voyage/voyage-law-2 |
| xai/grok-2-vision |
| xai/grok-3 |
| xai/grok-3-fast |
| xai/grok-3-mini |
| xai/grok-3-mini-fast |
| xai/grok-4 |
| xai/grok-4-fast-non-reasoning |
| xai/grok-4-fast-reasoning |
| xai/grok-4.1-fast-non-reasoning |
| xai/grok-4.1-fast-reasoning |
| xai/grok-code-fast-1 |
| xiaomi/mimo-v2-flash |
| zai/glm-4.5 |
| zai/glm-4.5-air |
| zai/glm-4.5v |
| zai/glm-4.6 |
| zai/glm-4.6v |
| zai/glm-4.6v-flash |
| zai/glm-4.7 |