Memory Class

The Memory class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.

Usage example

src/mastra/agents/test-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
});

To enable workingMemory on an agent, you must configure a storage provider on the main Mastra instance. See the Mastra class for more information.
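
As a minimal sketch, storage on the Mastra instance might be configured like this (the file path, store choice, and database URL are assumptions; any supported storage provider works):

src/mastra/index.ts
import { Mastra } from "@mastra/core/mastra";
import { LibSQLStore } from "@mastra/libsql";
import { agent } from "./agents/test-agent";

export const mastra = new Mastra({
  agents: { agent },
  // Storage configured here backs agent memory features such as workingMemory.
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
});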

Constructor parameters

storage?: MastraCompositeStore
Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.

vector?: MastraVector | false
Vector store for semantic search capabilities. Set to `false` to disable vector operations.

embedder?: EmbeddingModel<string> | EmbeddingModelV2<string>
Embedder instance for vector embeddings. Required when semantic recall is enabled.

options?: MemoryConfig
Memory configuration options.
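
Taken together, a hedged sketch combining all four constructor parameters (the LibSQL stores and the embedding model id mirror the examples further down and are not the only options):

import { Memory } from "@mastra/memory";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

const memory = new Memory({
  storage: new LibSQLStore({ id: "memory-storage", url: "file:./memory.db" }), // persists threads and messages
  vector: new LibSQLVector({ id: "memory-vector", url: "file:./vector.db" }),  // needed for semantic recall
  embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),    // needed when semanticRecall is enabled
  options: { lastMessages: 10, semanticRecall: true },
});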

Options parameters

lastMessages?: number | false
Number of most recent messages to retrieve. Set to `false` to disable. Defaults to `10`.

readOnly?: boolean
When `true`, prevents memory from saving new messages and provides working memory as read-only context (without the `updateWorkingMemory` tool). Useful for read-only operations like previews, internal routing agents, or sub-agents that should reference but not modify memory. Defaults to `false`.
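
For example, a routing or preview agent that should read shared memory without writing to it might look like this (a sketch; the agent name and instructions are placeholders):

import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const routingAgent = new Agent({
  name: "routing-agent",
  instructions: "You route requests; you may reference memory but must not change it.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      lastMessages: 10,
      readOnly: true, // memory is provided as context only; no new messages are saved
    },
  }),
});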

semanticRecall?: boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }
Enable semantic search in message history. Can be a boolean or an object with configuration options. When enabled, requires both a vector store and an embedder to be configured. Default `topK` is 4, default `messageRange` is `{ before: 1, after: 1 }`. Defaults to `false`.
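
A sketch of the object form with an asymmetric messageRange (the store, vector, and embedder choices here are assumptions; a vector store and an embedder must be configured for semantic recall to work):

import { Memory } from "@mastra/memory";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

const memory = new Memory({
  storage: new LibSQLStore({ id: "recall-storage", url: "file:./recall.db" }),
  vector: new LibSQLVector({ id: "recall-vector", url: "file:./recall-vector.db" }),
  embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  options: {
    semanticRecall: {
      topK: 4,                               // number of most similar past messages to retrieve
      messageRange: { before: 2, after: 1 }, // also include surrounding context for each match
      scope: "resource",                     // search across all of the user's threads, not only the current one
    },
  },
});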

workingMemory?: WorkingMemory
Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject<any> | JSONSchema7; scope?: 'thread' | 'resource' }`; set `{ enabled: false }` to disable. Defaults to `{ enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }`.
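
A sketch of working memory with a custom template and resource scope (the template content is illustrative, adapted from the default shown above):

import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      scope: "resource", // keep one working-memory record per user (resource) instead of per thread
      // A Markdown template guides which fields the agent maintains in working memory.
      template: `# User Information
- **First Name**:
- **Last Name**:
- **Preferences**:`,
    },
  },
});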

generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> }
Controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions. Defaults to `false`.
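
A sketch of the object form with a custom model and instructions (assumption: the model-router string used for the agent's model elsewhere in these examples is also accepted here; a function returning a model should likewise satisfy DynamicArgument):

import { Memory } from "@mastra/memory";

const memory = new Memory({
  options: {
    generateTitle: {
      model: "openai/gpt-5.1", // assumed model id; a smaller, cheaper model is usually sufficient for titles
      instructions: "Generate a short, descriptive thread title from the user's first message.",
    },
  },
});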

Returns

memory: Memory
A new Memory instance with the specified configuration.

Extended usage example

src/mastra/agents/test-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'test-agent-storage',
      url: "file:./working-memory.db",
    }),
    vector: new LibSQLVector({
      id: 'test-agent-vector',
      url: "file:./vector-memory.db",
    }),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: "resource",
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
});

PostgreSQL with index configuration

src/mastra/agents/pg-agent.ts
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { PgStore, PgVector } from "@mastra/pg";

export const agent = new Agent({
  name: "pg-agent",
  instructions: "You are an agent with optimized PostgreSQL memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new PgStore({
      id: 'pg-agent-storage',
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: 'pg-agent-vector',
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: "resource",
        indexConfig: {
          type: "hnsw", // Use HNSW for better performance
          metric: "dotproduct", // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
