
createGraphRAGTool()

createGraphRAGTool() creates a tool that enhances RAG by building a graph of semantic relationships between documents. It uses the GraphRAG system under the hood to provide graph-based retrieval, finding relevant content through both direct similarity and connected relationships.

Usage Example

import { createGraphRAGTool } from "@mastra/rag";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";

const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  graphOptions: {
    dimension: 1536,
    threshold: 0.7,
    randomWalkSteps: 100,
    restartProb: 0.15,
  },
});
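
The tool is then registered on an agent. As a hedged sketch only (the Agent setup below is not part of this reference; the import path, constructor fields, and model string are assumptions based on typical Mastra agent configuration):

// Hedged sketch: wiring the tool into an agent. The constructor fields and
// the model string are assumptions, not taken from this page.
import { Agent } from "@mastra/core/agent";

const ragAgent = new Agent({
  name: "graph-rag-agent",
  instructions:
    "Use the graph tool to answer questions about relationships in the docs.",
  model: "openai/gpt-4o-mini", // hypothetical model choice
  tools: { graphTool },
});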

Parameters

note

Parameter requirements: Most fields can be set at creation with defaults. Some fields can be overridden at runtime via the request context or input. If a required field is missing at both creation and runtime, an error will be thrown. Note that model, id, and description can only be set at creation time.

id?: string
Custom ID for the tool. By default: 'GraphRAG {vectorStoreName} {indexName} Tool'. (Set at creation only.)

description?: string
Custom description for the tool. By default: 'Access and analyze relationships between information in the knowledge base to answer complex questions about connections and patterns.' (Set at creation only.)

vectorStoreName: string
Name of the vector store to query. (Can be set at creation or overridden at runtime.)

indexName: string
Name of the index within the vector store. (Can be set at creation or overridden at runtime.)

model: EmbeddingModel
Embedding model to use for vector search. (Set at creation only.)

enableFilter?: boolean = false
Enable filtering of results based on metadata; see the sketch after this parameter list. (Set at creation only, but will be automatically enabled if a filter is provided in the request context.)

includeSources?: boolean = true
Include the full retrieval objects in the results. (Can be set at creation or overridden at runtime.)

graphOptions?: GraphOptions = Default graph options
Configuration for the graph-based retrieval.

providerOptions?: Record<string, Record<string, any>>
Provider-specific options for the embedding model (e.g., outputDimensionality). Important: only works with AI SDK EmbeddingModelV2 models. For V1 models, configure options when creating the model itself.

vectorStore?: MastraVector | VectorStoreResolver
Direct vector store instance or a resolver function for dynamic selection. Use a function for multi-tenant applications where the vector store is selected based on request context. When provided, `vectorStoreName` becomes optional.
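
For illustration, a hedged sketch of turning on metadata filtering at creation time; the store and index names simply reuse the placeholders from the usage example above:

// Sketch: with enableFilter set to true, results can be filtered by metadata;
// the actual filter value is supplied later, e.g. via the request context.
const filteringGraphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  enableFilter: true,
});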

GraphOptions

dimension?: number = 1536
Dimension of the embedding vectors.

threshold?: number = 0.7
Similarity threshold for creating edges between nodes (0-1).

randomWalkSteps?: number = 100
Number of steps in the random walk for graph traversal. (Can be set at creation or overridden at runtime.)

restartProb?: number = 0.15
Probability of restarting the random walk from the query node. (Can be set at creation or overridden at runtime.)

Returns

The tool returns an object with:

relevantContext: string
Combined text from the most relevant document chunks, retrieved using graph-based ranking.

sources: QueryResult[]
Array of full retrieval result objects. Each object contains all information needed to reference the original document, chunk, and similarity score.

QueryResult object structure

{
  id: string; // Unique chunk/document identifier
  metadata: any; // All metadata fields (document ID, etc.)
  vector: number[]; // Embedding vector (if available)
  score: number; // Similarity score for this retrieval
  document: string; // Full chunk/document text (if available)
}
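
As an illustration only, a small TypeScript sketch of how a caller might consume a result with this shape; the type and helper names below are hypothetical and not part of the library:

// Hypothetical types mirroring the documented result shape.
type QueryResult = {
  id: string;
  metadata: any;
  vector: number[];
  score: number;
  document: string;
};

type GraphRAGToolResult = {
  relevantContext: string;
  sources: QueryResult[];
};

// Hypothetical helper: list which chunks backed the combined context.
function summarizeSources(result: GraphRAGToolResult): string {
  const lines = result.sources.map(
    (source) => `${source.id} (score: ${source.score.toFixed(3)})`,
  );
  return [`Context length: ${result.relevantContext.length} chars`, ...lines].join("\n");
}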

Default Tool Description

The default description focuses on:

  • Analyzing relationships between documents
  • Finding patterns and connections
  • Answering complex questions

Advanced Example

const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  graphOptions: {
    dimension: 1536,
    threshold: 0.8, // Higher similarity threshold
    randomWalkSteps: 200, // More exploration steps
    restartProb: 0.2, // Higher restart probability
  },
});

Example with Custom Description

const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: "openai/text-embedding-3-small",
  description:
    "Analyze document relationships to find complex patterns and connections in our company's historical data",
});

This example shows how to customize the tool description for a specific use case while maintaining its core purpose of relationship analysis.

Example: Using Request Context

const graphTool = createGraphRAGTool({
  vectorStoreName: "pinecone",
  indexName: "docs",
  model: "openai/text-embedding-3-small",
});

When using request context, provide the required parameters at execution time via the request context:

const requestContext = new RequestContext<{
  vectorStoreName: string;
  indexName: string;
  topK: number;
  filter: any;
  randomWalkSteps: number;
  restartProb: number;
}>();
requestContext.set("vectorStoreName", "my-store");
requestContext.set("indexName", "my-index");
requestContext.set("topK", 5);
requestContext.set("filter", { category: "docs" });
requestContext.set("randomWalkSteps", 100);
requestContext.set("restartProb", 0.15);

const response = await agent.generate(
  "Find documentation from the knowledge base.",
  {
    requestContext,
  },
);

For more information on request context, see the request context documentation.

Dynamic Vector Store for Multi-Tenant Applications

For multi-tenant applications where each tenant has isolated data, you can pass a resolver function instead of a static vector store:

import { createGraphRAGTool, VectorStoreResolver } from "@mastra/rag";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { PgVector } from "@mastra/pg";

const vectorStoreResolver: VectorStoreResolver = async ({ requestContext }) => {
  const tenantId = requestContext?.get("tenantId");

  return new PgVector({
    id: `pg-vector-${tenantId}`,
    connectionString: process.env.POSTGRES_CONNECTION_STRING!,
    schemaName: `tenant_${tenantId}`,
  });
};

const graphTool = createGraphRAGTool({
  indexName: "embeddings",
  model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
  vectorStore: vectorStoreResolver,
});
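
For illustration, a hedged sketch of how the tenantId read by the resolver above might be supplied at call time, reusing the RequestContext pattern shown earlier (the tenant value is hypothetical):

const requestContext = new RequestContext<{ tenantId: string }>();
requestContext.set("tenantId", "acme"); // hypothetical tenant identifier

const response = await agent.generate(
  "Find documentation from the knowledge base.",
  { requestContext },
);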

See createVectorQueryTool - Dynamic Vector Store for more details.
