memory
The Knowledge Graph Memory Server is a custom MCP server implementation designed to provide persistent memory using a local knowledge graph, allowing Claude to remember user information across sessions.
Knowledge Graph Memory Server
A custom MCP server implementation based on @modelcontextprotocol, designed to provide persistent memory using a local knowledge graph. This allows Claude to remember user information across sessions and enables efficient, recursive retrieval of project structures and related knowledge. The server supports atomic observation storage, project hash tagging, and robust error handling for seamless integration with Claude and other MCP-compatible clients.
Core Concepts
Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "person", "organization", "event")
- A list of observations
Example:
{
"name": "John_Smith",
"entityType": "person",
"observations": ["Speaks fluent Spanish"]
}
Relations
Relations define directed connections between entities. They are always stored in active voice and describe how entities interact or relate to each other.
Example:
{
"from": "John_Smith",
"to": "Anthropic",
"relationType": "works_at"
}
Observations
Observations are discrete pieces of information about an entity. They are:
- Stored as strings
- Attached to specific entities
- Added or removed independently
- Atomic by convention (one fact per observation)
Example:
{
"entityName": "John_Smith",
"observations": [
"Speaks fluent Spanish",
"Graduated in 2019",
"Prefers morning meetings"
]
}
System Architecture
The Memory MCP Server is built with a modular architecture consisting of several key components that work together to provide a robust knowledge graph-based memory system.
Core Components
MCPServer
- Server implementation using @modelcontextprotocol/sdk
- Exposes tools for interacting with the knowledge graph
- Handles tool requests and routes them to the KnowledgeGraphManager
- Implements robust error handling for tool calls
KnowledgeGraphManager
- Core class for managing the knowledge graph data structure
- Handles CRUD operations for entities and relations
- Implements file locking mechanism to prevent race conditions
- Provides robust error handling for file operations
Mutex
- Simple mutex implementation for file operations
- Prevents concurrent access to the memory file
- Uses a queue system for waiting operations
- Provides a withLock helper method for executing functions with the mutex
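As a rough sketch, a queue-based mutex with a withLock helper could look like the following TypeScript (illustrative only; the server's actual implementation may differ in detail):

class Mutex {
  private locked = false;
  private waiting: Array<() => void> = [];

  // Acquire the lock, waiting in a queue if another operation holds it
  async acquire(): Promise<void> {
    if (!this.locked) {
      this.locked = true;
      return;
    }
    await new Promise<void>((resolve) => this.waiting.push(resolve));
  }

  // Release the lock and hand it to the next waiting operation, if any
  release(): void {
    const next = this.waiting.shift();
    if (next) {
      next();
    } else {
      this.locked = false;
    }
  }

  // Run a function while holding the lock, releasing it even on error
  async withLock<T>(fn: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await fn();
    } finally {
      this.release();
    }
  }
}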
FileStorage
- Manages persistent storage of the knowledge graph
- Uses line-delimited JSON format for efficient updates
- Implements read/write operations with error handling
- Supports atomic file operations to prevent data corruption
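For illustration, reading and writing the graph as line-delimited JSON might look roughly like this, using the KnowledgeGraph, EntityInterface, and RelationInterface types shown in the Data Structure section below (a sketch; the server's real record format and function names may differ):

import { promises as fs } from "fs";

// Each line is one JSON record tagged with its kind ("entity" or "relation")
async function saveGraph(filePath: string, graph: KnowledgeGraph): Promise<void> {
  const lines = [
    ...graph.entities.map((e) => JSON.stringify({ type: "entity", ...e })),
    ...graph.relations.map((r) => JSON.stringify({ type: "relation", ...r })),
  ];
  await fs.writeFile(filePath, lines.join("\n"));
}

async function loadGraph(filePath: string): Promise<KnowledgeGraph> {
  const graph: KnowledgeGraph = { entities: [], relations: [] };
  let data: string;
  try {
    data = await fs.readFile(filePath, "utf-8");
  } catch {
    return graph; // A missing file simply means an empty graph
  }
  for (const line of data.split("\n").filter((l) => l.trim() !== "")) {
    const record = JSON.parse(line);
    if (record.type === "entity") {
      const { type, ...entity } = record;
      graph.entities.push(entity as EntityInterface);
    } else if (record.type === "relation") {
      const { type, ...relation } = record;
      graph.relations.push(relation as RelationInterface);
    }
  }
  return graph;
}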
EnvironmentConfig
- Handles environment variable configuration
- Supports both MEMORY_FILE and MEMORY_FILE_PATH variables
- Provides fallback to default memory file path
- Creates necessary directories for the memory file
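A simplified version of this resolution logic could look like the following (a sketch assuming a CommonJS build where __dirname points at the server directory; the actual code may differ):

import path from "path";
import { promises as fs } from "fs";

// Resolve the memory file path from MEMORY_FILE or MEMORY_FILE_PATH,
// falling back to memory.json in the server directory
async function resolveMemoryFilePath(): Promise<string> {
  const configured = process.env.MEMORY_FILE || process.env.MEMORY_FILE_PATH;
  const filePath = configured
    ? path.resolve(configured)
    : path.join(__dirname, "memory.json");
  await fs.mkdir(path.dirname(filePath), { recursive: true });
  return filePath;
}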
ProjectHashSystem
- Generates unique hash identifiers for project entities
- Tags entities and relations with project hashes
- Enables efficient retrieval of project-related entities and relations
- Supports the getProjectByHash API for direct project access
RecursiveTraversal
- Implements graph traversal algorithms for related entities
- Supports configurable recursion depth to prevent infinite loops
- Enhances openNodes and searchNodes APIs with recursive retrieval
- Provides complete project structures with a single query
Component Relationships
┌─────────────────┐ ┌───────────────────────┐
│ │ │ │
│ MCPServer │─────▶│ KnowledgeGraphManager │
│ │ │ │
└───────────┬─────┘ └───────────┬───────────┘
│ │
│ │
│ ▼
┌───────────▼─────────┐ ┌───────────────────────┐
│ │ │ │
│ ProjectHashSystem │◀────▶│ FileStorage │
│ │ │ │
└───────────┬─────────┘ └───────────┬───────────┘
│ │
│ │
▼ ▼
┌─────────────────────┐ ┌───────────────────────┐
│ │ │ │
│RecursiveTraversal │ │ Mutex │
│ │ │ │
└─────────────────────┘ └───────────────────────┘
Data Structure
The Memory MCP Server uses a knowledge graph data structure composed of entities and relations, enhanced with project hash tagging for efficient retrieval.
Enhanced Entity Interface
Entities are the primary nodes in the knowledge graph, with the following structure:
interface EntityInterface {
name: string; // Unique identifier
entityType: string; // Type classification
observations: string[]; // Associated observations
projectHash?: string; // Optional project identifier
}
Enhanced Relation Interface
Relations define directed connections between entities, with the following structure:
interface RelationInterface {
from: string; // Source entity name
to: string; // Target entity name
relationType: string; // Relationship type in active voice
projectHash?: string; // Optional project identifier
}
Knowledge Graph Structure
The knowledge graph is stored as a collection of entities and relations, with project hash tagging enabling efficient filtering and retrieval of related data.
interface KnowledgeGraph {
entities: EntityInterface[];
relations: RelationInterface[];
}
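For example, a small graph containing one Project entity and one related person might look like this (all names and hash values are illustrative):

{
  "entities": [
    {
      "name": "Website_Redesign",
      "entityType": "Project",
      "observations": ["Kickoff planned for Q3"],
      "projectHash": "a1b2c3d4"
    },
    {
      "name": "John_Smith",
      "entityType": "person",
      "observations": ["Speaks fluent Spanish"],
      "projectHash": "a1b2c3d4"
    }
  ],
  "relations": [
    {
      "from": "John_Smith",
      "to": "Website_Redesign",
      "relationType": "works_on",
      "projectHash": "a1b2c3d4"
    }
  ]
}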
Logic Flow
The Memory MCP Server implements several key workflows for managing the knowledge graph:
API Request Flow
- Client sends a tool request to the MCPServer
- MCPServer validates the request and routes it to the appropriate handler
- Handler calls the KnowledgeGraphManager to perform the requested operation
- KnowledgeGraphManager acquires a lock using the Mutex
- KnowledgeGraphManager performs the operation on the knowledge graph
- Changes are persisted to storage using FileStorage
- Lock is released and response is returned to the client
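Put together, a single tool handler roughly follows this pattern (a sketch that reuses the illustrative Mutex and storage helpers from earlier; mutex, memoryFilePath, loadGraph, and saveGraph are assumed to exist and this is not the server's literal code):

// Hypothetical handler: perform a graph operation under the mutex and persist the result
async function handleCreateEntities(args: { entities: EntityInterface[] }) {
  return mutex.withLock(async () => {
    const graph = await loadGraph(memoryFilePath);            // load current state
    const created = args.entities.filter(
      (e) => !graph.entities.some((x) => x.name === e.name)   // ignore existing names
    );
    graph.entities.push(...created);
    await saveGraph(memoryFilePath, graph);                   // persist changes
    return created;                                           // returned to the client
  });
}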
Project Hash Tagging
- When a Project entity is created, a unique hash is generated
- The Project entity is tagged with this hash
- When relations are created involving a Project entity, they inherit the project hash
- This enables efficient filtering and retrieval of project-related entities and relations
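As an illustration, such a hash could be derived from the project name and creation time (a minimal sketch; the actual generateProjectHash may use different inputs or a different length):

import { createHash } from "crypto";

// Derive a short, stable identifier for a newly created Project entity
function generateProjectHash(projectName: string): string {
  return createHash("sha256")
    .update(`${projectName}:${Date.now()}`)
    .digest("hex")
    .slice(0, 16);
}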
Recursive Traversal
- Client requests entities using openNodes or searchNodes with recursive=true
- Server retrieves the requested entities
- If any entity has a projectHash, the server retrieves all entities and relations with the same project hash
- If no project hash is present, the server falls back to recursive graph traversal up to the specified depth (default: 5)
- All related entities and relations are returned in a single response
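The fallback traversal can be pictured as a depth-limited breadth-first walk over relations (a sketch of the idea behind collectRelatedEntitiesAndRelations, not its exact code):

// Collect all entities reachable from the seed names within `depth` hops,
// along with the relations that connect them
function collectRelatedEntitiesAndRelations(
  graph: KnowledgeGraph,
  seedNames: string[],
  depth: number = 5
): KnowledgeGraph {
  const visited = new Set<string>(seedNames);
  let frontier = new Set<string>(seedNames);
  const relations = new Set<RelationInterface>();

  for (let level = 0; level < depth && frontier.size > 0; level++) {
    const next = new Set<string>();
    for (const rel of graph.relations) {
      if (frontier.has(rel.from) || frontier.has(rel.to)) {
        relations.add(rel);
        for (const name of [rel.from, rel.to]) {
          if (!visited.has(name)) {
            visited.add(name);
            next.add(name);
          }
        }
      }
    }
    frontier = next;
  }

  return {
    entities: graph.entities.filter((e) => visited.has(e.name)),
    relations: [...relations],
  };
}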
File Locking Mechanism
- Before performing operations on the knowledge graph, a lock is acquired
- If another operation has the lock, the request waits in a queue
- Once the lock is acquired, the operation is performed
- After the operation completes, the lock is released
- This prevents race conditions and ensures data consistency
API
Tools
- create_entities
  - Create multiple new entities in the knowledge graph
  - Input: entities (array of objects); each object contains:
    - name (string): Entity identifier
    - entityType (string): Type classification
    - observations (string[]): Associated observations
  - Ignores entities with existing names
- create_relations
  - Create multiple new relations between entities
  - Input: relations (array of objects); each object contains:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type in active voice
  - Skips duplicate relations
- add_observations
  - Add new observations to existing entities
  - Input: observations (array of objects); each object contains:
    - entityName (string): Target entity
    - contents (string[]): New observations to add
  - Returns added observations per entity
  - Fails if entity doesn't exist
- delete_entities
  - Remove entities and their relations
  - Input: entityNames (string[])
  - Cascading deletion of associated relations
  - Silent operation if entity doesn't exist
- delete_observations
  - Remove specific observations from entities
  - Input: deletions (array of objects); each object contains:
    - entityName (string): Target entity
    - observations (string[]): Observations to remove
  - Silent operation if observation doesn't exist
- delete_relations
  - Remove specific relations from the graph
  - Input: relations (array of objects); each object contains:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type
  - Silent operation if relation doesn't exist
- open_nodes
  - Open specific nodes in the knowledge graph by their names
  - Input:
    - names (string[]): An array of entity names to retrieve
    - recursive (boolean, optional): Whether to recursively include related entities and relationships (default: true)
    - depth (number, optional): Maximum recursion depth when recursive is true (default: 5)
  - Returns the requested entities and their relationships
  - Silently skips non-existent nodes
- search_nodes
  - Search for nodes in the knowledge graph based on a query
  - Input:
    - query (string): The search query to match against entity names, types, and observation content
    - recursive (boolean, optional): Whether to recursively include related entities and relationships (default: true)
    - depth (number, optional): Maximum recursion depth when recursive is true (default: 5)
  - Returns matching entities and their relationships
- get_project_by_hash
  - Retrieve a project and all its related entities and relations by project hash
  - Input: projectHash (string): The hash of the project to retrieve
  - Returns the project entity and all related entities and relations
- read_graph
  - Read the entire knowledge graph
  - No input required
  - Returns the complete graph structure with all entities and relations
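As an illustration, a client might call open_nodes with arguments like the following (the entity name is only an example):

{
  "names": ["Website_Redesign"],
  "recursive": true,
  "depth": 3
}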
Usage with Claude Desktop
Setup
git clone https://github.com/htooayelwinict/memory.git
cd memory
npm install
npm run build
Add this to your claude_desktop_config.json:
Node
{
"mcpServers": {
"memory": {
"command": "node",
"args": [
"your_memory_path/memory/dist/index.js"
],
"env": {
"MEMORY_FILE": "your_memory_path/memory/mcp_memory.json"
}
}
}
}
Node with custom setting
The server can be configured using the following environment variables:
{
"mcpServers": {
"memory": {
"command": "node",
"args": [
"your_memory_path/memory/dist/index.js"
],
"env": {
"MEMORY_FILE": "your_memory_path/memory/mcp_memory.json"
}
}
}
}
- MEMORY_FILE: Path to the memory storage JSON file (default: memory.json in the server directory)
Windsurf Global Rules
Memory Rules:
- When running any `read_graph` MCP tool, always provide a dummy parameter (e.g., `{ dummy: null }`) to prevent errors.
- Always include `projectHash` in every entity variable for proper tracking.
- Always retrieve and associate the user-provided project name together with its `projectHash`.
System Prompt
The prompt for utilizing memory depends on the use case. Changing the prompt will help the model determine the frequency and types of memories created.
Here is an example prompt for chat personalization. You could use this prompt in the "Custom Instructions" field of a Claude.ai Project.
Follow these steps for each interaction:
1. User Identification:
- You should assume that you are interacting with default_user
- If you have not identified default_user, proactively try to do so.
2. Memory Retrieval:
- Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph
- Always refer to your knowledge graph as your "memory"
3. Memory:
- While conversing with the user, be attentive to any new information that falls into these categories:
a) Basic Identity (age, gender, location, job title, education level, etc.)
b) Behaviors (interests, habits, etc.)
c) Preferences (communication style, preferred language, etc.)
d) Goals (goals, targets, aspirations, etc.)
e) Relationships (personal and professional relationships up to 3 degrees of separation)
4. Memory Update:
- If any new information was gathered during the interaction, update your memory as follows:
a) Create entities for recurring organizations, people, and significant events
b) Connect them to the current entities using relations
c) Store facts about them as observations
Project Hash Tagging
The Memory MCP Server has been enhanced with project hash tagging functionality to enable more efficient retrieval of project structures:
- Added projectHash field to Entity and Relation interfaces
- Implemented generateProjectHash method to create unique identifiers for projects
- Updated createEntities to automatically tag Project entities with a hash
- Updated createRelations to propagate project hashes to relations connected to project entities
- Added getProjectByHash method for direct retrieval of all entities and relations in a project
- Enhanced openNodes to use hash-based retrieval when a Project entity is requested
Recursive Retrieval
The Memory MCP Server now supports recursive retrieval of entity relationships, allowing users to get complete project structures with a single query:
- Added a recursive approach to both openNodes and searchNodes methods
- Implemented a collectRelatedEntitiesAndRelations helper method that traverses the graph to find all related entities
- Added optional parameters to control recursion:
  - recursive: Boolean flag to enable/disable recursive retrieval (default: true)
  - depth: Maximum recursion depth to prevent infinite loops (default: 5)
- Updated tool schemas and handlers to support these new parameters
- Enhanced logging to provide more detailed information about the recursive retrieval process
License
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.