
synaptic-memory

Brain-inspired knowledge graph: spreading activation, Hebbian learning, memory consolidation.


README

Synaptic Memory

Brain-inspired knowledge graph for LLM agents and multi-agent systems.

Agents automatically structure their operational data — tool calls, decisions, outcomes, lessons — into an auto-constructed ontology, enabling self-retrieval and reasoning over past experiences. Library + MCP server.



Why

LLM agents don't remember. They repeat the same mistakes, fail to leverage past successes, and can't access accumulated team knowledge.

Traditional RAG stops at "chunk documents and search by vector." But agents need more than document retrieval — they need structured experience:

  • "What decision did I make last time in this situation, and what was the outcome?"
  • "Has this pattern failed before? Why?"
  • "What rules should I follow when using this tool?"

Synaptic Memory borrows the answer from how the brain works.


Differentiators

| Feature | Synaptic Memory | Cognee | Mem0 | LightRAG |
|---|---|---|---|---|
| Agent experience learning | Hebbian co-activation | - | - | - |
| Memory consolidation (4-tier) | L0 → L1 → L2 → L3 | - | Partial | - |
| Auto-ontology construction | Rules + LLM + Embedding | LLM only | - | LLM only |
| Multi-axis ranking | relevance × importance × recency × vitality × context | - | - | - |
| Zero-dep core | Pure Python | - | - | - |
| MCP server | 16 tools | - | - | - |
| Korean optimization | FTS + synonym tuning | - | - | - |

Benchmarks (FTS only, no embedding)

| Dataset | Corpus | MRR | nDCG@10 | R@10 |
|---|---|---|---|---|
| Allganize RAG-Eval (Finance/Medical/Legal) | 300 | 0.793 | 0.810 | 0.870 |
| HotPotQA-24 (multi-hop, Cognee comparison) | 226 | 0.754 | 0.636 | 0.729 |
| AutoRAGRetrieval (enterprise) | 720 | 0.639 | 0.677 | 0.800 |
| KLUE-MRC (Korean QA) | 500 | 0.607 | 0.643 | 0.760 |

Design Philosophy — Four Principles from the Brain

1. Spreading Activation — Associative Search

When the brain hears "deploy," it co-activates CI/CD, rollback, incidents, and monitoring.

Search: "deployment"
  → FTS match: [CI/CD pipeline, deployment automation]
  → Neighbor activation: [rollback strategy, canary deployment, incident response rules]
  → Resonance ranking: relevance × importance × recency × vitality × context
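The trace above can be sketched in plain Python. This is a toy illustration of one-hop spreading activation, not the library's internals; the node names and edge weights are made up:

```python
# Toy spreading-activation sketch: seed nodes come from a full-text match,
# then activation spreads one hop along weighted edges.
# All node names and edge weights here are illustrative.

edges = {  # node -> [(neighbor, synaptic weight)]
    "CI/CD pipeline": [("rollback strategy", 0.8), ("canary deployment", 0.6)],
    "deployment automation": [("incident response rules", 0.5)],
}

def spread(seeds: dict[str, float]) -> dict[str, float]:
    """Propagate each seed's score to its neighbors, damped by edge weight."""
    activation = dict(seeds)
    for node, score in seeds.items():
        for neighbor, weight in edges.get(node, []):
            boost = score * weight
            activation[neighbor] = max(activation.get(neighbor, 0.0), boost)
    return activation

# FTS matched two nodes directly; their neighbors get derived activation.
result = spread({"CI/CD pipeline": 1.0, "deployment automation": 0.9})
```

A node never searched for directly (e.g. "rollback strategy") still surfaces, which is the point of associative retrieval.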

2. Hebbian Learning — "What fires together, wires together"

Agent uses [PostgreSQL selection] + [vector search implementation] together → success
  → edge weight += 0.1 → co-activated in future searches

Agent uses [skip tests] + [production deploy] together → failure
  → edge weight -= 0.15 → failure experience surfaces first
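A minimal sketch of this update rule. The +0.1 / -0.15 deltas come from the example above; the initial weight of 0.5 and the clamp to [0, 1] are assumptions for illustration, not the library's actual parameters:

```python
# Hebbian co-activation sketch: edges between nodes used together in the
# same session strengthen on success and weaken on failure.
# Initial weight 0.5 and the [0.0, 1.0] clamp are assumed values.

weights: dict[tuple, float] = {}

def coactivate(a: str, b: str, success: bool) -> float:
    edge = tuple(sorted((a, b)))           # undirected edge key
    delta = 0.1 if success else -0.15      # deltas from the README example
    w = weights.get(edge, 0.5) + delta
    weights[edge] = min(1.0, max(0.0, w))  # keep weight in [0, 1]
    return weights[edge]

coactivate("PostgreSQL selection", "vector search implementation", success=True)
coactivate("skip tests", "production deploy", success=False)
```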

3. Memory Consolidation — Keep only what matters

L0 (Raw, 72h)      ← All records. Deleted after 72h if not accessed.
L1 (Sprint, 90d)   ← 3+ accesses. Retained for 90 days.
L2 (Monthly, 365d) ← 10+ accesses. Retained for 1 year.
L3 (Permanent)     ← 80%+ success rate. Permanently preserved. (Demoted below 60%)
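The cascade can be pictured as a single tiering function using the thresholds from the table. The function name, signature, and precedence of the checks are hypothetical, not the library's API:

```python
# Consolidation-cascade sketch using the thresholds from the tier table.
# assign_tier() is a hypothetical helper, not part of the synaptic API;
# checking success rate before access count is an assumed precedence.

def assign_tier(access_count: int, success_rate: float, hours_old: float) -> str:
    if success_rate >= 0.80:
        return "L3"        # permanent (demoted again if rate drops below 60%)
    if access_count >= 10:
        return "L2"        # retained for 1 year
    if access_count >= 3:
        return "L1"        # retained for 90 days
    if hours_old > 72:
        return "expired"   # raw records die after 72h without access
    return "L0"
```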

4. Auto-Ontology — Structure knowledge for future retrieval

"When will an agent search for this knowledge?" — metadata is auto-generated based on predicted future queries:

await graph.add("Payment Outage Postmortem", "PG API timeout caused...")

# LLM auto-generates:
# kind: LESSON
# tags: ["payment", "PG", "timeout", "circuit-breaker"]
# search_keywords: ["payment failure cause", "PG outage response", "API timeout fix"]
# search_scenarios: ["searching past cases when payment system fails"]
# relations to existing nodes: --[LEARNED_FROM]--> "deployment decision"

Three-tier auto-construction:

| Mode | Configuration | Cost | Details |
|---|---|---|---|
| Rule-based | RuleBasedClassifier() | Free | Keyword matching, zero-dep |
| + Embedding | + RuleBasedRelationDetector() + embedder | Free (local) | Cosine similarity auto-linking |
| + LLM | LLMClassifier() + LLMRelationDetector() | Local/API | Search keyword prediction, semantic relation extraction |
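The rule-based tier amounts to keyword matching with zero dependencies. A toy sketch; the keyword lists and the fallback kind here are illustrative, not the real RuleBasedClassifier's rules:

```python
# Toy rule-based classifier: map keywords found in the text to a node kind.
# Keyword lists and the CONCEPT fallback are invented for illustration.

KIND_KEYWORDS = {
    "RULE":     ["policy", "must", "refund", "allowed"],
    "LESSON":   ["postmortem", "outage", "root cause"],
    "DECISION": ["chose", "decided", "alternatives"],
}

def classify(title: str, content: str) -> str:
    text = f"{title} {content}".lower()
    for kind, keywords in KIND_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return kind
    return "CONCEPT"  # assumed fallback kind

kind = classify("Refund Policy", "Refunds available within 7 days...")
```

Trading recall for zero cost: anything the rules miss falls through to the fallback kind, which is why the embedding and LLM tiers exist.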

Install

pip install synaptic-memory                      # Core (zero deps)
pip install synaptic-memory[embedding]           # + auto-embedding (Ollama/vLLM)
pip install synaptic-memory[sqlite]              # + SQLite FTS5
pip install synaptic-memory[scale]               # Neo4j + Qdrant + MinIO + embedding
pip install synaptic-memory[mcp]                 # + MCP server
pip install synaptic-memory[all]                 # Everything

Quick Start

1. In-memory — zero-dep, instant start

from synaptic import SynapticGraph, ActivityTracker

async def main():
    graph = SynapticGraph.memory()
    tracker = ActivityTracker(graph)

    # Search past experiences (intent auto-inferred)
    result = await graph.agent_search("DB migration failure")

    # Record a decision
    session = await tracker.start_session(agent_id="my-agent")
    decision = await tracker.record_decision(
        session.id,
        title="Choose PostgreSQL",
        rationale="Need vector search + ACID",
        alternatives=["MongoDB", "SQLite"],
    )

    # Record outcome โ†’ auto Hebbian learning
    await tracker.record_outcome(
        decision.id,
        title="Migration succeeded",
        content="Achieved zero downtime",
        success=True,
    )


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

2. SQLite — lightweight production

from synaptic import SynapticGraph

graph = SynapticGraph.sqlite("knowledge.db")
await graph.backend.connect()

# RuleBasedClassifier + RelationDetector + Ontology included automatically.
# Just add content โ€” kind and relations are auto-classified.
await graph.add("Refund Policy", "Refunds available within 7 days...")  # → kind=RULE (auto)

3. Full — LLM classification + embedding + relation detection

from synaptic import SynapticGraph
from synaptic.backends.sqlite import SQLiteBackend
from synaptic.extensions.llm_provider import OllamaLLMProvider

graph = SynapticGraph.full(
    SQLiteBackend("knowledge.db"),
    llm=OllamaLLMProvider(model="qwen3:0.6b"),
    embed_api_base="http://localhost:8080/v1",
    embed_model="BAAI/bge-m3",
)
await graph.backend.connect()

# LLM auto-generates: kind classification + tags + search keywords + search scenarios
# Embeddings include search_keywords → improved vector search accuracy
# Semantic relations auto-detected against existing nodes (DEPENDS_ON, LEARNED_FROM, etc.)
node = await graph.add("Payment Outage Postmortem", "PG API timeout caused...")

4. Custom — manual composition

Instead of factory methods, compose each component directly:

from synaptic import SynapticGraph, OpenAIEmbeddingProvider
from synaptic.backends.sqlite import SQLiteBackend

graph = SynapticGraph(
    SQLiteBackend("knowledge.db"),
    embedder=OpenAIEmbeddingProvider("http://gpu-server:8080/v1", model="BAAI/bge-m3"),
)
await graph.backend.connect()

# Auto: title + content → vector generation → stored
await graph.add("Deployment Strategy", "Blue-green deployment for zero downtime")
# Auto: query → vector generation → FTS + vector hybrid search
result = await graph.search("deployment approach")

5. Kuzu — Embedded Property Graph

from synaptic import SynapticGraph

graph = SynapticGraph.kuzu("knowledge.kuzu")
await graph.backend.connect()
await graph.add("Deploy Policy", "Auto-deploy after PR merge")

Kuzu runs in-process (like SQLite for graphs) — native openCypher, FTS and vector indexes via bundled extensions, no server required. MIT licensed.

6. Scale — CompositeBackend

from synaptic import SynapticGraph
from synaptic.backends.composite import CompositeBackend
from synaptic.backends.kuzu import KuzuBackend
from synaptic.backends.qdrant import QdrantBackend
from synaptic.backends.minio_store import MinIOBackend

composite = CompositeBackend(
    graph=KuzuBackend("knowledge.kuzu"),
    vector=QdrantBackend("http://localhost:6333"),
    blob=MinIOBackend("localhost:9000", access_key="minio", secret_key="secret"),
)
await composite.connect()

graph = SynapticGraph.full(composite, embed_api_base="http://gpu-server:8080/v1")

# Internal routing:
# - embedding → Qdrant, content > 100KB → MinIO, everything else → Kuzu
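That routing rule is easy to sketch. A simplified illustration of size-based dispatch; the function and return labels are hypothetical, and the real CompositeBackend's dispatch logic is internal to the library:

```python
# Simplified composite-store routing sketch: vectors go to the vector DB,
# oversized payloads to blob storage, everything else to the graph.
# The 100 KB threshold mirrors the comment above; route() is hypothetical.

BLOB_THRESHOLD = 100 * 1024  # bytes

def route(payload: bytes, is_embedding: bool = False) -> str:
    if is_embedding:
        return "qdrant"   # vector index handles similarity search
    if len(payload) > BLOB_THRESHOLD:
        return "minio"    # blob store for oversized content
    return "kuzu"         # property graph holds nodes, edges, metadata
```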

Architecture

SynapticGraph (Facade)
  │
  ├── Auto-Ontology ───── RuleBasedClassifier / LLMClassifier
  │                       RuleBasedRelationDetector / LLMRelationDetector
  ├── OntologyRegistry ── Type hierarchy + property inheritance + constraint validation
  ├── ActivityTracker ─── Session / tool call / decision / outcome capture
  ├── AgentSearch ─────── 6 intent-based search strategies
  ├── HybridSearch ────── FTS + vector → synonym → LLM rewrite
  ├── ResonanceScorer ─── 5-axis resonance (relevance × importance × recency × vitality × context)
  ├── HebbianEngine ───── Co-activation reinforcement / weakening
  ├── ConsolidationCascade  L0→L3 lifecycle
  ├── EmbeddingProvider ─ Auto vector generation (Ollama/vLLM/OpenAI)
  ├── LLMProvider ─────── LLM for ontology construction (Ollama/OpenAI)
  └── Exporters ───────── Markdown, JSON
       │
  StorageBackend (Protocol)
       │
  ┌────┼──────────┬───────────────┬──────────────┐
  │    │          │               │              │
Memory SQLite  PostgreSQL      Kuzu       CompositeBackend
(dev)  (FTS5)  (pgvector)   (embedded    (Kuzu+Qdrant+MinIO)
                             Cypher)

5-axis Resonance Scoring

Score = 0.55 × relevance     Search match score [0,1]
      + 0.15 × importance    (success - failure) / access_count [0,1]
      + 0.20 × recency       exp(-0.05 × days_since_update) [0,1]
      + 0.10 × vitality      Periodic decay ×0.95 [0,1]
      + (context weight) × context  Session tag Jaccard similarity [0,1]

Weights vary by intent. past_failures emphasizes importance; context_explore emphasizes context. Same query, different intent, different results.
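Plugged into code, the formula is a direct transcription of the weights above. The context weight is left as a parameter since the README does not fix it, and the per-intent weight presets are not spelled out here:

```python
import math

# 5-axis resonance score, transcribed from the formula above.
# ctx_weight is a free parameter; intent presets would rescale the weights.

def resonance(relevance: float, successes: int, failures: int,
              access_count: int, days_since_update: float,
              vitality: float, context: float, ctx_weight: float = 0.0) -> float:
    importance = (successes - failures) / access_count if access_count else 0.0
    recency = math.exp(-0.05 * days_since_update)
    return (0.55 * relevance
            + 0.15 * importance
            + 0.20 * recency
            + 0.10 * vitality
            + ctx_weight * context)

# A fresh, highly relevant node with a perfect track record:
score = resonance(relevance=0.9, successes=5, failures=0, access_count=5,
                  days_since_update=0.0, vitality=1.0, context=0.0)
```

With these inputs the score is 0.55·0.9 + 0.15·1 + 0.20·1 + 0.10·1 = 0.945; a node untouched for months would see its recency term decay toward zero.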


Ontology

from synaptic import OntologyRegistry, TypeDef, PropertyDef, build_agent_ontology

ontology = build_agent_ontology()

# Add custom type
ontology.register_type(TypeDef(
    name="incident",
    parent="agent_activity",
    description="Production incident",
    properties=[
        PropertyDef(name="severity", value_type="str", required=True),
    ],
))

graph = SynapticGraph(backend, ontology=ontology)
# โ†’ Auto-validated on graph.add() and graph.link()

Default Ontology

knowledge                          agent_activity
  ├── concept                        ├── session
  ├── entity                         ├── tool_call
  ├── lesson                         ├── observation
  ├── decision                       ├── reasoning
  ├── rule                           └── outcome
  └── artifact

Backends

| Backend | Graph Traversal | Vector Search | Scale | Use Case |
|---|---|---|---|---|
| MemoryBackend | Python BFS | cosine | ~10K | Testing |
| SQLiteBackend | CTE recursive | - | ~100K | Embedded (no graph) |
| KuzuBackend | Cypher (embedded) | HNSW (optional) | ~10M | Embedded graph (recommended) |
| PostgreSQLBackend | CTE recursive | pgvector HNSW | ~1M | Production (single DB stack) |
| QdrantBackend | - | HNSW + quantization | ~10B | Vector-only |
| MinIOBackend | - | - | ~10TB | Blob (S3-compatible) |
| CompositeBackend | Kuzu | Qdrant | Unlimited | Unified router |

MCP Server — 16 Tools

synaptic-mcp                                              # stdio (Claude Code)
synaptic-mcp --db ./knowledge.db                         # SQLite
synaptic-mcp --embed-url http://localhost:8080/v1        # + auto-embedding

Knowledge (7) — knowledge_search, knowledge_add, knowledge_link, knowledge_reinforce, knowledge_stats, knowledge_export, knowledge_consolidate

Agent Workflow (4) — agent_start_session, agent_log_action, agent_record_decision, agent_record_outcome

Semantic Search (3) — agent_find_similar, agent_get_reasoning_chain, agent_explore_context

Ontology (2) — ontology_define_type, ontology_query_schema

Dev

uv sync --extra dev --extra sqlite --extra neo4j --extra qdrant --extra minio
uv run pytest -v                              # 266+ tests
uv run pytest tests/benchmark/ -v -s          # Benchmarks (8 datasets + ablation)
uv run ruff check --fix && uv run ruff format

License

MIT

Release History

v0.16.0 (urgency: High, 2026-04-17)

Synaptic Memory v0.16.0 — release notes. Release date: 2026-04-17 · License: MIT · Install: `pip install "synaptic-memory[sqlite,korean,vector,mcp]"`

TL;DR: v0.16.0 is the cleanup release that makes Synaptic's benchmark numbers match what the SDK actually does. Five changes:

1. `graph.search()` now defaults to `engine="evidence"` — the hybrid BM25 + HNSW + PPR + MMR pipeline that the MCP tool path already used. Legacy HybridSearch is deprecated and will be re

v0.13.0 (urgency: High, 2026-04-12)

Highlights: Major improvements to multi-turn agent search quality on structured data.

🔥 Agent benchmark results:

| Dataset | Before | After |
|---------|--------|-------|
| X2BEE Hard agent | 1/19 (5%) | 17/19 (89%) |
| assort Hard agent | 1/15 (7%) | 12/15 (80%) |
| KRRA Hard MRR | 0.808 | 1.000 (15/15) |

Public benchmarks (EvidenceSearch + embed + reranker):

| Dataset | Before | After |
|---------|--------|-------|
| HotPotQA-24 | 0.727 | 0.964 |
| Allganize RA

v0.9.0 (urgency: Medium, 2026-03-23)

Synaptic Memory v0.9.0 — Core Improvements:

- BM25 hybrid scoring — corpus-size adaptive (large: BM25 80%, small: substring 70%)
- Supersede detection — same-title nodes prefer newest (by updated_at)
- Auto-chunking API — `add_document(chunk_size=1000)` with sentence-boundary splitting + PART_OF edges
- PPR edge type weights — CAUSED 1.0 > RELATED 0.4 (reduces noise from S2 ablation -14~-32%)
- Kind soft boost — hard filter → 1.5x score boost (MRR +9%)
- Phrase node fi


Similar Packages

  • memora (v0.2.27): Give your AI agents persistent memory.
  • hybrid-orchestrator (master@2026-04-21): 🤖 Implement hybrid human-AI orchestration patterns in Python to coordinate agents, manage sessions, and enable smooth AI-human handoffs.
  • zotero-mcp-lite (main@2026-04-21): 🚀 Run a high-performance MCP server for Zotero, enabling customizable workflows without cloud dependency or API keys.
  • Cognio (main@2026-04-21): 🧠 Enhance AI conversations with Cognio, a persistent memory server that retains context and enables meaningful semantic search across sessions.
  • mcp-yandex-tracker (main@2026-04-21): Manage and automate tasks in Yandex Tracker using a robust MCP integration for efficient issue tracking and project control.