Already inside Claude Code, Gemini CLI, or Aider? Paste one of these prompts and let the agent set it up for you:
**Claude Code:**

```
Install m3-memory for persistent memory. Run: pip install m3-memory
Then add {"mcpServers":{"memory":{"command":"mcp-memory"}}} to my
~/.claude/settings.json under "mcpServers". Make sure Ollama is running
with nomic-embed-text. Then use /mcp to verify the memory server loaded.
```

**Gemini CLI:**

```
Install m3-memory for persistent memory. Run: pip install m3-memory
Then add {"mcpServers":{"memory":{"command":"mcp-memory"}}} to my
~/.gemini/settings.json under "mcpServers". Make sure Ollama is running
with nomic-embed-text.
```

**Aider / any MCP agent:**

```
Install m3-memory for persistent memory. Run: pip install m3-memory
Then add {"mcpServers":{"memory":{"command":"mcp-memory"}}} to the
MCP config file for this agent. Make sure Ollama is running with
nomic-embed-text.
```
After install, test it:

```
Write a memory: "M3 Memory installed successfully on [today's date]"
Then search for: "M3 install"
```
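If you'd rather script the config change than paste a prompt, here is a minimal sketch. It assumes the settings-file locations named in the prompts above; the merge helper itself is illustrative, not part of m3-memory:

```python
import json
from pathlib import Path

def add_memory_server(settings_path: str) -> dict:
    """Merge the m3-memory server entry into an agent's MCP settings file.

    Creates the file (and the "mcpServers" key) if missing; any
    existing server entries are left untouched.
    """
    path = Path(settings_path).expanduser()
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("mcpServers", {})["memory"] = {"command": "mcp-memory"}
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

For Claude Code you would call `add_memory_server("~/.claude/settings.json")`, for Gemini CLI `add_memory_server("~/.gemini/settings.json")`.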
## The Problem
Every new session, your AI agent has amnesia. It forgets your project structure, your preferences, the decisions you made together yesterday. You paste the same context. You re-explain the same architecture. You correct the same mistakes.
When facts change (a port number, a dependency version), there's no mechanism to update what the agent "knows." Contradictions accumulate silently until something breaks.
## With M3 Memory

Your agents remember. Architecture decisions, server configs, debugging history, your preferences: all searchable, all persistent across sessions and devices.

When facts change, M3 detects the contradiction, updates the record, and preserves the full history. No stale data. No manual cleanup. You just talk to your agent, and it knows what it should know.
## The Moment It Clicks

| Session | You say | Agent response |
|---|---|---|
| Session 1 | "Our API server runs on port 8080." | Stored. |
| Session 2 (3 days later) | "We moved the API to port 9000." | Contradiction detected. Updated. History preserved. |
| Session 3 (a week later) | "What port is the API on?" | Without M3: "I don't have that information. Could you tell me?" <br> With M3: "Port 9000. (Updated from 8080; change recorded March 12th.)" |
No prompts. No manual logic. Automatic contradiction resolution with full history.
## Who This Is For

| For you if... | Not for you if... |
|---|---|
| You use Claude Code, Gemini CLI, Aider, or any MCP agent | You're building LangChain/CrewAI pipelines (see Mem0) |
| You want memory that survives across sessions and devices | You only need short-term chat context in a single session |
| You prefer local-first: no cloud, no API costs, works offline | |
| You care about privacy and data ownership | |
## Use Cases

- **Coding agents:** Remember architecture decisions, configs, and debugging steps across sessions
- **Personal assistants:** Persist user preferences, goals, and history long-term
- **Dev workflows:** Track environment changes, server configs, and fixes automatically
- **Multi-device setups:** Write a memory on your MacBook, pick it up on your Windows desktop; same knowledge graph, synced locally
## Features

### Hybrid Search

Three-stage pipeline: FTS5 keyword matching, semantic vector similarity, and MMR diversity re-ranking. Better recall than vector-only search, especially for technical content with exact names and versions.
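To make the third stage concrete, here is a toy sketch of MMR (maximal marginal relevance) re-ranking. The scoring functions and the `lam` trade-off value are illustrative assumptions, not M3's internals:

```python
def mmr_rerank(candidates, relevance, similarity, k=3, lam=0.7):
    """Pick k items, balancing relevance against redundancy.

    candidates: list of ids
    relevance:  id -> combined keyword+vector score (higher = better)
    similarity: (id, id) -> pairwise similarity in [0, 1]
    lam:        1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(c):
            # Penalize items too similar to anything already chosen.
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```

With a moderate `lam`, two near-duplicate top hits will not both make the cut; the second slot goes to a less redundant result.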
### Automatic Contradiction Detection

Write conflicting information and M3 detects it automatically. The outdated memory is superseded via bitemporal versioning, a supersedes relationship is recorded, and the full history is preserved.
### Bitemporal History

Query `as_of="2026-01-15"` to see exactly what your agent believed on any past date. Every change is tracked with both the time the fact was true and the time it was recorded.
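The supersede-plus-`as_of` behavior can be modeled with a small append-only store. This is a toy sketch; M3's actual schema and bitemporal bookkeeping are internal to the package:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Fact:
    text: str
    recorded: date          # transaction time: when we learned it
    superseded: date = None # set when a newer version replaces it

class BitemporalStore:
    """Append-only: updates never delete, they close the old version."""
    def __init__(self):
        self.facts = []

    def write(self, text, recorded, replaces=None):
        if replaces is not None:
            replaces.superseded = recorded  # old belief ends here
        fact = Fact(text, recorded)
        self.facts.append(fact)
        return fact

    def as_of(self, day):
        """What did the agent believe on `day`?"""
        return [f.text for f in self.facts
                if f.recorded <= day
                and (f.superseded is None or f.superseded > day)]
```

Replaying the port example above: write 8080 on March 1, supersede it with 9000 on March 12, and `as_of` returns the answer that was current on whichever date you ask about.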
### Knowledge Graph

Related facts are linked on write when cosine similarity exceeds 0.7. Eight relationship types (related, supports, contradicts, extends, supersedes, references, consolidates, message). Traverse up to 3 hops with `memory_graph`.
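A toy version of link-on-write plus bounded traversal; the 0.7 cutoff comes from the text above, while the in-memory storage layout is an assumption for illustration:

```python
from collections import deque

SIM_THRESHOLD = 0.7  # cosine cutoff described above

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class Graph:
    def __init__(self):
        self.vecs = {}   # node id -> embedding
        self.edges = {}  # node id -> set of linked ids

    def add(self, node, vec):
        """Link the new node to every existing node above the threshold."""
        self.edges[node] = set()
        for other, ovec in self.vecs.items():
            if cosine(vec, ovec) >= SIM_THRESHOLD:
                self.edges[node].add(other)
                self.edges[other].add(node)
        self.vecs[node] = vec

    def traverse(self, start, max_hops=3):
        """Breadth-first walk out to max_hops, like `memory_graph`."""
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for nxt in self.edges.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
        return seen
```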
### Cross-Device Sync

Bi-directional delta sync across SQLite, PostgreSQL, and ChromaDB. Write on your MacBook, continue on your Windows desktop. No cloud intermediary.
### GDPR Built-In

`gdpr_forget` (Article 17, Right to Erasure) and `gdpr_export` (Article 20, Data Portability) as native MCP tools.
### Fully Local + Private

Local embeddings via Ollama, LM Studio, or any OpenAI-compatible endpoint. Zero cloud calls. Zero API costs. Works completely offline.
### Self-Maintaining

Automatic decay, expiry purging, orphan pruning, deduplication, and retention enforcement. Old memories consolidate into LLM-generated summaries.
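Decay and expiry purging can be sketched like this. The half-life, score floor, and field names are assumptions for illustration; M3's real maintenance schedule is internal:

```python
HALF_LIFE_DAYS = 30.0  # assumed half-life, not M3's actual setting

def decayed_score(base_score, last_access, now):
    """Exponential decay: a memory untouched for one half-life
    counts half as much at retrieval time."""
    age_days = (now - last_access).total_seconds() / 86400
    return base_score * 0.5 ** (age_days / HALF_LIFE_DAYS)

def purge(memories, now, floor=0.05):
    """Drop expired memories and those decayed below the floor."""
    return [m for m in memories
            if (m.get("expires") is None or m["expires"] > now)
            and decayed_score(m["score"], m["last_access"], now) >= floor]
```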
## Core Tools

Start with three (`memory_write`, `memory_search`, and `memory_update`); they cover 90% of daily use. The rest is there when you need it.
| Tool | What it does |
|---|---|
| `memory_write` | Store a memory: facts, decisions, preferences, configs, observations |
| `memory_search` | Retrieve relevant memories using hybrid search |
| `memory_suggest` | Same as search, with a full score breakdown (vector, BM25, MMR) |
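Under MCP, every one of these tools is invoked with the standard JSON-RPC `tools/call` request. Here is a sketch of the wire shape; the argument keys are assumptions, so check the server's `tools/list` schema for the real parameter names:

```python
def tool_call_request(name, arguments, request_id=1):
    """Build a standard MCP `tools/call` JSON-RPC request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Store a fact, then retrieve it (argument keys are illustrative):
write_req = tool_call_request("memory_write",
                              {"content": "API runs on port 9000"})
search_req = tool_call_request("memory_search",
                               {"query": "API port"}, request_id=2)
```

MCP-aware agents build these requests for you; the sketch only shows what crosses the wire.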
Your AI should remember. Your data should stay yours.
M3 Memory: the foundation for agents that don't forget.
## Release History

### v2026.4.20 (4/17/2026, urgency: High)

#### Fixes

- Resolve SQLite deadlock in `memory_consolidate` via connection reuse in `memory_link_impl`
- Resolve database contention and sync hangs on large DBs (migration timeout bump, SAVEPOINT-wrapped pg_sync, VACUUM skip for >500MB DBs, WAL contention fix in `record_history`)

#### Features

- `memory_search` gains `recency_bias` (float) and `adaptive_k` (bool) params
- `conversation_search` auto-pairs user turns with adjacent assistant replies (0.85x score)
- `precedes` / `follows` added to `VA

### v2026.4.6 (4/10/2026, urgency: High)

#### What's New

M3 Memory is a local-first agentic memory layer for MCP agents: 25 tools, hybrid search (FTS5 + vector + MMR), contradiction detection, cross-device sync, GDPR-ready, 100% local.

#### Install

```bash
pip install m3-memory
```

Or clone for development:

```bash
git clone https://github.com/skynetcmd/m3-memory.git
cd m3-memory
pip install -r requirements.txt
```

#### Highlights

- **25 MCP tools**: write, search, link, graph, verify, sync, export, and more
- **Hybrid search**: FT