Persistent, private memory for AI agents. Your agents forget everything between sessions; Remnic fixes that.
Remnic gives AI agents long-term memory that survives across conversations. Decisions, preferences, project context, personal details, past mistakes – everything your agent learns persists and resurfaces exactly when it's needed. All data stays on your machine as plain markdown files. No cloud services, no subscriptions, no sharing your data with third parties.
Engram is now Remnic. Canonical packages live under the @remnic/* scope: @remnic/core, @remnic/server, @remnic/cli. OpenClaw installs should use @remnic/plugin-openclaw. The legacy engram CLI name remains available as a forwarder during the rename window. Python users: remnic-hermes on PyPI.
Every bit of support is genuinely appreciated and helps keep this project alive and free for everyone.
If you're able to, sponsoring on GitHub or sending a Lightning donation to joshuaswarren@strike.me directly funds continued development, new integrations, and keeping Remnic open source.
If financial support isn't an option, you can still make a big difference – star the repo on GitHub, share it on social media, or recommend it to a friend or colleague. Word of mouth is how most people find Remnic, and it means the world.
Every AI agent session starts from zero. Your agent doesn't know your name, your projects, the decisions you've already made, or the bugs you already debugged. Whether it's a personal assistant, a coding agent, a research agent, or a multi-agent team – they all forget everything between conversations. You re-explain the same context over and over, and your agents still make the same mistakes.
OpenClaw's built-in memory works for simple cases, but it doesn't scale. It lacks semantic search, lifecycle management, entity tracking, and governance. Third-party memory services exist, but they cost money and require sending your private data to someone else's servers.
Remnic is an open-source, local-first memory system that replaces OpenClaw's default memory with something much more capable – while keeping everything on your machine. It watches your agent conversations, extracts durable knowledge, and injects the right memories back at the start of every session. Use OpenAI or a local LLM (Ollama, LM Studio, etc.) for extraction – your choice.
Remnic is the universal memory layer for AI agents. It works natively with OpenClaw, Claude Code, Codex CLI, Hermes Agent, and any MCP-compatible client (Replit, Cursor, etc.). When you tell any agent a preference, every agent knows it – they share one memory store.
| Without Remnic | With Remnic |
|---|---|
| Re-explain who you are and what you're working on | Agent recalls your identity, projects, and preferences automatically |
| Repeat context for every task | Entity knowledge surfaces people, projects, tools, and relationships on demand |
| Lose debugging and research context between sessions | Past root causes, dead ends, and findings are recalled – no repeated work |
| Manually restate preferences every session | Preferences persist across sessions, agents, and projects |
| Context-switching tax when resuming work | Session-start recall brings you back up to speed instantly |
| Default OpenClaw memory doesn't scale | Hybrid search, lifecycle management, namespaces, and governance |
| Third-party memory services cost money and share your data | Everything stays local – your filesystem, your rules |
openclaw plugins install @remnic/plugin-openclaw --pin

Tell any OpenClaw agent:
Install the @remnic/plugin-openclaw plugin and configure it as my memory system.
Your agent will run the install command, update openclaw.json, and restart the gateway for you.
git clone https://github.com/joshuaswarren/remnic.git \
~/.openclaw/extensions/remnic
cd ~/.openclaw/extensions/remnic
npm ci && npm run build

From npm (recommended):
npm install -g @remnic/cli # Installs `remnic` plus the legacy `engram` forwarder
remnic init # Create remnic.config.json
export OPENAI_API_KEY=sk-...
export REMNIC_AUTH_TOKEN=$(openssl rand -hex 32)
remnic daemon start # Start background server
remnic status # Verify it's running
remnic query "hello" --explain # Test query with tier breakdown

From source (requires Node.js 22.12+ and pnpm):
git clone https://github.com/joshuaswarren/remnic.git
cd remnic
pnpm install && pnpm run build
cd packages/remnic-cli && pnpm link --global # Makes `remnic` and `engram` available on PATH
cd ../..
remnic init # Create remnic.config.json
export OPENAI_API_KEY=sk-...
export REMNIC_AUTH_TOKEN=$(openssl rand -hex 32)
remnic daemon start # Start background server
remnic status # Verify it's running
remnic query "hello" --explain # Test query with tier breakdown

Note: remnic is the canonical CLI. The legacy engram binary is a compatibility forwarder to the same implementation. Running pnpm link --global from packages/remnic-cli/ (not the repo root) makes both names available on PATH. Alternatively, invoke directly: npx tsx packages/remnic-cli/src/index.ts <command>.
The standalone CLI provides 15+ commands for memory management, project onboarding, curation, diff-aware sync, dedup, connectors, spaces, and benchmarks – all without requiring OpenClaw. See the Platform Migration Guide for the full command reference.
Once the Remnic daemon is running, connect any supported agent:
remnic connectors install claude-code # Claude Code (hooks + MCP)
remnic connectors install codex-cli # Codex CLI (hooks + MCP)
remnic connectors install replit # Replit (MCP only)
pip install remnic-hermes # Hermes Agent (Python MemoryProvider)

Each connector generates a unique auth token, installs the appropriate plugin/hooks, and verifies the connection. All agents share the same memory store – tell one agent your preference, and every agent remembers it.
| Platform | Integration | Auto-recall | Auto-observe |
|---|---|---|---|
| OpenClaw | Memory slot plugin | Every session | Every response |
| Claude Code | Native hooks + MCP | Every prompt | Every tool use |
| Codex CLI | Native hooks + MCP | Every prompt | Every tool use |
| Hermes | Python MemoryProvider | Every LLM call | Every turn |
| Replit | MCP only | On demand | On demand |
After installation, add the Remnic bridge plugin to your openclaw.json:
Gateway model source: When modelSource is "gateway", Remnic routes all LLM calls (extraction, consolidation, reranking) through an OpenClaw agent persona's model chain instead of its own config. In this mode, extraction starts directly on the gatewayAgentId chain; localLlm* settings do not control primary extraction order. Define agent personas in openclaw.json → agents.list[], each with a primary model and a fallbacks[] array – Remnic tries each in order until one succeeds. This lets you build multi-provider fallback chains like Fireworks → local LLM → cloud OpenAI. See the Gateway Model Source guide for full setup.
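As a sketch, the try-each-in-order behavior described above might look like this (illustrative only: AgentPersona, callWithFallback, and tryModel are hypothetical names, not Remnic's actual internals):

```typescript
// Sketch of a primary-plus-fallbacks model chain. The persona shape loosely
// mirrors an agents.list[] entry; tryModel stands in for a real LLM call.
interface AgentPersona {
  id: string;
  primary: string;
  fallbacks: string[];
}

function callWithFallback(
  persona: AgentPersona,
  tryModel: (model: string) => string, // throws when a provider is down
): string {
  const chain = [persona.primary, ...persona.fallbacks];
  let lastError: unknown;
  for (const model of chain) {
    try {
      return tryModel(model); // first model that succeeds wins
    } catch (err) {
      lastError = err; // fall through to the next model in the chain
    }
  }
  throw new Error(`all models in chain failed: ${String(lastError)}`);
}
```

With a chain like Fireworks → local LLM → cloud OpenAI, a failure of the primary simply moves the call to the first fallback.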
Restart the gateway:
launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway # macOS
# or: systemctl restart openclaw-gateway # Linux

Start a conversation – Remnic begins learning immediately.
Note: This shows only the minimal config. Remnic has 60+ configuration options for search backends, capture modes, memory OS features, and more. See the full config reference for every setting.
Remnic scores every extracted fact locally (see src/importance.ts) and uses that score as a write gate. Facts whose level falls below extractionMinImportanceLevel are dropped before they ever hit disk, so turn-level chatter like "hi", "k", or heartbeat pings never become fact memories.
Default: "low" – only "trivial" content is dropped. Raise to "normal" or higher for a stricter gate.
{
"plugins": {
"entries": {
"openclaw-engram": {
"config": {
// Allowed values: "trivial" | "low" | "normal" | "high" | "critical"
"extractionMinImportanceLevel": "normal"
}
}
}
}
}

Category boosts still apply before the gate, so corrections, principles, preferences, and commitments stay above "normal" even when their raw text would otherwise score low. Every gated fact increments the importance_gated counter (grep metric:importance_gated in ~/.openclaw/logs/gateway.log), and the final extraction log line reports the gated count.
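A minimal sketch of such a write gate, assuming the level names from the config above (rawLevel, passesGate, and the scoring heuristic are illustrative, not the actual src/importance.ts logic):

```typescript
// Toy importance gate: score each fact, boost privileged categories, and
// drop anything below the configured minimum level before it hits disk.
type ImportanceLevel = "trivial" | "low" | "normal" | "high" | "critical";

const LEVEL_ORDER: ImportanceLevel[] = ["trivial", "low", "normal", "high", "critical"];

// Categories the docs say are boosted above the gate even when raw text scores low.
const BOOSTED_CATEGORIES = new Set(["correction", "principle", "preference", "commitment"]);

function rawLevel(text: string): ImportanceLevel {
  // Toy heuristic: very short chatter ("hi", "k") scores "trivial".
  if (text.trim().length <= 3) return "trivial";
  return text.length < 40 ? "low" : "normal";
}

function passesGate(
  text: string,
  category: string,
  minLevel: ImportanceLevel = "low", // extractionMinImportanceLevel default
): boolean {
  let level = rawLevel(text);
  // Category boost applies before the gate.
  if (BOOSTED_CATEGORIES.has(category) && LEVEL_ORDER.indexOf(level) < LEVEL_ORDER.indexOf("high")) {
    level = "high";
  }
  return LEVEL_ORDER.indexOf(level) >= LEVEL_ORDER.indexOf(minLevel);
}
```

With the default "low" gate, "k" is dropped as trivial chatter, but the same text tagged as a correction would survive because of the category boost.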
openclaw engram setup --json # Validates config, scaffolds directories
openclaw engram doctor --json # Health diagnostics with remediation hints
openclaw engram config-review --json # Opinionated config tuning recommendations

Start the Remnic server directly for the current shell session:
# Generate a token
export REMNIC_AUTH_TOKEN="$(openssl rand -base64 32)"
npx remnic-server --host 127.0.0.1 --port 4318 --auth-token "$REMNIC_AUTH_TOKEN"

If you want to use remnic daemon start, persist the token in remnic.config.json first. daemon start will hand off to launchd/systemd when a service is installed, and those service templates read server.authToken from config rather than inheriting your shell's exported token.
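For example, a remnic.config.json fragment persisting the token might look like this (only the server.authToken key is confirmed by the text above; treat the surrounding shape as a sketch):

```json
{
  "server": {
    "authToken": "replace-with-output-of-openssl-rand-hex-32"
  }
}
```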
The HTTP API path remains /engram/v1/... during the v1.x compatibility window.
Add to ~/.codex/config.toml:
[mcp_servers.remnic]
url = "http://127.0.0.1:4318/mcp"
bearer_token_env_var = "REMNIC_AUTH_TOKEN"

That's it. Codex now has access to Remnic's recall, store, and entity tools. See the full Codex integration guide for session-start hooks, cross-machine setup, and automatic recall at session start.
Run the stdio MCP server:
openclaw engram access mcp-serve

Point your MCP client's command at openclaw engram access mcp-serve. This is the OpenClaw-hosted stdio compatibility path. For standalone Remnic installs, prefer the HTTP MCP endpoint exposed by remnic daemon start or remnic-server.
Claude Code (MCP over HTTP): Start the Remnic server, then add to ~/.claude.json:
{
"mcpServers": {
"remnic": {
"url": "http://localhost:4318/mcp",
"headers": {
"Authorization": "Bearer ${REMNIC_AUTH_TOKEN}"
}
}
}
}

See the Standalone Server Guide for multi-tenant setups and connecting multiple agent harnesses.
Remnic also works as a standalone tool without OpenClaw. Install and run the CLI directly:
npm install -g @remnic/cli
remnic init # create remnic.config.json
export OPENAI_API_KEY=sk-...
export REMNIC_AUTH_TOKEN=$(openssl rand -hex 32)
remnic daemon start # start background server
remnic query "hello" # verify

The CLI provides 15+ commands for querying, onboarding projects, curating files, managing spaces, running benchmarks, and more. See the full CLI reference for all commands.
Remnic works with 10+ coding tools via MCP or HTTP. See the Connector Setup Guide for config snippets for Claude Code, Codex CLI, Cursor, GitHub Copilot, Cline, Roo Code, Windsurf, Amp, Replit, and any generic MCP client.
OpenClaw remains the recommended path for most users. The standalone CLI is useful for CI/CD pipelines, scripted memory operations, and environments without OpenClaw.
@remnic/core – Framework-agnostic engine (re-exports orchestrator, config, storage, search, extraction, graph, trust zones)
@remnic/cli – Standalone CLI binary (15+ commands)
@remnic/server – Standalone HTTP/MCP server
@remnic/bench – Benchmarks + CI regression gates
@remnic/hermes-provider – HTTP client for remote Remnic instances
Remnic operates in three phases:
Recall – Before each conversation, inject relevant memories into context
Buffer – After each turn, accumulate content until a trigger fires
Extract – Periodically, extract structured memories using an LLM
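The buffer phase can be sketched as a size-triggered accumulator (TurnBuffer and its trigger condition are illustrative, not Remnic's actual implementation):

```typescript
// Toy buffer: accumulate turn content until a size trigger fires, then hand
// the batch off for extraction.
class TurnBuffer {
  private turns: string[] = [];
  constructor(private readonly maxChars = 2000) {}

  // Returns the batch to extract when the trigger fires, otherwise null.
  add(turn: string): string[] | null {
    this.turns.push(turn);
    const size = this.turns.reduce((n, t) => n + t.length, 0);
    if (size >= this.maxChars) {
      const batch = this.turns;
      this.turns = []; // reset for the next accumulation window
      return batch;
    }
    return null;
  }
}
```

Real triggers could also be turn counts or session boundaries; the key idea is that extraction runs on batches, not on every message.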
Memories are stored as plain markdown files with YAML frontmatter – fully portable, git-friendly, no database required:
---
id: decision-1738789200000-a1b2
category: decision
confidence: 0.92
tags: ["architecture", "search"]
---
Decided to use the port/adapter pattern for search backends
so alternative engines can replace QMD without changing core logic.

Memory categories include: fact, decision, preference, correction, relationship, principle, commitment, moment, skill, rule, and more.
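A minimal reader for this file format might look like the following sketch (a real implementation would use a proper YAML parser; parseMemory is a hypothetical helper):

```typescript
// Minimal parser for the markdown-plus-YAML-frontmatter memory format shown
// above. Handles only simple "key: value" frontmatter lines.
interface MemoryFile {
  meta: Record<string, string>;
  body: string;
}

function parseMemory(raw: string): MemoryFile {
  // Frontmatter is delimited by "---" lines at the top of the file.
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: raw };
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { meta, body: match[2].trim() };
}
```

Because the format is plain text, the same files remain readable by git tooling, grep, and any other markdown-aware editor.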
Remnic is organized as a monorepo with a core engine, standalone server/CLI, and native plugins for multiple AI platforms:
@remnic/core (engine)
├── @remnic/cli
│   └── @remnic/bench
├── @remnic/server
├── remnic-hermes
├── @remnic/plugin-openclaw
└── Native plugins
    ├── claude-code
    ├── codex
    └── replit
The old @joshuaswarren/openclaw-engram package is deprecated. Use @remnic/plugin-openclaw for OpenClaw installs and @remnic/* for standalone or multi-platform use.
All memory lives on your filesystem as plain markdown files. No cloud dependency, no subscriptions, no proprietary formats, no sending your private conversations to third-party servers. Back it up with git, rsync, or Time Machine. Move it between machines with a folder copy. You own your data completely.
OpenClaw's built-in memory is basic – it works for getting started, but lacks semantic search, entity tracking, lifecycle management, governance, and multi-agent isolation. Remnic is a drop-in replacement that brings all of those capabilities while keeping the same local-first philosophy.
Remnic uses hybrid search (BM25 + vector + reranking via QMD) to find semantically relevant memories. It doesn't just match keywords – it understands what you're working on and surfaces the right context.
Use OpenAI for extraction and reranking, run entirely offline with a local LLM (Ollama, LM Studio), or route through the gateway model chain to use any provider with automatic fallback. The local-llm-heavy preset is optimized for fully local operation. See the Local LLM Guide and the Gateway Model Source section for multi-provider setups.
Start with zero config. Enable features as your needs grow:
| Level | What You Get |
|---|---|
| Defaults | Automatic extraction, recall injection, entity tracking, lifecycle management |
| + Search tuning | Choose from 6 search backends (QMD, Orama, LanceDB, Meilisearch, remote, noop) |
| + Capture control | implicit, explicit, or hybrid capture modes for memory write policy |
| + Memory OS | Memory boxes, graph reasoning, compounding, shared context, identity continuity |
| + LCM | Lossless Context Management β never lose conversation context to compaction |
| + Parallel retrieval | Three specialized agents (DirectFact, Contextual, Temporal) run in parallel – same latency, broader coverage |
| + Advanced | Trust zones, causal trajectories, harmonic retrieval, evaluation harness, poisoning defense |
Use a preset to jump to a recommended level: conservative, balanced, research-max, or local-llm-heavy.
- OpenClaw – Native plugin with automatic extraction and recall injection
- Codex CLI – MCP-over-HTTP with session-start hooks for automatic recall
- Any MCP client – stdio or HTTP transport, 8 tools available
- Scripts & automation – Authenticated REST API for custom integrations
- Local LLMs – Run extraction and reranking with local models (Ollama, LM Studio, etc.)
Run Remnic as a standalone HTTP server that multiple agent harnesses share. Isolate tenants with namespace policies, feed conversations from any client via the observe endpoint, and search archived history with LCM full-text search. Works with OpenClaw, Codex CLI, Claude Code, and custom HTTP agents. See the Standalone Server Guide.
- 672 tests with CI enforcement
- Evaluation harness with benchmark packs, shadow recall recording, and CI delta gates
- Governance system with review queues, shadow/apply modes, and reversible transitions
- Namespace isolation for multi-agent deployments
- Rate limiting on write paths with idempotency support
- Automatic memory extraction – Facts, decisions, preferences, corrections extracted from conversations
- Observe endpoint – Feed conversation messages from any agent into the extraction pipeline via HTTP or MCP
- Recall injection – Relevant memories injected before each agent turn
- Entity tracking – People, projects, tools, companies tracked as structured entities
- Lifecycle management – Memories age through active, validated, stale, archived states
- Episode/Note model – Memories classified as time-specific events or stable beliefs
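The lifecycle states above can be pictured as a small state machine (the allowed-transition map here is an assumption for illustration; Remnic's real governance rules are richer):

```typescript
// Sketch of the four lifecycle states named above, with a guessed set of
// allowed transitions between them.
type LifecycleState = "active" | "validated" | "stale" | "archived";

const TRANSITIONS: Record<LifecycleState, LifecycleState[]> = {
  active: ["validated", "stale"],
  validated: ["stale"],
  stale: ["archived", "active"], // e.g., a stale memory re-validated or retired
  archived: [], // terminal in this sketch
};

function canTransition(from: LifecycleState, to: LifecycleState): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Modeling aging as explicit transitions is what makes governance features like reversible, review-queued changes possible.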
| Backend | Type | Best For |
|---|---|---|
| QMD (default) | Hybrid BM25+vector+reranking | Best recall quality |
| Orama | Embedded, pure JS | Zero native deps |
| LanceDB | Embedded, native Arrow | Large collections |
| Meilisearch | Server-based | Shared search |
| Remote | HTTP REST | Custom services |
| Noop | No-op | Extraction only |
See the Search Backends Guide or write your own.
These capabilities can be enabled progressively:
- Memory Boxes – Groups related memories into topic-windowed episodes
- Graph Recall – Entity-relationship graph for causal and timeline queries
- Compounding – Weekly synthesis surfaces patterns and recurring mistakes
- Shared Context – Cross-agent memory sharing for multi-agent setups
- Identity Continuity – Consistent agent personality across sessions
- Hot/Cold Tiering – Automatic migration of aging memories to cold storage
- Memory Cache – Process-level singleton cache for readAllMemories(); turns 15s disk scans into <100ms cache hits, shared across all sessions
- Semantic Consolidation – Finds clusters of semantically similar memories, synthesizes canonical versions via LLM, archives originals to reduce bloat
- Native Knowledge – Search curated markdown (workspace docs, Obsidian vaults) without extracting into memory
- Behavior Loop Tuning – Runtime self-tuning of extraction and recall parameters
When your AI agent hits its context window limit, the runtime silently compresses old messages – and that context is gone forever. LCM fixes this by proactively archiving every message into a local SQLite database and building a hierarchical summary DAG (directed acyclic graph) alongside it. When context gets compacted, LCM injects compressed session history back into recall, so your agent never loses track of what happened earlier in the conversation.
- Proactive archiving – Every message is indexed with full-text search before compaction can discard it
- Hierarchical summaries – Leaf summaries cover ~8 turns, depth-1 covers ~32, depth-2 ~128, etc.
- Fresh tail protection – Recent turns always use the most detailed (leaf-level) summaries
- Three-level summarization – Normal LLM summary, aggressive bullet compression, and deterministic truncation (guaranteed convergence, no LLM needed)
- MCP expansion tools – Agents can search, describe, or expand any part of conversation history on demand
- Zero data loss – Raw messages are retained for the configured retention period (default 90 days)
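The coverage numbers above (~8, ~32, ~128 turns) imply roughly a 4x fan-out per summary depth; that arithmetic can be sketched as follows (the exact constants are assumptions derived from the listed figures):

```typescript
// Summary-DAG coverage arithmetic: each depth level summarizes ~4 nodes of
// the level below, starting from leaves that cover ~8 turns.
const LEAF_TURNS = 8;
const FANOUT = 4;

function turnsCoveredAtDepth(depth: number): number {
  return LEAF_TURNS * FANOUT ** depth; // depth 0 = leaf summary
}

// Smallest depth whose single summary spans a conversation of `turns` turns.
function depthForConversation(turns: number): number {
  let depth = 0;
  while (turnsCoveredAtDepth(depth) < turns) depth++;
  return depth;
}
```

The geometric growth is why even very long sessions compress into a handful of top-level summary nodes.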
Enable it in your openclaw.json:
{
"plugins": {
"entries": {
"openclaw-engram": {
"config": {
"lcmEnabled": true
// All other LCM settings have sensible defaults
}
}
}
}
}

See the LCM Guide for architecture details, configuration options, and how it complements native compaction.
Remnic's default retrieval runs a single hybrid-search pass. Parallel Specialized Retrieval (inspired by Supermemory's ASMR technique) runs three specialized agents in parallel, so total latency equals max(agents), not sum(agents).
| Agent | What It Does | Cost |
|---|---|---|
| DirectFact | Scans entity filenames for keyword overlap with the query | File I/O only, <5ms |
| Contextual | Existing hybrid BM25+vector search (unchanged) | Same as current |
| Temporal | Reads the temporal date index, returns recent memories with recency decay scoring | File I/O + math, <10ms |
Zero additional LLM cost. The DirectFact and Temporal agents reuse existing indexes with no new embeddings or inference. The Contextual agent is the same hybrid search already running.
Results from all three agents are merged by path, deduplicated, and weighted (direct=1.0×, temporal=0.85×, contextual=0.7×) before returning the top N results. Any agent error degrades gracefully without blocking the others.
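The merge step can be sketched as follows (field and function names are illustrative; only the weights and the dedupe-by-path behavior come from the description above):

```typescript
// Merge results from the three retrieval agents: weight by agent, dedupe by
// path keeping the best weighted score, then return the top N.
interface AgentResult {
  path: string;
  score: number; // agent-local relevance in [0, 1]
}

const WEIGHTS = { direct: 1.0, temporal: 0.85, contextual: 0.7 } as const;

function mergeResults(
  byAgent: Partial<Record<keyof typeof WEIGHTS, AgentResult[]>>,
  topN = 10,
): { path: string; score: number }[] {
  const best = new Map<string, number>();
  for (const [agent, results] of Object.entries(byAgent)) {
    const weight = WEIGHTS[agent as keyof typeof WEIGHTS];
    for (const r of results ?? []) {
      const weighted = r.score * weight;
      // Deduplicate by path, keeping the highest weighted score.
      if (weighted > (best.get(r.path) ?? -Infinity)) best.set(r.path, weighted);
    }
  }
  return [...best.entries()]
    .map(([path, score]) => ({ path, score }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topN);
}
```

Because each agent's list is optional, an agent that errored simply contributes nothing, matching the graceful-degradation behavior described above.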
Enable it in your openclaw.json:
{
"plugins": {
"entries": {
"openclaw-engram": {
"config": {
"parallelRetrievalEnabled": true
// Optional tuning:
// "parallelMaxResultsPerAgent": 20,
// "parallelAgentWeights": { "direct": 1.0, "contextual": 0.7, "temporal": 0.85 }
}
}
}
}
}

Set parallelMaxResultsPerAgent: 0 to disable an individual agent's results without disabling the feature entirely.
Over time, memory stores accumulate redundant facts – the same information extracted multiple times across sessions, expressed slightly differently. Semantic consolidation finds clusters of similar memories using token overlap, synthesizes a single canonical version via LLM, and archives the originals. This reduces storage bloat, speeds up recall, and improves memory quality.
- Conservative by default – Only merges when 80%+ token overlap is detected across 3+ memories
- LLM synthesis – Uses your configured model to combine unique information from all cluster members
- Safe archival – Originals are archived (not deleted) with full provenance tracking
- Configurable – Adjust threshold, cluster size, excluded categories, model, and schedule
- Excluded categories – Corrections and commitments are never consolidated (configurable)
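The token-overlap gate can be sketched like this (tokenOverlap and shouldConsolidate are hypothetical helpers; the 0.8 threshold and 3+ cluster size come from the defaults above, while the real clustering and LLM synthesis are omitted):

```typescript
// Token-overlap similarity between two memory texts, as a fraction of the
// larger token set that both share.
function tokenOverlap(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const tb = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  const shared = [...ta].filter((t) => tb.has(t)).length;
  return shared / Math.max(ta.size, tb.size);
}

// Conservative default: merge only when every pair in a cluster of 3+ memories
// overlaps by at least the threshold.
function shouldConsolidate(cluster: string[], threshold = 0.8): boolean {
  if (cluster.length < 3) return false;
  for (let i = 0; i < cluster.length; i++) {
    for (let j = i + 1; j < cluster.length; j++) {
      if (tokenOverlap(cluster[i], cluster[j]) < threshold) return false;
    }
  }
  return true;
}
```

Lowering the threshold toward 0.6 (the "aggressive" setting) admits looser clusters at the cost of more false merges.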
Enable it in your openclaw.json:
{
"plugins": {
"entries": {
"openclaw-engram": {
"config": {
"semanticConsolidationEnabled": true
// Optional tuning:
// "semanticConsolidationThreshold": 0.8, // 0.8=conservative, 0.6=aggressive
// "semanticConsolidationModel": "fast", // "auto", "fast", or specific model
// "semanticConsolidationIntervalHours": 168, // weekly (default)
// "semanticConsolidationMaxPerRun": 100
}
}
}
}
}

Run manually from the CLI:
openclaw engram semantic-consolidate --dry-run # Preview what would be merged
openclaw engram semantic-consolidate --verbose # Run with detailed output
openclaw engram semantic-consolidate --threshold 0.6 # Override threshold

- Objective-State Recall – Surfaces file/process/tool state snapshots alongside semantic memory
- Causal Trajectories – Typed goal -> action -> observation -> outcome chains
- Trust Zones – Quarantine/working/trusted tiers with promotion rules and poisoning defense
- Harmonic Retrieval – Blends abstraction nodes with cue-anchor matches
- Verified Recall – Only surfaces memory boxes whose source memories still verify
- Semantic Rule Promotion – Promotes IF ... THEN rules from verified episodes
- Creation Memory – Work-product ledger tracking agent outputs
- Commitment Lifecycle – Tracks promises, deadlines, and obligations
- Resume Bundles – Crash-recovery context for interrupted sessions
- Utility Learning – Learns promotion/ranking weights from downstream outcomes
See Enable All Features for a full-feature config profile.
Remnic exposes one shared service layer through multiple transports. During the
v1.x compatibility window, the HTTP API path remains /engram/v1/... and the
legacy engram.* MCP aliases still work.
remnic daemon start

Key endpoints: GET /engram/v1/health, POST /engram/v1/recall, POST /engram/v1/memories, GET /engram/v1/entities/:name, and more. Full reference in the API docs.
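A client-side sketch of calling the recall endpoint (the path and bearer-token auth follow the API description above; the request-body shape and the buildRecallRequest helper are assumptions):

```typescript
// Build a recall request against a local Remnic daemon. The body shape
// ({ query }) is a guess for illustration, not the documented schema.
function buildRecallRequest(query: string, token: string, base = "http://127.0.0.1:4318") {
  return {
    url: `${base}/engram/v1/recall`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ query }),
    },
  };
}

// Against a running daemon (uncomment to actually call it):
// const { url, init } = buildRecallRequest("current project status", process.env.REMNIC_AUTH_TOKEN!);
// const memories = await fetch(url, init).then((r) => r.json());
```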
The HTTP server also hosts a lightweight operator UI at http://127.0.0.1:4318/engram/ui/ for memory browsing, recall inspection, governance review, trust-zone promotion, and entity exploration.
Available via both stdio and HTTP transports:
| Tool | Purpose |
|---|---|
| engram.recall | Retrieve relevant memories for a query |
| engram.recall_explain | Debug the last recall |
| engram.day_summary | Generate structured end-of-day summary from memory content |
| engram.memory_get | Fetch a specific memory by ID |
| engram.memory_timeline | View a memory's lifecycle history |
| engram.memory_store | Store a new memory |
| engram.suggestion_submit | Queue a memory for review |
| engram.entity_get | Look up a known entity |
| engram.review_queue_list | View the governance review queue |
| engram.observe | Feed conversation messages into the memory pipeline (LCM + extraction) |
| engram.lcm_search | Full-text search over LCM-archived conversations |
| engram_context_search | Full-text search across all archived conversation history (LCM) |
| engram_context_describe | Get a compressed summary of a turn range (LCM) |
| engram_context_expand | Retrieve raw lossless messages for a turn range (LCM) |
The HTTP server exposes an MCP JSON-RPC endpoint at POST /mcp, allowing remote MCP clients to use Remnic tools over HTTP:
npx remnic-server --host 0.0.0.0 --port 4318 --auth-token "$REMNIC_AUTH_TOKEN"

For namespace-enabled deployments, configure server.principal in remnic.config.json so it matches a writePrincipals entry for your target namespace. Deployments with namespacesEnabled: false (the default) do not need a principal.
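A remote MCP client posts standard JSON-RPC 2.0 tools/call requests to /mcp. A sketch of the payload (buildToolCall is a hypothetical helper; the tool name comes from the table above, while the argument shape is an assumption):

```typescript
// Build a JSON-RPC 2.0 tools/call payload for the HTTP MCP endpoint.
let nextId = 0;

function buildToolCall(tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: ++nextId, // JSON-RPC request id, unique per call
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}
```

The payload is sent as the POST body to http://host:4318/mcp with the same bearer-token Authorization header as the REST API.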
# Setup & diagnostics
openclaw engram setup # Guided first-run setup
openclaw engram doctor # Health diagnostics with remediation hints
openclaw engram config-review # Config tuning recommendations
openclaw engram stats # Memory counts, search status
openclaw engram inventory # Full storage and namespace inventory
# Search & recall
openclaw engram search "query" # Search memories from CLI
openclaw engram harmonic-search "query" # Preview harmonic retrieval matches
# Governance
openclaw engram governance-run --mode shadow # Preview governance transitions
openclaw engram governance-run --mode apply # Apply reversible transitions
openclaw engram review-disposition <id> --status rejected # Operator review
# Benchmarking
openclaw engram benchmark recall # Benchmark status and validation
openclaw engram benchmark-ci-gate # CI gate for regressions
# Memory maintenance
openclaw engram consolidate # Run standard consolidation
openclaw engram semantic-consolidate # Run semantic dedup consolidation
openclaw engram semantic-consolidate --dry-run # Preview without changes
# Access layer
remnic daemon start # Start HTTP API + managed daemon
openclaw engram access mcp-serve # Start OpenClaw-hosted stdio MCP server
# Trust-zone demos
openclaw engram trust-zone-demo-seed --dry-run # Preview the opt-in buyer demo dataset
openclaw engram trust-zone-demo-seed # Explicitly seed the demo dataset
openclaw engram trust-zone-promote --record-id <id> --target-zone working --reason "Operator review"

Trust zones now ship with a dedicated admin-console view plus an explicit demo-seeding path for buyer-facing walkthroughs.
- Never automatic – Remnic does not seed sample trust-zone records on install, startup, or feature enablement.
- Explicit only – demo records appear only after you run openclaw engram trust-zone-demo-seed or trigger the matching admin-console action.
- Buyer-friendly story – the trust-zone view surfaces provenance strength, promotion readiness, corroboration requirements, and operator promotion actions in one place.
The seeded scenario is enterprise-buyer-v1, which creates a small, opinionated dataset covering:
- quarantine records that are ready for review
- working records that are blocked on missing provenance
- working records that still need corroboration
- working records with independent corroboration support
- a trusted operator policy record
See the full CLI reference for all commands.
All settings live in openclaw.json under plugins.entries.openclaw-engram.config. The table below shows the most commonly changed settings – Remnic has 60+ configuration options covering search backends, capture modes, memory OS features, namespaces, governance, benchmarking, and more.
| Setting | Default | Description |
|---|---|---|
| openaiApiKey | (env) | OpenAI API key (optional when using a local LLM) |
| localLlmEnabled | false | Enable Remnic's local LLM path when modelSource is plugin |
| localLlmUrl | unset | Local LLM endpoint (e.g., http://localhost:1234/v1) |
| localLlmModel | unset | Local model name (e.g., qwen2.5-32b-instruct) |
| model | gpt-5.2 | OpenAI model for extraction when modelSource is plugin and local LLM is disabled |
| searchBackend | "qmd" | Search engine: qmd, orama, lancedb, meilisearch, remote, noop |
| captureMode | implicit | Memory write policy: implicit, explicit, hybrid |
| recallBudgetChars | maxMemoryTokens * 4 | Recall budget (default ~8K chars; set 64K+ for large-context models) |
| memoryDir | ~/.openclaw/workspace/memory/local | Memory storage root |
| memoryOsPreset | unset | Quick config: conservative, balanced, research-max, local-llm-heavy |
| lcmEnabled | false | Enable Lossless Context Management (proactive session archive + summary DAG) |
| semanticConsolidationEnabled | false | Enable periodic semantic dedup of similar memories |
| semanticConsolidationThreshold | 0.8 | Token overlap threshold (0.8=conservative, 0.6=aggressive) |
| semanticConsolidationModel | "auto" | LLM for synthesis: "auto", "fast", or specific model |
See the full config reference for all 60+ settings including search backend configuration, namespace policies, Memory OS features, governance, evaluation harness, trust zones, causal trajectories, and more.
- Getting Started – Installation, setup, first-run verification
- Config Reference – Every setting with defaults
- Architecture Overview – System design and storage layout
- Retrieval Pipeline – How recall works
- Memory Lifecycle – Write, consolidation, expiry
- Search Backends – Choosing and configuring search engines
- Writing a Search Backend – Build your own adapter
- API Reference – HTTP, MCP, and CLI documentation
- Codex CLI Integration – Set up Remnic with OpenAI's Codex
- Standalone Server Guide – Multi-tenant HTTP server for multiple agent harnesses
- Local LLM Guide – Local-first extraction and reranking
- Cost Control Guide – Budget mappings and presets
- Namespaces – Multi-agent memory isolation
- Shared Context – Cross-agent intelligence
- Identity Continuity – Consistent agent personality
- Graph Reasoning – Opt-in graph traversal
- Evaluation Harness – Benchmarks and CI delta gates
- Operations – Backup, export, maintenance
- Lossless Context Management – Never lose context to compaction
- Enable All Features – Full-feature config profile
- Migration Guide – Upgrading from older versions
- Platform Migration Guide – Migrating to the monorepo architecture (v9.1.36+)
- Hermes Setup – HTTP client for remote Remnic instances
- Deployment Topologies – Localhost, LAN, remote, containerized, standalone
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feat/my-feature)
- Write tests for new functionality
- Ensure npm test (672 tests) and npm run check-types pass
- Submit a pull request

{
  "plugins": {
    "allow": ["openclaw-engram"],
    "slots": { "memory": "openclaw-engram" },
    "entries": {
      "openclaw-engram": {
        "enabled": true,
        "config": {
          // Option 1: Use OpenAI for extraction:
          "openaiApiKey": "${OPENAI_API_KEY}"
          // Option 2: Use the local LLM path (plugin mode only; no API key needed):
          // "localLlmEnabled": true,
          // "localLlmUrl": "http://localhost:1234/v1",
          // "localLlmModel": "qwen2.5-32b-instruct"
          // Option 3: Use the gateway model chain (primary path in gateway mode):
          // "modelSource": "gateway",
          // "gatewayAgentId": "engram-llm",
          // "fastGatewayAgentId": "engram-llm-fast"
        }
      }
    }
  }
}