Persistent, private memory for AI agents. Your agents forget everything between sessions; Engram fixes that.
Engram gives AI agents long-term memory that survives across conversations. Decisions, preferences, project context, personal details, past mistakes: everything your agent learns persists and resurfaces exactly when it's needed. All data stays on your machine as plain markdown files. No cloud services, no subscriptions, no sharing your data with third parties.
Engram is now a monorepo. For standalone (non-OpenClaw) use, install the scoped packages:
`@engram/core`, `@engram/server`, and `@engram/cli`. The `openclaw-engram` and `@joshuaswarren/openclaw-engram` packages remain the OpenClaw plugin entry point. Python users: `engram-hermes` on PyPI.
Every bit of support is genuinely appreciated and helps keep this project alive and free for everyone.
If you're able to, sponsoring on GitHub or sending a Lightning donation to joshuaswarren@strike.me directly funds continued development, new integrations, and keeping Engram open source.
If financial support isn't an option, you can still make a big difference: star the repo on GitHub, share it on social media, or recommend it to a friend or colleague. Word of mouth is how most people find Engram, and it means the world.
Every AI agent session starts from zero. Your agent doesn't know your name, your projects, the decisions you've already made, or the bugs you already debugged. Whether it's a personal assistant, a coding agent, a research agent, or a multi-agent team, they all forget everything between conversations. You re-explain the same context over and over, and your agents still make the same mistakes.
OpenClaw's built-in memory works for simple cases, but it doesn't scale. It lacks semantic search, lifecycle management, entity tracking, and governance. Third-party memory services exist, but they cost money and require sending your private data to someone else's servers.
Engram is an open-source, local-first memory system that replaces OpenClaw's default memory with something much more capable, while keeping everything on your machine. It watches your agent conversations, extracts durable knowledge, and injects the right memories back at the start of every session. Use OpenAI or a local LLM (Ollama, LM Studio, etc.) for extraction; the choice is yours.
Engram is the universal memory layer for AI agents. It works natively with OpenClaw, Claude Code, Codex CLI, Hermes Agent, and any MCP-compatible client (Replit, Cursor, etc.). When you tell any agent a preference, every agent knows it, because they share one memory store.
| Without Engram | With Engram |
|---|---|
| Re-explain who you are and what you're working on | Agent recalls your identity, projects, and preferences automatically |
| Repeat context for every task | Entity knowledge surfaces people, projects, tools, and relationships on demand |
| Lose debugging and research context between sessions | Past root causes, dead ends, and findings are recalled; no repeated work |
| Manually restate preferences every session | Preferences persist across sessions, agents, and projects |
| Context-switching tax when resuming work | Session-start recall brings you back to speed instantly |
| Default OpenClaw memory doesn't scale | Hybrid search, lifecycle management, namespaces, and governance |
| Third-party memory services cost money and share your data | Everything stays local: your filesystem, your rules |
```
openclaw plugins install @joshuaswarren/openclaw-engram --pin
```

Tell any OpenClaw agent:

> Install the openclaw-engram plugin and configure it as my memory system.

Your agent will run the install command, update openclaw.json, and restart the gateway for you.
```
git clone https://github.com/joshuaswarren/openclaw-engram.git \
  ~/.openclaw/extensions/openclaw-engram
cd ~/.openclaw/extensions/openclaw-engram
npm ci && npm run build
```

From npm (recommended):
```
npm install -g @engram/cli       # Installs the `engram` binary
engram init                      # Create engram.config.json
export OPENAI_API_KEY=sk-...
export ENGRAM_AUTH_TOKEN=$(openssl rand -hex 32)
engram daemon start              # Start background server
engram status                    # Verify it's running
engram query "hello" --explain   # Test query with tier breakdown
```

From source (requires Node.js 22.12+ and pnpm):
```
git clone https://github.com/joshuaswarren/openclaw-engram.git
cd openclaw-engram
pnpm install && pnpm run build
cd packages/engram-cli && pnpm link --global   # Makes `engram` available on PATH
cd ../..
engram init
```

Note: The `engram` binary (`packages/cli/bin/engram.cjs`) is a CJS wrapper that auto-locates `tsx` from `node_modules` (falling back to a global `tsx`). Running `npm link` from `packages/cli/` (not the repo root) makes the CLI globally available; the root package only exposes `engram-access`. Alternatively, invoke directly: `npx tsx packages/cli/src/index.ts <command>`.
The standalone CLI provides 15+ commands for memory management, project onboarding, curation, diff-aware sync, dedup, connectors, spaces, and benchmarks, all without requiring OpenClaw. See the Platform Migration Guide for the full command reference.
Once the Engram daemon is running, connect any supported agent:
```
engram connectors install claude-code   # Claude Code (hooks + MCP)
engram connectors install codex-cli     # Codex CLI (hooks + MCP)
engram connectors install replit        # Replit (MCP only)
pip install engram-hermes               # Hermes Agent (Python MemoryProvider)
```

Each connector generates a unique auth token, installs the appropriate plugin/hooks, and verifies the connection. All agents share the same memory store: tell one agent your preference, and every agent remembers it.
| Platform | Integration | Auto-recall | Auto-observe |
|---|---|---|---|
| OpenClaw | Memory slot plugin | Every session | Every response |
| Claude Code | Native hooks + MCP | Every prompt | Every tool use |
| Codex CLI | Native hooks + MCP | Every prompt | Every tool use |
| Hermes | Python MemoryProvider | Every LLM call | Every turn |
| Replit | MCP only | On demand | On demand |
After installation, add Engram to your openclaw.json:
Gateway model source: When `modelSource` is `"gateway"`, Engram routes all LLM calls (extraction, consolidation, reranking) through an OpenClaw agent persona's model chain instead of its own config. Extraction starts on the `gatewayAgentId` chain directly in this mode; `localLlm*` settings do not control primary extraction order. Define agent personas in `openclaw.json → agents.list[]`, each with a `primary` model and a `fallbacks[]` array; Engram tries each in order until one succeeds. This lets you build multi-provider fallback chains like Fireworks → local LLM → cloud OpenAI. See the Gateway Model Source guide for full setup.
Restart the gateway:

```
launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway   # macOS
# or: systemctl restart openclaw-gateway                  # Linux
```

Start a conversation; Engram begins learning immediately.
Note: This shows only the minimal config. Engram has 60+ configuration options for search backends, capture modes, memory OS features, and more. See the full config reference for every setting.
```
openclaw engram setup --json           # Validates config, scaffolds directories
openclaw engram doctor --json          # Health diagnostics with remediation hints
openclaw engram config-review --json   # Opinionated config tuning recommendations
```

Start the Engram HTTP server:
```
# Generate a token
export OPENCLAW_ENGRAM_ACCESS_TOKEN="$(openssl rand -base64 32)"
# Start the server
openclaw engram access http-serve \
  --host 127.0.0.1 \
  --port 4318 \
  --token "$OPENCLAW_ENGRAM_ACCESS_TOKEN"
```

Add to ~/.codex/config.toml:
```toml
[mcp_servers.engram]
url = "http://127.0.0.1:4318/mcp"
bearer_token_env_var = "OPENCLAW_ENGRAM_ACCESS_TOKEN"
```

That's it. Codex now has access to Engram's recall, store, and entity tools. See the full Codex integration guide for session-start hooks, cross-machine setup, and automatic recall at session start.
Run the stdio MCP server:

```
openclaw engram access mcp-serve
```

Point your MCP client's command at `openclaw engram access mcp-serve`. Works with Claude Code and any other MCP-compatible client. The server exposes the same tools as the HTTP endpoint.
Claude Code (MCP over HTTP): Start the Engram HTTP server, then add to ~/.claude.json:

```json
{
  "mcpServers": {
    "engram": {
      "url": "http://localhost:4318/mcp",
      "headers": {
        "Authorization": "Bearer ${ENGRAM_TOKEN}"
      }
    }
  }
}
```

See the Standalone Server Guide for multi-tenant setups and connecting multiple agent harnesses.
Engram also works as a standalone tool without OpenClaw. Install and run the CLI directly:

```
npm install -g @joshuaswarren/openclaw-engram
engram init            # create engram.config.json
export OPENAI_API_KEY=sk-...
export ENGRAM_AUTH_TOKEN=$(openssl rand -hex 32)
engram daemon start    # start background server
engram query "hello"   # verify
```

The CLI provides 15+ commands for querying, onboarding projects, curating files, managing spaces, running benchmarks, and more. See the full CLI reference for all commands.
Engram works with 10+ coding tools via MCP or HTTP. See the Connector Setup Guide for config snippets for Claude Code, Codex CLI, Cursor, GitHub Copilot, Cline, Roo Code, Windsurf, Amp, Replit, and any generic MCP client.
OpenClaw remains the recommended path for most users. The standalone CLI is useful for CI/CD pipelines, scripted memory operations, and environments without OpenClaw.
- `@engram/core`: Framework-agnostic engine (re-exports orchestrator, config, storage, search, extraction, graph, trust zones)
- `@engram/cli`: Standalone CLI binary (15+ commands)
- `@engram/server`: Standalone HTTP/MCP server
- `@engram/bench`: Benchmarks + CI regression gates
- `@engram/hermes-provider`: HTTP client for remote Engram instances
Engram operates in three phases:
1. Recall: Before each conversation, inject relevant memories into context
2. Buffer: After each turn, accumulate content until a trigger fires
3. Extract: Periodically, extract structured memories using an LLM
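The buffer-and-trigger step above can be sketched in a few lines. This is a toy illustration with a hypothetical turn-count trigger; the real trigger conditions and extraction prompts live inside Engram's config.

```python
class TurnBuffer:
    """Toy sketch of Engram's buffer phase (illustrative names only)."""

    def __init__(self, max_turns: int = 8):
        self.turns: list[str] = []
        self.max_turns = max_turns  # hypothetical trigger: turn count

    def observe(self, turn: str) -> bool:
        """Buffer one turn; return True when extraction should run."""
        self.turns.append(turn)
        return len(self.turns) >= self.max_turns

    def drain(self) -> list[str]:
        """Hand the buffered turns to the extractor and reset."""
        drained, self.turns = self.turns, []
        return drained

buf = TurnBuffer(max_turns=2)
assert buf.observe("We decided to use QMD for search.") is False
assert buf.observe("Also, call me Josh.") is True   # trigger fires
batch = buf.drain()                                  # hand off to the extract phase
```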
Memories are stored as plain markdown files with YAML frontmatter: fully portable, git-friendly, no database required:

```markdown
---
id: decision-1738789200000-a1b2
category: decision
confidence: 0.92
tags: ["architecture", "search"]
---
Decided to use the port/adapter pattern for search backends
so alternative engines can replace QMD without changing core logic.
```

Memory categories include: fact, decision, preference, correction, relationship, principle, commitment, moment, skill, rule, and more.
Engram is organized as a monorepo with a core engine, standalone server/CLI, and native plugins for multiple AI platforms:
```
                      ┌─────────────────┐
                      │  @engram/core   │
                      │    (engine)     │
                      └────────┬────────┘
                               │
      ┌────────────┬───────────┼────────────┬────────────┐
      │            │           │            │            │
┌─────┴────┐  ┌────┴───┐  ┌────┴───┐  ┌─────┴────┐  ┌────┴─────┐
│ @engram/ │  │@engram/│  │ engram-│  │openclaw- │  │  Native  │
│   cli    │  │ server │  │ hermes │  │  engram  │  │ Plugins  │
└────┬─────┘  └────────┘  └────────┘  └──────────┘  └────┬─────┘
     │                                                   │
┌────┴──────┐                         ┌──────────────────┼────────────┐
│ @engram/  │                    claude-code           codex        replit
│   bench   │
└───────────┘
```
The `@joshuaswarren/openclaw-engram` npm package is deprecated; use `openclaw-engram` (for OpenClaw) or the `@engram/*` packages (for standalone/multi-platform use).
All memory lives on your filesystem as plain markdown files. No cloud dependency, no subscriptions, no proprietary formats, no sending your private conversations to third-party servers. Back it up with git, rsync, or Time Machine. Move it between machines with a folder copy. You own your data completely.
OpenClaw's built-in memory is basic: it works for getting started, but lacks semantic search, entity tracking, lifecycle management, governance, and multi-agent isolation. Engram is a drop-in replacement that brings all of those capabilities while keeping the same local-first philosophy.
Engram uses hybrid search (BM25 + vector + reranking via QMD) to find semantically relevant memories. It doesn't just match keywords β it understands what you're working on and surfaces the right context.
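As a rough illustration of why hybrid search beats either signal alone, here is a toy score blend. The `alpha` weight and linear mix are assumptions for illustration; QMD's actual pipeline also applies a reranking stage on top of its first-pass retrieval.

```python
def hybrid_score(bm25: float, vector: float, alpha: float = 0.5) -> float:
    """Blend a normalized BM25 score with a vector-similarity score.

    `alpha` is a hypothetical mixing weight, not QMD's real formula.
    """
    return alpha * bm25 + (1 - alpha) * vector

# A keyword-heavy hit and a semantic-only hit land close together,
# which is the point of hybrid search: neither signal dominates.
assert abs(hybrid_score(0.9, 0.1) - 0.5) < 1e-9
assert abs(hybrid_score(0.1, 0.9) - 0.5) < 1e-9
assert hybrid_score(0.8, 0.8) > hybrid_score(0.9, 0.1)
```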
Use OpenAI for extraction and reranking, run entirely offline with a local LLM (Ollama, LM Studio), or route through the gateway model chain to use any provider with automatic fallback. The `local-llm-heavy` preset is optimized for fully local operation. See the Local LLM Guide and the Gateway Model Source section for multi-provider setups.
Start with zero config. Enable features as your needs grow:
| Level | What You Get |
|---|---|
| Defaults | Automatic extraction, recall injection, entity tracking, lifecycle management |
| + Search tuning | Choose from 6 search backends (QMD, Orama, LanceDB, Meilisearch, remote, noop) |
| + Capture control | implicit, explicit, or hybrid capture modes for memory write policy |
| + Memory OS | Memory boxes, graph reasoning, compounding, shared context, identity continuity |
| + LCM | Lossless Context Management: never lose conversation context to compaction |
| + Parallel retrieval | Three specialized agents (DirectFact, Contextual, Temporal) run in parallel; same latency, broader coverage |
| + Advanced | Trust zones, causal trajectories, harmonic retrieval, evaluation harness, poisoning defense |
Use a preset to jump to a recommended level: conservative, balanced, research-max, or local-llm-heavy.
- OpenClaw: Native plugin with automatic extraction and recall injection
- Codex CLI: MCP-over-HTTP with session-start hooks for automatic recall
- Any MCP client: stdio or HTTP transport, 8 tools available
- Scripts & automation: Authenticated REST API for custom integrations
- Local LLMs: Run extraction and reranking with local models (Ollama, LM Studio, etc.)
Run Engram as a standalone HTTP server that multiple agent harnesses share. Isolate tenants with namespace policies, feed conversations from any client via the observe endpoint, and search archived history with LCM full-text search. Works with OpenClaw, Codex CLI, Claude Code, and custom HTTP agents. See the Standalone Server Guide.
- 672 tests with CI enforcement
- Evaluation harness with benchmark packs, shadow recall recording, and CI delta gates
- Governance system with review queues, shadow/apply modes, and reversible transitions
- Namespace isolation for multi-agent deployments
- Rate limiting on write paths with idempotency support
- Automatic memory extraction: Facts, decisions, preferences, corrections extracted from conversations
- Observe endpoint: Feed conversation messages from any agent into the extraction pipeline via HTTP or MCP
- Recall injection: Relevant memories injected before each agent turn
- Entity tracking: People, projects, tools, companies tracked as structured entities
- Lifecycle management: Memories age through active, validated, stale, archived states
- Episode/Note model: Memories classified as time-specific events or stable beliefs
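For intuition, the lifecycle can be pictured as a small state machine over the states named above. The transition set here is an assumption for illustration, not Engram's exact rules.

```python
# Hypothetical transition rules over the documented lifecycle states.
LIFECYCLE = {
    "active":    {"validated", "stale"},
    "validated": {"stale"},
    "stale":     {"archived", "active"},  # e.g. re-confirmed by a recall hit
    "archived":  set(),                   # terminal in this sketch
}

def can_transition(src: str, dst: str) -> bool:
    """Check whether a lifecycle move is allowed in this toy model."""
    return dst in LIFECYCLE.get(src, set())

assert can_transition("active", "validated")
assert not can_transition("archived", "active")
```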
| Backend | Type | Best For |
|---|---|---|
| QMD (default) | Hybrid BM25+vector+reranking | Best recall quality |
| Orama | Embedded, pure JS | Zero native deps |
| LanceDB | Embedded, native Arrow | Large collections |
| Meilisearch | Server-based | Shared search |
| Remote | HTTP REST | Custom services |
| Noop | No-op | Extraction only |
See the Search Backends Guide or write your own.
These capabilities can be enabled progressively:
- Memory Boxes: Groups related memories into topic-windowed episodes
- Graph Recall: Entity-relationship graph for causal and timeline queries
- Compounding: Weekly synthesis surfaces patterns and recurring mistakes
- Shared Context: Cross-agent memory sharing for multi-agent setups
- Identity Continuity: Consistent agent personality across sessions
- Hot/Cold Tiering: Automatic migration of aging memories to cold storage
- Memory Cache: Process-level singleton cache for `readAllMemories()`; turns 15s disk scans into <100ms cache hits, shared across all sessions
- Semantic Consolidation: Finds clusters of semantically similar memories, synthesizes canonical versions via LLM, archives originals to reduce bloat
- Native Knowledge: Search curated markdown (workspace docs, Obsidian vaults) without extracting into memory
- Behavior Loop Tuning: Runtime self-tuning of extraction and recall parameters
When your AI agent hits its context window limit, the runtime silently compresses old messages, and that context is gone forever. LCM fixes this by proactively archiving every message into a local SQLite database and building a hierarchical summary DAG (directed acyclic graph) alongside it. When context gets compacted, LCM injects compressed session history back into recall, so your agent never loses track of what happened earlier in the conversation.
- Proactive archiving: Every message is indexed with full-text search before compaction can discard it
- Hierarchical summaries: Leaf summaries cover ~8 turns, depth-1 covers ~32, depth-2 ~128, and so on
- Fresh tail protection: Recent turns always use the most detailed (leaf-level) summaries
- Three-level summarization: Normal LLM summary, aggressive bullet compression, and deterministic truncation (guaranteed convergence, no LLM needed)
- MCP expansion tools: Agents can search, describe, or expand any part of conversation history on demand
- Zero data loss: Raw messages are retained for the configured retention period (default 90 days)
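Given the stated coverage (~8 turns per leaf, ~32 at depth 1, ~128 at depth 2), each summary level fans out roughly 4×. A sketch of picking the summary depth needed to cover a turn span; the fan-out constant is inferred from those numbers, not taken from the implementation:

```python
LEAF_TURNS = 8   # from the description above
FANOUT = 4       # 8 -> 32 -> 128 implies ~4x per level (an inference)

def depth_for_span(turns: int) -> int:
    """Smallest summary depth whose single node covers `turns` turns."""
    depth, coverage = 0, LEAF_TURNS
    while coverage < turns:
        depth += 1
        coverage *= FANOUT
    return depth

assert depth_for_span(8) == 0     # fits in one leaf summary
assert depth_for_span(32) == 1    # needs a depth-1 node
assert depth_for_span(100) == 2   # falls between 32 and 128
```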
Enable it in your openclaw.json:
```jsonc
{
  "plugins": {
    "entries": {
      "openclaw-engram": {
        "config": {
          "lcmEnabled": true
          // All other LCM settings have sensible defaults
        }
      }
    }
  }
}
```

See the LCM Guide for architecture details, configuration options, and how it complements native compaction.
Engram's default retrieval runs a single hybrid search pass. Parallel Specialized Retrieval (inspired by Supermemory's ASMR technique) runs three specialized agents in parallel, so total latency equals max(agents), not sum(agents).
| Agent | What It Does | Cost |
|---|---|---|
| DirectFact | Scans entity filenames for keyword overlap with the query | File I/O only, <5ms |
| Contextual | Existing hybrid BM25+vector search (unchanged) | Same as current |
| Temporal | Reads the temporal date index, returns recent memories with recency decay scoring | File I/O + math, <10ms |
Zero additional LLM cost. The DirectFact and Temporal agents reuse existing indexes with no new embeddings or inference. The Contextual agent is the same hybrid search already running.
Results from all three agents are merged by path, deduplicated, and weighted (direct = 1.0×, temporal = 0.85×, contextual = 0.7×) before returning the top N results. Any agent error degrades gracefully without blocking the others.
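That merge step can be sketched as follows. This is a simplified model using the weights quoted above; the real implementation's scoring and tie-breaking details may differ.

```python
WEIGHTS = {"direct": 1.0, "temporal": 0.85, "contextual": 0.7}  # from above

def merge_results(per_agent: dict[str, list[tuple[str, float]]],
                  top_n: int = 5) -> list[str]:
    """Merge (path, score) hits from each agent: apply the agent weight,
    dedup by path keeping the best weighted score, return the top N paths."""
    best: dict[str, float] = {}
    for agent, hits in per_agent.items():
        weight = WEIGHTS[agent]
        for path, score in hits:
            best[path] = max(best.get(path, 0.0), weight * score)
    return sorted(best, key=best.get, reverse=True)[:top_n]

merged = merge_results({
    "direct":     [("people/josh.md", 0.9)],
    "contextual": [("people/josh.md", 0.9), ("decisions/search.md", 0.8)],
    "temporal":   [("notes/today.md", 0.7)],
})
assert merged[0] == "people/josh.md"  # the direct agent's weight wins the dedup
```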
Enable it in your openclaw.json:
```jsonc
{
  "plugins": {
    "entries": {
      "openclaw-engram": {
        "config": {
          "parallelRetrievalEnabled": true
          // Optional tuning:
          // "parallelMaxResultsPerAgent": 20,
          // "parallelAgentWeights": { "direct": 1.0, "contextual": 0.7, "temporal": 0.85 }
        }
      }
    }
  }
}
```

Set `parallelMaxResultsPerAgent: 0` to disable an individual agent's results without disabling the feature entirely.
Over time, memory stores accumulate redundant facts: the same information extracted multiple times across sessions, expressed slightly differently. Semantic consolidation finds clusters of similar memories using token overlap, synthesizes a single canonical version via LLM, and archives the originals. This reduces storage bloat, speeds up recall, and improves memory quality.
- Conservative by default: Only merges when 80%+ token overlap is detected across 3+ memories
- LLM synthesis: Uses your configured model to combine unique information from all cluster members
- Safe archival: Originals are archived (not deleted) with full provenance tracking
- Configurable: Adjust threshold, cluster size, excluded categories, model, and schedule
- Excluded categories: Corrections and commitments are never consolidated (configurable)
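A toy version of the conservative rule (3+ memories, 80%+ overlap); token Jaccard overlap here is a stand-in for whatever similarity metric Engram actually computes:

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard-style token overlap between two memory texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def is_consolidation_candidate(memories: list[str],
                               threshold: float = 0.8,
                               min_cluster: int = 3) -> bool:
    """Conservative rule from above: 3+ memories, every pair >= threshold."""
    if len(memories) < min_cluster:
        return False
    return all(token_overlap(x, y) >= threshold
               for i, x in enumerate(memories)
               for y in memories[i + 1:])

dupes = ["josh prefers dark mode in every editor",
         "josh always prefers dark mode in every editor",
         "in every editor josh prefers dark mode"]
assert is_consolidation_candidate(dupes)
assert not is_consolidation_candidate(dupes[:2])  # below minimum cluster size
```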
Enable it in your openclaw.json:
```jsonc
{
  "plugins": {
    "entries": {
      "openclaw-engram": {
        "config": {
          "semanticConsolidationEnabled": true
          // Optional tuning:
          // "semanticConsolidationThreshold": 0.8,     // 0.8 = conservative, 0.6 = aggressive
          // "semanticConsolidationModel": "fast",      // "auto", "fast", or a specific model
          // "semanticConsolidationIntervalHours": 168, // weekly (default)
          // "semanticConsolidationMaxPerRun": 100
        }
      }
    }
  }
}
```

Run manually from the CLI:

```
openclaw engram semantic-consolidate --dry-run         # Preview what would be merged
openclaw engram semantic-consolidate --verbose         # Run with detailed output
openclaw engram semantic-consolidate --threshold 0.6   # Override threshold
```

- Objective-State Recall: Surfaces file/process/tool state snapshots alongside semantic memory
- Causal Trajectories: Typed `goal -> action -> observation -> outcome` chains
- Trust Zones: Quarantine/working/trusted tiers with promotion rules and poisoning defense
- Harmonic Retrieval: Blends abstraction nodes with cue-anchor matches
- Verified Recall: Only surfaces memory boxes whose source memories still verify
- Semantic Rule Promotion: Promotes `IF ... THEN` rules from verified episodes
- Creation Memory: Work-product ledger tracking agent outputs
- Commitment Lifecycle: Tracks promises, deadlines, and obligations
- Resume Bundles: Crash-recovery context for interrupted sessions
- Utility Learning: Learns promotion/ranking weights from downstream outcomes
See Enable All Features for a full-feature config profile.
Engram exposes one shared service layer through multiple transports:
```
openclaw engram access http-serve --token "$OPENCLAW_ENGRAM_ACCESS_TOKEN"
```

Key endpoints: `GET /engram/v1/health`, `POST /engram/v1/recall`, `POST /engram/v1/memories`, `GET /engram/v1/entities/:name`, and more. Full reference in the API docs.
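Any HTTP client works against these endpoints. A hedged sketch of building an authenticated recall request in Python; the payload field names (`query`, `limit`) are assumptions, so check the API docs for the exact schema:

```python
import json
import urllib.request

token = "your-access-token"  # value of OPENCLAW_ENGRAM_ACCESS_TOKEN
payload = {"query": "what search backend did we pick?", "limit": 5}  # assumed schema

req = urllib.request.Request(
    "http://127.0.0.1:4318/engram/v1/recall",
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"},
    method="POST",
)
# With the server running, send it:
# with urllib.request.urlopen(req) as resp:
#     memories = json.load(resp)
```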
The HTTP server also hosts a lightweight operator UI at http://127.0.0.1:4318/engram/ui/ for memory browsing, recall inspection, governance review, trust-zone promotion, and entity exploration.
Available via both stdio and HTTP transports:
| Tool | Purpose |
|---|---|
| `engram.recall` | Retrieve relevant memories for a query |
| `engram.recall_explain` | Debug the last recall |
| `engram.day_summary` | Generate structured end-of-day summary from memory content |
| `engram.memory_get` | Fetch a specific memory by ID |
| `engram.memory_timeline` | View a memory's lifecycle history |
| `engram.memory_store` | Store a new memory |
| `engram.suggestion_submit` | Queue a memory for review |
| `engram.entity_get` | Look up a known entity |
| `engram.review_queue_list` | View the governance review queue |
| `engram.observe` | Feed conversation messages into the memory pipeline (LCM + extraction) |
| `engram.lcm_search` | Full-text search over LCM-archived conversations |
| `engram_context_search` | Full-text search across all archived conversation history (LCM) |
| `engram_context_describe` | Get a compressed summary of a turn range (LCM) |
| `engram_context_expand` | Retrieve raw lossless messages for a turn range (LCM) |
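MCP clients invoke these tools with standard JSON-RPC `tools/call` requests. A sketch of the wire payload for `engram.recall`; the argument names are assumptions, and a client should use the schema reported by `tools/list`:

```python
import json

# JSON-RPC 2.0 envelope per the MCP spec; the tool's argument names
# ("query", "limit") are assumptions for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "engram.recall",
        "arguments": {"query": "current project context", "limit": 5},
    },
}
wire = json.dumps(request)  # what actually goes over stdio or HTTP
```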
The HTTP server exposes an MCP JSON-RPC endpoint at POST /mcp, allowing remote MCP clients to use Engram tools over HTTP:
```
openclaw engram access http-serve --host 0.0.0.0 --port 4318 --token "$TOKEN"
```

For namespace-enabled deployments, pass `--principal <name>` where `<name>` matches a `writePrincipals` entry for your target namespace. Deployments with `namespacesEnabled: false` (the default) do not need `--principal`.
```
# Setup & diagnostics
openclaw engram setup                            # Guided first-run setup
openclaw engram doctor                           # Health diagnostics with remediation hints
openclaw engram config-review                    # Config tuning recommendations
openclaw engram stats                            # Memory counts, search status
openclaw engram inventory                        # Full storage and namespace inventory

# Search & recall
openclaw engram search "query"                   # Search memories from CLI
openclaw engram harmonic-search "query"          # Preview harmonic retrieval matches

# Governance
openclaw engram governance-run --mode shadow     # Preview governance transitions
openclaw engram governance-run --mode apply      # Apply reversible transitions
openclaw engram review-disposition <id> --status rejected   # Operator review

# Benchmarking
openclaw engram benchmark recall                 # Benchmark status and validation
openclaw engram benchmark-ci-gate                # CI gate for regressions

# Memory maintenance
openclaw engram consolidate                      # Run standard consolidation
openclaw engram semantic-consolidate             # Run semantic dedup consolidation
openclaw engram semantic-consolidate --dry-run   # Preview without changes

# Access layer
openclaw engram access http-serve --token "$TOKEN"   # Start HTTP API
openclaw engram access mcp-serve                 # Start stdio MCP server

# Trust-zone demos
openclaw engram trust-zone-demo-seed --dry-run   # Preview the opt-in buyer demo dataset
openclaw engram trust-zone-demo-seed             # Explicitly seed the demo dataset
openclaw engram trust-zone-promote --record-id <id> --target-zone working --reason "Operator review"
```

Trust zones now ship with a dedicated admin-console view plus an explicit demo seeding path for buyer-facing walkthroughs.
- Never automatic: Engram does not seed sample trust-zone records on install, startup, or feature enablement.
- Explicit only: demo records appear only after you run `openclaw engram trust-zone-demo-seed` or trigger the matching admin-console action.
- Buyer-friendly story: the trust-zone view surfaces provenance strength, promotion readiness, corroboration requirements, and operator promotion actions in one place.
The seeded scenario is `enterprise-buyer-v1`, which creates a small, opinionated dataset covering:
- quarantine records that are ready for review
- working records that are blocked on missing provenance
- working records that still need corroboration
- working records with independent corroboration support
- a trusted operator policy record
See the full CLI reference for all commands.
All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.config`. Engram has 60+ configuration options covering search backends, capture modes, memory OS features, namespaces, governance, benchmarking, and more; see the full config reference for every setting.
```jsonc
{
  "plugins": {
    "allow": ["openclaw-engram"],
    "slots": { "memory": "openclaw-engram" },
    "entries": {
      "openclaw-engram": {
        "enabled": true,
        "config": {
          // Option 1: Use OpenAI for extraction:
          "openaiApiKey": "${OPENAI_API_KEY}"
          // Option 2: Use Engram's local LLM path (plugin mode only; no API key needed):
          // "localLlmEnabled": true,
          // "localLlmUrl": "http://localhost:1234/v1",
          // "localLlmModel": "qwen2.5-32b-instruct"
          // Option 3: Use the gateway model chain (primary path in gateway mode):
          // "modelSource": "gateway",
          // "gatewayAgentId": "engram-llm",
          // "fastGatewayAgentId": "engram-llm-fast"
        }
      }
    }
  }
}
```