Two commands. Your AI coding assistant gets outcome-based memory.
Works with Claude Code and OpenCode.
AI coding assistants forget everything between sessions. You explain your architecture, your preferences, your conventions -- again. When they give bad advice, there's no mechanism to learn from it.
Roampal is an MCP server that gives your AI persistent, outcome-based memory across every session. Good advice gets promoted. Bad advice gets demoted. Your AI learns what works and what doesn't -- automatically, with zero workflow changes.
85.8% on LoCoMo (non-adversarial, end-to-end answer accuracy) -- validated on 1,986 questions across 10 conversations with dual grading.
| Result | Score |
|---|---|
| Conversational learning vs raw ingestion | +23 points (76.6% vs 53.0%, p<0.0001) |
| Architecture vs model effect | Architecture ~10x larger contributor |
| Poison resilience (1,135 adversarial memories) | only -2.6 to -4.2 points |
| TagCascade retrieval (tags-first + CE rerank) | +1.9 Hit@1 vs pure CE (p<0.0001) |
Benchmark pipeline runs on a single GPU with no cloud dependencies. Roampal itself runs on CPU -- no GPU required. Full methodology, data, and evaluation scripts: roampal-labs
Paper: "Beyond Ingestion: What Conversational Memory Learning Reveals on a Corrected LoCoMo Benchmark" (Logan Teague, April 2026)
```
pip install roampal
roampal init
```

Auto-detects installed tools. Restart your editor and start chatting.
Target a specific tool: `roampal init --claude-code` or `roampal init --opencode`
Platform Differences
The core loop is identical -- both platforms inject context, capture exchanges, and score outcomes. The delivery mechanism differs:
| | Claude Code | OpenCode |
|---|---|---|
| Context injection | Hooks (stdout) | Plugin (system prompt) |
| Exchange capture | Stop hook | Plugin `session.idle` event |
| Scoring | Main LLM via `score_memories` tool | Independent sidecar (your chosen model > Zen free) |
| Self-healing | Hooks auto-restart server on failure | Plugin auto-restarts server on failure |
Claude Code prompts the main LLM to score each exchange via the `score_memories` tool. OpenCode never self-scores: an independent sidecar (a separate API call) reviews each exchange as a third party, removing self-assessment bias. The `score_memories` tool is not registered on OpenCode. During `roampal init` or `roampal sidecar setup`, Roampal detects local models (Ollama, LM Studio, etc.) and lets you choose a scoring model. If one is configured, it takes priority and Zen is skipped entirely for privacy. A cheap or local model works fine -- scoring doesn't need a powerful model. If you skip setup, scoring defaults to Zen free models (remote, best-effort).
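The priority order can be sketched as a small pure function. Everything here (`ScoringConfig`, `pick_scoring_model`, the `"zen-free"` label) is illustrative, not Roampal's actual API:

```python
# Hypothetical sketch of the scoring-model priority described above:
# a locally configured model (Ollama, LM Studio, ...) always wins over
# the remote Zen free default.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ScoringConfig:
    local_model: Optional[str] = None   # chosen during `roampal sidecar setup`

def pick_scoring_model(config: ScoringConfig) -> str:
    """Local models take priority; Zen free is the remote fallback."""
    if config.local_model:
        return config.local_model       # privacy: remote Zen is skipped entirely
    return "zen-free"                   # remote, best-effort default

print(pick_scoring_model(ScoringConfig(local_model="ollama/llama3.2")))  # ollama/llama3.2
print(pick_scoring_model(ScoringConfig()))                               # zen-free
```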
When you type a message, Roampal automatically injects relevant context before your AI sees it:
You type:

```
fix the auth bug
```

Your AI sees:

```
━━━ KNOWN CONTEXT ━━━
• JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)
• User prefers: never stage git changes [id:mb_c3d4] (memory_bank)
━━━ END CONTEXT ━━━
fix the auth bug
```
No manual calls. No workflow changes. It just works.
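The wrapping step can be sketched as a plain string transform. The function name and memory strings below are hypothetical; only the block layout mirrors the example above:

```python
# Illustrative sketch of prepending retrieved memories so the AI sees
# them before the user's message. Not Roampal's actual implementation.

def inject_context(user_message: str, memories: list[str]) -> str:
    """Wrap the message in a KNOWN CONTEXT block when memories are relevant."""
    if not memories:
        return user_message  # nothing relevant: message passes through untouched
    lines = ["━━━ KNOWN CONTEXT ━━━"]
    lines += [f"• {m}" for m in memories]
    lines += ["━━━ END CONTEXT ━━━", user_message]
    return "\n".join(lines)

print(inject_context(
    "fix the auth bug",
    ["JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)"],
))
```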
- You type a message
- Roampal injects relevant context automatically (hooks in Claude Code, plugin in OpenCode)
- AI responds with full awareness of your history, preferences, and what worked before
- Outcome scored -- good advice gets promoted, bad advice gets demoted
- Repeat -- the system gets smarter every exchange
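The loop above can be sketched in a few lines. The score values, deltas, and function names are invented for illustration; Roampal's actual scoring is richer than a single float:

```python
# Minimal sketch of the outcome loop: each memory carries a score,
# good outcomes raise it, bad outcomes lower it, and retrieval
# prefers higher-scored memories.

memories = {"use JWT refresh": 0.5, "disable CSRF checks": 0.5}

def score_exchange(memory: str, outcome: str) -> None:
    """Nudge a memory's score up or down based on the exchange outcome."""
    delta = 0.1 if outcome == "good" else -0.1
    memories[memory] = max(0.0, min(1.0, memories[memory] + delta))

def retrieve(top_k: int = 1) -> list[str]:
    """Return the highest-scored memories first."""
    return sorted(memories, key=memories.get, reverse=True)[:top_k]

score_exchange("use JWT refresh", "good")      # promoted
score_exchange("disable CSRF checks", "bad")   # demoted
print(retrieve())  # ['use JWT refresh']
```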
| Collection | Purpose | Lifetime |
|---|---|---|
| `working` | Current session context | 24h -- promotes if useful, deleted otherwise |
| `history` | Past conversations | 30 days, outcome-scored |
| `patterns` | Proven solutions | Persistent while useful, promoted from `history` |
| `memory_bank` | Identity, preferences, goals | Permanent |
| `books` | Uploaded reference docs | Permanent |
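The table's rules can be sketched as a pure decision function. The thresholds and the `lifecycle` signature are assumptions for illustration, not Roampal's internals:

```python
# Hedged sketch: decide what happens to a memory given its collection,
# age, and whether outcome scoring found it useful.

def lifecycle(collection: str, age_hours: float, useful: bool) -> str:
    if collection in ("memory_bank", "books"):
        return "keep"                          # permanent collections
    if collection == "working" and age_hours > 24:
        return "promote" if useful else "delete"
    if collection == "history":
        if useful:
            return "promote"                   # graduates to `patterns`
        if age_hours > 30 * 24:
            return "delete"                    # 30-day window expired
    if collection == "patterns" and not useful:
        return "demote"                        # no longer proven
    return "keep"

print(lifecycle("working", age_hours=30, useful=True))    # promote
print(lifecycle("history", age_hours=800, useful=False))  # delete
```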
```
roampal init                  # Auto-detect and configure installed tools
roampal init --claude-code    # Configure Claude Code explicitly
roampal init --opencode       # Configure OpenCode explicitly
roampal init --no-input       # Non-interactive setup (CI/scripts)

roampal start                 # Start the HTTP server manually
roampal stop                  # Stop the HTTP server
roampal status                # Check if server is running
roampal status --json         # Machine-readable status (for scripting)
roampal stats                 # View memory statistics
roampal stats --json          # Machine-readable statistics (for scripting)
roampal doctor                # Diagnose installation issues
roampal summarize             # Summarize long memories (retroactive cleanup)
roampal score                 # Score the last exchange (manual/testing)
roampal context               # Output recent exchange context

roampal ingest <file>         # Add documents to books collection
roampal books                 # List all ingested books
roampal remove <title>        # Remove a book by title

roampal sidecar status        # Check scoring model configuration (OpenCode)
roampal sidecar setup         # Configure scoring model (OpenCode)
roampal sidecar test          # Test scoring model response format (OpenCode)
roampal sidecar disable       # Remove scoring model configuration (OpenCode)
```

Your AI gets these memory tools:
| Tool | Description | Platforms |
|---|---|---|
| `search_memory` | Deep search across all collections | Both |
| `add_to_memory_bank` | Store permanent facts (identity, preferences, goals) | Both |
| `update_memory` | Correct or update existing memories | Both |
| `delete_memory` | Remove outdated info | Both |
| `score_memories` | Score previous exchange outcomes | Claude Code |
| `record_response` | Store key takeaways from significant exchanges | Both |
How scoring works: Claude Code's hooks prompt the main LLM to call `score_memories` every turn. OpenCode uses an independent sidecar that scores silently in the background -- the model never sees a scoring prompt, and `score_memories` is not registered as a tool. If the sidecar is unavailable, a warning prompts the user to run `roampal sidecar setup`. Choose your scoring model during `roampal init` or via `roampal sidecar setup`.
| Feature | Roampal Core | Claude Code built-in (CLAUDE.md / auto memory) | OpenCode built-in |
|---|---|---|---|
| Learns from outcomes | Yes -- bad advice demoted, good advice promoted | No | No |
| Semantic retrieval | Yes -- TagCascade + cross-encoder reranking | No -- files loaded in full, no search | No memory system |
| Context injection | Automatic -- relevant memories per query | Full CLAUDE.md every session, auto memory on demand | None |
| Atomic fact extraction | Yes -- summaries + facts, two-lane retrieval | No -- saves what Claude decides is useful | No |
| Works across projects | Yes -- shared memory across all projects | Per-project only (per git repo) | No memory |
| Scales with history | Yes -- 5 collections, promotion/demotion/decay | CLAUDE.md unbounded, auto memory first 200 lines | No memory |
| Fully local / private | Yes -- ChromaDB on your machine | Yes | Yes |
Architecture
```
┌──────────────────────────────────────────────────────────┐
│ pip install roampal && roampal init                      │
│   Claude Code: hooks + MCP → ~/.claude/                  │
│   OpenCode: plugin + MCP → ~/.config/opencode/           │
└──────────────────────────────────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│ HTTP Hook Server (port 27182)                            │
│   Auto-started on first use, self-heals on failure       │
│   Manual control: roampal start / roampal stop           │
└──────────────────────────────────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│ User types message                                       │
│   → Hook/plugin calls HTTP server for context            │
│   → AI sees relevant memories, responds                  │
│   → Exchange stored, scored (hooks or sidecar)           │
└──────────────────────────────────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│ Single-Writer Backend                                    │
│   FastAPI → UnifiedMemorySystem → ChromaDB               │
│   All clients share one server, isolated by session      │
└──────────────────────────────────────────────────────────┘
```
See `dev/docs/` for full technical details.
- Python 3.10+
- One of: Claude Code or OpenCode
- Platforms: Windows, macOS, Linux (primarily developed and tested on Windows)
- RAM: ~800MB available (cross-encoder reranker + embeddings + ChromaDB)
- Disk: ~500MB for models (multilingual embedding + reranker, downloaded automatically on first use)
- CPU: Any modern x86-64 processor with AVX2 (Intel Haswell 2013+ / AMD Excavator 2015+)
- GPU: Not required β all inference runs on CPU via ONNX Runtime
Hooks not working? (Claude Code)
- Restart Claude Code (hooks load on startup)
- Check the HTTP server: `curl http://127.0.0.1:27182/api/health`
MCP not connecting? (Claude Code)
- Verify `~/.claude.json` has the `roampal-core` MCP entry with the correct Python path
- Check the Claude Code output panel for MCP errors
Context not appearing? (OpenCode)
- Make sure you ran `roampal init --opencode`
- Check that the server auto-started: `curl http://127.0.0.1:27182/api/health`
- If not, start it manually: `roampal start`
Server crashes and recovers?
This is expected. Roampal has self-healing -- if the HTTP server stops responding, it is automatically restarted and retried.
Still stuck? Ask your AI for help -- it can read logs and debug Roampal issues directly.
Roampal Core is completely free and open source.
- Support development: roampal.gumroad.com
- Feature ideas & feedback: Discord
- Bug reports: GitHub Issues
- Need help with AI memory? Reach out: roampal@protonmail.com | LinkedIn
