Bring your own context to any AI agent
97.6% LongMemEval answer accuracy
Readable record · inspectable recall · harnesses are replaceable
Website · Docs · Benchmarks · Vision · Discussions · Discord · Contributing · AI Policy
Models change. Harnesses change. Providers change. Your context should not.
Signet is the portable context layer for AI agents. It keeps identity, memory, provenance, secrets, skills, and working knowledge outside any single chat app, model provider, or harness. The execution surface can change. The agent keeps its footing.
The job is simple: bring your own context to the agents you already use, then keep that context inspectable and under your control. Signet runs beneath Claude Code, OpenCode, OpenClaw, Codex, Hermes Agent, and other harnesses so the durable layer survives the tool of the week.
Memory is ambient. Signet captures useful context between sessions, preserves the raw record, indexes it for recall, and injects relevant context before the next prompt starts. The agent wakes up with continuity instead of asking you to rebuild the room by hand.
Why teams adopt it:
- less prompt re-explaining between sessions
- one context layer across agents, models, harnesses, and providers
- local-first storage with inspectable provenance and repair tools
- a path away from harness-locked behavioral context
```sh
bun add -g signetai   # or: npm install -g signetai
signet setup          # interactive setup wizard
signet status         # confirm daemon + pipeline health
signet dashboard      # open memory + retrieval inspector
```

If you already use Claude Code, OpenCode, OpenClaw, Codex, or Hermes Agent, keep your existing harness. Signet installs under it.
Run Signet as a containerized daemon with first-party Compose assets:
```sh
cd deploy/docker
cp .env.example .env
docker compose up -d --build
```

See docs/SELF-HOSTING.md for token bootstrap, backup, and upgrade runbook details.
Portable memory only matters if the agent can see the world you already work inside. Signet is built around ordinary context, not a special knowledge-base ritual: project notes, transcripts, markdown files, PDFs, URLs, identity files, decisions, preferences, and the corrections that shape how work actually happens.
The durable record stays readable. The semantic layer helps the agent navigate it. Retrieval is a lens over the record, not a replacement for it. When a summary is stale, conflict-heavy, or decision-critical, the agent can climb back down to the source.
Run this once:
```sh
signet remember "my primary stack is bun + typescript + sqlite"
```

Then in your next session, ask your agent:
what stack am i using for this project?
You should see continuity without manually reconstructing context. If not, inspect recall and provenance in the dashboard or run:
```sh
signet recall "primary stack"
```

Want the deeper architecture view? Jump to How it works or Architecture.
These are the product surface areas Signet is optimized around:
| Core | What it does |
|---|---|
| Ambient memory | Sessions are captured automatically; no manual memory ceremony required |
| Source-backed context | Raw transcripts and workspace files remain available beneath summaries and recall results |
| Inspectable recall | Hybrid search, graph traversal, provenance, scopes, and ranking signals explain why context surfaced |
| Local-first substrate | Data lives on your machine in SQLite and markdown, portable by default |
| Cross-harness continuity | One shared context layer across Claude Code, OpenCode, OpenClaw, Codex, Pi, and Hermes Agent |
| SDK-first extensibility | Typed SDKs, middleware, and plugin surfaces let builders shape Signet around their own agents |
Use Signet if you want:
- memory continuity across sessions without manual prompt bootstrapping
- local ownership of agent state and history
- one context layer across multiple agent harnesses
Signet may be overkill if you only need short-lived chat memory inside a single hosted assistant.
Signet is not a chat app, not a harness, and not a fake second brain trying to outsmart the model. It is the durable layer underneath: files, memory, identity, provenance, retrieval, secrets, and permissions.
The harness should stay replaceable. The provider should provide intelligence, not custody. Signet keeps the continuity somewhere you can inspect, repair, move, and rebuild.
- runs local-first by default
- raw records and workspace files stay inspectable
- SQLite powers the query layer; recall keeps provenance and source references
- memory can be repaired (edit, supersede, delete, reclassify)
- easy to build on: SDK, connectors, MCP, and workspace primitives let teams shape Signet around their agents, policies, and workflows
- no vendor lock-in, your context stays portable
If you are building agents for an organization, Signet is meant to be shaped, not merely installed. Use the SDK, plugin SDK, connectors, and MCP surface to fit your own agents, permission model, workflows, and deployment style.
These systems improve the quality and reliability of the core memory loop:
| Supporting | What it does |
|---|---|
| Lossless transcripts | Raw session history preserved alongside extracted memories |
| Structured retrieval substrate | Graph traversal + FTS5 + vector search produce bounded candidate context |
| Feedback-aware ranking | Recency, provenance, importance, and dampening signals help separate useful context from repeated noise |
| Noise filtering | Hub and similarity controls reduce low-signal memory surfacing |
| Document ingestion | Pull PDFs, markdown, and URLs into the same retrieval pipeline |
| CLI + Dashboard | Operate and inspect the system from terminal or web UI |
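The feedback-aware ranking row can be made concrete with a toy scorer. This is an illustrative sketch only, not Signet's actual ranking code; the weights, half-life, and field names are assumptions:

```typescript
// Illustrative only: a toy fusion scorer, not Signet's ranking implementation.
interface Candidate {
  id: string;
  similarity: number;    // vector/FTS match strength, 0..1
  importance: number;    // pipeline-assigned weight, 0..1
  ageDays: number;       // days since the memory was written
  surfacedCount: number; // how often this memory was recently surfaced
}

// Exponential recency decay: newer memories score higher.
const recency = (ageDays: number, halfLifeDays = 30): number =>
  Math.pow(0.5, ageDays / halfLifeDays);

// Dampening: repeatedly surfaced memories are gently pushed down
// so repeated noise does not crowd out fresh context.
const dampening = (surfacedCount: number): number =>
  1 / (1 + 0.25 * surfacedCount);

const score = (c: Candidate): number =>
  (0.6 * c.similarity + 0.2 * c.importance + 0.2 * recency(c.ageDays)) *
  dampening(c.surfacedCount);

// Order bounded candidates best-first.
const ranked = (candidates: Candidate[]): Candidate[] =>
  [...candidates].sort((a, b) => score(b) - score(a));
```

The design point is that similarity alone is not the ranking: recency and dampening let a slightly weaker but fresher match beat a stale, over-surfaced one.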
These extend Signet for larger deployments and custom integrations:
| Advanced | What it does |
|---|---|
| Agent-blind secrets | Encrypted secret storage, injected at execution time, never exposed in agent-visible text |
| Multi-agent policies | Isolated/shared/group memory visibility for multiple named agents |
| Git sync | Identity and memory can be versioned in your own remote |
| SDK + plugin SDK | Typed client, React hooks, Vercel/OpenAI helpers, and plugin surfaces for extending the ecosystem |
| MCP aggregation | Register MCP servers once, expose them across connected harnesses |
| Team controls | RBAC, token policy, and rate limits for shared deployments |
| Ecosystem installs | Install skills and MCP servers from skills.sh and ClawHub |
| Apache 2.0 | Fully open source, forkable, and self-hostable |
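The agent-blind secrets idea can be sketched in a few lines. The `{{secret:NAME}}` placeholder syntax and the function name below are assumptions for illustration, not Signet's documented API; the point is that the model only ever sees the placeholder, and resolution happens at execution time:

```typescript
// Illustrative only: placeholder syntax and names are assumptions,
// not Signet's actual secret-injection API.
type SecretStore = Map<string, string>;

// The agent's text contains only "{{secret:DEPLOY_KEY}}"; the real
// value is substituted just before the command executes.
function resolveSecrets(command: string, store: SecretStore): string {
  return command.replace(/\{\{secret:([A-Z0-9_]+)\}\}/g, (_m: string, name: string) => {
    const value = store.get(name);
    if (value === undefined) throw new Error(`unknown secret: ${name}`);
    return value;
  });
}
```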
Memory quality is not just recall quality. It is governance quality.
Signet is built to support:
- provenance inspection (where a memory came from)
- scoped visibility controls (who can see what)
- memory repair (edit, supersede, delete, or reclassify)
- transcript fallback (verify extracted memory against raw source)
- lifecycle controls (retention, decay, and conflict handling)
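The repair and provenance items above can be sketched as a supersede chain. The field names here are hypothetical, not Signet's schema; the idea is that a repaired memory keeps its predecessor (auditable) while recall resolves to the newest version:

```typescript
// Illustrative only: hypothetical field names sketching a supersede chain.
interface MemoryRecord {
  id: string;
  text: string;
  source: string;        // provenance: transcript or file the memory came from
  supersededBy?: string; // set when a newer record replaces this one
}

// Repair by supersession: the old record is kept for audit,
// but points forward to its replacement.
function supersede(records: Map<string, MemoryRecord>, oldId: string, next: MemoryRecord): void {
  const old = records.get(oldId);
  if (!old) throw new Error(`no such memory: ${oldId}`);
  old.supersededBy = next.id;
  records.set(next.id, next);
}

// Recall resolves to the newest record in the chain.
function resolve(records: Map<string, MemoryRecord>, id: string): MemoryRecord {
  let cur = records.get(id);
  if (!cur) throw new Error(`no such memory: ${id}`);
  while (cur.supersededBy) {
    const next = records.get(cur.supersededBy);
    if (!next) break;
    cur = next;
  }
  return cur;
}
```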
Signet is not a harness. It doesn't replace Claude Code, OpenClaw, OpenCode, Pi, or Hermes Agent; it runs alongside them as an enhancement. Bring the harness you already use. Signet handles the memory layer underneath it.
| Harness | Status | Integration |
|---|---|---|
| Claude Code | Supported | Hooks |
| Forge | First-party | Native runtime / reference harness |
| OpenCode | Supported | Plugin + Hooks |
| OpenClaw | Supported | Runtime plugin + NemoClaw compatible |
| Codex | Supported | Hooks + MCP server |
| Hermes Agent | Supported | Memory provider plugin |
| Pi | Supported | Extension + Hooks |
| Gemini CLI | Planned | - |
Don't see your favorite harness? File an issue and request it!
LongMemEval measures whether a memory
system can recover and use facts across long-running, multi-session
assistant conversations. Signet's current published run scores 97.6% answer
accuracy under the MemoryBench rules profile.
That profile keeps the benchmark contract strict: memories are ingested through
`/api/memory/remember`, recalled through `/api/memory/recall`, and answered
from bounded daemon recall results. Search does not call an LLM.
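As a sketch of that contract, a client might build its requests like this. The endpoint paths come from the profile above, but the request-body shapes and default port are assumptions for illustration, not the documented API:

```typescript
// Illustrative only: body shapes are assumed, not Signet's documented API.
const BASE = "http://localhost:3850"; // assumed default daemon address

const rememberRequest = (text: string) => ({
  url: `${BASE}/api/memory/remember`,
  init: {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ text }),
  },
});

const recallRequest = (query: string, limit = 10) => ({
  url: `${BASE}/api/memory/recall`,
  init: {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, limit }), // bounded result set
  },
});

// Usage sketch: await fetch(recallRequest("primary stack").url, recallRequest("primary stack").init)
```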
See Benchmarks for the methodology and run workflow.
```sh
bun add -g signetai   # or: npm install -g signetai
signet setup          # interactive setup wizard
```

The wizard initializes `$SIGNET_WORKSPACE/`, configures your harnesses, sets up an embedding provider, creates the database, and starts the daemon.
Path note:
`$SIGNET_WORKSPACE` means your active Signet workspace path. The default is `~/.agents`, configurable via `signet workspace set <path>`.
Paste this to your AI agent:
Install and fully configure Signet AI by following this guide exactly: https://signetai.sh/skill.md
```sh
signet status      # check daemon health
signet dashboard   # open the web UI
signet remember "prefers bun over npm"
signet recall "coding preferences"
```

Multiple named agents share one daemon and database. Each agent gets its own identity directory (`~/.agents/agents/<name>/`) and configurable memory visibility:
```sh
signet agent add alice --memory isolated         # alice sees only her own memories
signet agent add bob --memory shared             # bob sees all global memories
signet agent add ci --memory group --group eng   # ci sees memories from the eng group
signet agent list                                # roster + policies
signet remember "deploy key" --agent alice --private   # alice-only secret
signet recall "deploy" --agent alice             # scoped to alice's visible memories
signet agent info alice                          # identity files, policy, memory count
```

OpenClaw users get zero-config routing: session keys like `agent:alice:discord:direct:u123` are parsed automatically; no `agentId` header is needed.
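The three visibility modes can be sketched as a read-policy predicate. This mirrors the semantics described above in plain TypeScript; the actual enforcement happens in SQL inside the daemon, and the type names here are assumptions:

```typescript
// Illustrative only: a toy read-policy predicate mirroring the
// isolated / shared / group modes; not Signet's SQL enforcement.
type Policy =
  | { mode: "isolated" }
  | { mode: "shared" }
  | { mode: "group"; group: string };

interface Memory {
  agentId: string;   // which agent wrote the memory
  group?: string;    // optional group tag
  private?: boolean; // --private memories are owner-only
}

function canRead(reader: { id: string; policy: Policy }, m: Memory): boolean {
  if (m.private) return m.agentId === reader.id;     // private: owner only, always
  switch (reader.policy.mode) {
    case "isolated":
      return m.agentId === reader.id;                // own memories only
    case "shared":
      return true;                                   // all global memories
    case "group":
      return m.agentId === reader.id || m.group === reader.policy.group;
  }
}
```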
In connected harnesses, skills work directly:
/remember critical: never commit secrets to git
/recall release process
Signet separates memory into three layers:
- Workspace / transcripts (truth layer): raw files, identity docs, source records, session history
- Semantic memory (navigation layer): summaries, entities, decisions, constraints, relations
- Query layer (retrieval lens): FTS, vector search, graph traversal, scopes, provenance
The record is preserved first. The daemon indexes it, extracts useful structure, and keeps recall bounded and inspectable. The agent gets the right context before the next prompt starts, with a path back to the raw source when the semantic layer is not enough.
After setup, there is no per-session memory ceremony. The pipeline runs in the background and the agent wakes up with its memory intact.
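The layering implies every semantic memory keeps a pointer back into the raw record. A minimal sketch of that expand-on-recall join, with assumed field names:

```typescript
// Illustrative only: assumed shapes showing how a summary can be
// expanded back to its raw transcript span on recall.
interface Transcript {
  id: string;
  lines: string[]; // raw session history, preserved losslessly
}

interface SemanticMemory {
  summary: string;
  transcriptId: string; // provenance pointer into the truth layer
  start: number;        // span of source lines the summary was distilled from
  end: number;
}

// When a summary is stale or decision-critical, climb back down to the source.
function expand(mem: SemanticMemory, transcripts: Map<string, Transcript>): string[] {
  const t = transcripts.get(mem.transcriptId);
  if (!t) throw new Error(`missing transcript ${mem.transcriptId}`);
  return t.lines.slice(mem.start, mem.end);
}
```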
Read more: Why Signet Β· Architecture Β· Knowledge Graph Β· Pipeline
```
Workspace (~/.agents/)
  AGENTS.md, SOUL.md, IDENTITY.md, USER.md, MEMORY.md, transcripts, memory files
  readable source records and agent identity files

CLI (signet)
  setup, knowledge, secrets, skills, hooks, git sync, service mgmt

Daemon (@signet/daemon, localhost:3850)
|-- HTTP API (memory, retrieval, auth, skills, updates, tooling)
|-- File Watcher
|     identity sync, per-agent workspace sync, git auto-commit
|-- Distillation Layer
|     extraction -> decision -> graph -> retention
|-- Retrieval
|     FTS + vectors + graph traversal -> fusion -> dampening
|-- Lossless Transcripts
|     raw session storage -> expand-on-recall join
|-- Document Worker
|     ingest -> chunk -> embed -> index
|-- Ranking + Feedback
|     bounded candidate ordering, provenance, source-aware scoring
|-- MCP Server
|     tool registration, aggregation, blast radius endpoint
|-- Auth Middleware
|     local / team / hybrid, RBAC, rate limiting
|-- Multi-Agent
      roster sync, agent_id scoping, read-policy SQL enforcement

Core (@signet/core)
  types, identity, SQLite storage/query, hybrid search, graph traversal

SDK (@signet/sdk)
  typed client, React hooks, Vercel/OpenAI helpers, plugin-facing primitives

Connectors
  claude-code, opencode, openclaw, codex, oh-my-pi, pi, hermes-agent, forge
```
| Package | Role |
|---|---|
| `@signet/core` | Types, identity, SQLite, hybrid + graph search |
| `@signet/cli` | CLI, setup wizard, dashboard |
| `@signet/daemon` | API server, distillation layer, auth, analytics, diagnostics |
| `@signet/sdk` | Typed client, React hooks, Vercel/OpenAI helpers, plugin-facing primitives |
| `packages/forge` | Forge native terminal harness and reference runtime implementation |
| `@signet/connector-base` | Shared connector primitives and utilities |
| `@signet/connector-claude-code` | Claude Code integration |
| `@signet/connector-opencode` | OpenCode integration |
| `@signet/connector-openclaw` | OpenClaw integration |
| `@signet/connector-codex` | Codex CLI integration |
| `@signet/connector-oh-my-pi` | Oh My Pi integration |
| `@signet/connector-hermes-agent` | Hermes Agent integration |
| `@signet/connector-pi` | Pi coding agent integration |
| `@signet/oh-my-pi-extension` | Oh My Pi extension bridge |
| `@signet/pi-extension` | Pi extension: memory tools, lifecycle, and session hooks |
| `@signet/opencode-plugin` | OpenCode runtime plugin: memory tools and session hooks |
| `@signetai/signet-memory-openclaw` | OpenClaw runtime plugin |
| `@signet/extension` | Browser extension for Chrome and Firefox |
| `@signet/desktop` | Electron desktop application |
| `@signet/tray` | Shared tray/menu bar utilities |
| `@signet/native` | Native accelerators |
| `predictor` | Experimental Rust sidecar for learned relevance ranking |
| `signetai` | Meta-package (`signet` binary) |
- Quickstart
- CLI Reference
- Configuration
- Hooks
- Harnesses
- Secrets
- Skills
- Auth
- Dashboard
- SDK
- API Reference
- Knowledge Architecture
- Knowledge Graph
- Benchmarks
- Roadmap
| Paper / Project | Relevance |
|---|---|
| Lossless Context Management (Voltropy, 2026) | Hierarchical summarization, guaranteed convergence. Patterns adapted in LCM-PATTERNS.md. |
| Recursive Language Models (Zhang et al., 2026) | Active context management. LCM builds on and departs from RLM's approach. |
| acpx (OpenClaw) | Agent Client Protocol. Structured agent coordination. |
| lossless-claw (Martian Engineering) | LCM reference implementation as an OpenClaw plugin. |
| openclaw (OpenClaw) | Agent runtime reference. |
| arscontexta | Agentic notetaking patterns. |
| ACAN (Hong et al.) | LLM-enhanced memory retrieval for generative agents. |
| Kumiho (Park et al., 2026) | Prospective indexing. Hypothetical query generation at write time. Reports 0.565 F1 on the official split and 97.5% on the adversarial subset. |
```sh
git clone https://github.com/Signet-AI/signetai.git
cd signetai
bun install
bun run build
bun test
bun run lint
```

```sh
cd packages/daemon && bun run dev          # Daemon dev (watch mode)
cd packages/cli/dashboard && bun run dev   # Dashboard dev
```

Requirements:

- Node.js 18+ or Bun
- macOS or Linux
- Optional for harness integrations: Claude Code, Codex, OpenCode, or OpenClaw
Embeddings (choose one):

- Built-in (recommended): no extra setup, runs locally via ONNX (`nomic-embed-text-v1.5`)
- Ollama: alternative local option, requires the `nomic-embed-text` model
- OpenAI: cloud option, requires `OPENAI_API_KEY`
New to open source? Start with Your First PR. For code conventions and project structure, see CONTRIBUTING.md. Open an issue before contributing significant features. Read the AI Policy before submitting AI-assisted work.
Made with love by members of Dashore Incubator & friends of Jake Shore and Nicholai Vogel.
Apache-2.0.
signetai.sh Β· docs Β· spec Β· discussions Β· issues
