
freelance


Description

Graph-based workflow enforcement and persistent memory for AI coding agents. Define structured workflows in YAML. Enforce them at tool boundaries via MCP. Build a persistent knowledge graph that grows with every query and knows when its sources have changed.

README

Freelance

Graph-based workflow enforcement and persistent memory for AI coding agents.

Define structured workflows in YAML. Agents drive them through the freelance CLI via a Claude Agent Skill that teaches the invariant protocol. Build a persistent knowledge graph that grows with every query and knows when its sources have changed.

Quick Start

Claude Code (plugin — recommended)

/plugin marketplace add duct-tape-and-markdown/freelance
/plugin install freelance@freelance-plugins

This installs the CLI-driving skill and the session/compact hooks. Then scaffold the project with freelance init in a terminal.

Other clients (Cursor, Windsurf, Cline)

npm install -g freelance-mcp
cd /path/to/your/project
freelance init

Driving workflows via skill + CLI

Freelance ships a single Claude Agent Skill that teaches the agent how to drive any workflow through the freelance CLI. The skill activates on a description match: when the user names a workflow to run, describes a task that matches a loaded workflow, or wants to continue an in-flight traversal.

freelance init --client claude-code installs SKILL.md at .claude/skills/freelance/ (project scope) or ~/.claude/skills/freelance/ (user scope) alongside the workflows directory. The plugin install path ships the same skill.

The agent drives workflows through shell-out calls — freelance status, freelance start <graphId>, freelance advance <edge>, freelance context set k=v, freelance inspect — and branches on semantic exit codes (0 success, 1 internal, 2 blocked, 3 validation, 4 not found, 5 invalid input). Every runtime verb emits a structured JSON response on stdout; breadcrumbs go to stderr. See plugins/freelance/skills/freelance/SKILL.md for the driving protocol.
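
The dispatch on semantic exit codes can be sketched in shell. Here `run_verb` is a hypothetical stub standing in for a real verb call such as `freelance advance done`; only the exit-code branching mirrors the documented contract, and the JSON payload is made up:

```shell
# Sketch of branching on freelance's semantic exit codes.
# run_verb is a stub for e.g. `freelance advance done`; the real CLI
# emits structured JSON on stdout and breadcrumbs on stderr.
run_verb() {
  echo '{"ok":true,"node":"review"}'   # stubbed stdout payload
  return 0                             # 0 = success
}

out=$(run_verb)
case $? in
  0) echo "advanced: $out" ;;
  2) echo "blocked: a gate's validations have not passed" ;;
  3) echo "validation error: check the response payload" ;;
  4) echo "not found: no such graph or traversal" ;;
  *) echo "unexpected failure" ;;
esac
```

The agent parses the JSON from stdout on success and pivots on the non-zero codes rather than scraping error text.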

Workflows

Workflows are directed graphs defined in YAML. The agent calls CLI verbs to traverse them — freelance start to begin, freelance advance to move between nodes. Gate nodes block advancement until conditions are met. State lives on disk under .freelance/traversals/, so it survives context compaction.

id: my-workflow
version: "1.0.0"
name: "My Workflow"
startNode: start

context:
  taskDone: false

nodes:
  start:
    type: action
    description: "Do the work"
    instructions: "Complete the task and set context.taskDone = true."
    edges:
      - target: review
        label: done

  review:
    type: gate
    description: "Review the work"
    validations:
      - expr: "context.taskDone == true"
        message: "Task must be completed before review."
    edges:
      - target: complete
        label: approved

  complete:
    type: terminal
    description: "Workflow complete"

Node types: action (do work), decision (pick a route), gate (enforce conditions), wait (pause for external signals), terminal (end state).

Subgraph composition — Nodes can push into child workflows with scoped context. contextMap passes parent values in, returnMap passes child values back. The engine manages a stack, so subgraphs can nest.
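
As a sketch, a parent node pushing into a child graph might look like this; `contextMap`/`returnMap` are the documented field names, while the `subgraph:` key and the overall layout are assumptions, not the canonical schema:

```yaml
# Hypothetical subgraph node -- layout is illustrative, not canonical.
nodes:
  implement:
    type: action
    description: "Implement the feature via a child workflow"
    subgraph: code-review          # child graph id (assumed key name)
    contextMap:                    # parent -> child values on push
      featureName: taskName
    returnMap:                     # child -> parent values on pop
      approved: reviewPassed
    edges:
      - target: done
        label: finished
```

Because the engine keeps a stack, the child graph could itself contain a node like this, nesting up to maxDepth.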

Expression evaluator — Edge conditions and validations use a safe expression language (context.x == 'value', context.count > 0, boolean operators, nested property access). Validated at load time, evaluated at runtime.
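
For example, a decision node might route on expressions like these (the `condition:` key on edges is an assumption; the expression forms are the documented ones):

```yaml
# Sketch of conditional edges on a decision node (edge key names assumed).
nodes:
  triage:
    type: decision
    description: "Route by review outcome"
    edges:
      - target: ship
        label: approved
        condition: "context.review.verdict == 'approved' && context.count > 0"
      - target: rework
        label: rejected
        condition: "context.review.verdict == 'rejected'"
```

Load-time validation means a typo in either expression surfaces at freelance validate, not mid-traversal.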

onEnter hooks — Any node can declare onEnter: [{ call, args }] hooks that run before the agent sees the node. call resolves to either a built-in hook (memory_status, memory_browse) or a local script path (./scripts/fetch-context.js). Hooks receive resolved args, live context, and the memory store, and return a plain object of context updates. Strict-context enforcement still applies. Per-hook timeout defaults to 5000ms, configurable via hooks.timeoutMs in config.yml. Script hooks are validated at freelance validate time (eager import) — syntax errors, missing deps, and non-function default exports surface before the first traversal hits the node.

Trust model for hook scripts. Hook execution sits on an explicit line between two tiers:

  • Built-in hooks (memory_status, memory_browse, meta_set, …) are curated and ship inside the package. They run against a narrow read interface over memory + a meta collector. Always safe to reference.
  • Local script hooks (./scripts/foo.js) execute with full Node privileges in the host process — same filesystem, same network, same subprocess spawn, same env vars. There is no sandbox. Treat a workflow file that references a local script like a package.json scripts block: trust it at the same level you trust the rest of the repo. Do not load workflow graphs from untrusted sources.

A deployment that can't vet every workflow (shared graph registry, untrusted contributors) can set FREELANCE_HOOKS_ALLOW_SCRIPTS=0 in the environment; graph load then rejects any onEnter entry that points at a local script, leaving built-ins as the only runnable hook surface. Unset or set to any other value means scripts are allowed (default). See docs/decisions.md for the architectural rationale and the sandboxing milestone.

Memory

Freelance includes a persistent knowledge graph backed by SQLite. The agent reads source files, reasons about them, and writes atomic propositions about 1-2 entities. Every proposition records which source files produced it and their content hashes at the time of compilation.

When you query memory, it checks whether the source files on disk still match. Match = valid knowledge. Mismatch = stale. The knowledge base grows with every query but never serves something that's silently out of date.

How it works

Compilation — The agent reads source files, then emits propositions: self-contained claims in natural prose, each about 1-2 named entities. Propositions are deduplicated by content hash. Entity references are resolved by exact match, normalized match, or creation.

Recollection — When a new question comes in, the agent searches existing memory, reads the provenance sources, and identifies the delta — what the sources say about the question that existing propositions don't cover. Only that gap gets compiled. Each query makes the knowledge base denser from a different angle, without re-deriving what's already there.

Source provenance — Every proposition records the specific source files it was derived from, their content hashes, and their mtime at emit time. Validity is checked per-proposition on read: if any of a prop's source files have drifted, the prop is marked stale. Stale propositions aren't hidden — they're returned with a confidence signal so the agent can decide whether to re-verify.
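
Conceptually, the per-proposition check is a stored-hash comparison. The sketch below (made-up path and contents, sha256 standing in for the content hash) shows both outcomes; freelance does this internally against memory.db:

```shell
# Sketch of the per-proposition staleness check: a proposition stays
# valid only while each recorded source still hashes to the value
# captured at emit time. Illustration only.
src=/tmp/freelance_demo_src.txt
printf 'original contents' > "$src"
recorded="$(sha256sum "$src" | cut -d' ' -f1)"   # hash stored with the proposition

current="$(sha256sum "$src" | cut -d' ' -f1)"
[ "$current" = "$recorded" ] && echo "valid" || echo "stale"   # prints: valid

printf 'edited contents' > "$src"                # source drifts on disk
current="$(sha256sum "$src" | cut -d' ' -f1)"
[ "$current" = "$recorded" ] && echo "valid" || echo "stale"   # prints: stale
```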

Git branching for free — Switch branches, files change on disk, different propositions light up as valid or stale. Merge the branch, files converge, knowledge converges. No scope model, no branch tracking — just hash checks on read.

Configuration

Memory is enabled by default with zero configuration. The database is stored at .freelance/memory/memory.db.

To customize, add memory settings to your .freelance/config.yml (see Configuration below).

Two sealed workflows are auto-injected: memory:compile (read sources, emit propositions, evaluate coverage) and memory:recall (recall, source, compare, fill delta, evaluate). These can be referenced as subgraphs in your own workflows.

Memory CLI verbs

Write (gated by an active workflow traversal):

| Command | Description |
| --- | --- |
| `freelance memory emit <file>` | Write propositions with required per-file source attribution (use `-` for stdin) |
| `freelance memory prune --keep <ref>` | Scope-bounded delete by content-reachability (see Pruning memory) |

Read (available anytime):

| Command | Description |
| --- | --- |
| `freelance memory browse` | Find entities by name or kind (paginated via `--limit`/`--offset`) |
| `freelance memory inspect <entity>` | Full entity details with propositions, neighbors, and deduped source files. Paginates via `--limit`/`--offset`; `--shape minimal\|full` (default `full`) trims per-proposition source details when size matters |
| `freelance memory by-source <file>` | All propositions derived from a specific source file (paginated via `--limit`/`--offset`) |
| `freelance memory related <entity>` | Entity graph navigation — co-occurring entities with connection strength (paginated via `--limit`/`--offset`) |
| `freelance memory search <query>` | Full-text search across proposition content (FTS5) |
| `freelance memory status` | Knowledge graph health: total, valid, stale counts |

Workflow CLI verbs

| Command | Description |
| --- | --- |
| `freelance status` | Discover available workflow graphs and active traversals (each with any meta tags). Surfaces `loadErrors` when any workflow yaml failed to parse or validate |
| `freelance start <graphId>` | Begin traversing a graph (optional opaque `--meta key=value` tags for later lookup) |
| `freelance advance <edge>` | Move to the next node via a labeled edge. `--minimal` drops the full-context echo + node blob and returns `contextDelta` (keys written this turn) for lean hot-path responses |
| `freelance context set <key=value>...` | Update session context without advancing. `--minimal` returns `contextDelta` only |
| `freelance meta set <key=value>...` | Merge opaque meta tags onto a traversal (add or overwrite) |
| `freelance inspect [id]` | Read-only introspection. `--detail position\|history`, `--minimal` (strips node blob on position detail), `--fields <name>` (repeatable: `currentNode\|neighbors\|contextSchema\|definition`), and on history: `--limit <n>` / `--offset <n>` / `--include-snapshots`. Includes meta tags |
| `freelance reset --confirm` | Clear traversal and start over |
| `freelance guide [topic]` | Authoring guidance for writing graphs |
| `freelance distill --mode distill\|refine` | Distill a task into a new workflow, or refine an existing one |
| `freelance validate <dir>` | Validate graph definitions |
| `freelance sources hash <paths...>` | Compute hashes for source binding |
| `freelance sources check <sources...>` | Verify source file hashes |
| `freelance sources validate` | Validate source integrity across loaded graphs |

Configuration

Freelance uses two config files in .freelance/, both with the same schema:

| File | Purpose | Committed? |
| --- | --- | --- |
| `config.yml` | Team-shared settings | Yes |
| `config.local.yml` | Machine-specific overrides (plugin hooks) | No (gitignored) |

General precedence: CLI flags > env vars > config.local.yml > config.yml > defaults. Per-field surface:

| Field | CLI flag | Env var | `config.yml` | Notes |
| --- | --- | --- | --- | --- |
| `workflows` | `--workflows` (repeatable) | `FREELANCE_WORKFLOWS` | ✓ (array, concatenates across files) | User/project dirs cascade automatically |
| `memory.enabled` | `--memory` / `--no-memory` | — | ✓ | CLI flag always wins |
| `memory.dir` | `--memory-dir` | — | ✓ | Default: `.freelance/memory/` |
| `maxDepth` | `--max-depth` | — | ✓ | Default: 5 |
| `hooks.timeoutMs` | — | — | ✓ | Config-only. Default: 5000 |
| `context.maxValueBytes` | — | — | ✓ | Per-value cap on context writes. Default: 4096 (4 KB) |
| `context.maxTotalBytes` | — | — | ✓ | Total context cap per traversal. Default: 65536 (64 KB) |
| `sourceRoot` | `--source-root` | — | — | Computed from graphsDir if omitted |

# .freelance/config.yml
workflows:                          # Additional workflow directories
  - ../shared-workflows/

memory:
  enabled: true                     # Default: true. Set false to disable.
  dir: /path/to/persistent/dir      # Override memory.db location (default: .freelance/memory/)

maxDepth: 5                         # Max subgraph nesting depth. CLI --max-depth overrides.

hooks:
  timeoutMs: 5000                   # Per-hook timeout for onEnter hooks. Default 5000.

context:
  maxValueBytes: 4096               # Per-value byte cap on context writes. Default 4 KB.
  maxTotalBytes: 65536              # Total serialized-context cap per traversal. Default 64 KB.

Over-cap writes are rejected with CONTEXT_VALUE_TOO_LARGE or CONTEXT_TOTAL_TOO_LARGE errors at the freelance context set, freelance advance --context …, and onEnter-hook return boundaries, so a runaway hook or context write never persists.

Merge rules: arrays (workflows) concatenate across files. Scalars use highest-precedence value.
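
A concrete sketch of these merge rules, with made-up paths (both files shown in one listing for brevity):

```yaml
# .freelance/config.yml (committed)
workflows:
  - ../shared-workflows/       # array: concatenates with local entries
maxDepth: 5                    # scalar: overridden by the local file below

# .freelance/config.local.yml (gitignored)
workflows:
  - /home/me/experiments/      # both directories end up loaded
maxDepth: 8                    # highest-precedence scalar: resolved maxDepth is 8
```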

Use freelance config show to see the resolved configuration and which files contributed.

Use freelance config set-local <key> <value> to modify config.local.yml programmatically (used by plugin hooks).

Workflow directories

Workflows load automatically from these directories (no flags needed):

  1. ./.freelance/ — project-level workflows
  2. ~/.freelance/ — user-level workflows (shared across projects)
  3. Additional directories listed in config.yml or config.local.yml workflows:

Subdirectories are scanned recursively. Later directories shadow earlier ones by graph ID. You can also specify directories explicitly:

freelance status --workflows ./my-workflows/

.freelance/ directory layout

.freelance/
├── config.yml           # team-shared config (committed)
├── config.local.yml     # machine-specific overrides (gitignored)
├── *.workflow.yaml      # source artifacts — your graph definitions
├── .gitignore           # auto-generated; covers runtime dirs below
├── memory/              # runtime (gitignored)
│   ├── memory.db        #   persistent knowledge graph
│   ├── memory.db-shm    #   SQLite shared-memory sidecar
│   └── memory.db-wal    #   SQLite write-ahead log
└── traversals/          # runtime (gitignored)
    └── tr_*.json        #   one file per active traversal

Source artifacts and runtime artifacts coexist as peers; the lifecycle distinction is maintained via .gitignore, not directory nesting. Freelance auto-generates .freelance/.gitignore on first write.

If you're upgrading from a pre-1.3 install that used a .state/ subdirectory, the layout is migrated automatically on the next run — memory.db is moved into memory/, traversals/ moves up one level, the vestigial state.db from the earlier architecture is removed, and the empty .state/ is cleaned up. The migration logs one line to stderr and is best-effort; on failure you'll see an actionable message.

Pruning memory

Over a long project lifetime, proposition_sources accumulates rows from abandoned branches and old file versions. Each row is a (proposition, file_path, content_hash) coordinate in corpus-version space — stale entries aren't wrong, they're frames of reference you no longer care about. freelance memory prune is the explicit, user-initiated cleanup path; emit-time GC would collapse the multi-frame store (see docs/memory-intent.md).

freelance memory prune --keep main --keep release --yes
freelance memory prune --keep main --dry-run

A row is deleted only when its content_hash doesn't match the file at any location you declared live: the current working tree, or the tip of any --keep ref. Ref blobs are read via git cat-file --batch directly from the object store, so prune never switches branches or touches your working tree.

The approach is robust to history-rewriting workflows (rebase, squash merge, amend) because it asks about tree content, not commit reachability — a squashed branch's bytes end up in the merge commit's tree and are still found. Unresolvable --keep refs hard-error before touching the database. Non-git source roots can't use prune at all.
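
The underlying mechanism is easy to see in isolation. This sketch builds a throwaway repo and reads a ref's bytes straight from the object store, the same read prune batches via `git cat-file --batch`:

```shell
# Demonstration repo; everything here is throwaway.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'v1\n' > f.txt
git add f.txt
git commit -qm 'first'
git branch keepme                 # the ref we would declare live via --keep
printf 'v2\n' > f.txt             # working tree drifts past the ref

git cat-file -p keepme:f.txt      # reads v1 with no checkout or branch switch
```

Because the read goes through the object store, the working tree (now holding v2) is never touched.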

Config default:

memory:
  prune:
    keep: [main]                 # concatenates with --keep flags

Resetting memory

Memory is content-addressable — everything in memory.db can be rebuilt on demand from source files. If you hit a schema incompatibility after a version bump, or just want a clean slate:

freelance memory reset --confirm

Deletes memory.db and its sidecars without opening the database, so it works even when the current binary refuses to load the old schema. Next run re-initializes a fresh store.

CLI flags:

  • --memory-dir <path> — override memory.db location (highest priority)
  • --no-memory — disable memory entirely

Environment variables:

  • FREELANCE_WORKFLOWS_DIR — colon-separated list of workflow directories (bypasses auto-scan)

CLI

Agents drive Freelance through the CLI. Commands operate directly on the local state store — no daemon or server required.

# Setup
freelance init                            # Interactive project setup
freelance validate <dir>                  # Validate graph definitions
freelance visualize <file>                # Render graph as Mermaid or DOT

# Traversals
freelance status [--filter key=value ...]           # Show loaded graphs and active traversals (with meta); --filter narrows by meta
freelance start <graphId> [--meta key=value ...]    # Begin a workflow traversal, optionally tagged
freelance advance [edge] [--minimal]                # Move to next node via edge label; --minimal for lean response
freelance context set <key=value...> [--minimal]    # Update traversal context; --minimal for lean response
freelance meta set <key=value...>                   # Merge meta tags (add or overwrite)
freelance inspect [traversalId] [--minimal]         # Read-only introspection (includes meta); --minimal trims node blob
freelance reset [traversalId] --confirm             # Clear a traversal

# Memory
freelance memory status                   # Proposition and entity counts
freelance memory browse                   # Find entities by name or kind
freelance memory inspect <entity>         # Full entity details
freelance memory search <query>           # Full-text search
freelance memory related <entity>         # Co-occurring entities
freelance memory by-source <file>         # Propositions from a source file
freelance memory emit <file>              # Write propositions from JSON

# Graph tools
freelance guide [topic]                   # Authoring guidance
freelance distill                         # Get a distill prompt
freelance sources hash <paths...>         # Compute source hashes
freelance sources check <sources...>      # Validate source hashes
freelance sources validate                # Validate all source bindings

# Configuration
freelance config show                     # Display resolved config with sources
freelance config set-local <key> <value>  # Modify config.local.yml

freelance completion bash|zsh|fish        # Shell completion script

Run freelance --help for full details and flags.

License

MIT

Release History

v1.3.3 (High, 4/17/2026)
Plugin-only patch. Server code is unchanged from 1.3.2 — this release exists to propagate the `.mcp.json` pinning mechanism to users whose `/plugin update` was silently no-op'ing on stale npx cache entries. Changed: plugin `.mcp.json` now pins an exact `freelance-mcp` version instead of the `^1` range. Caught by a field report after 1.3.2: npx keys its `_npx/<hash>` cache on the raw spec string, so `freelance-mcp@^1` reuses whatever 1.x is already cached and never re-resolves against

v1.3.2 (High, 4/17/2026)
Hotfix release for two regressions shipped in v1.3.1 (https://github.com/duct-tape-and-markdown/freelance/releases/tag/v1.3.1). Fixed: plugin `.mcp.json` launcher (https://github.com/duct-tape-and-markdown/freelance/pull/70) — 1.3.1 shipped with hardcoded author-machine dev paths instead of the `npx -y freelance-mcp@^1 mcp` launcher. Any fresh install on a different machine failed to start, and existing installs stayed pinned to whatever `freelance-mcp` npx resolved under 1.3.0

v1.3.1 (High, 4/17/2026)
The memory-architecture port. Pushed memory intelligence out of the agent's round-trip path and into the traversal layer, driven by 11 ablation runs against a controlled fixture. Design intent captured in `docs/memory-intent.md`; empirical basis in `experiments/FINDINGS.md`. Added: four new built-in onEnter hooks — `memory_search`, `memory_related`, `memory_inspect`, `memory_by_source`. `memory_by_source` takes `paths: s

v1.3.0 (High, 4/14/2026)
Restrict memory write tools to active memory workflow traversals by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/43; Codebase cleanup: dead code, docs drift, consistency by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/44; 1.3.0: architectural consolidation release by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/50; 1.3.0: onEnter hooks, async engine, composition root, flat layout by @Jwcjwc12 in htt

v1.3.0-beta.0 (Medium, 4/13/2026)
First beta of the 1.3.0 consolidation release. Publishes under the `beta` npm dist-tag — `@latest` stays on 1.2.1 until this bakes. Install: `npx freelance-mcp@beta mcp`, or pin explicitly with `npx freelance-mcp@1.3.0-beta.0 mcp`. Plugin users on `freelance-mcp@^1` are not auto-upgraded — npm semver ranges skip pre-release versions by default. Full CHANGELOG entry: https://github.com/duct-tape-and-markdown/freelance/blob/main/CHANGELOG.md#130---2026-04-

v1.2.1 (Medium, 4/11/2026)
Fixed: memory enabled-by-default — the memory gate checked `enabled && db` instead of `enabled !== false && db`, preventing zero-config memory activation when `memory.enabled` was unset. Changed: `memory_register_source` accepts arrays — the `file_path` parameter now accepts a single path or an array of paths, reducing round-trips during compilation workflows

v1.2.0 (Medium, 4/10/2026)
Unified configuration system: introduces `config.yml` + `config.local.yml` — a structured, layered config surface for Freelance. New: `freelance config show` (display resolved configuration with sources); `freelance config set-local <key> <value>` (modify `config.local.yml` programmatically, for plugin hooks); `config.local.yml` (gitignored, machine-specific overrides layered over committed `config.yml`); `workflows:` config key (declare additional workflow director

v1.1.3 (Medium, 4/10/2026)
Fixes: session-start hook called nonexistent `inspect --active --oneline`, now calls `status`; hook idempotency — dedup marker didn't match the actual command, causing duplicates on re-init; shell completions — all three (bash/zsh/fish) referenced the removed `traversals` subgroup, rewritten for the current CLI; CONTRIBUTING.md — wrong package name (`npx freelance` → `npx freelance-mcp`); tests — fixed stale mock (`INIT_DEFAULTS` missing `hooks`) and assertion (`validate` mi

v1.1.2 (Medium, 4/10/2026)
Fix publish workflow authentication for npm trusted publishing; CLI parity with all 21 MCP tools; memory: collections, entity kinds, graph navigation, FTS triggers; enable memory by default with `--memory-dir` and `--no-memory` flags; fix CLI self-references in hooks, completions, and docs; fix missing `better-sqlite3` dependency for npx installs

v1.1.1 (Medium, 4/10/2026)
Fix missing `better-sqlite3` dependency for `npx` installs. `better-sqlite3` was listed as an optional peer dependency, which meant it wasn't installed when running via `npx -y freelance-mcp@latest mcp` (e.g. the Claude Code plugin). Moved it to a regular dependency.

v1.1.0 (Medium, 4/10/2026)
Update installation command in README by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/29; Surface graph validation errors and add freelance_validate MCP tool by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/31; Add persistent memory system with SQLite-backed knowledge graph by @Jwcjwc12 in https://github.com/duct-tape-and-markdown/freelance/pull/32; Add collections, entity kinds, graph navigation, and FTS triggers by @Jwcjwc


Similar Packages

  • @baseplate-dev/plugin-ai (0.6.8): AI agent integration plugin for Baseplate — generates AGENTS.md, CLAUDE.md, .mcp.json, and .agents/ configuration files
  • neurohive (1.0.8): Multi-agent memory intelligence layer — shared knowledge, expertise tracking, and conflict detection for AI agent teams
  • @senso-ai/shipables (0.1.2): CLI for installing, managing, and publishing AI agent skills from the Shipables registry
  • openpaean (0.7.15): Open source AI agent CLI with executor framework (a8e, Claude Code), gateway relay, MCP support, and scrolling TUI
  • minutes-sdk (0.13.3): Conversation memory SDK — query meeting transcripts, decisions, and action items from any AI agent or application