MCP server for Claude Code and Codex. One tool call replaces ~42 minutes of agent exploration: 80 Grep calls, 190 file reads.
Your AI agent reads `UserController.php` and sees a class. trace-mcp reads it and sees a route → controller → Eloquent model → Inertia render → Vue page, all in one graph. Ask "what breaks if I change this model?" and instead of 80 Grep calls and 190 file reads, the agent calls `get_change_impact` once and gets the blast radius across PHP, Vue, migrations, and DI. 58 framework integrations across 81 languages, 138 tools, up to 99% token reduction.
Also ships a desktop app with a GPU graph explorer over the same index.
| You ask | trace-mcp answers | How |
|---|---|---|
| "What breaks if I change this model?" | Blast radius across languages + risk score + linked architectural decisions | get_change_impact β reverse dependency graph + decision memory |
| "Why was auth implemented this way?" | The actual decision record with reasoning and tradeoffs | query_decisions β searches the decision knowledge graph linked to code |
| "I'm starting a new task" | Optimal code subgraph + relevant past decisions + dead-end warnings | plan_turn β opening-move router with decision enrichment |
| "What did we discuss about GraphQL last month?" | Verbatim conversation fragments with file references | search_sessions β FTS5 search across all past session content |
| "Show me the request flow from URL to rendered page" | Route β Middleware β Controller β Service β View with prop mapping | get_request_flow β framework-aware edge traversal |
| "Find all untested code in this module" | Symbols classified as "unreached" or "imported but never called in tests" | get_untested_symbols β test-to-source mapping |
| "What's the impact of this API change on other services?" | Cross-subproject client calls with confidence scores | get_subproject_impact β topology graph traversal |
Three things no other tool does:

- **Framework-aware edges** – trace-mcp understands that `Inertia::render('Users/Show')` connects PHP to Vue, that `@Injectable()` creates a DI dependency, that `$user->posts()` means a `posts` table from migrations. 58 integrations across 15 frameworks, 7 ORMs, 13 UI libraries.
- **Code-linked decision memory** – when you record "chose PostgreSQL for JSONB support", it's linked to `src/db/connection.ts::Pool#class`. When someone runs `get_change_impact` on that symbol, they see the decision. MemPalace stores decisions as text; trace-mcp ties them to the dependency graph.
- **Cross-session intelligence** – past sessions are mined for decisions and indexed for search. When you start a new session, `get_wake_up` gives you orientation in ~300 tokens; `plan_turn` shows relevant past decisions for your task; `get_session_resume` carries over structural context from previous sessions.
AI coding agents are language-aware but framework-blind.
They don't know that Inertia::render('Users/Show', $data) connects a Laravel controller to resources/js/Pages/Users/Show.vue. They don't know that $user->posts() means the posts table defined three migrations ago. They can't trace a request from URL to rendered pixel.
So they brute-read files, guess at relationships, and miss cross-language edges entirely. The bigger the project, the worse it gets.
trace-mcp builds a cross-language dependency graph from your source code and exposes it through the Model Context Protocol β the plugin format Claude Code, Cursor, Windsurf and other AI coding agents speak. Any MCP-compatible agent gets framework-level understanding out of the box.
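For clients that are configured by hand, an MCP server registration is typically a small JSON entry in the client's config file. The exact subcommand and arguments below are assumptions for illustration (`trace-mcp init` writes the real entry for you):

```json
{
  "mcpServers": {
    "trace-mcp": {
      "command": "trace-mcp",
      "args": ["serve", "--stdio"]
    }
  }
}
```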
| Without trace-mcp | With trace-mcp |
|---|---|
| Agent reads 15 files to understand a feature | get_task_context → optimal code subgraph in one shot |
| Agent doesn't know which Vue page a controller renders | routes_to → renders_component → uses_prop edges |
| "What breaks if I change this model?" – agent guesses | get_change_impact traverses reverse dependencies across languages |
| Schema? Agent needs a running database | Migrations parsed → schema reconstructed from code |
| Prop mismatch between PHP and Vue? Discovered in production | Detected at index time – PHP data vs. defineProps |
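The prop-mismatch check in the last row can be sketched as a set comparison between the keys a controller passes to `Inertia::render()` and the props the Vue page declares. The two input sets below are hypothetical stand-ins for what an indexer would extract from PHP and Vue sources:

```python
# Sketch of an index-time prop-mismatch check: compare the data keys the PHP
# side sends with the props the Vue side declares. Inputs are illustrative,
# not trace-mcp's actual extraction format.

def check_props(php_render_data: set[str], vue_define_props: set[str]) -> dict:
    """Return props that only one side knows about."""
    return {
        "missing_in_vue": sorted(php_render_data - vue_define_props),
        "unused_from_php": sorted(vue_define_props - php_render_data),
    }

# Controller: Inertia::render('Users/Show', ['user' => ..., 'roles' => ...])
php_side = {"user", "roles"}
# Page: defineProps<{ user: User; permissions: string[] }>()
vue_side = {"user", "permissions"}

print(check_props(php_side, vue_side))
# → {'missing_in_vue': ['roles'], 'unused_from_php': ['permissions']}
```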
trace-mcp ships with an optional Electron desktop app (packages/app) that gives you a visual surface over the same index the MCP server uses. It manages multiple projects, wires up MCP clients, and provides a GPU-accelerated graph explorer – all without opening a terminal.
Projects & clients. The menu window lists indexed projects with live status (Ready / indexing / error) and re-index / remove controls. The MCP Clients tab detects installed clients (Claude Code, Claw Code, Claude Desktop, Cursor, Windsurf, Continue, Junie, JetBrains AI, Codex) and wires trace-mcp into them with one click, including enforcement level (Base / Standard / Max – CLAUDE.md only, + hooks, + tweakcc).
Per-project overview. Each project opens in its own tabbed window: Overview (files, symbols, edges, coverage, linked services, re-index), Ask (natural-language query over the index), and Graph. Overview also surfaces Most Symbols files, last-indexed timestamp, and the dependency coverage meter.
GPU graph explorer. The Graph tab renders the full dependency graph on the GPU via cosmos.gl – tens of thousands of nodes/edges at interactive frame rates. Filter by Files / Symbols, overlay detected communities, highlight groups, toggle labels/FPS, and step through graph depth. Good for getting a feel for coupling, hotspots, and how a codebase is actually shaped before you dive into tools.
Install: grab the latest build from Releases:

- macOS – `trace-mcp-<version>-arm64-mac.zip` (Apple Silicon) or `trace-mcp-<version>-mac.zip` (Intel). Unzip and drag `trace-mcp.app` into `/Applications`.
- Windows – run `trace-mcp.Setup.<version>.exe`.
The app talks to the same trace-mcp daemon (http://127.0.0.1:3741) that MCP clients use, so anything you index from the app is immediately available to Claude Code / Cursor / etc.
trace-mcp combines code graph navigation, cross-session memory, and real-time code understanding in a single tool. Most adjacent projects solve one of these β trace-mcp unifies all three and is the only one with framework-aware cross-language edges (58 integrations) and code-linked decision memory.
- vs. token-efficient exploration (Repomix, jCodeMunch, cymbal) – trace-mcp adds framework edges, refactoring, security, and subprojects on top of symbol lookup.
- vs. session-memory tools (MemPalace, claude-mem, ConPort) – trace-mcp links decisions to specific symbols/files, so they surface automatically in impact analysis.
- vs. RAG / doc-gen (DeepContext, smart-coding-mcp) – trace-mcp answers "show me the execution path, deps, and tests," not "find code similar to this query."
- vs. code-graph MCP servers (Serena, Roam-Code) – trace-mcp has the broadest language coverage (81) and is the only one with cross-language framework edges.
Full side-by-side tables with GitHub stars, languages, and per-capability coverage: docs/comparisons.md.
AI agents burn tokens reading files they don't need. trace-mcp returns precision context: only the symbols, edges, and signatures relevant to the query.
Benchmark: trace-mcp's own codebase (694 files, 3,831 symbols):
| Task | Without trace-mcp | With trace-mcp | Reduction |
|---|---|---|---|
| Symbol lookup | 42,518 tokens | 7,353 tokens | 82.7% |
| File exploration | 27,486 tokens | 548 tokens | 98.0% |
| Search | 22,860 tokens | 8,000 tokens | 65.0% |
| Find usages | 11,430 tokens | 1,720 tokens | 85.0% |
| Context bundle | 12,847 tokens | 4,164 tokens | 67.6% |
| Batch overhead | 16,831 tokens | 9,031 tokens | 46.3% |
| Impact analysis | 49,141 tokens | 2,461 tokens | 95.0% |
| Call graph | 178,345 tokens | 10,704 tokens | 94.0% |
| Type hierarchy | 94,762 tokens | 1,030 tokens | 98.9% |
| Tests for | 22,590 tokens | 1,150 tokens | 94.9% |
| Composite task | 93,634 tokens | 3,836 tokens | 95.9% |
| **Total** | **572,444 tokens** | **49,997 tokens** | **91.3%** |
91% fewer tokens to accomplish the same code understanding tasks. That's ~522K tokens saved per exploration session – more headroom for actual coding, fewer context window evictions, lower API costs.
Savings scale with project size. On the ~700-file benchmark project above, trace-mcp saves ~522K tokens. On a 5,000-file enterprise codebase, savings grow non-linearly – without trace-mcp, the agent reads more wrong files before finding the right one. With trace-mcp, graph traversal stays O(relevant edges), not O(total files).
Composite tasks deliver the biggest wins. A single get_task_context call replaces a chain of ~10 sequential operations (search → get_symbol × 5 → Read × 3 → Grep × 2). That's one round-trip instead of ten, with 90%+ token reduction.
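The "O(relevant edges)" point can be made concrete with a toy reverse-dependency walk: blast radius is a breadth-first traversal over reverse edges, so cost tracks reachable edges, not project size. The graph and file names below are made up for illustration:

```python
from collections import deque

# Toy reverse-dependency graph: file → files that depend on it.
# Hypothetical example, not trace-mcp's real index format.
reverse_deps = {
    "User.php":           ["UserController.php", "Post.php"],
    "UserController.php": ["routes/web.php"],
    "Post.php":           ["PostController.php"],
    "routes/web.php":     [],
    "PostController.php": [],
    "Billing.php":        [],  # unrelated file: never visited
}

def blast_radius(changed: str, max_depth: int = 2) -> set[str]:
    """BFS over reverse edges up to max_depth; visits only reachable nodes."""
    seen, queue = {changed}, deque([(changed, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for dependent in reverse_deps.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append((dependent, depth + 1))
    return seen - {changed}

print(sorted(blast_radius("User.php")))
# → ['Post.php', 'PostController.php', 'UserController.php', 'routes/web.php']
```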
Methodology
Measured using benchmark_project – runs eleven real task categories (symbol lookup, file exploration, text search, find usages, context bundle, batch overhead, impact analysis, call graph traversal, type hierarchy, tests-for, composite task context) against the indexed project. "Without trace-mcp" = estimated tokens from equivalent Read/Grep/Glob operations (full file reads, grep output). "With trace-mcp" = actual tokens returned by trace-mcp tools (targeted symbols, outlines, graph results). Token counts estimated using trace-mcp's built-in savings tracker.
Reproduce it yourself:

```bash
# Via MCP tool
benchmark_project                      # runs against the current project

# Or via CLI
trace-mcp benchmark /path/to/project
```
- Request flow tracing – URL → Route → Middleware → Controller → Service, across backend frameworks
- Component trees – render hierarchy with props / emits / slots (Vue, React, Blade)
- Schema from migrations – no DB connection needed
- Event chains – Event → Listener → Job fan-out (Laravel, Django, NestJS, Celery, Socket.io)
- Change impact analysis – reverse dependency traversal across languages, enriched with linked architectural decisions
- Graph-aware task context – describe a dev task → get the optimal code subgraph (execution paths, tests, types) + relevant past decisions, adapted to bugfix/feature/refactor intent
- Call graph & DI tree – bidirectional call graphs with 4-tier resolution confidence, optional LSP enrichment for compiler-grade accuracy, NestJS dependency injection
- ORM model context – relationships, schema, metadata for 7 ORMs
- Dead code & test gap detection – find untested exports/symbols (with "unreached" vs "imported_not_called" classification), dead code, per-symbol test reach in impact analysis
- Security scanning – OWASP Top-10 pattern scanning and taint analysis (source → sink data flow). Exportable MCP-server security context for skill-scan
- Semantic search, offline by default – bundled ONNX embeddings work out of the box, no API keys; switch to Ollama/OpenAI for LLM-powered summarisation
- Decision memory – mine sessions for decisions, link them to symbols/files, auto-surface in impact analysis
- Multi-service subprojects – link graphs across services via API contracts; cross-service impact + service-scoped decisions
- CI/PR change impact reports – automated blast radius, risk scoring, test-gap detection, architecture violations on every PR
Languages (81): PHP, TypeScript, JavaScript, Python, Go, Java, Kotlin, Ruby, Rust, C, C++, C#, Swift, Objective-C, Objective-C++, Dart, Scala, Groovy, Elixir, Erlang, Haskell, Gleam, Bash, Lua, Perl, GDScript, R, Julia, Nix, SQL, PL/SQL, HCL/Terraform, Protocol Buffers, GraphQL, Prisma, Vue SFC, HTML, CSS/SCSS/SASS/LESS, XML/XUL/XSD, YAML, JSON, TOML, Assembly, Fortran, AutoHotkey, Verse, AL, Blade, EJS, Zig, OCaml, Clojure, F#, Elm, CUDA, COBOL, Verilog/SystemVerilog, GLSL, Meson, Vim Script, Common Lisp, Emacs Lisp, Dockerfile, Makefile, CMake, INI, Svelte, Markdown, MATLAB, Lean 4, FORM, Magma, Wolfram/Mathematica, Ada, Apex, D, Nim, Pascal, PowerShell, Solidity, Tcl
Frameworks: Laravel (+ Livewire, Nova, Filament, Pennant), Django (+ DRF), FastAPI, Flask, Express, NestJS, Fastify, Hono, Next.js, Nuxt, Rails, Spring, tRPC
ORMs: Eloquent, Prisma, TypeORM, Drizzle, Sequelize, Mongoose, SQLAlchemy
Frontend: Vue, React, React Native, Blade, Inertia, shadcn/ui, Nuxt UI, MUI, Ant Design, Headless UI
Other: GraphQL, Socket.io, Celery, Zustand, Pydantic, Zod, n8n, React Query/SWR, Playwright/Cypress/Jest/Vitest/Mocha
Full details: Supported frameworks · All tools
```bash
npm install -g trace-mcp
trace-mcp init   # one-time global setup (MCP clients, hooks, CLAUDE.md)
trace-mcp add    # register current project for indexing
```

`init` – configures your MCP client (Claude Code, Cursor, Windsurf, Claude Desktop, …), installs the guard hook, adds routing rules to `~/.claude/CLAUDE.md`.
`add` – detects frameworks, creates the per-project index, registers the project. Re-run in every project you want trace-mcp to understand.
All state lives in `~/.trace-mcp/` – your project directory stays clean unless you opt into .traceignore or .trace-mcp/.config.json.
Then in your MCP client:
> get_project_map to see what frameworks are detected
> get_task_context("fix the login bug") to get full execution context for a task
> get_change_impact on app/Models/User.php to see what depends on it
Prefer a GUI? The desktop app handles install, indexing, MCP-client wiring, and re-indexing without touching a terminal.
Going further: adding more projects / upgrading / manual setup · semantic search (local ONNX) · indexing & file watcher · .traceignore.
trace-mcp works on three levels to make AI agents use its tools instead of raw file reading:
The MCP server provides instructions and tool descriptions with routing hints that tell AI agents when to prefer trace-mcp over native Read/Grep/Glob. This works with any MCP-compatible client β no configuration needed.
trace-mcp init adds a Code Navigation Policy block to ~/.claude/CLAUDE.md (or your project's CLAUDE.md) that tells the agent which trace-mcp tool to prefer over Read/Grep/Glob for each kind of task. If you skipped init, see System prompt routing for the full block and how to tune enforcement.
For hard enforcement, trace-mcp init installs a PreToolUse guard hook that blocks Read/Grep/Glob on source files and redirects the agent to trace-mcp tools (non-code files, Read-before-Edit, and safe Bash commands pass through). Manage manually with trace-mcp setup-hooks --global / --uninstall. Details: System prompt routing.
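For reference, a PreToolUse guard in Claude Code's settings generally looks like the fragment below; the `trace-mcp guard` command name is an assumption for illustration (the actual hook is installed by `trace-mcp init` / `trace-mcp setup-hooks`):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read|Grep|Glob",
        "hooks": [
          { "type": "command", "command": "trace-mcp guard" }
        ]
      }
    ]
  }
}
```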
Decisions, tradeoffs, and discoveries from AI-agent conversations usually vanish when the session ends. trace-mcp captures them and links each decision to the code it's about β so when someone later runs get_change_impact on src/db/connection.ts::Pool#class, the "we chose PostgreSQL for JSONB" decision surfaces automatically.
- **Mine** – `mine_sessions` scans Claude Code / Claw Code JSONL logs and extracts decisions via pattern matching (0 LLM calls). Types: architecture, tech choice, bug root cause, tradeoff, convention.
- **Link** – each decision attaches to a symbol or file; supports service-scoped decisions for subprojects.
- **Surface** – decisions auto-enrich `get_change_impact`, `plan_turn`, and `get_session_resume`. Temporal validity (valid_from/valid_until) makes "what was true on 2025-01-15?" queries possible.
- **Search** – `query_decisions` (FTS5 + filters) for decisions; `search_sessions` for raw conversation content across all past sessions.
```bash
trace-mcp memory mine                          # extract decisions from sessions
trace-mcp memory search "GraphQL migration"    # search past conversations
trace-mcp memory timeline --file src/auth.ts   # decision history for a file
```

Full tool list, CLI, temporal validity, service scoping: Decision memory.
A subproject is any repo in your project's ecosystem β microservice, frontend, shared lib, CLI tool. trace-mcp links dependency graphs across subprojects: if service A calls an endpoint in service B, changing the endpoint in B shows up as a breaking change for A.
Discovery is automatic. On each index, trace-mcp detects subprojects (Docker Compose, flat/grouped workspaces, monolith fallback), parses API contracts (OpenAPI, GraphQL SDL, Protobuf/gRPC), scans code for HTTP client calls (fetch, axios, Http::, requests, http.Get, gRPC stubs, GraphQL ops), and links the calls to known endpoints.
```bash
cd ~/projects/my-app && trace-mcp add
# → auto-detects user-service (openapi.yaml) and order-service
# → links order-service → user-service via /api/users/{id}

trace-mcp subproject impact --endpoint=/api/users
# → [order-service] src/services/user-client.ts:42 (axios, confidence: 85%)
```

External subprojects can be added manually with `trace-mcp subproject add --repo=... --project=...`. MCP tools: `get_subproject_graph`, `get_subproject_impact`, `get_subproject_clients`, `subproject_add_repo`, `subproject_sync`.
Full CLI, detection modes, MCP-tool reference, topology config: Configuration → topology & subprojects.
trace-mcp ci-report --base main --head HEAD produces a markdown or JSON report per pull request: summary, blast radius (depth-2 reverse dep traversal), test coverage gaps (per-symbol hasTestReach), risk analysis (30% complexity + 25% churn + 25% coupling + 20% blast radius), architecture violations (auto-detects clean / hexagonal presets), and new dead exports.
Use --fail-on high to block merges on high-risk changes. See .github/workflows/ci.yml for a ready-to-use GitHub Action that runs build → test → impact-report and posts a sticky PR comment on every push.
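The weighted risk score above (30% complexity + 25% churn + 25% coupling + 20% blast radius) reduces to a small weighted sum. The sketch below assumes inputs normalized to 0–1; the low/medium/high cutoffs are illustrative, not trace-mcp's actual thresholds:

```python
# Weights taken from the report description above; thresholds are assumptions.
WEIGHTS = {"complexity": 0.30, "churn": 0.25, "coupling": 0.25, "blast_radius": 0.20}

def risk_score(metrics: dict[str, float]) -> tuple[float, str]:
    """Weighted sum of normalized metrics, bucketed into a risk level."""
    score = sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    level = "high" if score >= 0.7 else "medium" if score >= 0.4 else "low"
    return round(score, 3), level

print(risk_score({"complexity": 0.9, "churn": 0.8, "coupling": 0.6, "blast_radius": 0.5}))
# → (0.72, 'high')
```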
```
Source files (PHP, TS, Vue, Python, Go, Java, Kotlin, Ruby, HTML, CSS, Blade)
        │
        ▼
┌───────────────────────────────────────────┐
│  Pass 1 – Per-file extraction             │
│    tree-sitter → symbols                  │
│    integration plugins → routes,          │
│    components, migrations, events,        │
│    models, schemas, variants, tests       │
└────────────────────┬──────────────────────┘
                     │
                     ▼
┌───────────────────────────────────────────┐
│  Pass 2 – Cross-file resolution           │
│    PSR-4 · ES modules · Python modules    │
│    Vue components · Inertia bridge        │
│    Blade inheritance · ORM relations      │
│    → unified directed edge graph          │
└────────────────────┬──────────────────────┘
                     │
                     ▼
┌───────────────────────────────────────────┐
│  Pass 3 – LSP enrichment (opt-in)         │
│    tsserver · pyright · gopls ·           │
│    rust-analyzer → compiler-grade         │
│    call resolution, 4-tier confidence     │
└────────────────────┬──────────────────────┘
                     │
                     ▼
┌───────────────────────────────────────────┐
│  SQLite (WAL mode) + FTS5                 │
│    nodes · edges · symbols · routes       │
│    + embeddings (local ONNX by default)   │
│    + optional: LLM summaries              │
└────────────────────┬──────────────────────┘
                     │
                     ▼
┌───────────────────────────────────────────┐
│  Decision Memory (decisions.db)           │
│    decisions · session chunks · FTS5      │
│    temporal validity · code linkage       │
│    auto-mined from session logs           │
└────────────────────┬──────────────────────┘
                     │
                     ▼
       MCP server (stdio or HTTP/SSE)
          138 tools · 2 resources
```
Incremental by default – files are content-hashed; unchanged files are skipped on re-index.
Plugin architecture – language plugins (symbol extraction) and integration plugins (semantic edges) are loaded based on project detection, organized into categories: framework, ORM, view, API, validation, state, realtime, testing, tooling.
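The content-hash skip rule can be sketched in a few lines; the in-memory dict below stands in for the SQLite store:

```python
import hashlib

def needs_reindex(path: str, content: bytes, index: dict[str, str]) -> bool:
    """Re-index only files whose content hash changed since the last run."""
    digest = hashlib.sha256(content).hexdigest()
    if index.get(path) == digest:
        return False          # unchanged → skip
    index[path] = digest      # record new hash; caller re-extracts symbols
    return True

index: dict[str, str] = {}
print(needs_reindex("User.php", b"class User {}", index))    # → True  (first run)
print(needs_reindex("User.php", b"class User {}", index))    # → False (unchanged)
print(needs_reindex("User.php", b"class User { x }", index)) # → True  (edited)
```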
Details: Architecture & plugin system
| Document | Description |
|---|---|
| Supported frameworks | Complete list of languages, frameworks, ORMs, UI libraries, and what each extracts |
| Tools reference | All 138 MCP tools with descriptions and usage examples |
| Configuration | Config options, AI setup, environment variables, security settings |
| Architecture | How indexing works, plugin system, project structure, tech stack |
| Decision memory | Decision knowledge graph, session mining, cross-session search, wake-up context |
| Analytics | Session analytics, token savings tracking, optimization reports, benchmarks |
| System prompt routing | Optional tweakcc integration for maximum tool routing enforcement |
| Comparisons | Full side-by-side tables vs. other code intelligence / memory / RAG tools |
| Development | Building, testing, contributing, adding new plugins |
Built by Nikolai Vysotskyi



