
SocratiCode


Description

Enterprise-grade (40M+ lines) codebase intelligence in a zero-setup, private, local Claude Plugin or MCP server: managed indexing, hybrid semantic search, polyglot code dependency graphs, and DB/API/infra knowledge. Benchmark: 61% fewer tokens, 84% fewer calls, 37x faster than standard AI grep.

README


SocratiCode

"There is only one good, knowledge, and one evil, ignorance." (Socrates)

Give any AI instant, automated knowledge of your entire codebase (and infrastructure): at scale, zero configuration, fully private, completely free.

Kindly sponsored by Altaire Limited

If SocratiCode has been useful to you, please star this repo (it helps others discover it) and share it with your dev team and fellow developers!

One thing, done well: deep codebase intelligence with zero setup, no bloat, and full automation. SocratiCode gives AI assistants deep semantic understanding of your codebase: hybrid search, polyglot code dependency graphs, and searchable context artifacts (database schemas, API specs, infra configs, architecture docs). Zero configuration: add it to any MCP host, or install the Claude Code plugin for built-in workflow skills. It manages everything automatically.

Production-ready and battle-tested on enterprise-scale repositories (up to and over ~40 million lines of code). Batched, resumable indexing checkpoints progress automatically: pauses, crashes, restarts, and interruptions don't lose work. The file watcher keeps the index updated on every file change and across sessions. Multi-agent ready: multiple AI agents can work on the same codebase simultaneously, sharing a single index with automatic coordination and zero configuration.

Private and local by default: Docker handles everything, no API keys are required, and no data leaves your machine. Cloud support for embeddings (OpenAI, Google Gemini) and Qdrant, plus a full suite of configuration options, is available when you need it.

The first Qdrant-based MCP server/Claude Plugin/Skill that pairs auto-managed, zero-config local Docker deployment with AST-aware code chunking, hybrid semantic + BM25 (RRF-fused) code search, polyglot dependency graphs with circular-dependency visualization, and searchable infra/API/database artifacts, all in a single focused, easy-to-use code intelligence engine.

Benchmarked on VS Code (2.45M lines): SocratiCode uses 61% less context, makes 84% fewer tool calls, and is 37x faster than grep-based exploration, tested live with Claude Opus 4.6. See the full benchmark →




Quick Start

Only Docker (running) required.

One-click install buttons are available for Claude Code, VS Code, VS Code Insiders, and Cursor.

All MCP hosts: add the following to your mcpServers (Claude Desktop, Windsurf, Cline, Roo Code) or servers (VS Code project-local .vscode/mcp.json) config:

"socraticode": {
  "command": "npx",
  "args": ["-y", "socraticode"]
}

Claude Code: install the plugin (recommended; it includes workflow skills for best results):

From your shell:

claude plugin marketplace add giancarloerra/socraticode
claude plugin install socraticode@socraticode

Or from within Claude Code:

/plugin marketplace add giancarloerra/socraticode
/plugin install socraticode@socraticode

Auto-updates: After installing, enable automatic updates via /plugin → Marketplaces → select socraticode → Enable auto-update.

Or as MCP only (without skills):

claude mcp add socraticode -- npx -y socraticode

Updating: npx caches the package after the first run. To get the latest version, clear the cache and restart your MCP host: rm -rf ~/.npm/_npx && claude mcp restart socraticode. Alternatively, use npx -y socraticode@latest in your config to always check for updates on startup (slightly slower).

OpenCode: add to your opencode.json (or opencode.jsonc):

{
  "mcp": {
    "socraticode": {
      "type": "local",
      "command": ["npx", "-y", "socraticode"],
      "enabled": true
    }
  }
}

OpenAI Codex CLI: add to ~/.codex/config.toml:

[mcp_servers.socraticode]
command = "npx"
args = ["-y", "socraticode"]

Restart your host. On first use SocratiCode automatically pulls Docker images, starts its own Qdrant and Ollama containers, and downloads the embedding model. This is a one-time setup of roughly 5 minutes, depending on your connection; after that, it starts in seconds.

First time on a project: ask your AI to "Index this codebase". Indexing runs in the background; ask "What is the codebase index status?" to monitor progress. Depending on codebase size and whether you're using GPU-accelerated Ollama or cloud embeddings, first-time indexing can take anywhere from a few seconds to a few minutes (a first index of 3+ million lines of code takes under 10 minutes on a MacBook Pro M4). Once complete it never needs to run again; you can search, explore the dependency graph, and query context artifacts.

Every time after that: just use the tools (search, graph, etc.). On server startup SocratiCode automatically detects previously indexed projects, restarts the file watcher, and runs an incremental update to catch any changes made while the server was down. If indexing was interrupted, it resumes automatically from the last checkpoint. You can also explicitly start or restart the watcher with codebase_watch { action: "start" }.

macOS / Windows on large codebases: Docker containers can't use the GPU. For medium-to-large repos, install native Ollama (auto-detected, no config change needed) for Metal/CUDA acceleration, or use OpenAI embeddings for speed without a local install. Full details.

Recommended: For best results, add the Agent Instructions to your AI assistant's system prompt or project instructions file (CLAUDE.md, AGENTS.md, etc.). The key principle, search before reading, helps your AI use SocratiCode's tools effectively and avoid unnecessary file reads.

Claude Code users: If you installed the SocratiCode plugin, the Agent Instructions are included automatically as skills; there's no need to add them to your CLAUDE.md. The plugin also bundles the MCP server, so you don't need a separate claude mcp add.

Advanced: cloud embeddings (OpenAI / Google), external Qdrant, remote Ollama, native Ollama, and dozens of tuning options are all available. See Configuration below.

Why SocratiCode

I built SocratiCode because I regularly work on large, complex existing codebases across different languages and need to understand them quickly and act. Existing solutions were either too limited, insufficiently tested for production use, or bloated with unnecessary complexity. I wanted a single focused tool that does deep codebase intelligence well (zero setup, no bloat, fully automatic) and gets out of the way.

  • True Zero Configuration: Just install the Claude Plugin/Skill or add the MCP server to your AI host config. The server automatically pulls Docker images, starts Qdrant and Ollama containers, and downloads the embedding model on first use. No config files, no YAML, no environment variables to tune, no native dependencies to compile, no commands to type. Works everywhere Docker runs.
  • Fully Private & Local by Default: Everything runs on your machine. Your code never leaves your network. The default Docker setup includes Ollama and Qdrant with no external API calls. Optional cloud providers (Qdrant, OpenAI, Gemini) are available but never required.
  • Language-Agnostic: Works with every programming language, framework, and file type out of the box. No per-language parsers to install, no grammar files to maintain, no "unsupported language" limitations. If your AI can read it, SocratiCode can index it.
  • Production-Grade Vector Search: Built on Qdrant, a purpose-built vector database with HNSW indexing, concurrent read/write, and payload filtering. Collections store both a dense vector and a BM25 sparse vector per chunk; the Query API runs both sub-queries in a single round-trip and fuses results with RRF. Designed for vector search at scale.
  • Flexible Embedding Providers: Switch between local Ollama (private), Docker Ollama (zero-config), OpenAI (fastest), or Google Gemini (free tier) with a single environment variable. No provider-specific configuration files.
  • Enterprise-Ready Simplicity: No agent coordination tuning, no memory limit environment variables, no coordinator/conductor capacity knobs, no backpressure configuration. SocratiCode scales by relying on production-grade infrastructure (Qdrant, proven embedding APIs) rather than complex in-process orchestration.
  • Multi-Agent Ready: Multiple AI agents share a single index with zero configuration. Cross-process locking coordinates indexing and watching automatically: one agent indexes, all agents search, one watcher keeps everyone current. Crashed agents don't block others; stale locks are reclaimed automatically.
  • Measurably Better than grep: On VS Code's 2.45M-line codebase, SocratiCode answers architectural questions with 61% less data, 84% fewer steps, and 37x faster responses than a grep-based AI agent. Full benchmark →

Features

  • Hybrid code search: Combines dense vector (semantic) search with BM25 lexical search, merged via Reciprocal Rank Fusion (RRF). Semantic search handles conceptual queries like "authentication middleware" even when those exact words don't appear in the code. BM25 handles exact identifier and keyword lookups that dense models struggle to rank precisely. RRF merges both result sets automatically; you get the best of both in every query with no tuning required.
  • Configurable Qdrant: Use the built-in Docker Qdrant (default, zero config) or connect to your own instance (self-hosted, remote server, or Qdrant Cloud). Configure via the QDRANT_MODE, QDRANT_URL, and QDRANT_API_KEY environment variables.
  • Configurable Ollama: Use the built-in Docker Ollama (default, zero config) or point to your own Ollama instance (native install with GPU access, remote server, etc.). Configure via the OLLAMA_MODE, OLLAMA_URL, EMBEDDING_MODEL, and EMBEDDING_DIMENSIONS environment variables.
  • Multi-provider embeddings: Beyond Ollama, use OpenAI (text-embedding-3-small) or Google Generative AI (gemini-embedding-001) for cloud-based embeddings. Just set EMBEDDING_PROVIDER and your API key.
  • Private & secure: Everything runs locally. Embeddings via Ollama, vector storage via Qdrant. No API costs, no token limits. Suitable for air-gapped and on-premises environments.
  • AST-aware chunking: Files are split at function/class boundaries using AST parsing (ast-grep), not arbitrary line counts. This produces higher-quality search results. Falls back to line-based chunking for unsupported languages.
  • Polyglot code dependency graph: Static analysis of import/require/use/include statements using ast-grep for 18+ languages. No external tools like dependency-cruiser required. Detects circular dependencies and generates visual Mermaid diagrams.
  • Incremental indexing: After the first full index, only changed files are re-processed. Content hashes are persisted in Qdrant so state survives server restarts.
  • Batched & resumable indexing: Files are processed in batches of 50, with progress checkpointed to Qdrant after each batch. If the process crashes or is interrupted, the next run automatically resumes from where it left off; already-indexed files are skipped via hash comparison. This keeps peak memory low and makes indexing reliable even for very large codebases.
  • Live file watching: Optionally watch for file changes and keep the index updated in real time (debounced 2s). The watcher also invalidates the code graph cache.
  • Parallel processing: Files are scanned and chunked in parallel batches (50 at a time) for fast I/O, while embedding generation and upserts are batched separately for optimal throughput.
  • Multi-project: Index multiple projects simultaneously. Each gets its own isolated collection with full project path tracking.
  • Respects ignore rules: Honors all .gitignore files (root + nested), plus an optional .socraticodeignore for additional exclusions. Includes sensible built-in defaults. .gitignore processing can be disabled via RESPECT_GITIGNORE=false. Dot-directories (e.g. .agent) can be included via INCLUDE_DOT_FILES=true.
  • Custom file extensions: Projects with non-standard extensions (e.g. .tpl, .blade) can be included via the EXTRA_EXTENSIONS env var or the extraExtensions tool parameter. Works for both indexing and the code graph.
  • Configurable infrastructure: All ports, hosts, and API keys are configurable via environment variables. Qdrant API key support for enterprise deployments.
  • Auto-setup: On first use, automatically checks Docker, pulls images, starts containers, and pulls the embedding model. The only prerequisite is Docker.
  • Session resume: When reopening a previously indexed project, the file watcher starts automatically on first tool use (search, status, update, or graph query). It catches any changes made since the last session and keeps the index live; no manual action needed.
  • Auto-start watcher: The file watcher is automatically activated when you use any SocratiCode tool on an indexed project. It starts after codebase_index completes, after codebase_update, and on the first codebase_search, codebase_status, or graph query. You can also start it manually with codebase_watch { action: "start" } if needed.
  • Auto-build code graph: The code dependency graph is automatically built after indexing and rebuilt when watched files change. No need to call codebase_graph_build manually unless you want to force a rebuild.
  • Multi-agent collaboration: Multiple AI agents (each running their own MCP instance) can work on the same codebase simultaneously and share a single index. One agent triggers indexing; all agents search against the same data. Only one watcher runs per project, and every agent benefits from real-time updates. Cross-process file locking coordinates indexing and watching automatically. Ideal for workflows like one agent writing tests while another fixes code, or a planning agent and an implementation agent working in parallel.
  • Cross-process safety: File-based locking (proper-lockfile) prevents multiple MCP instances from simultaneously indexing or watching the same project. Stale locks from crashed processes are automatically reclaimed. When another MCP process is already watching a project, codebase_status reports "active (watched by another process)" instead of incorrectly showing "inactive."
  • Concurrency guards: Duplicate indexing and graph-build operations are prevented. If you call codebase_index while indexing is already running, it returns the current progress instead of starting a second operation.
  • Graceful stop: Long-running indexing operations can be stopped safely with codebase_stop. The current batch finishes and checkpoints, preserving all progress. Re-run codebase_index to resume from where it left off.
  • Graceful shutdown: On server shutdown, active indexing operations are given up to 60 seconds to complete, all file watchers are stopped cleanly, and everything closes gracefully.
  • Structured logging: All operations are logged with structured context for observability. Log level is configurable via SOCRATICODE_LOG_LEVEL.
  • Graceful degradation: If infrastructure goes down during watch, the watcher backs off and retries instead of crashing.
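To make the hybrid-search fusion concrete, here is a minimal sketch of Reciprocal Rank Fusion merging a dense (semantic) ranking with a BM25 (lexical) ranking. This is an illustration of the general RRF technique, not SocratiCode's actual implementation; the constant k = 60 is the conventional RRF default.

```typescript
// Merge two ranked result lists into one fused ranking.
// Each list contributes 1 / (k + rank) per result it contains,
// so a chunk ranked well by BOTH searches rises to the top.
function rrfFuse(dense: string[], bm25: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [dense, bm25]) {
    list.forEach((id, index) => {
      // rank is 1-based, hence index + 1
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  // Sort result ids by fused score, descending.
  return [...scores.keys()].sort((a, b) => scores.get(b)! - scores.get(a)!);
}

// "jwt.ts" appears near the top of BOTH rankings, so it wins the fusion.
const fused = rrfFuse(
  ["auth.ts", "jwt.ts", "db.ts"],  // hypothetical dense (semantic) ranking
  ["jwt.ts", "log.ts", "auth.ts"], // hypothetical BM25 (lexical) ranking
);
// fused → ["jwt.ts", "auth.ts", "log.ts", "db.ts"]
```

Because the contribution is rank-based rather than score-based, RRF needs no normalization between the two sub-queries, which is why no tuning is required.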

Prerequisites

| Dependency | Purpose | Install |
|------------|---------|---------|
| Docker | Runs Qdrant (vector DB) and, by default, Ollama (embeddings) | docker.com |
| Node.js 18+ | Runs the MCP server | nodejs.org |

Docker must be running when you use the server in the default managed mode.

The Qdrant container is managed automatically. If you set QDRANT_MODE=external and point QDRANT_URL at a remote or cloud Qdrant instance, Docker is only needed for Ollama (embeddings) in that case.
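As a sketch of the external option (the URL and key below are placeholders for your own instance), the external-Qdrant setup uses the same config shape as Quick Start with an "env" block added:

```json
"socraticode": {
  "command": "npx",
  "args": ["-y", "socraticode"],
  "env": {
    "QDRANT_MODE": "external",
    "QDRANT_URL": "https://your-cluster.cloud.qdrant.io:6333",
    "QDRANT_API_KEY": "your-qdrant-api-key"
  }
}
```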

The Ollama container (embeddings) is also managed automatically in the default auto mode. SocratiCode first checks whether Ollama is already running natively and uses it if so; otherwise it manages a Docker container for you. First-time download of the Docker images or embedding models may take a few minutes, depending on your internet speed, and is required only at first launch.

Embedding performance on macOS / Windows

Docker containers on macOS and Windows cannot access the GPU (no Metal or CUDA passthrough). For small projects this is fine, but for medium-to-large codebases the CPU-only container is noticeably slower.

For best performance, install native Ollama: download and run the installer from ollama.com/download. Once Ollama is running, SocratiCode will automatically detect and use it with no extra configuration (first-time download of the embedding model, if not present, might take a few minutes). This gives you Metal GPU acceleration on macOS and CUDA on Windows/Linux.

If you prefer speed without a local install, see OpenAI Embeddings and Google Generative AI Embeddings below for cloud-based options. OpenAI is very fast with no local setup required. Google's free tier is functional but rate-limited. See Environment Variables for configuration details.

Example Workflow

All tools default projectPath to the current working directory, so you never need to specify a path for the active project.

User: "Index this project"
→ codebase_index {}
  ⚡ Indexing started in the background; call codebase_status to check progress
→ codebase_status {}
  ⚠ Full index in progress. Phase: generating embeddings (batch 1/1)
  Progress: 247/1847 chunks embedded (13%). Elapsed: 12s
→ codebase_status {}
  ✓ Indexing complete: 342 files, 1,847 chunks (took 115.2s)
  File watcher: active (auto-updating on changes)

User: "Search for how authentication is handled"
→ codebase_search { query: "authentication handling" }
  Runs dense semantic search + BM25 keyword search in parallel, fuses results with RRF
  Returns top 10 results ranked by combined relevance

User: "What files depend on the auth middleware?"
→ codebase_graph_query { filePath: "src/middleware/auth.ts" }
  Returns imports and dependents
  (graph was auto-built after indexing; no manual build needed)

User: "Show me the dependency graph"
→ codebase_graph_visualize {}
  Returns a Mermaid diagram color-coded by language

User: "Are there any circular dependencies?"
→ codebase_graph_circular {}
  Found 2 cycles: src/a.ts → src/b.ts → src/a.ts

Agent Instructions

Claude Code plugin users: These instructions are included automatically as skills in the SocratiCode plugin. You don't need to copy them into CLAUDE.md. The section below is for non-Claude Code hosts (VS Code, Cursor, Claude Desktop, etc.).

For best results, add instructions like the following to your AI assistant's system prompt, CLAUDE.md, AGENTS.md, or equivalent instructions file. The core principle: search before reading. The index gives you a map of the codebase in milliseconds; raw file reading is expensive and context-consuming.

## Codebase Search (SocratiCode)

This project is indexed with SocratiCode. Always use its MCP tools to explore the codebase
before reading any files directly.

### Workflow

1. **Start most explorations with `codebase_search`.**
   Hybrid semantic + keyword search (vector + BM25, RRF-fused) runs in a single call.
   - Use broad, conceptual queries for orientation: "how is authentication handled",
     "database connection setup", "error handling patterns".
   - Use precise queries for symbol lookups: exact function names, constants, type names.
   - Prefer search results to infer which files to read; do not speculatively open files.
   - **When to use grep instead**: If you already know the exact identifier, error string,
     or regex pattern, grep/ripgrep is faster and more precise (no semantic gap to bridge).
     Use `codebase_search` when you're exploring, asking conceptual questions, or don't
     know which files to look in.

2. **Follow the graph before following imports.**
   Use `codebase_graph_query` to see what a file imports and what depends on it before
   diving into its contents. This prevents unnecessary reading of transitive dependencies.

3. **Read files only after narrowing down via search.**
   Once search results clearly point to 1-3 files, read only the relevant sections.
   Never read a file just to find out if it's relevant; search first.

4. **Use `codebase_graph_circular` when debugging unexpected behavior.**
   Circular dependencies cause subtle runtime issues; check for them proactively.

5. **Check `codebase_status` if search returns no results.**
   The project may not be indexed yet. Run `codebase_index` if needed, then wait for
   `codebase_status` to confirm completion before searching.

6. **Leverage context artifacts for non-code knowledge.**
   Projects can define a `.socraticodecontextartifacts.json` config to expose database
   schemas, API specs, infrastructure configs, architecture docs, and other project
   knowledge that lives outside source code. These artifacts are auto-indexed alongside
   code during `codebase_index` and `codebase_update`.
   - Run `codebase_context` early to see what artifacts are available.
   - Use `codebase_context_search` to find specific schemas, endpoints, or configs
     before asking about database structure or API contracts.
   - If `codebase_status` shows artifacts are stale, run `codebase_context_index` to
     refresh them.

### When to use each tool

| Goal | Tool |
|------|------|
| Understand what a codebase does / where a feature lives | `codebase_search` (broad query) |
| Find a specific function, constant, or type | `codebase_search` (exact name), or grep if you already know the exact string |
| Find exact error messages, log strings, or regex patterns | grep / ripgrep |
| See what a file imports or what depends on it | `codebase_graph_query` |
| Spot architectural problems | `codebase_graph_circular`, `codebase_graph_stats` |
| Visualise module structure | `codebase_graph_visualize` |
| Verify index is up to date | `codebase_status` |
| Discover what project knowledge (schemas, specs, configs) is available | `codebase_context` |
| Find database tables, API endpoints, infra configs | `codebase_context_search` |

Why semantic search first? A single codebase_search call returns ranked, deduplicated snippets from across the entire codebase in milliseconds. This gives you a broad map at negligible token cost, far cheaper than opening files speculatively. Once you know which files matter, targeted reading is both faster and more accurate. That said, grep remains the right tool when you have an exact string or pattern; use whichever fits the query.

Keep the connection alive during indexing. Indexing runs in the background; the MCP server continues working even when not actively responding to tool calls. However, some MCP hosts may disconnect an idle MCP connection after a period of inactivity, which can cut off the background process. Instruct your AI to call codebase_status roughly every 60 seconds after starting codebase_index until it completes. This keeps the host connection active and provides real-time progress.

Configuration

Install

Claude Code plugin (recommended for Claude Code users)

The SocratiCode plugin bundles both the MCP server and workflow skills that teach Claude how to use the tools effectively. One install gives you everything:

From your shell:

claude plugin marketplace add giancarloerra/socraticode
claude plugin install socraticode@socraticode

Or from within Claude Code:

/plugin marketplace add giancarloerra/socraticode
/plugin install socraticode@socraticode

The plugin includes:

  • MCP server: all 21 SocratiCode tools (search, graph, context artifacts, etc.)
  • Exploration skill: teaches Claude the search-before-reading workflow
  • Management skill: guides setup, indexing, watching, and troubleshooting
  • Explorer agent: a delegatable subagent for deep codebase analysis

If you previously installed SocratiCode as a standalone MCP (claude mcp add socraticode), remove it after installing the plugin to avoid duplicates: claude mcp remove socraticode

Auto-updates: Third-party plugins don't auto-update by default. To enable automatic updates, open /plugin → Marketplaces → select socraticode → Enable auto-update. To update manually:

From your shell:

claude plugin marketplace update socraticode
claude plugin update socraticode@socraticode

Or from within Claude Code:

/plugin marketplace update socraticode
/plugin update socraticode@socraticode

Configuring environment variables: SocratiCode works with zero config for most users (local Ollama + managed Qdrant). If you need cloud embeddings, a remote Qdrant, or other customization:

  1. Claude Code settings (recommended): add to ~/.claude/settings.json:

    {
      "env": {
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-..."
      }
    }

    This works in all environments: CLI, VS Code, and JetBrains.

  2. Shell profile: set vars in ~/.zshrc or ~/.bashrc:

    export EMBEDDING_PROVIDER=openai
    export OPENAI_API_KEY=sk-...

    Works when Claude Code is launched from a terminal. Note: IDE-launched sessions (e.g. VS Code opened from Finder/Dock) may not inherit shell profile variables; use option 1 instead.

Restart Claude Code after changing variables. See Environment Variables for all options.

npx (recommended for all other MCP hosts; no installation)

Requires Node.js 18+ and Docker (running). As covered in Quick Start above, add the following to your mcpServers (Claude Desktop, Windsurf, Cline, Roo Code) or servers (VS Code project-local .vscode/mcp.json) config:

    "socraticode": {
      "command": "npx",
      "args": ["-y", "socraticode"]
    }

From source (for contributors)

git clone https://github.com/giancarloerra/socraticode.git
cd socraticode
npm install
npm run build

Then use node /absolute/path/to/socraticode/dist/index.js in place of npx -y socraticode in the config examples below.

MCP host config variants

All env options below apply equally to the npx install. Just add the "env" block to the npx config shown above.

Add to your MCP settings - mcpServers (Claude Desktop, Windsurf, Cline, Roo Code) or servers (VS Code project-local .vscode/mcp.json):

Default (zero config, from source)

Using npx? Your config is already in Quick Start. Add any "env" block from the examples below as needed.

{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"]
    }
  }
}

Tip: The default OLLAMA_MODE=auto detects native Ollama (port 11434) on startup and uses it if available, otherwise falls back to a managed Docker container. To make your config self-documenting, add an "env" block with explicit values. See Environment Variables for all options.
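For example, a self-documenting variant of the default config could pin the documented defaults explicitly (values shown are the defaults mentioned elsewhere in this README: auto mode, port 11434, and the nomic-embed-text model; adjust as needed):

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "auto",
        "OLLAMA_URL": "http://localhost:11434",
        "EMBEDDING_MODEL": "nomic-embed-text"
      }
    }
  }
}
```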

External Ollama (native install)

If you have Ollama installed natively, set OLLAMA_MODE=external and point to your instance:

{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "external",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}

The embedding model is pulled automatically on first use. To pre-download: ollama pull nomic-embed-text

Remote Ollama server

{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "external",
        "OLLAMA_URL": "http://gpu-server.local:11434"
      }
    }
  }
}

OpenAI Embeddings

Use OpenAI's cloud embedding API instead of local Ollama. Requires an API key.

{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Defaults: EMBEDDING_MODEL=text-embedding-3-small, EMBEDDING_DIMENSIONS=1536. For higher quality, use text-embedding-3-large with EMBEDDING_DIMENSIONS=3072.
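For example, the higher-quality variant would add the model and dimensions overrides to the same config (the API key is a placeholder):

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-...",
        "EMBEDDING_MODEL": "text-embedding-3-large",
        "EMBEDDING_DIMENSIONS": "3072"
      }
    }
  }
}
```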

Google Generative AI Embeddings

Use Google's Gemini embedding API. Requires an API key.

{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "google",
        "GOOGLE_API_KEY": "AIza..."
      }
    }
  }
}

Defaults: EMBEDDING_MODEL=gemini-embedding-001, EMBEDDING_DIMENSIONS=3072.

Git Worktrees (shared index across directories)

If you use git worktrees, or any workflow where the same repository lives in multiple directories, each path would normally get its own Qdrant index. This means redundant embedding and storage for what is essentially the same codebase.

Set SOCRATICODE_PROJECT_ID to share a single index across all directories of the same project.

MCP hosts with git worktree detection (e.g. Claude Code)

Some MCP hosts (like Claude Code) resolve the project root by following git worktree links. Since worktrees point back to the main repository's .git directory, the host automatically maps all worktrees to the same project config. This means you only need to configure the MCP server once for the main checkout β€” all worktrees inherit it automatically.

For Claude Code, add the server with local scope from your main checkout:

cd /path/to/main-checkout
claude mcp add -e SOCRATICODE_PROJECT_ID=my-project --scope local socraticode -- npx -y socraticode

All worktrees created from this repo will automatically connect to socraticode with the shared project ID. No per-worktree setup needed.

Note: This only works for git worktrees. Separate git clones of the same repo have independent .git directories and won't share the config.

Other MCP hosts (per-project .mcp.json)

For MCP hosts that don't resolve git worktree paths, add a .mcp.json at the root of each worktree (and your main checkout):

{
  "mcpServers": {
    "socraticode": {
      "command": "npx",
      "args": ["-y", "socraticode"],
      "env": {
        "SOCRATICODE_PROJECT_ID": "my-project"
      }
    }
  }
}

Add .mcp.json to your .gitignore if you don't want it tracked.

How it works

With this config, agents running in /repo/main, /repo/worktree-feat-a, and /repo/worktree-fix-b all share the same codebase_my-project, codegraph_my-project, and context_my-project Qdrant collections.

How it works in practice:

  • The semantic index reflects whichever worktree last triggered a file change, but since branches typically differ by only a handful of files, the index is 99%+ accurate for all worktrees
  • Your AI agent reads actual file contents from its own worktree; the shared index is only used for discovery and navigation
  • When changes merge back to main, the file watcher re-indexes the changed files and the index converges

Available tools

Once connected, 21 tools are available to your AI assistant:

Indexing

| Tool | Description |
|------|-------------|
| codebase_index | Start indexing a codebase in the background (poll codebase_status for progress) |
| codebase_stop | Gracefully stop an in-progress indexing operation (current batch finishes and checkpoints; resume with codebase_index) |
| codebase_update | Incremental update; only re-indexes changed files |
| codebase_remove | Remove a project's index (safely stops the watcher, cancels in-flight indexing/update, waits for graph build) |
| codebase_watch | Start/stop file watching; on start, catches up missed changes then watches for future ones |

Search

| Tool | Description |
|------|-------------|
| codebase_search | Hybrid semantic + keyword search (dense + BM25, RRF-fused) with optional file path and language filters |
| codebase_status | Check index status and chunk count |

Code Graph

| Tool | Description |
|------|-------------|
| codebase_graph_build | Build a polyglot dependency graph (runs in the background; poll with codebase_graph_status) |
| codebase_graph_query | Query imports and dependents for a specific file |
| codebase_graph_stats | Get graph statistics (most connected files, orphans, language breakdown) |
| codebase_graph_circular | Detect circular dependencies |
| codebase_graph_visualize | Generate a Mermaid diagram of the dependency graph |
| codebase_graph_status | Check graph build progress or persisted graph metadata |
| codebase_graph_remove | Remove a project's persisted code graph (waits for any in-flight graph build to finish first) |

Management

| Tool | Description |
|------|-------------|
| codebase_health | Check Docker, Qdrant, and embedding provider status |
| codebase_list_projects | List all indexed projects with paths and metadata |
| codebase_about | Display info about SocratiCode |

Context Artifacts

| Tool | Description |
|------|-------------|
| codebase_context | List all context artifacts defined in .socraticodecontextartifacts.json with names, descriptions, and index status |
| codebase_context_search | Semantic search across context artifacts (auto-indexes on first use, auto-detects staleness) |
| codebase_context_index | Index or re-index all artifacts from .socraticodecontextartifacts.json |
| codebase_context_remove | Remove all indexed context artifacts for a project (blocked while indexing is in progress) |

Language Support

SocratiCode supports languages at three levels:
