
ctxray

See how you really use AI.

X-ray your AI coding sessions across Claude Code, Cursor, ChatGPT, and 6 more tools. Discover your patterns, find wasted tokens, catch leaked secrets β€” all locally, nothing leaves your machine.

PyPI version · Python 3.10+ · License: MIT · Tests · Coverage

Quick start

pip install ctxray

ctxray scan                    # discover prompts from your AI tools
ctxray wrapped                 # your AI coding persona + shareable card
ctxray insights                # your patterns vs research-optimal
ctxray privacy                 # what sensitive data you've exposed

ctxray demo

Works in your pipeline

Drop ctxray into your CI as a prompt quality gate. No LLM, no API key, no network β€” <50ms per prompt.

# .github/workflows/prompt-quality.yml
- uses: ctxray/ctxray@main
  with:
    score-threshold: 50
    comment-on-pr: true

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/ctxray/ctxray
    rev: v3.0.0
    hooks:
      - id: ctxray-lint

  • Deterministic β€” same prompt, same score, every run. No flaky LLM-based checks.
  • Air-gapped β€” runs in offline and private networks. All analysis stays on your infrastructure.
  • Configurable β€” .ctxray.toml or [tool.ctxray.lint] in pyproject.toml. Per-project rules.

Full setup: GitHub Action Β· pre-commit Β· .ctxray.toml

What you'll discover

Your AI coding persona

ctxray wrapped generates a Spotify Wrapped-style report of your AI interactions β€” your persona (Debugger? Architect? Explorer?), top patterns, and a shareable card.

Your prompt patterns

ctxray insights compares your actual prompting habits against research-backed benchmarks. Are your prompts specific enough? Do you front-load instructions? How much context do you provide?

Your privacy exposure

ctxray privacy --deep scans every prompt you've sent for API keys, tokens, passwords, and PII. See exactly what you've shared with which AI tool.
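As a rough illustration of the kind of matching a local secrets scan can do, here is a minimal sketch. The patterns, categories, and names (PATTERNS, scan_prompt) are assumptions for illustration, not ctxray's actual detection rules.

```python
import re

# Illustrative patterns only; a real scanner covers many more
# credential formats and PII categories.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list[tuple[str, str]]:
    """Return (category, matched text) pairs found in a prompt."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group()))
    return hits

# Sample key built by concatenation so it is obviously synthetic.
print(scan_prompt("deploy with AKIA" + "ABCDEFGHIJKLMNOP, mail dev@example.com"))
```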

Full prompt diagnostic

ctxray check "your prompt" scores, lints, and rewrites in one command β€” no LLM, <50ms.

ctxray check β€” good prompt

More screenshots

ctxray rewrite β€” rule-based prompt improvement

ctxray rewrite β€” before/after

ctxray build β€” assemble prompts from components

ctxray build β€” structured prompt assembly

What a bad prompt looks like

ctxray check β€” weak prompt

All commands

Discover your patterns

Command            Description
ctxray wrapped     AI coding persona + shareable card
ctxray insights    Personal patterns vs research-optimal benchmarks
ctxray tools       Cross-tool comparison — how your Claude Code / Cursor / ChatGPT habits differ
ctxray sessions    Session quality scores with frustration signal detection
ctxray agent       Agent workflow analysis — error loops, tool patterns, efficiency
ctxray repetition  Cross-session repetition detection — spot recurring prompts
ctxray patterns    Personal prompt weaknesses — recurring gaps by task type
ctxray distill     Extract important turns from conversations with 6-signal scoring
ctxray projects    Per-project quality breakdown
ctxray style       Prompting fingerprint with --trends for evolution tracking
ctxray privacy     See what data you sent where — file paths, errors, PII exposure

Optimize your prompts

Command                   Description
ctxray check "prompt"     Full diagnostic — score + lint + rewrite in one command
ctxray score "prompt"     Research-backed 0-100 scoring with 30+ features
ctxray rewrite "prompt"   Rule-based improvement — filler removal, restructuring, hedging cleanup
ctxray build "task"       Build prompts from components — task, context, files, errors, constraints
ctxray compress "prompt"  4-layer prompt compression (40-60% token savings typical)
ctxray compare "a" "b"    Side-by-side prompt analysis (or --best-worst for auto-selection)
ctxray lint               Configurable linter with CI/GitHub Action support
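To illustrate what rule-based improvement can look like, here is a minimal filler-removal sketch. The FILLERS list and the strip_fillers helper are hypothetical stand-ins, not ctxray's research-calibrated rule set.

```python
import re

# Hypothetical filler phrases; the real rule set is larger and
# tuned against research benchmarks.
FILLERS = [
    r"\bplease\b,?\s*",
    r"\bcould you\s+",
    r"\bI was wondering if\s+",
    r"\bkind of\s+",
    r"\bbasically\s+",
]

def strip_fillers(prompt: str) -> str:
    """Remove hedging/filler phrases, then collapse leftover whitespace."""
    out = prompt
    for pat in FILLERS:
        out = re.sub(pat, "", out, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", out).strip()

before = "Could you please fix the login bug? It basically fails on empty passwords."
print(strip_fillers(before))
```

Dropping filler like this is where the token savings come from; the restructuring and hedging-cleanup passes would be additional rule layers.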

Manage

Command                        Description
ctxray                         Instant dashboard — prompts, sessions, avg score, top categories
ctxray scan                    Auto-discover prompts from 9 AI tools
ctxray report                  Full analytics: hot phrases, clusters, patterns (--html for dashboard)
ctxray digest                  Weekly summary comparing current vs previous period
ctxray template save|list|use  Save and reuse your best prompts
ctxray distill --export        Recover context when a session runs out — paste into new session
ctxray init                    Generate .ctxray.toml config for your project

Supported AI tools

Tool                 Format    Auto-discovered by scan
Claude Code          JSONL     Yes
Codex CLI            JSONL     Yes
Cursor               .vscdb    Yes
Aider                Markdown  Yes
Gemini CLI           JSON      Yes
Cline (VS Code)      JSON      Yes
OpenClaw / OpenCode  JSON      Yes
ChatGPT              JSON      Via ctxray import
Claude.ai            JSON/ZIP  Via ctxray import

Installation

pip install ctxray              # core (all features, zero config)
pip install ctxray[chinese]     # + Chinese prompt analysis (jieba)
pip install ctxray[mcp]         # + MCP server for Claude Code / Continue.dev / Zed

Auto-scan after every session

ctxray install-hook             # adds post-session hook to Claude Code

Browser extension

Capture prompts from ChatGPT, Claude.ai, and Gemini directly in your browser. Live quality badge shows prompt tier as you type β€” click "Rewrite & Apply" to improve and replace the text directly in the input box.

  1. Install the extension from Chrome Web Store or Firefox Add-ons
  2. Connect to the CLI: ctxray install-extension
  3. Verify: ctxray extension-status

Captured prompts sync locally via Native Messaging β€” nothing leaves your machine.
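Native Messaging frames each message as a 4-byte little-endian length followed by UTF-8 JSON over the host's stdin/stdout. A minimal sketch of that framing, assuming the browser/host exchange JSON objects (the function names and message fields are illustrative, not ctxray's API):

```python
import json
import struct

# Chrome/Firefox Native Messaging framing: 32-bit little-endian
# payload length, then UTF-8 JSON.

def encode_message(obj) -> bytes:
    """Serialize an object into a length-prefixed Native Messaging frame."""
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack("<I", len(payload)) + payload

def decode_message(data: bytes):
    """Parse one length-prefixed frame back into an object."""
    (length,) = struct.unpack("<I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

# Illustrative captured-prompt message; field names are assumptions.
msg = {"prompt": "fix the login bug", "tool": "chatgpt"}
round_trip = decode_message(encode_message(msg))
```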

CI integration

GitHub Action

# .github/workflows/prompt-lint.yml
name: Prompt Quality
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: ctxray/ctxray@main
        with:
          score-threshold: 50
          strict: true
          comment-on-pr: true

pre-commit

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/ctxray/ctxray
    rev: v3.0.0
    hooks:
      - id: ctxray-lint

Direct CLI

ctxray lint --score-threshold 50  # exit 1 if avg score < 50
ctxray lint --strict              # exit 1 on warnings
ctxray lint --json                # machine-readable output

Project configuration

ctxray init   # generates .ctxray.toml with all rules documented

# .ctxray.toml (or [tool.ctxray.lint] in pyproject.toml)
[lint]
score-threshold = 50

[lint.rules]
min-length = 20
short-prompt = 40
vague-prompt = true
debug-needs-reference = true

Prompt Science — research foundation

Scoring is calibrated against 10 peer-reviewed papers covering 30+ features across 5 dimensions:

Dimension   What it measures                                   Key papers
Structure   Markdown, code blocks, explicit constraints        Prompt Report (2406.06608)
Context     File paths, error messages, I/O specs, edge cases  Zi+ (2508.03678), Google (2512.14982)
Position    Instruction placement relative to context          Stanford (2307.03172), Veseli+ (2508.07479), Chowdhury (2603.10123)
Repetition  Redundancy that degrades model attention           Google (2512.14982)
Clarity     Readability, sentence length, ambiguity            SPELL (EMNLP 2023), PEEM (2603.10477)

Cross-validated findings that inform our engine:

  • Position bias is architectural β€” present at initialization, not learned. Front-loading instructions is effective for prompts under 50% of context window (3 papers agree)
  • Moderate compression improves output β€” rule-based filler removal doesn't just save tokens, it enhances LLM performance (2505.00019)
  • Prompt quality is independently measurable β€” prompt-only scoring predicts output quality without seeing the response (ACL 2025, 2503.10084)

All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.
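A front-loading check in this spirit can be sketched in a few lines. The verb list and the 20% cutoff below are illustrative assumptions, not ctxray's actual scoring rules.

```python
import re

# Hypothetical instruction verbs; a real feature extractor would use
# a calibrated list.
INSTRUCTION_VERBS = r"\b(fix|write|refactor|add|explain|implement|test)\b"

def front_loaded(prompt: str, cutoff: float = 0.2) -> bool:
    """True if the first instruction verb appears in the opening slice."""
    m = re.search(INSTRUCTION_VERBS, prompt, flags=re.IGNORECASE)
    return bool(m) and m.start() <= len(prompt) * cutoff

print(front_loaded("Fix the flaky test. Context: long trace follows..."))
print(front_loaded("Here is a very long dump of context and only at the end: handle it"))
```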

How it works β€” architecture

How it works

 Data sources:
 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
 β”‚Claude Codeβ”‚ β”‚  Cursor  β”‚ β”‚  Aider   β”‚ β”‚ ChatGPT  β”‚ β”‚ 5 more.. β”‚
 β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                 β”‚
                    scan -> dedup -> store -> analyze
                                 β”‚
              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              v                  v                  v
        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β”‚ insights β”‚     β”‚  patterns    β”‚    β”‚ sessions β”‚
        β”‚ wrapped  β”‚     β”‚  repetition  β”‚    β”‚ projects β”‚
        β”‚ style    β”‚     β”‚  privacy     β”‚    β”‚ agent    β”‚
        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key design decisions:

  • Pure rules, no LLM β€” scoring and rewriting use regex + TF-IDF + research heuristics. Deterministic, private, <1ms per prompt.
  • Adapter pattern β€” each AI tool gets a parser that normalizes to a common Prompt model. Adding a new tool = one file.
  • Two-layer dedup β€” SHA-256 for exact matches, TF-IDF cosine similarity for near-dupes.
  • Research-calibrated β€” 10 peer-reviewed papers inform the scoring weights.
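The two-layer dedup above can be sketched as follows, with a plain term-frequency cosine standing in for the real TF-IDF step; the names and the 0.9 threshold are illustrative assumptions.

```python
import hashlib
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercased word counts; stand-in for real TF-IDF vectors."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(prompt, seen_hashes, seen_vecs, threshold=0.9):
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if digest in seen_hashes:       # layer 1: exact match
        return True
    vec = tokens(prompt)
    if any(cosine(vec, prev) >= threshold for prev in seen_vecs):
        return True                 # layer 2: near-duplicate
    seen_hashes.add(digest)
    seen_vecs.append(vec)
    return False
```

Hashing first keeps the common case cheap; only novel prompts pay for the pairwise similarity pass.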

Conversation Distillation

ctxray distill scores every turn in a conversation using 6 signals:

  • Position β€” first/last turns carry framing and conclusions
  • Length β€” substantial turns contain more information
  • Tool trigger β€” turns that cause tool calls are action-driving
  • Error recovery β€” turns that follow errors show problem-solving
  • Semantic shift β€” topic changes mark conversation boundaries
  • Uniqueness β€” novel phrasing vs repetitive follow-ups

Session type (debugging, feature-dev, exploration, refactoring) is auto-detected and signal weights adapt accordingly.
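A weighted-sum sketch of this adaptive scoring, with made-up weight tables; ctxray's real weights and session-type detection differ.

```python
# Illustrative per-session-type weights over the six signals;
# each row sums to 1.0.
WEIGHTS = {
    "debugging":   {"position": 0.1, "length": 0.1, "tool_trigger": 0.2,
                    "error_recovery": 0.3, "semantic_shift": 0.15, "uniqueness": 0.15},
    "exploration": {"position": 0.2, "length": 0.15, "tool_trigger": 0.1,
                    "error_recovery": 0.05, "semantic_shift": 0.3, "uniqueness": 0.2},
}

def turn_score(signals: dict, session_type: str) -> float:
    """Weighted sum of per-signal scores, each signal in [0, 1]."""
    weights = WEIGHTS[session_type]
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

# An error-recovery turn scores much higher in a debugging session.
signals = {"position": 1.0, "error_recovery": 1.0, "uniqueness": 0.5}
print(turn_score(signals, "debugging"))
print(turn_score(signals, "exploration"))
```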

Why ctxray?

After Promptfoo joined OpenAI and Humanloop joined Anthropic, ctxray is the independent, open-source alternative for understanding your AI interactions.

  • 100% local β€” your prompts never leave your machine
  • No LLM required β€” pure rule-based analysis, <50ms per prompt
  • 9 AI tools β€” the only tool that works across Claude Code, Cursor, ChatGPT, and more
  • Research-backed β€” calibrated against 10 peer-reviewed papers, not vibes

Previously published as reprompt-cli. Same tool, new name, clean namespace.

Privacy

  • All analysis runs locally. No prompts leave your machine.
  • ctxray privacy shows exactly what you've sent to which AI tool.
  • Optional telemetry sends only anonymous feature vectors β€” never prompt text.
  • Open source: audit exactly what's collected.

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT

Release History

Version  Urgency  Date
v2.2.1   Medium   4/1/2026

MCP Server: 6 → 9 tools
  • check_prompt_quality — full diagnostic (score + lint + rewrite)
  • build_prompt_from_parts — construct prompts from components
  • explain_prompt_quality — educational plain-English analysis

File input: all prompt commands now accept --file:

reprompt check --file prompt.txt
reprompt score --file my-prompt.md --json
reprompt rewrite --file draft.txt --diff

Also supports stdin with -:

echo "fix the bug" | reprompt check -

1,864 t
