Unmask the hidden before the world does
An autonomous AI framework that chains reconnaissance, exploitation, and post-exploitation into a single pipeline, then goes further by triaging every finding, implementing code fixes, and opening pull requests on your repository. From first packet to merged patch, with human oversight at every critical step.
LEGAL DISCLAIMER: This tool is intended for authorized security testing, educational purposes, and research only. Never use this system to scan, probe, or attack any system you do not own or have explicit written permission to test. Unauthorized access is illegal and punishable by law. By using this tool, you accept full responsibility for your actions. Read Full Disclaimer
LOCAL USE ONLY: RedAmon is designed to run on a local machine and has not been hardened for server or cloud deployment. It lacks the security controls required for a production environment exposed to the internet (e.g. authentication hardening, rate limiting, TLS enforcement, input sanitization across all surfaces). Do not deploy RedAmon on a public-facing server. Running it outside a trusted local network is entirely at your own risk.
RedAmon launches multiple reconnaissance tools in parallel, each feeding results into a shared knowledge graph in real time. Tools spin up, adapt their scope based on live discoveries, and coordinate without manual intervention. The entire attack surface -- subdomains, ports, endpoints, parameters -- materializes in minutes, not hours.
Reconnaissance ➜ Exploitation ➜ Post-Exploitation ➜ AI Triage ➜ CodeFix Agent ➜ GitHub PR
RedAmon doesn't stop at finding vulnerabilities; it fixes them. The pipeline starts with a 6-phase reconnaissance engine that maps your target's entire attack surface, then hands control to an autonomous AI agent that validates CVE exploitability, tests credential policies, and maps lateral movement paths. Every finding is recorded in a Neo4j knowledge graph. When the offensive phase completes, CypherFix takes over: an AI triage agent correlates hundreds of findings, deduplicates them, and ranks them by exploitability. Then a CodeFix agent clones your repository, navigates the codebase with 11 code-aware tools, implements targeted fixes, and opens a GitHub pull request, ready for review and merge.
We maintain a public Project Board with upcoming features open for community contributions. Pick a task and submit a PR!
Want to contribute? See CONTRIBUTING.md for how to get started.
**Samuele Giampieri** — Creator, Maintainer & AI Platform Architect
AI Platform Architect & Full-Stack Lead with 15+ years of freelancing experience and more than 30 projects shipped to production, including enterprise-scale AI agentic systems. AWS-certified (DevOps Engineer, ML Specialty) and IBM-certified AI Engineer. Designs end-to-end ML solutions spanning deep learning, NLP, computer vision, and AI agent systems with LangChain/LangGraph.
LinkedIn · GitHub · Devergo Labs
**Ritesh Gohil** — Maintainer & Lead Security Researcher
Cyber Security Engineer at Workday with over 7 years of experience in Web, API, Mobile, Network, and Cloud penetration testing. Published 11 CVEs in MITRE, with security acknowledgements from Google (4×) and Apple (6×). Secured 200+ web and mobile applications and contributed to Exploit Database, Google Hacking Database, and the AWS Community. Holds AWS Security Specialty, eWPTXv2, eCPPTv2, CRTP, and CEH certifications with expertise in red teaming, cloud security, CVE research, and security architecture review.
LinkedIn · GitHub
- Docker & Docker Compose v2+
That's it. No Node.js, Python, or security tools needed on your host.
| Resource | Without OpenVAS | With OpenVAS (full stack) |
|---|---|---|
| CPU | 2 cores | 4 cores |
| RAM | 4 GB | 8 GB (16 GB recommended) |
| Disk | 20 GB free | 50 GB free |
Without OpenVAS runs 6 containers: webapp, postgres, neo4j, agent, kali-sandbox, recon-orchestrator. With OpenVAS adds 4 more runtime containers (gvmd, ospd-openvas, gvm-postgres, gvm-redis) plus ~8 one-shot data-init containers for vulnerability feeds (~170K+ NVTs). First launch takes ~30 minutes for GVM feed synchronization. Dynamic recon and scan containers are spawned on-demand during operations and require additional resources.
```bash
git clone https://github.com/samugit83/redamon.git
cd redamon

# Without GVM (lighter, faster startup):
./redamon.sh install

# With GVM / OpenVAS (full stack, ~30 min first run):
./redamon.sh install --gvm
```

The script builds all images and starts the services. When done, open http://localhost:3000.
Open http://localhost:3000/settings (gear icon in the header) to configure everything. No .env file is needed.
- LLM Providers -- add API keys for OpenAI, Anthropic, OpenRouter, AWS Bedrock, or any OpenAI-compatible endpoint (Ollama, vLLM, Groq, etc.). Each provider can be tested before saving. The model selector in project settings dynamically fetches available models from configured providers.
- API Keys -- Tavily, Shodan, SerpAPI, NVD, Vulners, URLScan, and threat intelligence keys (Censys, FOFA, OTX, Netlas, VirusTotal, ZoomEye, CriminalIP) to enable extended agent capabilities (web search, OSINT, CVE lookups, passive threat intel). Uncover multi-engine search keys (Quake, Hunter, PublicWWW, HunterHow, Google, Onyphe, Driftnet) expand target discovery across 13 search engines -- shared keys (Shodan, Censys, FOFA, etc.) are automatically reused. Supports key rotation -- configure multiple keys per tool with automatic round-robin rotation to avoid rate limits.
- Tunneling -- configure ngrok or chisel for reverse shell tunneling. Changes apply immediately without container restarts.
All settings are stored per-user in the database. See the AI Model Providers wiki page for detailed setup instructions.
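The round-robin key rotation mentioned above can be sketched in a few lines; the class and method names here are illustrative, not RedAmon's internal API:

```python
# Hypothetical sketch of round-robin API key rotation to spread requests
# across multiple keys and stay under per-key rate limits.
from itertools import cycle

class KeyRotator:
    """Cycles through a pool of API keys for a single tool."""
    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one key required")
        self._cycle = cycle(keys)

    def next_key(self):
        # Each call hands out the next key, wrapping around at the end.
        return next(self._cycle)

rotator = KeyRotator(["shodan-key-1", "shodan-key-2", "shodan-key-3"])
picks = [rotator.next_key() for _ in range(4)]
# The fourth request wraps back to the first key.
```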
Go to http://localhost:3000 -- create a project, configure your target, and start scanning.
For a detailed walkthrough of every feature, check the Wiki.
Having issues? See the Troubleshooting guide or the Wiki Troubleshooting page.
All lifecycle management is handled by a single script:
| Command | Description |
|---|---|
| `./redamon.sh install` | Build + start without GVM |
| `./redamon.sh install --gvm` | Build + start with GVM/OpenVAS |
| `./redamon.sh install --skipkbase` | Build without Knowledge Base (~4.4 GB lighter, Tavily-only) |
| `./redamon.sh update` | Pull latest version, smart-rebuild only changed services |
| `./redamon.sh up` | Start services (auto-detects GVM mode) |
| `./redamon.sh up dev` | Start in dev mode with hot-reload |
| `./redamon.sh up dev --gvm` | Dev mode with GVM/OpenVAS |
| `./redamon.sh down` | Stop services (preserves data) |
| `./redamon.sh status` | Show running services, version, GVM mode |
| `./redamon.sh clean` | Remove containers + images, keep data |
| `./redamon.sh purge` | Remove everything including all data |
Flags can be combined:

```bash
./redamon.sh install --skipkbase --gvm
```
Just run:

```bash
./redamon.sh update
```

The script pulls the latest code from GitHub, detects which Dockerfiles and source files changed, rebuilds only the affected images, and restarts the updated services. Your databases, scan results, and reports are preserved -- volumes are never deleted.
The webapp also checks for updates automatically and shows a notification in the UI when a new version is available.
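Conceptually, the smart rebuild maps changed file paths to the services that depend on them. A hypothetical sketch (the real logic lives in `redamon.sh`; this mapping is an assumption based on the rebuild rules documented elsewhere in this README):

```python
# Illustrative path-prefix -> service mapping for smart rebuilds.
PATH_TO_SERVICE = {
    "agentic/": "agent",
    "recon_orchestrator/": "recon-orchestrator",
    "mcp/": "kali-sandbox",
    "webapp/": "webapp",
}

def services_to_rebuild(changed_paths):
    """Return the set of services affected by a list of changed paths."""
    affected = set()
    for path in changed_paths:
        for prefix, service in PATH_TO_SERVICE.items():
            if path.startswith(prefix):
                affected.add(service)
    return affected

changed = ["agentic/agent.py", "webapp/src/page.tsx", "README.md"]
# README.md matches no prefix, so only two services need a rebuild.
```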
For contributors and active development with Next.js fast refresh:
```bash
./redamon.sh up dev          # without GVM
./redamon.sh up dev --gvm    # with GVM/OpenVAS
```

Tool images are built automatically on first run if they don't exist yet. The dev override swaps the production webapp image for a dev container with your source code volume-mounted. Every file save triggers instant hot-reload in the browser.
| What changed | Action needed |
|---|---|
| `webapp/src/` (frontend code) | Nothing -- Next.js hot-reload handles it in dev mode |
| `agentic/*.py` (agent Python code) | `docker compose restart agent` |
| `recon_orchestrator/*.py` | `docker compose restart recon-orchestrator` |
| `mcp/servers/*.py` (MCP servers) | `docker compose restart kali-sandbox` |
| `agentic/Dockerfile` or `agentic/requirements.txt` | `docker compose build agent && docker compose up -d agent` |
| `recon_orchestrator/Dockerfile` or its requirements.txt | `docker compose build recon-orchestrator && docker compose up -d recon-orchestrator` |
| `mcp/kali-sandbox/Dockerfile` | `docker compose build kali-sandbox && docker compose up -d kali-sandbox` |
| `webapp/Dockerfile` or `webapp/package.json` | `docker compose build webapp && docker compose up -d webapp` |
| `recon/Dockerfile` | `docker compose --profile tools build recon` |
| `gvm_scan/Dockerfile` | `docker compose --profile tools build vuln-scanner` |
| `github_secret_hunt/Dockerfile` | `docker compose --profile tools build github-secret-hunter` |
| `trufflehog_scan/Dockerfile` | `docker compose --profile tools build trufflehog-scanner` |
| `docker-compose.yml` | `docker compose up -d` (re-creates affected containers) |
| `prisma/schema.prisma` | `docker compose exec webapp npx prisma db push` |
Rebuild a single service:

```bash
docker compose build <service>            # Rebuild one image
docker compose up -d --no-deps <service>  # Restart only that service
```

Common dev commands:
```bash
docker compose ps                 # Check service status
docker compose logs -f <service>  # Follow logs for a service
docker compose down               # Stop all (preserves volumes)
docker compose --profile tools down --rmi local                             # Remove built images
docker compose --profile tools down --rmi local --volumes --remove-orphans  # Full cleanup
```

For a complete development reference -- hot-reload rules, common commands, important rules, and AI-assisted coding guidelines -- see the Developer Guide.
The agent's web_search tool includes a local Knowledge Base -- a RAG pipeline that searches curated security datasets (GTFOBins, LOLBAS, OWASP WSTG, NVD CVEs, ExploitDB, Nuclei templates, and agent skill docs) before falling back to Tavily web search. When the KB returns a high-confidence match, Tavily is skipped entirely for faster, offline-capable results.
How it works: During install / up / restart, RedAmon automatically builds a lightweight KB index (~1,200 chunks in 10-15 min on CPU). At query time, the agent runs a hybrid retrieval pipeline (FAISS vector search + Neo4j fulltext), reranks with a cross-encoder, and checks a confidence threshold. If the score is high enough, results come from the local KB. Otherwise, it falls back to Tavily or merges both.
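The KB-or-web decision reduces to a confidence-threshold check on the reranked results. A minimal sketch, assuming illustrative scores and a made-up threshold value (the real pipeline runs FAISS + Neo4j fulltext and a cross-encoder reranker before scoring):

```python
# Route a query to the local KB, web search, or a merge of both,
# based on the top reranked confidence score.
KB_CONFIDENCE_THRESHOLD = 0.75  # assumed value, not RedAmon's actual setting

def route_query(kb_results, web_search_fn):
    """Use local KB results when the top score clears the threshold;
    otherwise fall back to (and merge with) web search."""
    top_score = max((r["score"] for r in kb_results), default=0.0)
    if top_score >= KB_CONFIDENCE_THRESHOLD:
        return {"source": "kb", "results": kb_results}  # Tavily skipped
    return {"source": "merged", "results": kb_results + web_search_fn()}

high = [{"doc": "GTFOBins: vim shell escape", "score": 0.91}]
low = [{"doc": "weak partial match", "score": 0.40}]
fake_web = lambda: [{"doc": "tavily result", "score": 0.0}]

a = route_query(high, fake_web)  # high-confidence KB hit
b = route_query(low, fake_web)   # low confidence -- merged with web search
```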
Default behavior: The KB is enabled by default. On first install, it detects your hardware (GPU / CPU / API) and offers a quick-start option. No configuration needed.
Skip it entirely: If you don't need the local KB (e.g., limited disk space), use --skipkbase to build a ~4.4 GB lighter image with Tavily-only web search:
```bash
./redamon.sh install --skipkbase
```

Speed up ingestion with API embeddings: By default, embeddings run locally on CPU/GPU. On CPU-only machines, large datasets (ExploitDB, NVD) can take hours. You can offload embedding to an external API by creating a `.env` file from the template:
```bash
cp .env.example .env
```

Then configure the embedding API in `.env`:
| Variable | Default | Description |
|---|---|---|
| `KB_EMBEDDING_USE_API` | `false` | Set to `true` to use API-based embeddings instead of the local model |
| `KB_EMBEDDING_API_BASE_URL` | (empty = OpenAI) | Any OpenAI-compatible endpoint (Ollama, vLLM, LiteLLM, Together AI, Azure) |
| `KB_EMBEDDING_API_KEY` | (empty) | API key for the embedding provider |
| `KB_EMBEDDING_API_MODEL` | `text-embedding-3-small` | Model name (provider-specific) |
| `NVD_API_KEY` | (empty) | Free NVD API key for 10x faster CVE ingestion |
Example with Ollama (free, local, no API key cost):

```bash
KB_EMBEDDING_USE_API=true
KB_EMBEDDING_API_BASE_URL=http://host.docker.internal:11434/v1
KB_EMBEDDING_API_KEY=ollama
KB_EMBEDDING_API_MODEL=nomic-embed-text
```

Important: Ingestion and query must use the same model. If you switch models, rebuild the index:
```bash
make -C knowledge_base kb-rebuild-lite MODE=docker
```
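Mixed embedding spaces make similarity scores meaningless, so one way to guard against the model-mismatch pitfall above is to record the model name in index metadata and verify it at query time. This is an illustrative sketch, not RedAmon's actual mechanism:

```python
import json

def save_index_meta(path, model_name):
    """Record which embedding model produced the index."""
    with open(path, "w") as f:
        json.dump({"embedding_model": model_name}, f)

def check_index_meta(path, query_model):
    """Refuse to query an index built with a different model."""
    with open(path) as f:
        built_with = json.load(f)["embedding_model"]
    if built_with != query_model:
        raise RuntimeError(
            f"index built with {built_with!r}; rebuild it before "
            f"querying with {query_model!r}"
        )

save_index_meta("/tmp/kb_meta.json", "nomic-embed-text")
check_index_meta("/tmp/kb_meta.json", "nomic-embed-text")  # passes silently
```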
Manage the KB:

```bash
./redamon.sh kb build lite       # Build with lite profile (~30-60s with API)
./redamon.sh kb build standard   # Add NVD CVEs
./redamon.sh kb update nvd       # Incremental NVD refresh
./redamon.sh kb stats            # Show index statistics
./redamon.sh kb rebuild lite     # Wipe and rebuild from scratch
```

For full technical documentation -- query pipeline, data sources, ingestion profiles, scoring, security model -- see the Knowledge Base Technical Reference or the Wiki: Knowledge Base & Web Search.
Explore real-time live attack sessions -- every step, every pivot, every exploit -- across 15 vulnerability categories on a live target. Full session logs, decoded walkthroughs, and video recordings showing the agent autonomously compromising a multi-service server from scratch.

Explore the HackLab → | Submit your own session →

Got an amazing agent session on your own target? Share it with the community -- session log + YouTube video.
- Full Wiki Documentation
- Overview
- Feature Highlights
- System Architecture
- Components
- Documentation
- Troubleshooting
- Community Showcase
- Legal
RedAmon is a modular, containerized penetration testing framework that chains automated reconnaissance, AI-driven exploitation, and graph-powered intelligence into a single, end-to-end offensive security pipeline. Every component runs inside Docker — no tools installed on your host — and communicates through well-defined APIs so each layer can evolve independently.
The platform is built around six pillars:
| Pillar | What it does |
|---|---|
| Reconnaissance Pipeline | A parallelized fan-out / fan-in scanning pipeline that maps your target's entire attack surface — starting from a domain or IP addresses / CIDR ranges — from subdomain discovery (5 concurrent tools) through port scanning, Nmap service detection and NSE vulnerability scripts, HTTP probing, resource enumeration, and vulnerability detection. Independent modules run concurrently via ThreadPoolExecutor, graph DB updates happen in a background thread, and results are stored as a rich, queryable graph. Complemented by standalone GVM network scanning, GitHub secret hunting, and TruffleHog deep secret scanning modules. |
| AI Agent Orchestrator | A LangGraph-based autonomous agent that reasons about the graph, selects security tools via MCP, transitions through informational / exploitation / post-exploitation phases, and can be steered in real-time via chat. |
| Attack Surface Graph | A Neo4j knowledge graph with 17 node types and 20+ relationship types that serves as the single source of truth for every finding — and the primary data source the AI agent queries before every decision. |
| EvoGraph | A persistent, evolutionary attack chain graph in Neo4j that tracks every step, finding, decision, and failure across the attack lifecycle — bridging the recon graph and enabling cross-session intelligence accumulation. |
| CypherFix | Automated vulnerability remediation pipeline — an AI triage agent correlates and prioritizes findings from the graph, then a CodeFix agent clones the target repository, implements fixes using a ReAct loop with 11 code tools, and opens a GitHub pull request. |
| Project Settings Engine | 196+ per-project parameters — exposed through the webapp UI — that control every tool's behavior, from Naabu thread counts to Nuclei severity filters to agent approval gates. |
A fully automated, parallelized scanning engine running inside a Kali Linux container. Given a root domain, subdomain list, or IP/CIDR ranges, it maps the complete external attack surface using a fan-out / fan-in pipeline architecture:

- **Subdomain discovery** -- crt.sh, HackerTarget, Subfinder, Amass, Knockpy (all 5 tools run concurrently)
- **Wildcard filtering** -- puredns validates subdomains against public DNS resolvers and removes wildcard/poisoned entries
- **DNS resolution** -- 20 parallel workers
- **Port scanning + OSINT** -- Shodan enrichment plus Masscan / Naabu (both run in parallel)
- **Passive threat intelligence enrichment** -- 7 tools (Censys, FOFA, OTX, Netlas, VirusTotal, ZoomEye, CriminalIP), all run in parallel with port scanning
- **Service detection** -- Nmap version detection and NSE vulnerability scripts on discovered ports
- **HTTP probing** -- technology fingerprinting with httpx + Wappalyzer
- **Resource enumeration** -- Katana, Hakrawler, GAU, ParamSpider, Kiterunner (internally parallel), followed by jsluice JavaScript analysis, FFuf directory fuzzing with custom wordlist support, and Arjun hidden parameter discovery with multi-method parallel execution
- **Vulnerability scanning** -- Nuclei with 9,000+ templates plus DAST fuzzing

Neo4j graph updates run in a dedicated background thread so the main pipeline is never blocked. Results are stored as JSON and imported into the Neo4j graph.
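The fan-out / fan-in pattern with a non-blocking graph writer can be sketched in a few lines of Python. Module names and the in-memory "graph" below are toy stand-ins for the real tools and Neo4j:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

results_q = queue.Queue()
graph = []  # stand-in for Neo4j writes

def graph_writer():
    """Background consumer: the main pipeline never blocks on DB writes."""
    while True:
        item = results_q.get()
        if item is None:  # sentinel: shut down
            break
        graph.append(item)

def recon_module(name):
    """Fan-out worker: each module pushes findings as they appear."""
    results_q.put({"module": name, "finding": f"{name}-result"})
    return name

writer = threading.Thread(target=graph_writer)
writer.start()

modules = ["subfinder", "amass", "knockpy", "crtsh", "hackertarget"]
with ThreadPoolExecutor(max_workers=5) as pool:
    done = list(pool.map(recon_module, modules))  # fan-in: wait for all

results_q.put(None)  # stop the writer once all modules finished
writer.join()
```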
| Settings Tab | Phase | Tools | Type | Execution |
|---|---|---|---|---|
| Discovery & OSINT | Subdomain Discovery | crt.sh, HackerTarget, Subfinder, Amass, Knockpy | Passive* | 5 tools parallel |
| | Wildcard Filtering | Puredns | Active | Sequential |
| | WHOIS + URLScan | python-whois, URLScan.io API | Passive | Parallel |
| | DNS Resolution | dnspython | Passive | 20 parallel workers |
| | OSINT Enrichment | Shodan / InternetDB | Passive | Parallel with port scan |
| | Uncover Expansion | ProjectDiscovery Uncover (13 engines: Shodan, Censys, FOFA, ZoomEye, Netlas, CriminalIP, Quake, Hunter, PublicWWW, HunterHow, Google, Onyphe, Driftnet) | Passive | Before port scan (GROUP 2b) |
| | Threat Intel Enrichment | Censys, FOFA, OTX (AlienVault), Netlas, VirusTotal, ZoomEye, CriminalIP | Passive | 7 tools parallel (GROUP 3b) |
| Port Scanning | Port Scanning | Masscan, Naabu | Active / Passive | Both parallel (Naabu supports passive InternetDB mode) |
| Nmap Service Detection | Service Version Detection | Nmap (-sV, --script vuln) | Active | Sequential per target |
| HTTP Probing | HTTP Probing | httpx | Active | Internal parallel |
| | Tech Detection | Wappalyzer | Passive | Sequential (post-probe) |
| | Banner Grabbing | Custom (Python sockets: SSH, FTP, SMTP, MySQL, etc.) | Active | Parallel workers |
| Resource Enum | Web Crawling | Katana, Hakrawler | Active | Parallel |
| | Archive Discovery | GAU (Wayback, CommonCrawl, OTX) | Passive | Parallel with crawlers |
| | Parameter Mining | ParamSpider (Wayback CDX) | Passive | Parallel with crawlers |
| | JS Analysis | jsluice | Active | Sequential (post-crawl) |
| | Directory Fuzzing | FFuf | Active | Sequential (post-jsluice) |
| | Parameter Discovery | Arjun | Active / Passive | Methods parallel (GET/POST/JSON/XML) |
| | API Discovery | Kiterunner | Active | Sequential per wordlist |
| JS Recon | JS Secret Detection | 100 regex patterns + custom uploads | Passive | Parallel per file |
| | Key Validation | 21 service validators (AWS, GitHub, Stripe, etc.) | Active | Rate-limited (1/sec/svc) |
| | Source Map Discovery | Comment, header, path probing | Active | Per JS file |
| | Dependency Confusion | npm registry check | Passive | Per scoped package |
| | Endpoint Extraction | REST, GraphQL, WebSocket, router patterns | Passive | Per JS file |
| | Framework Fingerprinting | 12 built-in + custom signatures | Passive | Per JS file |
| | DOM Sink Detection | 17 XSS/prototype pollution patterns | Passive | Per JS file |
| Vulnerability Scanning | Vulnerability Scanning | Nuclei (9,000+ templates + DAST + custom template upload) | Active | Internal parallel |
| Security Checks | Security Checks | WAF bypass, direct IP access, TLS expiry, missing headers, cache-control | Active | Parallel workers |
| CVE & MITRE | CVE Enrichment | NVD API, Vulners API | Passive | Sequential |
| | MITRE Enrichment | CWE / CAPEC mapping | Passive | Sequential |
*Amass can run in active mode when configured. Knockpy performs active DNS probing.
GVM/OpenVAS performs deep network-level vulnerability assessment with 170,000+ NVTs — probing services at the protocol layer for misconfigurations, outdated software, default credentials, and known CVEs. Complements Nuclei's web-layer findings. Seven pre-configured scan profiles from quick host discovery (~2 min) to exhaustive deep scanning (~8 hours). Findings are stored as Vulnerability nodes in Neo4j alongside the recon graph.
A LangGraph-based autonomous agent implementing the ReAct pattern. It progresses through three phases — Informational (intelligence gathering, graph queries, Shodan, Google dorking), Exploitation (Metasploit, Hydra credential testing, social engineering simulation), and Post-Exploitation (enumeration, lateral movement). The agent executes 14 security tools via MCP servers inside a Kali sandbox, supports parallel tool execution via Wave Runner, and provides real-time chat interaction with guidance, stop/resume, and approval workflows. Deep Think mode enables structured strategic analysis before acting.
| Category | Tool | Description | Phases | MCP Server |
|---|---|---|---|---|
| Intelligence | query_graph | Neo4j graph queries -- primary source of truth for recon data | All | -- |
| | web_search | Internet search via Tavily for CVE details, exploit PoCs, advisories | All | -- |
| | shodan | Shodan OSINT -- host details, reverse DNS, device search | Info, Exploit | -- |
| | google_dork | Google dorking via SerpAPI -- exposed files, admin panels, directory listings | Info | -- |
| Scanning | execute_naabu | Fast port scanning and verification | Info, Exploit | network_recon :8000 |
| | execute_nmap | Deep service detection (-sV), OS fingerprint, NSE scripts | All | nmap :8004 |
| | execute_nuclei | CVE verification and exploitation with 9,000+ templates + custom uploads | Info, Exploit | nuclei :8002 |
| | execute_wpscan | WordPress vulnerability scanner -- detects vulnerable plugins, themes, users, misconfigurations | Info, Exploit | network_recon :8000 |
| Web & HTTP | execute_curl | HTTP requests -- reachability, headers, status codes, banners | All | network_recon :8000 |
| | execute_playwright | Headless Chromium browser automation -- JS-rendered content extraction and interactive scripting for SPAs, form testing, XSS verification | All | playwright :8005 |
| Exploitation | metasploit_console | Persistent msfconsole -- exploit execution, session management, post-exploitation | Exploit, Post | metasploit :8003 |
| | msf_restart | Full Metasploit reset -- kills all sessions, clears module state | Exploit, Post | metasploit :8003 |
| | execute_hydra | THC Hydra brute force -- 50+ protocols (SSH, FTP, RDP, SMB, HTTP, MySQL, etc.) | Exploit, Post | network_recon :8000 |
| Code Execution | kali_shell | Full Kali Linux shell -- netcat, sqlmap, smbclient, msfvenom, searchsploit, and 30+ CLI tools | All | network_recon :8000 |
| | execute_code | Write and run code files (Python, bash, Ruby, Perl, C, C++) -- no shell escaping | Exploit, Post | network_recon :8000 |
All MCP tools run inside a Kali Linux sandbox container. Tools marked as dangerous require manual confirmation before execution. Stealth mode restricts active tools to passive-only or single-target operations. Note: WPScan is licensed under the WPScan Public Source License (not MIT). Free for pentesting assessments and personal use; commercial use may require a separate license from wpscan.com.
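The phase restrictions in the table above amount to a simple gating lookup: only tools registered for the agent's current phase are selectable. A sketch with an abridged, assumed registry:

```python
# Abridged tool registry -- phase sets taken from the table above.
TOOL_PHASES = {
    "query_graph": {"info", "exploit", "post"},
    "execute_naabu": {"info", "exploit"},
    "metasploit_console": {"exploit", "post"},
    "execute_hydra": {"exploit", "post"},
    "google_dork": {"info"},
}

def available_tools(phase):
    """Tools the agent may select in the given phase."""
    return sorted(t for t, phases in TOOL_PHASES.items() if phase in phases)

info_tools = available_tools("info")
# metasploit_console and execute_hydra stay locked until exploitation.
```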
Supports 5 providers and 400+ models: OpenAI (GPT-5.2, GPT-5, GPT-4.1), Anthropic (Claude Opus 4.6, Sonnet 4.5), OpenRouter (300+ models), AWS Bedrock, and any OpenAI-compatible endpoint (Ollama, vLLM, LM Studio, Groq, etc.). Models are dynamically fetched — no hardcoded lists.
A Neo4j knowledge graph with 17 node types and 20+ relationship types — the single source of truth for the target's attack surface. The agent queries it before every decision via natural language → Cypher translation.
A persistent, evolutionary graph tracking everything the AI agent does — tool executions, discoveries, failures, and strategic decisions. Structured chain context replaces flat execution traces, improving agent efficiency by 25%+. Cross-session memory means the agent never starts from zero.
Launch multiple concurrent agent sessions against the same project. Each session creates its own AttackChain in EvoGraph. New sessions automatically load findings and failure lessons from all prior sessions, avoiding redundant work.
Unified view of active sessions — meterpreter, reverse/bind shells, and listeners. Built-in terminal with a Command Whisperer that translates plain English into shell commands.
Full interactive PTY shell access to the Kali sandbox container directly from the graph page via xterm.js. Access all pre-installed pentesting tools (Metasploit, Nmap, Nuclei, Hydra, sqlmap) without leaving the browser. Features dark terminal theme, connection status indicator, auto-reconnect with exponential backoff, fullscreen mode, and browser-side keepalive.
Two-agent pipeline: a Triage Agent runs 9 hardcoded Cypher queries then uses an LLM to correlate, deduplicate, and prioritize findings. A CodeFix Agent clones the target repo, explores the codebase with 11 tools, implements fixes, and opens a GitHub PR — replicating Claude Code's agentic design.
An LLM-powered Intent Router classifies user requests into agent skills: CVE (MSF), SQL Injection, Credential Testing, Social Engineering, Availability Testing, or custom user-defined skills uploaded as Markdown files. Ready-to-use community skills are available for API testing, XSS, SQLi, and SSRF -- download the .md file and upload it via Global Settings > Agent Skills to activate it for your user. You can also contribute your own by opening a PR.
On-demand reference injection via /skill command in the agent chat. Chat Skills are tactical reference docs -- tool playbooks, vulnerability guides, framework-specific notes -- that you inject into the agent's context exactly when you need them. Type /skill ssrf to load SSRF expertise, or click the skill picker button for a browsable list. 36 community-contributed skills ship with RedAmon covering vulnerabilities, tooling, scan modes, frameworks, technologies, and protocols. Unlike Agent Skills (which drive classification and phase-aware workflows), Chat Skills are supplementary context that persists until you change or remove them.
Scans GitHub repositories, gists, and commit history for exposed secrets using 40+ regex patterns and Shannon entropy analysis.
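Shannon entropy is the standard signal for spotting high-randomness strings (API keys, tokens) among ordinary text. A minimal version of such a check; the length and entropy thresholds are assumptions, not RedAmon's actual cutoffs:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    n = len(s)
    # Sum -(p * log2 p) over the character frequency distribution.
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(token, threshold=4.0):
    """Flag strings whose per-character entropy suggests random key material."""
    return len(token) >= 20 and shannon_entropy(token) >= threshold

plain = "password_reset_email"       # ordinary identifier, low entropy
key = "AKIAx9Tq2LmZ7vRbW3uY"         # hypothetical high-entropy token
```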
Scans GitHub repositories for leaked credentials using 700+ detectors with automatic verification of whether discovered secrets are still active. Powered by the TruffleHog engine (trufflesecurity/trufflehog), it detects API keys, passwords, tokens, certificates, and more across full commit history. Results are stored in the Neo4j graph alongside the project's other findings.







