Multi-agent AI framework for Claude Code, Copilot, Cursor, Warp, and 4 more platforms
188 agents, 50 CLI commands, 128 skills, 5 core frameworks + training marketplace plugin, 23 addons. SDLC workflows, digital forensics, research management, marketing operations, media curation, ops infrastructure, and fine-tuning dataset curation, all deployable with one command.
npm i -g aiwg   # install globally
aiwg use sdlc   # deploy SDLC framework

Get Started · Features · Agents · CLI Reference · Documentation · Community
AIWG is a deployment tool and support utility for AI context. At its core, aiwg use copies markdown and YAML source files into the specific paths each AI platform looks in (.claude/agents/, ~/.codex/skills/, .cursor/rules/, .github/prompts/, and six more), so one source of truth works across 10 platforms.
Around that core, AIWG ships utilities for things the base platforms do not handle on their own: persistent artifact memory (.aiwg/), background orchestration (aiwg mc), autonomous loops (aiwg ralph), artifact indexing (aiwg index), cost telemetry, health diagnostics, and more. Most are opt-in. The deployment layer works standalone as plain text files the platform reads natively.
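The core deployment step is just that mapping plus a file copy. A minimal sketch of the idea (provider directories taken from the text above; the function and its name are illustrative, not AIWG's actual API):

```typescript
// Illustrative only: map a provider to the context directory it reads, so one
// source artifact can be copied to every platform's expected path.
// Directory conventions are the ones named above; this is not AIWG's real code.
const providerDirs: Record<string, string> = {
  claude: ".claude/agents/",
  codex: "~/.codex/skills/",
  cursor: ".cursor/rules/",
  copilot: ".github/prompts/",
};

// Resolve where a given source artifact lands for a given provider.
function deployPath(provider: string, artifact: string): string {
  const dir = providerDirs[provider];
  if (!dir) throw new Error(`unknown provider: ${provider}`);
  return dir + artifact;
}
```

Deploying to every platform is then a loop over `providerDirs` copying the same source file to each resolved path.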
AIWG ships five primitive artifact types. All are plain text:
- Agents: specialized personas (Security Auditor, Test Architect) with a scoped toolset
- Skills: natural-language workflows the platform auto-invokes on trigger phrases
- Commands: explicit slash invocations (/flow-security-review-cycle)
- Rules: enforcement directives the platform loads into every session
- Behaviors: lifecycle hooks that fire on events (pre-write, post-session)
Each is a single .md file with YAML frontmatter. Nothing executes until an AI platform reads it.
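As an illustration, a hypothetical agent file might look like this (the field names are invented for this sketch; consult a deployed framework for the exact frontmatter schema):

```markdown
---
name: security-auditor
description: Reviews designs and code changes for security risks
tools: [read, grep]
---

You are a security auditor. Review the referenced artifacts for threats,
insecure defaults, and missing controls, and report findings with
severity and remediation steps.
```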
Because the primitives are text, they compose without runtime coordination:
- One agent file becomes one member of a 180-agent SDLC team that reviews architecture, tests, security, and compliance in parallel.
- One skill becomes a natural-language entry point: "run security review" routes to the right multi-agent flow on every platform that supports skills.
- One framework (SDLC, forensics, marketing) bundles dozens of agents + skills + rules + templates that cross-reference each other. Deploying a framework deploys a working multi-agent ecosystem.
- The .aiwg/ directory gives those agents a shared memory: artifacts from Monday's requirements session are read by Thursday's test design.
- Flows orchestrate Primary Author → Parallel Reviewers → Synthesizer → Archive patterns that no single-prompt workflow can match.
The leverage is not in any one file. It is that hundreds of small files, each independently readable and editable, snap together into workflows that would otherwise take a bespoke agent platform to build.
This is also where the research background lives. AIWG implements patterns from cognitive science (Miller 1956, Sweller 1988), multi-agent systems (Jacobs et al. 1991, MetaGPT, AutoGen), and software engineering (Cooper's stage-gate, FAIR Principles, W3C PROV), applied as file conventions and deployment rules, not as a runtime you depend on.
These are CLI tools and services on top of the text-file substrate. The substrate works without them:
- aiwg ralph: autonomous iterate-until-done loops
- aiwg mc: background mission control for parallel tasks
- aiwg daemon: persistent session manager
- aiwg index: searchable artifact index
- aiwg mcp: MCP server for runtime tool access
Turn any of these on when you want persistence, parallelism, or automation. Turn them off and your deployed agents, skills, and rules still work: they are still text files the platform reads natively.
- Not a prompt library. Prompts are the artifacts, not the product. The product is placing the right prompts where the platform finds them.
- Not an LLM runtime. AIWG never calls a model. The AI platform you already use does that; AIWG configures what it sees.
- Not a framework you import into your app. Nothing is imported at build time. Your project gets a .aiwg/ directory (artifacts) and a few provider-specific context dirs (deployed copies). Delete them and your app is unchanged.
If you have used AI coding assistants and thought "this is amazing for small tasks but falls apart on anything complex," AIWG is the missing infrastructure layer that scales AI assistance to multi-week projects.
Base AI assistants (Claude, GPT-4, Copilot without frameworks) have three fundamental limitations:
Each conversation starts fresh. The assistant has no idea what happened yesterday, what requirements you documented, or what decisions you made last week. You re-explain context every morning.
Without AIWG: Projects stall as context rebuilding eats time. A three-month project requires continuity, not fresh starts every session.
With AIWG: The .aiwg/ directory maintains 50-100+ interconnected artifacts across days, weeks, and months. Later phases build on earlier ones automatically because memory persists. Agents read prior work via @-mentions instead of regenerating from scratch.
The segmented structure also makes large projects tractable. As code files grow, the project doesn't become harder to reason about: agents load only the slice of memory relevant to the current task (@requirements/UC-001.md, @architecture/sad.md, @testing/test-plan.md) rather than the entire codebase. Each subdirectory is a focused knowledge domain that fits comfortably in context, while cross-references keep everything connected.
The artifact index (aiwg index) takes this further. Without any tooling, agents often need to browse 3-6 documents before finding what they need. AIWG's structured artifacts reduce this to 2-3. With the index enabled, agents resolve artifact lookups in one query more often than not: a direct hit on the right requirement, architecture decision, or test case without browsing.
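Both the @-mention slicing and the index rest on the same convention: a mention is a path into .aiwg/. A hypothetical resolver, with the expansion rule inferred from the examples above (this helper is illustrative, not AIWG's API):

```typescript
// Hypothetical helper: expand an @-mention to an artifact path.
// "@requirements/UC-001.md" becomes ".aiwg/requirements/UC-001.md";
// mentions that already carry the prefix (e.g. "@.aiwg/...") pass through.
function resolveMention(mention: string): string {
  const path = mention.replace(/^@/, ""); // drop the leading @
  return path.startsWith(".aiwg/") ? path : ".aiwg/" + path;
}
```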
When AI generates broken code or flawed designs, you manually intervene, explain the problem, and hope the next attempt works. There is no systematic learning from failures, no structured retry, no checkpoint-and-resume.
Without AIWG: Research shows 47% of AI workflows produce inconsistent outputs without reproducibility constraints (R-LAM, Sureshkumar et al. 2026). Debugging is trial-and-error.
With AIWG: The agent loop implements closed-loop self-correction: execute, verify, learn from failure, adapt strategy, retry. External Ralph survives crashes and runs for 6-8+ hours autonomously. Debug memory accumulates failure patterns so the agent doesn't repeat mistakes.
Base assistants optimize for "sounds plausible," not "actually works." A general assistant critiques security, performance, and maintainability simultaneously, and does each poorly. No domain specialization, no multi-perspective review, no human approval checkpoints.
Without AIWG: Production code ships without architectural review, security validation, or operational feasibility assessment.
With AIWG: 162 specialized agents provide domain expertise. The Security Auditor reviews security, the Test Architect reviews testability, the Performance Engineer reviews scalability. Multi-agent review panels with synthesis. Human-in-the-loop gates at every phase transition. Research shows an 84% cost reduction when humans stay in the loop on high-stakes decisions versus fully autonomous systems (Agent Laboratory, Schmidgall et al. 2025).
The .aiwg/ directory is a persistent artifact repository storing requirements, architecture decisions, test strategies, risk registers, and deployment plans across sessions. This implements Retrieval-Augmented Generation patterns (Lewis et al., 2020): agents retrieve from an evolving knowledge base rather than regenerating from scratch.
Each artifact is discoverable via @-mentions (e.g., @.aiwg/requirements/UC-001-login.md). Context sharing between agents happens through artifacts: the requirements analyst writes use cases, the architecture designer reads them.
Instead of a single general-purpose assistant, AIWG provides 162 specialized agents organized by domain. Complex artifacts go through multi-agent review panels:
Architecture Document Creation:
1. Architecture Designer drafts SAD
2. Review Panel (3-5 agents run in parallel):
   - Security Auditor → threat perspective
   - Performance Engineer → scalability perspective
   - Test Architect → testability perspective
   - Technical Writer → clarity and consistency
3. Documentation Synthesizer merges all feedback
4. Human approval gate: accept, iterate, or escalate
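The four steps above reduce to a fan-out/fan-in shape: one draft, several independent critiques, one merge. A sketch under those assumptions (reviewer names come from the panel above; every function name here is illustrative):

```typescript
type Review = { reviewer: string; feedback: string };
type Reviewer = (draft: string) => Review;

// Step 2: every reviewer critiques the same draft (parallel in practice).
// Step 3: the synthesizer merges all feedback into one revision note.
// Step 4, the human gate, stays outside the code.
function runPanel(draft: string, reviewers: Reviewer[]): string {
  const reviews = reviewers.map((review) => review(draft));
  return reviews.map((r) => `${r.reviewer}: ${r.feedback}`).join("\n");
}

const merged = runPanel("SAD draft v1", [
  (_d) => ({ reviewer: "Security Auditor", feedback: "add threat model" }),
  (_d) => ({ reviewer: "Test Architect", feedback: "define test seams" }),
]);
```

The design choice worth noting: reviewers never see each other's feedback, so each perspective stays independent until synthesis.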
Research shows 17.9% accuracy improvement with multi-path review on complex tasks (Wang et al., GSM8K benchmarks, 2023). Agent specialization means security review is done by a security specialist, not a generalist.
Ralph executes tasks iteratively, learns from failures, and adapts strategy based on error patterns. Research from Roig (2025) shows that recovery capability, not initial correctness, predicts agentic task success.
Ralph Iteration:
1. Execute task with current strategy
2. Verify results (tests pass, lint clean, types check)
3. If failure: analyze root cause → extract structured learning → adapt strategy
4. Log iteration state (checkpoint for resume)
5. Repeat until success or escalate to human after 3 failed attempts
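The five steps above can be sketched as a plain loop (the execute and adapt hooks are placeholders; AIWG's real implementation adds checkpointing and crash recovery):

```typescript
type Attempt = { ok: boolean; error?: string };

// Hypothetical sketch of the Ralph contract: execute, verify, learn from
// failure, adapt strategy, retry; escalate after 3 failed attempts.
function ralphLoop(
  execute: (strategy: string) => Attempt,
  adapt: (strategy: string, error: string) => string,
  maxAttempts = 3,
): { done: boolean; attempts: number; learnings: string[] } {
  let strategy = "initial";
  const learnings: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = execute(strategy); // steps 1-2: execute and verify
    if (result.ok) return { done: true, attempts: attempt, learnings };
    learnings.push(result.error ?? "unknown failure"); // step 3: structured learning
    strategy = adapt(strategy, result.error ?? ""); // step 3: adapt strategy
    // step 4: a real loop would checkpoint state here for resume
  }
  return { done: false, attempts: maxAttempts, learnings }; // step 5: escalate
}
```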
External Ralph adds crash resilience: PID file tracking, automatic restart, cross-session persistence. Tasks run for 6-8+ hours, surviving terminal disconnects and system reboots.
AIWG maintains links between documentation and code to ensure artifacts stay synchronized:
```typescript
// src/auth/login.ts
/**
 * @implements @.aiwg/requirements/UC-001-login.md
 * @architecture @.aiwg/architecture/SAD.md#section-4.2
 * @tests @test/unit/auth/login.test.ts
 */
export function authenticateUser(credentials: Credentials): Promise<AuthResult> {
  // ...
}
```

Verification types: Doc → Code, Code → Doc, Code → Tests, Citations → Sources. The retrieval-first citation architecture reduces citation hallucination from 56% to 0% (LitLLM benchmarks, ServiceNow 2025).
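Because the tags are structured, they are machine-checkable. A hypothetical extractor for comments like the one above (the real verification tooling is AIWG's own; this only shows the shape of the check):

```typescript
// Illustrative only: pull @implements / @architecture / @tests targets out of
// a source file so a checker can confirm each referenced artifact exists.
function extractTraceTags(source: string): Record<string, string[]> {
  const tags: Record<string, string[]> = {};
  for (const m of source.matchAll(/@(implements|architecture|tests)\s+(\S+)/g)) {
    (tags[m[1]] ??= []).push(m[2]); // group targets under their tag name
  }
  return tags;
}
```

A verifier would then stat each target path and flag dangling references in either direction.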
AIWG structures work using Cooper's Stage-Gate methodology (1990), breaking multi-month projects into bounded phases with explicit quality criteria and human approval:
```
Inception → Elaboration → Construction → Transition → Production
         LOM            ABM             IOC           PR
```
Cognitive load optimization follows Miller's 7±2 limits (1956) and Sweller's worked examples approach (1988):
- 4 phases (not 12)
- 3-5 artifacts per phase (not 20)
- 5-7 section headings per template (not 15)
- 3-5 reviewers per panel (not 10)
Voice profiles provide continuous control over AI writing style using 12 parameters (formality, technical depth, sentence variety, jargon density, personal tone, humor, directness, examples ratio, uncertainty acknowledgment, opinion strength, transition style, authenticity markers).
Built-in voices: technical-authority (docs, RFCs), friendly-explainer (tutorials), executive-brief (summaries), casual-conversational (blogs, social). Create custom voices from your existing content with /voice-create.
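A custom voice profile might look roughly like this (parameter names come from the list above; the file layout and 0-10 scale are assumptions for illustration, not AIWG's actual format):

```yaml
# Hypothetical voice profile; scale assumed 0-10, layout illustrative only.
name: technical-authority
parameters:
  formality: 8
  technical_depth: 9
  sentence_variety: 6
  jargon_density: 7
  humor: 1
  directness: 8
  uncertainty_acknowledgment: 5
```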
Here is how the six components work together across a project lifecycle. How long each phase takes depends entirely on the project: AIWG is a force multiplier, not a clock. Most projects arrive at a complete, reviewed document set in hours to a day. What takes time is the human work that matters: reviewing, editing, and making decisions. The more input your team provides, the better the output. AIWG memory lets operators participate through the tools they already use: industry-standard documents and templates, issues, and knowledge bases.
/intake-wizard "Build customer portal with real-time chat" --interactive
Memory: Intake forms capture goals, constraints, stakeholders in .aiwg/intake/
Planning: Executive Orchestrator guides through structured questionnaire
Reasoning: Requirements Analyst drafts initial use cases, Product Designer reviews UX
Verification: Requirements reference intake forms, ensuring alignment
Human Gate: Stakeholder reviews intake → approves transition to Elaboration
/flow-inception-to-elaboration
Memory: Architecture doc, ADRs, threat model, test strategy accumulate in .aiwg/
Reasoning: Multi-agent review panel (Architecture Designer drafts; Security Auditor + Performance Engineer + Test Architect critique in parallel; Documentation Synthesizer merges)
Learning: Ralph iterates on ADRs (generate options, evaluate against constraints, refine)
Style: Technical documents use technical-authority, stakeholder summaries use executive-brief
Human Gate: Architect reviews SAD, security team approves threat model
/flow-elaboration-to-construction
/ralph "Implement authentication module" --completion "npm test passes"
Learning: Ralph handles implementation iterations: execute, verify (run tests), learn ("async race condition in token refresh"), adapt (add synchronization), retry
Verification: Code references requirements (@implements UC-001), tests reference code
Memory: Test plans, implementation, deployment scripts accumulate across iterations
Human Gate: Code review approves merges, QA approves test results
/flow-deploy-to-production
/flow-hypercare-monitoring 14
Planning: Deployment checklist covering monitoring, rollback plan, incident response
Learning: Ralph retries deployment steps if validation fails
Verification: Deployment scripts reference architecture (which services, what order)
Human Gate: Operations team reviews deployment plan → approves production release
AIWG makes specific, falsifiable claims backed by peer-reviewed research:
| Claim | Evidence | Source |
|---|---|---|
| 84% cost reduction with human-in-the-loop vs fully autonomous | Agent Laboratory study | Schmidgall et al. (2025) |
| 47% workflow failure rate without reproducibility constraints | R-LAM evaluation | Sureshkumar et al. (2026) |
| 0% citation hallucination with retrieval-first vs 56% generation-only | LitLLM benchmarks | ServiceNow (2025) |
| 17.9% accuracy improvement with multi-path review | GSM8K benchmarks | Wang et al. (2023) |
| 18.5x improvement with tree search on planning tasks | Game of 24 results | Yao et al. (2023) |
Full references: docs/research/
Multi-week or multi-month projects where requirements evolve, multiple stakeholders have different concerns, quality gates are required, auditability matters, or context exceeds conversation limits.
Examples: New product features with architecture/security/operational implications, legacy system migrations requiring phased rollback strategies, research projects needing literature review and reproducibility, compliance-heavy domains (healthcare, finance, aerospace) needing audit trails.
Single-session tasks where no memory is needed, quality gates are overkill, and overhead exceeds value.
Examples: "Write a Python script to parse this CSV," "Fix this typo," "Explain how this code works."
AIWG adds structure (templates, phases, gates) that slows trivial tasks but scales to complex multi-week workflows. If your project fits in a single conversation, use a base assistant. If it spans days, weeks, or months, AIWG provides the infrastructure to maintain quality and context.
```
User intent → AIWG CLI → Deploy agents + rules + templates → AI platform
     │                                                           │
     ▼                                                           ▼
"aiwg use sdlc"                                    Claude Code / Copilot /
     │                                             Cursor / Warp / Factory /
     ▼                                             OpenCode / Codex / Windsurf
┌────────────────┐
│ 188 Agents     │  Specialized AI personas with domain expertise
│ 50 Commands    │  CLI + slash commands for workflow automation
│ 128 Skills     │  Natural language workflow triggers
│ 35 Rules       │  Enforcement patterns (security, quality, anti-laziness)
│ 334 Templates  │  SDLC artifact templates with progressive disclosure
└────────────────┘
     │
     ▼
.aiwg/ artifacts → Persistent project memory across sessions
```
AIWG orchestrates multi-agent workflows where specialized agents collaborate on complex tasks:
```
You: "transition to elaboration phase"

AIWG: [Step 1] Requirements Analyst  → Analyze vision document, generate use case briefs
      [Step 2] Architecture Designer → Baseline architecture, identify technical risks  } parallel
      [Step 3] Security Architect    → Threat model, security requirements              }
      [Step 4] Documentation Synth.  → Merge reviews into Architecture Baseline Milestone
      [Step 5] Human Gate            → GO / CONDITIONAL_GO / NO_GO decision
      [Step 6] → Next phase or iterate
```
The orchestration pattern: Primary Author → Parallel Reviewers → Synthesizer → Human Gate → Archive. Agents run in parallel where possible, with human-in-the-loop checkpoints at phase transitions.
- 188 specialized agents: domain experts across testing, security, architecture, DevOps, cloud, frontend, backend, data engineering, documentation, and more
- 50 CLI commands: framework deployment, project scaffolding, iterative execution, metrics, reproducibility validation
- 128 workflow skills: natural language triggers for regression testing, forensics, voice profiles, quality gates, and CI/CD integration
- 35 enforcement rules: anti-laziness detection, token security, citation integrity, executable feedback, failure mitigation across 6 LLM archetypes
- 334 artifact templates: progressive disclosure templates for requirements, architecture, testing, security, deployment, and more
- 8 supported platforms: deploy to Claude Code, Copilot, Cursor, Warp, Factory AI, OpenCode, Codex, and Windsurf
- 5 core frameworks + training marketplace plugin: SDLC, Digital Forensics, Marketing Operations, Research Management, Media Curation, Ops Infrastructure, plus aiwg-training for fine-tuning dataset curation (corpus-to-dataset pipeline with DPO/KTO/ORPO/SimPO export)
- 23 addons: semantic-memory kernel, llm-wiki (Obsidian-native knowledge base), RLM recursive decomposition, voice profiles, testing quality, mutation testing, UAT automation, and more
- Agent Loop: iterative task execution with automatic error recovery and crash resilience (6-8 hour sessions)
- RLM addon: recursive context decomposition for processing 10M+ tokens via sub-agent delegation
- YAML metalanguage: declarative schema-validated workflow definitions (JSON Schema 2020-12)
- MCP server: Model Context Protocol integration for tool-based AI workflows
- Bidirectional traceability: @-mention system linking requirements ↔ architecture ↔ code ↔ tests
- FAIR-aligned artifacts: W3C PROV provenance, GRADE quality assessment, persistent REF-XXX identifiers
- Reproducibility validation: deterministic execution modes, checkpoints, configuration snapshots
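To make the YAML metalanguage bullet concrete, a workflow definition could look roughly like this (every field name here is invented for illustration; the real schema ships with AIWG and is validated against JSON Schema 2020-12):

```yaml
# Entirely hypothetical shape -- the actual metalanguage schema ships with AIWG.
workflow: security-review-cycle
phases:
  - id: draft
    agent: architecture-designer
  - id: review
    parallel: [security-auditor, performance-engineer, test-architect]
  - id: synthesize
    agent: documentation-synthesizer
  - id: gate
    human: true
```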
Prerequisites: Node.js >=18.0.0 and an AI platform (Claude Code, GitHub Copilot, Cursor, Warp Terminal, or others). See Prerequisites Guide for details.
# Install globally
npm install -g aiwg
# Deploy to your project
cd your-project
aiwg use sdlc # Full SDLC framework (90 agents, 34 rules, 170+ templates)
aiwg use forensics # Digital forensics & incident response (13 agents, 10 skills)
aiwg use marketing # Marketing operations (37 agents, 87+ templates)
aiwg use media-curator # Media archive management (6 agents, 9 commands)
aiwg use research # Research workflow automation (8 agents, 8-stage pipeline)
aiwg use rlm # RLM addon (recursive context decomposition)
aiwg use all # Everything
# Or scaffold a new project
aiwg new my-project
# Check installation health
aiwg doctor

/plugin marketplace add jmagly/ai-writing-guide
/plugin install sdlc@aiwg

aiwg use sdlc                       # Claude Code (default)
aiwg use sdlc --provider copilot    # GitHub Copilot
aiwg use sdlc --provider cursor     # Cursor
aiwg use sdlc --provider warp       # Warp Terminal
aiwg use sdlc --provider factory    # Factory AI
aiwg use sdlc --provider opencode   # OpenCode
aiwg use sdlc --provider openai     # OpenAI/Codex
aiwg use sdlc --provider windsurf   # Windsurf

| Framework | Agents | Templates | What It Does |
|---|---|---|---|
| SDLC Complete | 90 | 170+ | Full software development lifecycle: Inception through Production with multi-agent orchestration, quality gates, and DORA metrics |
| Forensics Complete | 13 | 8 | Digital forensics and incident response: evidence acquisition, timeline reconstruction, IOC extraction, Sigma rule hunting. NIST SP 800-86, MITRE ATT&CK, STIX 2.1 |
| Media/Marketing Kit | 37 | 87+ | End-to-end marketing operations: strategy, content creation, campaign management, brand compliance, analytics, and reporting |
| Media Curator | 6 | - | Intelligent media archive management: discography analysis, source discovery, quality filtering, metadata curation, multi-platform export (Plex, Jellyfin, MPD) |
| Research Complete | 8 | 6 | Academic research automation: paper discovery, citation management, RAG-based summarization, GRADE quality scoring, FAIR compliance, W3C PROV provenance |
| Ops Complete | 2 | 3 | Operational infrastructure: incident management, runbooks, troubleshooting workflows |
| Addon | What It Does |
|---|---|
| RLM | Recursive context decomposition: process 10M+ tokens via sub-agent delegation with parallel fan-out |
| Writing Quality | Content validation, AI pattern detection, authentic voice enforcement |
| Testing Quality | TDD enforcement, mutation testing, flaky test detection and repair |
| Voice Framework | 4 built-in voice profiles (technical-authority, friendly-explainer, executive-brief, casual-conversational) with create/blend/apply skills |
| UAT-MCP Toolkit | User acceptance testing with MCP-powered test execution, coverage tracking, and regression detection |
| AIWG Evals | Agent evaluation framework: archetype resistance testing (Roig 2025), performance benchmarks, quality scoring |
| Ralph | Iterative task execution engine: automatic error recovery, crash resilience, completion tracking |
| Security | Security testing, vulnerability scanning, SAST integration, compliance validation |
| Context Curator | Context pre-filtering to remove distractors: production-grade agent reliability |
| Verbalized Sampling | Probability distribution prompting: 1.6-2.1x output diversity improvement |
| Guided Implementation | Bounded iteration control for issue-to-code automation |
| Skill Factory | Dynamic skill generation and packaging at runtime |
| Doc Intelligence | Document analysis, PDF extraction, documentation site scraping |
| Color Palette | WCAG-compliant color palette generation with trend research |
| Auto Memory | Automatic memory seed templates for new project context initialization |
| Agent Persistence | Agent state management for session continuity |
| AIWG Hooks | Lifecycle event handlers: pre-session, post-write, workflow tracing |
| AIWG Utils | Core meta-utilities (auto-installed with any framework) |
| Droid Bridge | Factory Droid orchestration: multi-platform agent bridge |
| Star Prompt | Repository star prompt for success celebration |
Specialized AI personas deployed to your platform with defined tools, responsibilities, and operating rhythms.
| Domain | Agents | Examples |
|---|---|---|
| Testing & Quality | 11 | Test Engineer, Test Architect, Mutation Analyst, Regression Analyst, Laziness Detector, Reliability Engineer |
| Security & Compliance | 9 | Security Auditor, Security Architect, Compliance Checker, Privacy Officer, Citation Verifier |
| Architecture & Design | 12 | Architecture Designer, API Designer, Cloud Architect, System Analyst, Product Designer, Decision Matrix Expert |
| DevOps & Cloud | 8 | AWS Specialist, Azure Specialist, GCP Specialist, Kubernetes Expert, DevOps Engineer, Multi-Cloud Strategist |
| Backend & Data | 10 | Django Expert, Spring Boot Expert, Data Engineer, Database Optimizer, Software Implementer, Incident Responder |
| Frontend & Mobile | 6 | React Expert, Frontend Specialist, Mobile Developer, Accessibility Specialist, UX Lead |
| AI/ML & Performance | 5 | AI/ML Engineer, Performance Engineer, Cost Optimizer, Metrics Analyst |
| Code Quality | 11 | Code Reviewer, Debugger, Dead Code Analyzer, Technical Debt Analyst, Legacy Modernizer |
| Documentation | 7 | Technical Writer, Documentation Synthesizer, Documentation Archivist, Context Librarian |
| Requirements & Planning | 7 | Requirements Analyst, Requirements Reviewer, Intake Coordinator, RACI Expert |
| Agent/Tool Smiths | 9 | AgentSmith, CommandSmith, MCPSmith, SkillSmith, ToolSmith |
| Governance & Meta | 3 | Executive Orchestrator, Recovery Orchestrator, Migration Planner |
| Agent | What It Does |
|---|---|
| Forensics Orchestrator | Coordinates full investigation lifecycle from scoping through reporting |
| Triage Agent | Quick volatile data capture following RFC 3227 volatility order |
| Acquisition Agent | Evidence collection with chain of custody and SHA-256 hash verification |
| Log Analyst | Auth.log, syslog, journal, and application log analysis for brute force, privilege escalation, lateral movement |
| Persistence Hunter | Sweeps cron, systemd, SSH keys, LD_PRELOAD, PAM modules, kernel modules; maps findings to MITRE ATT&CK |
| Container Analyst | Docker, containerd, Kubernetes forensics: privilege escalation, container escapes, eBPF monitoring |
| Network Analyst | Connection state, DNS, traffic patterns: beaconing, C2, data exfiltration detection |
| Memory Analyst | Volatility 3 memory forensics: process analysis, rootkit detection, credential extraction |
| Cloud Analyst | AWS/Azure/GCP audit logs, IAM review, network flows, API activity anomaly detection |
| Timeline Builder | Multi-source event correlation: chronological incident timelines with attribution |
| IOC Analyst | IOC extraction, enrichment, STIX 2.1 formatting: actionable IOC register |
| Recon Agent | Target reconnaissance: system topology, services, users, network baselines |
| Reporting Agent | Structured forensic reports: executive summary, technical findings, timeline, remediation |
| Domain | Agents |
|---|---|
| Strategy | Campaign Strategist, Brand Guardian, Positioning Specialist, Market Researcher, Content Strategist, Channel Strategist |
| Creation | Copywriter, Content Writer, Email Marketer, Social Media Specialist, SEO Specialist, Graphic Designer, Art Director |
| Management | Campaign Orchestrator, Production Coordinator, Traffic Manager, Asset Manager, Workflow Coordinator |
| Analytics | Marketing Analyst, Data Analyst, Attribution Specialist, Reporting Specialist, Budget Planner |
| Communications | PR Specialist, Crisis Communications, Corporate Communications, Internal Communications, Media Relations |
Discovery Agent, Acquisition Agent, Documentation Agent, Citation Agent, Quality Agent, Archival Agent, Provenance Agent, Workflow Agent
Discography Analyst, Source Discoverer, Acquisition Manager, Quality Assessor, Metadata Curator, Completeness Tracker
Enforcement patterns that prevent common AI failure modes. Rules deploy automatically with their framework.
| Rule | Severity | What It Enforces |
|---|---|---|
| `no-attribution` | CRITICAL | AI tools are tools; never add attribution to commits, PRs, docs, or code |
| `token-security` | CRITICAL | Never hard-code tokens; use heredoc pattern for scoped lifetime; file permissions 600 |
| `versioning` | CRITICAL | CalVer YYYY.M.PATCH with NO leading zeros; npm rejects leading zeros |
| `citation-policy` | CRITICAL | Never fabricate citations, DOIs, or URLs; only cite verified sources; GRADE-appropriate hedging |
| `anti-laziness` | HIGH | Never delete tests to pass, skip tests, remove features, or weaken assertions; escalate after 3 failures |
| `executable-feedback` | HIGH | Execute tests before returning code; track execution history; max 3 retries with root cause analysis |
| `failure-mitigation` | HIGH | Detect and recover from 6 LLM failure archetypes: hallucination, context loss, instruction drift, safety, technical, consistency |
| `research-before-decision` | HIGH | Research the codebase before acting: IDENTIFY → SEARCH → EXTRACT → REASON → ACT → VERIFY |
| `instruction-comprehension` | HIGH | Fully parse all instructions before acting; track multi-part requests to completion |
| `subagent-scoping` | HIGH | One focused task per subagent; <20% context budget; no delegation chains deeper than 2 levels |
Actionable feedback, mention wiring, HITL gates, agent fallback, provenance tracking, TAO loop, reproducibility validation, SDLC orchestration, agent-friendly code, agent generation guardrails, artifact discovery, HITL patterns, human gate display, thought protocol, reasoning sections, few-shot examples, best output selection, reproducibility, progressive disclosure, conversable agent interface, auto-reply chains, criticality panel sizing, qualified references.
Research metadata (FAIR-compliant YAML frontmatter), index generation (auto-generated INDEX.md per FAIR F4).
Natural language workflow triggers. Say "what's the project status?" and the project-awareness skill activates.
| Category | Skills | Examples |
|---|---|---|
| Regression Testing | 12 | regression-check, regression-baseline, regression-bisect, regression-performance, regression-api-contract, regression-cicd-hooks, regression-learning |
| Voice & Writing | 6 | voice-create, voice-analyze, voice-apply, voice-blend, ai-pattern-detection, brand-compliance |
| Testing & Quality | 8 | auto-test-execution, test-coverage, test-sync, mutation-test, flaky-detect, flaky-fix, tdd-enforce, qa-protocol |
| Forensics & Security | 8 | linux-forensics, memory-forensics, cloud-forensics, container-forensics, sigma-hunting, log-analysis, ioc-extraction, supply-chain-forensics |
| SDLC & Workflow | 10 | sdlc-accelerate, sdlc-reports, gate-evaluation, approval-workflow, iteration-control, risk-cycle, parallel-dispatch, decision-support |
| Documentation | 6 | doc-sync, doc-scraper, doc-splitter, llms-txt-support, pdf-extractor, source-unifier |
| Artifacts & Traceability | 6 | artifact-orchestration, artifact-metadata, artifact-lookup, traceability-check, claims-validator, citation-guard |
| Research | 2 | grade-on-ingest, auto-provenance |
| Infrastructure | 5 | config-validator, template-engine, code-chunker, decompose-file, workspace-health |
| Iteration | 4 | agent-loop, issue-driven-ralph, cross-task-learner, reflection-injection |
| Other | 19 | performance-digest, competitive-intel, audience-synthesis, skill-builder, skill-enhancer, skill-packager, quality-checker, nl-router, tot-exploration, and more |
The SDLC framework implements a phase-gated development lifecycle with 90 specialized agents, 34 enforcement rules, and 170+ artifact templates. Natural language commands drive phase transitions with automated quality gates.
ββββββββββββ βββββββββββββββ ββββββββββββββββ ββββββββββββββ ββββββββββββββ
β CO