
aiwg

Cognitive architecture for AI-augmented software development. Specialized agents, structured workflows, and multi-platform deployment. Claude Code · Codex · Copilot · Cursor · Factory · Warp · Windsurf.


README

AIWG

Multi-agent AI framework for Claude Code, Copilot, Cursor, Warp, and 4 more platforms

188 agents, 50 CLI commands, 128 skills, 6 core frameworks + training marketplace plugin, 23 addons. SDLC workflows, digital forensics, research management, marketing operations, media curation, ops infrastructure, and fine-tuning dataset curation — all deployable with one command.

npm i -g aiwg        # install globally
aiwg use sdlc        # deploy SDLC framework



What AIWG Is

AIWG is a deployment tool and support utility for AI context. At its core, aiwg use copies markdown and YAML source files into the specific paths each AI platform looks in — .claude/agents/, ~/.codex/skills/, .cursor/rules/, .github/prompts/, and six more — so one source of truth works across eight platforms.

Around that core, AIWG ships utilities for things the base platforms do not handle on their own: persistent artifact memory (.aiwg/), background orchestration (aiwg mc), autonomous loops (aiwg ralph), artifact indexing (aiwg index), cost telemetry, health diagnostics, and more. Most are opt-in. The deployment layer works standalone as plain text files the platform reads natively.

Simple Building Blocks

AIWG ships five primitive artifact types. All are plain text:

  • Agents — specialized personas (Security Auditor, Test Architect) with a scoped toolset
  • Skills — natural-language workflows the platform auto-invokes on trigger phrases
  • Commands — explicit slash invocations (/flow-security-review-cycle)
  • Rules — enforcement directives the platform loads into every session
  • Behaviors — lifecycle hooks that fire on events (pre-write, post-session)

Each is a single .md file with YAML frontmatter. Nothing executes until an AI platform reads it.
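For instance, a minimal agent file might look like this. The frontmatter fields shown are illustrative, not AIWG's actual schema; field names vary by platform and framework:

```markdown
---
# Illustrative frontmatter; real field names depend on the target platform.
name: security-auditor
description: Reviews designs and diffs for security risks
tools: [read, grep]
---

You are a Security Auditor. Review the referenced artifacts for
authentication, authorization, and data-handling risks, and report
findings with severity and suggested remediation.
---
```

The platform that deploys the file decides when to load it; the file itself is inert text.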

Why It Compounds

Because the primitives are text, they compose without runtime coordination:

  • One agent file becomes one member of a 90-agent SDLC team that reviews architecture, tests, security, and compliance in parallel.
  • One skill becomes a natural-language entry point — "run security review" routes to the right multi-agent flow on every platform that supports skills.
  • One framework (SDLC, forensics, marketing) bundles dozens of agents + skills + rules + templates that cross-reference each other. Deploying a framework deploys a working multi-agent ecosystem.
  • The .aiwg/ directory gives those agents a shared memory — artifacts from Monday's requirements session are read by Thursday's test design.
  • Flows orchestrate Primary Author → Parallel Reviewers → Synthesizer → Archive patterns that no single-prompt workflow can match.

The leverage is not in any one file. It is that hundreds of small files — each independently readable and editable — snap together into workflows that would otherwise take a bespoke agent platform to build.

This is also where the research background lives. AIWG implements patterns from cognitive science (Miller 1956, Sweller 1988), multi-agent systems (Jacobs et al. 1991, MetaGPT, AutoGen), and software engineering (Cooper's stage-gate, FAIR Principles, W3C PROV) — applied as file conventions and deployment rules, not as a runtime you depend on.

What's Optional

These are CLI tools and services on top of the text-file substrate. The substrate works without them:

  • aiwg ralph — autonomous iterate-until-done loops
  • aiwg mc — background mission-control for parallel tasks
  • aiwg daemon — persistent session manager
  • aiwg index — searchable artifact index
  • aiwg mcp — MCP server for runtime tool access

Turn any of these on when you want persistence, parallelism, or automation. Turn them off and your deployed agents, skills, and rules still work — they are still text files the platform reads natively.

What AIWG Is Not

  • Not a prompt library. Prompts are the artifacts, not the product. The product is placing the right prompts where the platform finds them.
  • Not an LLM runtime. AIWG never calls a model. The AI platform you already use does that; AIWG configures what it sees.
  • Not a framework you import into your app. Nothing is imported at build time. Your project gets a .aiwg/ directory (artifacts) and a few provider-specific context dirs (deployed copies). Delete them and your app is unchanged.

Who It's For

If you have used AI coding assistants and thought "this is amazing for small tasks but falls apart on anything complex," AIWG is the missing infrastructure layer that scales AI assistance to multi-week projects.


What Problems Does AIWG Solve?

Base AI assistants (Claude, GPT-4, Copilot without frameworks) have three fundamental limitations:

1. No Memory Across Sessions

Each conversation starts fresh. The assistant has no idea what happened yesterday, what requirements you documented, or what decisions you made last week. You re-explain context every morning.

Without AIWG: Projects stall as context rebuilding eats time. A three-month project requires continuity, not fresh starts every session.

With AIWG: The .aiwg/ directory maintains 50-100+ interconnected artifacts across days, weeks, and months. Later phases build on earlier ones automatically because memory persists. Agents read prior work via @-mentions instead of regenerating from scratch.

The segmented structure also makes large projects tractable. As code files grow, the project doesn't become harder to reason about β€” agents load only the slice of memory relevant to the current task (@requirements/UC-001.md, @architecture/sad.md, @testing/test-plan.md) rather than the entire codebase. Each subdirectory is a focused knowledge domain that fits comfortably in context, while cross-references keep everything connected.
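Concretely, the @-mentions above imply a layout along these lines (an illustrative slice, not a complete listing):

```
.aiwg/
├── requirements/
│   └── UC-001.md        # loaded via @requirements/UC-001.md
├── architecture/
│   └── sad.md           # loaded via @architecture/sad.md
└── testing/
    └── test-plan.md     # loaded via @testing/test-plan.md
```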

The artifact index (aiwg index) takes this further. Without any tooling, agents often need to browse 3-6 documents before finding what they need. AIWG's structured artifacts reduce this to 2-3. With the index enabled, agents resolve artifact lookups in one query more often than not — a direct hit on the right requirement, architecture decision, or test case without browsing.

2. No Recovery Patterns

When AI generates broken code or flawed designs, you manually intervene, explain the problem, and hope the next attempt works. There is no systematic learning from failures, no structured retry, no checkpoint-and-resume.

Without AIWG: Research shows 47% of AI workflows produce inconsistent outputs without reproducibility constraints (R-LAM, Sureshkumar et al. 2026). Debugging is trial-and-error.

With AIWG: The agent loop implements closed-loop self-correction — execute, verify, learn from failure, adapt strategy, retry. External Ralph survives crashes and runs for 6-8+ hours autonomously. Debug memory accumulates failure patterns so the agent doesn't repeat mistakes.

3. No Quality Gates

Base assistants optimize for "sounds plausible," not "actually works." A general assistant critiques security, performance, and maintainability simultaneously — poorly. No domain specialization, no multi-perspective review, no human approval checkpoints.

Without AIWG: Production code ships without architectural review, security validation, or operational feasibility assessment.

With AIWG: 188 specialized agents provide domain expertise — Security Auditor reviews security, Test Architect reviews testability, Performance Engineer reviews scalability. Multi-agent review panels with synthesis. Human-in-the-loop gates at every phase transition. Research shows 84% cost reduction keeping humans on high-stakes decisions versus fully autonomous systems (Agent Laboratory, Schmidgall et al. 2025).


The Six Core Components

1. Memory — Structured Semantic Memory

The .aiwg/ directory is a persistent artifact repository storing requirements, architecture decisions, test strategies, risk registers, and deployment plans across sessions. This implements Retrieval-Augmented Generation patterns (Lewis et al., 2020) β€” agents retrieve from an evolving knowledge base rather than regenerating from scratch.

Each artifact is discoverable via @-mentions (e.g., @.aiwg/requirements/UC-001-login.md). Context sharing between agents happens through artifacts: the requirements analyst writes use cases, the architecture designer reads them.

2. Reasoning — Multi-Agent Deliberation with Synthesis

Instead of a single general-purpose assistant, AIWG provides 188 specialized agents organized by domain. Complex artifacts go through multi-agent review panels:

Architecture Document Creation:
  1. Architecture Designer drafts SAD
  2. Review Panel (3-5 agents run in parallel):
     - Security Auditor     → threat perspective
     - Performance Engineer → scalability perspective
     - Test Architect       → testability perspective
     - Technical Writer     → clarity and consistency
  3. Documentation Synthesizer merges all feedback
  4. Human approval gate → accept, iterate, or escalate

Research shows 17.9% accuracy improvement with multi-path review on complex tasks (Wang et al., GSM8K benchmarks, 2023). Agent specialization means security review is done by a security specialist, not a generalist.

3. Learning — Closed-Loop Self-Correction (Ralph)

Ralph executes tasks iteratively, learns from failures, and adapts strategy based on error patterns. Research from Roig (2025) shows recovery capability — not initial correctness — predicts agentic task success.

Ralph Iteration:
  1. Execute task with current strategy
  2. Verify results (tests pass, lint clean, types check)
  3. If failure: analyze root cause → extract structured learning → adapt strategy
  4. Log iteration state (checkpoint for resume)
  5. Repeat until success or escalate to human after 3 failed attempts
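The loop above can be sketched in a few lines of shell. This is an illustrative stand-in, not AIWG source: `verify` is a stub that "passes" on the third attempt, standing in for the real test/lint/type checks.

```shell
#!/bin/sh
# Illustrative sketch of Ralph's outer loop (not the real implementation).
attempt=1
max_attempts=3
verify() { [ "$attempt" -ge 3 ]; }   # stub check: succeeds on the 3rd try
while ! verify; do
  echo "attempt $attempt failed: analyze root cause, record learning, adapt"
  attempt=$((attempt + 1))
  if [ "$attempt" -gt "$max_attempts" ]; then
    echo "escalating to human after $max_attempts failed attempts"
    exit 1
  fi
done
echo "success on attempt $attempt"
```

The real loop also checkpoints iteration state, so a crashed run can resume instead of restarting from scratch.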

External Ralph adds crash resilience: PID file tracking, automatic restart, cross-session persistence. Tasks run for 6-8+ hours surviving terminal disconnects and system reboots.

4. Verification — Bidirectional Traceability

AIWG maintains links between documentation and code to ensure artifacts stay synchronized:

// src/auth/login.ts
/**
 * @implements @.aiwg/requirements/UC-001-login.md
 * @architecture @.aiwg/architecture/SAD.md#section-4.2
 * @tests @test/unit/auth/login.test.ts
 */
export function authenticateUser(credentials: Credentials): Promise<AuthResult> {
  // ... implementation elided in this excerpt
}

Verification types: Doc → Code, Code → Doc, Code → Tests, Citations → Sources. The retrieval-first citation architecture reduces citation hallucination from 56% to 0% (LitLLM benchmarks, ServiceNow 2025).
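Because the links are plain doc-comment tags, the Doc → Code direction can be checked with nothing more than a text search. A hedged sketch (the file and tag mirror the snippet above; AIWG's actual verifier is more involved):

```shell
# Sketch: find code files that declare the requirement they implement.
# Not AIWG's real verifier -- just the grep-level idea behind it.
mkdir -p demo/src/auth
cat > demo/src/auth/login.ts <<'EOF'
/**
 * @implements @.aiwg/requirements/UC-001-login.md
 */
export function authenticateUser() {}
EOF
grep -rl "@implements" demo/src     # lists files carrying traceability tags
```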

5. Planning — Phase Gates with Cognitive Load Management

AIWG structures work using Cooper's Stage-Gate methodology (1990), breaking multi-month projects into bounded phases with explicit quality criteria and human approval:

Inception → Elaboration → Construction → Transition → Production
    LOM          ABM            IOC            PR

Cognitive load optimization follows Miller's 7±2 limits (1956) and Sweller's worked examples approach (1988):

  • 4 phases (not 12)
  • 3-5 artifacts per phase (not 20)
  • 5-7 section headings per template (not 15)
  • 3-5 reviewers per panel (not 10)

6. Style — Controllable Voice Generation

Voice profiles provide continuous control over AI writing style using 12 parameters (formality, technical depth, sentence variety, jargon density, personal tone, humor, directness, examples ratio, uncertainty acknowledgment, opinion strength, transition style, authenticity markers).

Built-in voices: technical-authority (docs, RFCs), friendly-explainer (tutorials), executive-brief (summaries), casual-conversational (blogs, social). Create custom voices from your existing content with /voice-create.


A Real Project Walkthrough

Here is how the six components work together across a project lifecycle. How long each phase takes depends entirely on the project — AIWG is a force multiplier, not a clock. Most projects arrive at a complete, reviewed document set in hours to a day. What takes time is the human work that matters: reviewing, editing, and making decisions. The more input your team provides, the better the output. AIWG memory lets operators participate through the tools they already use — industry-standard documents and templates, issues, and knowledge bases.

Inception

/intake-wizard "Build customer portal with real-time chat" --interactive

Memory: Intake forms capture goals, constraints, stakeholders in .aiwg/intake/
Planning: Executive Orchestrator guides through structured questionnaire
Reasoning: Requirements Analyst drafts initial use cases, Product Designer reviews UX
Verification: Requirements reference intake forms, ensuring alignment
Human Gate: Stakeholder reviews intake → approves transition to Elaboration

Elaboration

/flow-inception-to-elaboration

Memory: Architecture doc, ADRs, threat model, test strategy accumulate in .aiwg/
Reasoning: Multi-agent review panel — Architecture Designer drafts; Security Auditor, Performance Engineer, and Test Architect critique in parallel; Documentation Synthesizer merges
Learning: Ralph iterates on ADRs (generate options, evaluate against constraints, refine)
Style: Technical documents use technical-authority, stakeholder summaries use executive-brief
Human Gate: Architect reviews SAD, security team approves threat model

Construction

/flow-elaboration-to-construction
/ralph "Implement authentication module" --completion "npm test passes"

Learning: Ralph handles implementation iterations — execute, verify (run tests), learn ("async race condition in token refresh"), adapt (add synchronization), retry
Verification: Code references requirements (@implements UC-001), tests reference code
Memory: Test plans, implementation, deployment scripts accumulate across iterations
Human Gate: Code review approves merges, QA approves test results

Transition

/flow-deploy-to-production
/flow-hypercare-monitoring 14

Planning: Deployment checklist — monitoring, rollback plan, incident response
Learning: Ralph retries deployment steps if validation fails
Verification: Deployment scripts reference architecture (which services, what order)
Human Gate: Operations team reviews deployment plan → approves production release


Quantified Claims and Evidence

AIWG makes specific, falsifiable claims backed by peer-reviewed research:

| Claim | Evidence | Source |
|-------|----------|--------|
| 84% cost reduction with human-in-the-loop vs fully autonomous | Agent Laboratory study | Schmidgall et al. (2025) |
| 47% workflow failure rate without reproducibility constraints | R-LAM evaluation | Sureshkumar et al. (2026) |
| 0% citation hallucination with retrieval-first vs 56% generation-only | LitLLM benchmarks | ServiceNow (2025) |
| 17.9% accuracy improvement with multi-path review | GSM8K benchmarks | Wang et al. (2023) |
| 18.5x improvement with tree search on planning tasks | Game of 24 results | Yao et al. (2023) |

Full references: docs/research/


When to Use AIWG (and When Not To)

Good Fit

Multi-week or multi-month projects where requirements evolve, multiple stakeholders have different concerns, quality gates are required, auditability matters, or context exceeds conversation limits.

Examples: New product features with architecture/security/operational implications, legacy system migrations requiring phased rollback strategies, research projects needing literature review and reproducibility, compliance-heavy domains (healthcare, finance, aerospace) needing audit trails.

Not the Best Fit

Single-session tasks where no memory is needed, quality gates are overkill, and overhead exceeds value.

Examples: "Write a Python script to parse this CSV," "Fix this typo," "Explain how this code works."

The Trade-off

AIWG adds structure (templates, phases, gates) that slows trivial tasks but scales to complex multi-week workflows. If your project fits in a single conversation, use a base assistant. If it spans days, weeks, or months, AIWG provides the infrastructure to maintain quality and context.

User intent → AIWG CLI → Deploy agents + rules + templates → AI platform
                │                                                │
                ▼                                                ▼
         "aiwg use sdlc"                              Claude Code / Copilot /
                │                                     Cursor / Warp / Factory /
                ▼                                     OpenCode / Codex / Windsurf
         ┌──────────────┐
         │ 188 Agents   │  Specialized AI personas with domain expertise
         │ 50 Commands  │  CLI + slash commands for workflow automation
         │ 128 Skills   │  Natural language workflow triggers
         │ 35 Rules     │  Enforcement patterns (security, quality, anti-laziness)
         │ 334 Templates│  SDLC artifact templates with progressive disclosure
         └──────────────┘
                │
                ▼
         .aiwg/ artifacts ← Persistent project memory across sessions

How It Works

AIWG orchestrates multi-agent workflows where specialized agents collaborate on complex tasks:

You: "transition to elaboration phase"

AIWG: [Step 1] Requirements Analyst   → Analyze vision document, generate use case briefs
      [Step 2] Architecture Designer  → Baseline architecture, identify technical risks     } parallel
      [Step 3] Security Architect     → Threat model, security requirements                 }
      [Step 4] Documentation Synth.   → Merge reviews into Architecture Baseline Milestone
      [Step 5] Human Gate             → GO / CONDITIONAL_GO / NO_GO decision
      [Step 6] → Next phase or iterate

The orchestration pattern: Primary Author → Parallel Reviewers → Synthesizer → Human Gate → Archive. Agents run in parallel where possible, with human-in-the-loop checkpoints at phase transitions.
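That fan-out/fan-in shape is ordinary parallel orchestration. A sketch in shell (reviewer names come from the example above; everything else is illustrative, not AIWG source):

```shell
#!/bin/sh
# Illustrative Primary Author -> Parallel Reviewers -> Synthesizer sketch.
outdir=$(mktemp -d)
review() { echo "$1: review complete" > "$outdir/$1.txt"; }  # stand-in reviewer
review security & review performance & review testability &  # fan out
wait                                  # fan in: block until all reviewers finish
cat "$outdir"/*.txt                   # a synthesizer would merge these reports
```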


Features

  • 188 specialized agents — domain experts across testing, security, architecture, DevOps, cloud, frontend, backend, data engineering, documentation, and more
  • 50 CLI commands — framework deployment, project scaffolding, iterative execution, metrics, reproducibility validation
  • 128 workflow skills — natural language triggers for regression testing, forensics, voice profiles, quality gates, and CI/CD integration
  • 35 enforcement rules — anti-laziness detection, token security, citation integrity, executable feedback, failure mitigation across 6 LLM archetypes
  • 334 artifact templates — progressive disclosure templates for requirements, architecture, testing, security, deployment, and more
  • 8-platform support — deploy to Claude Code, Copilot, Cursor, Warp, Factory AI, OpenCode, Codex, and Windsurf
  • 6 core frameworks + training marketplace plugin — SDLC, Digital Forensics, Marketing Operations, Research Management, Media Curation, Ops Infrastructure, plus aiwg-training for fine-tuning dataset curation (corpus-to-dataset pipeline with DPO/KTO/ORPO/SimPO export)
  • 23 addons — semantic-memory kernel, llm-wiki (Obsidian-native knowledge base), RLM recursive decomposition, voice profiles, testing quality, mutation testing, UAT automation, and more
  • Agent Loop — iterative task execution with automatic error recovery and crash resilience (6-8 hour sessions)
  • RLM addon — recursive context decomposition for processing 10M+ tokens via sub-agent delegation
  • YAML metalanguage — declarative schema-validated workflow definitions (JSON Schema 2020-12)
  • MCP server — Model Context Protocol integration for tool-based AI workflows
  • Bidirectional traceability — @-mention system linking requirements → architecture → code → tests
  • FAIR-aligned artifacts — W3C PROV provenance, GRADE quality assessment, persistent REF-XXX identifiers
  • Reproducibility validation — deterministic execution modes, checkpoints, configuration snapshots

Quick Start

Prerequisites: Node.js >=18.0.0 and an AI platform (Claude Code, GitHub Copilot, Cursor, Warp Terminal, or others). See Prerequisites Guide for details.

Install & Deploy

# Install globally
npm install -g aiwg

# Deploy to your project
cd your-project
aiwg use sdlc              # Full SDLC framework (90 agents, 34 rules, 170+ templates)
aiwg use forensics         # Digital forensics & incident response (13 agents, 10 skills)
aiwg use marketing         # Marketing operations (37 agents, 87+ templates)
aiwg use media-curator     # Media archive management (6 agents, 9 commands)
aiwg use research          # Research workflow automation (8 agents, 8-stage pipeline)
aiwg use rlm               # RLM addon (recursive context decomposition)
aiwg use all               # Everything

# Or scaffold a new project
aiwg new my-project

# Check installation health
aiwg doctor

Claude Code Plugin (Alternative)

/plugin marketplace add jmagly/ai-writing-guide
/plugin install sdlc@aiwg

Multi-Platform Deployment

aiwg use sdlc                          # Claude Code (default)
aiwg use sdlc --provider copilot       # GitHub Copilot
aiwg use sdlc --provider cursor        # Cursor
aiwg use sdlc --provider warp          # Warp Terminal
aiwg use sdlc --provider factory       # Factory AI
aiwg use sdlc --provider opencode      # OpenCode
aiwg use sdlc --provider openai        # OpenAI/Codex
aiwg use sdlc --provider windsurf      # Windsurf

What You Get

Frameworks (6)

| Framework | Agents | Templates | What It Does |
|-----------|--------|-----------|--------------|
| SDLC Complete | 90 | 170+ | Full software development lifecycle — Inception through Production with multi-agent orchestration, quality gates, and DORA metrics |
| Forensics Complete | 13 | 8 | Digital forensics and incident response — evidence acquisition, timeline reconstruction, IOC extraction, Sigma rule hunting. NIST SP 800-86, MITRE ATT&CK, STIX 2.1 |
| Media/Marketing Kit | 37 | 87+ | End-to-end marketing operations — strategy, content creation, campaign management, brand compliance, analytics, and reporting |
| Media Curator | 6 | — | Intelligent media archive management — discography analysis, source discovery, quality filtering, metadata curation, multi-platform export (Plex, Jellyfin, MPD) |
| Research Complete | 8 | 6 | Academic research automation — paper discovery, citation management, RAG-based summarization, GRADE quality scoring, FAIR compliance, W3C PROV provenance |
| Ops Complete | 2 | 3 | Operational infrastructure — incident management, runbooks, troubleshooting workflows |

Addons (23)

| Addon | What It Does |
|-------|--------------|
| RLM | Recursive context decomposition — process 10M+ tokens via sub-agent delegation with parallel fan-out |
| Writing Quality | Content validation, AI pattern detection, authentic voice enforcement |
| Testing Quality | TDD enforcement, mutation testing, flaky test detection and repair |
| Voice Framework | 4 built-in voice profiles (technical-authority, friendly-explainer, executive-brief, casual-conversational) with create/blend/apply skills |
| UAT-MCP Toolkit | User acceptance testing with MCP-powered test execution, coverage tracking, and regression detection |
| AIWG Evals | Agent evaluation framework — archetype resistance testing (Roig 2025), performance benchmarks, quality scoring |
| Ralph | Iterative task execution engine — automatic error recovery, crash resilience, completion tracking |
| Security | Security testing, vulnerability scanning, SAST integration, compliance validation |
| Context Curator | Context pre-filtering to remove distractors — production-grade agent reliability |
| Verbalized Sampling | Probability distribution prompting — 1.6-2.1x output diversity improvement |
| Guided Implementation | Bounded iteration control for issue-to-code automation |
| Skill Factory | Dynamic skill generation and packaging at runtime |
| Doc Intelligence | Document analysis, PDF extraction, documentation site scraping |
| Color Palette | WCAG-compliant color palette generation with trend research |
| Auto Memory | Automatic memory seed templates for new project context initialization |
| Agent Persistence | Agent state management for session continuity |
| AIWG Hooks | Lifecycle event handlers — pre-session, post-write, workflow tracing |
| AIWG Utils | Core meta-utilities (auto-installed with any framework) |
| Droid Bridge | Factory Droid orchestration — multi-platform agent bridge |
| Star Prompt | Repository star prompt for success celebration |

Agents (188)

Specialized AI personas deployed to your platform with defined tools, responsibilities, and operating rhythms.

SDLC Agents (90)

| Domain | Agents | Examples |
|--------|--------|----------|
| Testing & Quality | 11 | Test Engineer, Test Architect, Mutation Analyst, Regression Analyst, Laziness Detector, Reliability Engineer |
| Security & Compliance | 9 | Security Auditor, Security Architect, Compliance Checker, Privacy Officer, Citation Verifier |
| Architecture & Design | 12 | Architecture Designer, API Designer, Cloud Architect, System Analyst, Product Designer, Decision Matrix Expert |
| DevOps & Cloud | 8 | AWS Specialist, Azure Specialist, GCP Specialist, Kubernetes Expert, DevOps Engineer, Multi-Cloud Strategist |
| Backend & Data | 10 | Django Expert, Spring Boot Expert, Data Engineer, Database Optimizer, Software Implementer, Incident Responder |
| Frontend & Mobile | 6 | React Expert, Frontend Specialist, Mobile Developer, Accessibility Specialist, UX Lead |
| AI/ML & Performance | 5 | AI/ML Engineer, Performance Engineer, Cost Optimizer, Metrics Analyst |
| Code Quality | 11 | Code Reviewer, Debugger, Dead Code Analyzer, Technical Debt Analyst, Legacy Modernizer |
| Documentation | 7 | Technical Writer, Documentation Synthesizer, Documentation Archivist, Context Librarian |
| Requirements & Planning | 7 | Requirements Analyst, Requirements Reviewer, Intake Coordinator, RACI Expert |
| Agent/Tool Smiths | 9 | AgentSmith, CommandSmith, MCPSmith, SkillSmith, ToolSmith |
| Governance & Meta | 3 | Executive Orchestrator, Recovery Orchestrator, Migration Planner |

Forensics Agents (13)

| Agent | What It Does |
|-------|--------------|
| Forensics Orchestrator | Coordinates full investigation lifecycle from scoping through reporting |
| Triage Agent | Quick volatile data capture following RFC 3227 volatility order |
| Acquisition Agent | Evidence collection with chain of custody and SHA-256 hash verification |
| Log Analyst | Auth.log, syslog, journal, and application log analysis for brute force, privilege escalation, lateral movement |
| Persistence Hunter | Sweeps cron, systemd, SSH keys, LD_PRELOAD, PAM modules, kernel modules — maps to MITRE ATT&CK |
| Container Analyst | Docker, containerd, Kubernetes forensics — privilege escalation, container escapes, eBPF monitoring |
| Network Analyst | Connection state, DNS, traffic patterns — beaconing, C2, data exfiltration detection |
| Memory Analyst | Volatility 3 memory forensics — process analysis, rootkit detection, credential extraction |
| Cloud Analyst | AWS/Azure/GCP audit logs, IAM review, network flows, API activity anomaly detection |
| Timeline Builder | Multi-source event correlation — chronological incident timelines with attribution |
| IOC Analyst | IOC extraction, enrichment, STIX 2.1 formatting — actionable IOC register |
| Recon Agent | Target reconnaissance — system topology, services, users, network baselines |
| Reporting Agent | Structured forensic reports — executive summary, technical findings, timeline, remediation |

Marketing Agents (37)

| Domain | Agents |
|--------|--------|
| Strategy | Campaign Strategist, Brand Guardian, Positioning Specialist, Market Researcher, Content Strategist, Channel Strategist |
| Creation | Copywriter, Content Writer, Email Marketer, Social Media Specialist, SEO Specialist, Graphic Designer, Art Director |
| Management | Campaign Orchestrator, Production Coordinator, Traffic Manager, Asset Manager, Workflow Coordinator |
| Analytics | Marketing Analyst, Data Analyst, Attribution Specialist, Reporting Specialist, Budget Planner |
| Communications | PR Specialist, Crisis Communications, Corporate Communications, Internal Communications, Media Relations |

Research Agents (8)

Discovery Agent, Acquisition Agent, Documentation Agent, Citation Agent, Quality Agent, Archival Agent, Provenance Agent, Workflow Agent

Media Curator Agents (6)

Discography Analyst, Source Discoverer, Acquisition Manager, Quality Assessor, Metadata Curator, Completeness Tracker


Rules (35)

Enforcement patterns that prevent common AI failure modes. Rules deploy automatically with their framework.

Core Rules (10) — Always Active

| Rule | Severity | What It Enforces |
|------|----------|------------------|
| no-attribution | CRITICAL | AI tools are tools — never add attribution to commits, PRs, docs, or code |
| token-security | CRITICAL | Never hard-code tokens; use heredoc pattern for scoped lifetime; file permissions 600 |
| versioning | CRITICAL | CalVer YYYY.M.PATCH with NO leading zeros; npm rejects leading zeros |
| citation-policy | CRITICAL | Never fabricate citations, DOIs, or URLs; only cite verified sources; GRADE-appropriate hedging |
| anti-laziness | HIGH | Never delete tests to pass, skip tests, remove features, or weaken assertions; escalate after 3 failures |
| executable-feedback | HIGH | Execute tests before returning code; track execution history; max 3 retries with root cause analysis |
| failure-mitigation | HIGH | Detect and recover from 6 LLM failure archetypes: hallucination, context loss, instruction drift, safety, technical, consistency |
| research-before-decision | HIGH | Research codebase before acting: IDENTIFY → SEARCH → EXTRACT → REASON → ACT → VERIFY |
| instruction-comprehension | HIGH | Fully parse all instructions before acting; track multi-part requests to completion |
| subagent-scoping | HIGH | One focused task per subagent; <20% context budget; no delegation chains deeper than 2 levels |
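One plausible reading of token-security's heredoc-plus-permissions pattern, sketched in shell (the paths and placeholder value are invented for illustration; the rule text above is the authority):

```shell
#!/bin/sh
# Illustrative token handling: no token hard-coded in scripts,
# file created owner-only, lifetime scoped to the step that needs it.
tokendir=$(mktemp -d)
umask 077                          # new files get no group/world access
cat > "$tokendir/token" <<'EOF'
placeholder-token-value
EOF
chmod 600 "$tokendir/token"        # enforce 600 explicitly, per the rule
# ...use the token here, then delete it to end its lifetime:
# rm -f "$tokendir/token"
```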

SDLC Rules (34) — Active with Framework

Actionable feedback, mention wiring, HITL gates, agent fallback, provenance tracking, TAO loop, reproducibility validation, SDLC orchestration, agent-friendly code, agent generation guardrails, artifact discovery, HITL patterns, human gate display, thought protocol, reasoning sections, few-shot examples, best output selection, reproducibility, progressive disclosure, conversable agent interface, auto-reply chains, criticality panel sizing, qualified references.

Research Rules (2) — Active with Research

Research metadata (FAIR-compliant YAML frontmatter), index generation (auto-generated INDEX.md per FAIR F4).


Skills (128)

Natural language workflow triggers. Say "what's the project status?" and the project-awareness skill activates.

| Category | Skills | Examples |
|----------|--------|----------|
| Regression Testing | 12 | regression-check, regression-baseline, regression-bisect, regression-performance, regression-api-contract, regression-cicd-hooks, regression-learning |
| Voice & Writing | 6 | voice-create, voice-analyze, voice-apply, voice-blend, ai-pattern-detection, brand-compliance |
| Testing & Quality | 8 | auto-test-execution, test-coverage, test-sync, mutation-test, flaky-detect, flaky-fix, tdd-enforce, qa-protocol |
| Forensics & Security | 8 | linux-forensics, memory-forensics, cloud-forensics, container-forensics, sigma-hunting, log-analysis, ioc-extraction, supply-chain-forensics |
| SDLC & Workflow | 10 | sdlc-accelerate, sdlc-reports, gate-evaluation, approval-workflow, iteration-control, risk-cycle, parallel-dispatch, decision-support |
| Documentation | 6 | doc-sync, doc-scraper, doc-splitter, llms-txt-support, pdf-extractor, source-unifier |
| Artifacts & Traceability | 6 | artifact-orchestration, artifact-metadata, artifact-lookup, traceability-check, claims-validator, citation-guard |
| Research | 2 | grade-on-ingest, auto-provenance |
| Infrastructure | 5 | config-validator, template-engine, code-chunker, decompose-file, workspace-health |
| Iteration | 4 | agent-loop, issue-driven-ralph, cross-task-learner, reflection-injection |
| Other | 19 | performance-digest, competitive-intel, audience-synthesis, skill-builder, skill-enhancer, skill-packager, quality-checker, nl-router, tot-exploration, and more |
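Skills, like the other artifact types, are plain text. A minimal sketch of what a skill definition could look like, assuming a hypothetical frontmatter format (the actual AIWG skill schema may differ):

```markdown
---
# Hypothetical skill definition; field names are illustrative.
name: project-awareness
description: Report project status when the user asks about progress.
triggers:
  - "what's the project status?"
---
1. Read `.aiwg/` artifacts for the active phase and open work items.
2. Summarize phase, gate status, and blockers in a short report.
```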

Framework Deep Dives

SDLC Complete β€” Full Software Development Lifecycle

The SDLC framework implements a phase-gated development lifecycle with 90 specialized agents, 34 enforcement rules, and 170+ artifact templates. Natural language commands drive phase transitions with automated quality gates.


Release History

- **v2026.3.2** (Low urgency, 3/4/2026)
  Service release fixing three `aiwg index` bugs and upgrading dev mode to delegate the full CLI to your local build. Fixed: `aiwg index stats/query/deps` without `--graph` failed with "No artifact index found" (all three commands checked the legacy `.aiwg/.index/metadata.json` root path instead of the graph subdirectories `project/` and `codebase/`; they now check graph subdirs first with a legacy fallback), and `--use-dev` only changed framework content (the CLI binary still …).
- **v2026.3.1** (Low urgency, 3/4/2026)
  "Discovery & Durability" release. Highlights: `aiwg index` subsystem (agents can search, query deps, and inspect stats across `.aiwg/` artifacts); forensics agent gap-fills (6 agents and 3 commands rewritten, plus a 660-line integration test suite); Color Palette addon (accessible palette generation with WCAG contrast checking); Ralph crash resilience (SnapshotManager fix, state cleanup, e2e test…).
- **v2026.2.14** (Low urgency, 2/28/2026)
  Highlights: forensics-complete DFIR framework (full digital forensics lifecycle — 13 agents, 9 commands, 10 skills, Sigma hunting, evidence chain-of-custody); codebase manageability tooling (rules and tools to keep agent-generated code within context window limits); 17 specialist agents + 7 team configs (cloud platform experts for AWS/Azure/GCP, framework specialists, pre-built team compositions); UAT-MCP t…
- **v2026.2.13** (Low urgency, 2/27/2026)
  Site deploy on tag push (#355): pushing a version tag now auto-triggers an aiwg.io rebuild so the marketing site stays current. Skill/command name collision fix: providers now prefer skills over commands when both share a name, preventing silent overwrites. Install: `npm install -g aiwg@2026.2.13`. Full changelog: https://github.com/jmagly/aiwg/compare/v2026.2.12...v2026.2.13
- **v2026.2.12** (Low urgency, 2/27/2026)
  Highlights: `aiwg doc-sync` command (detect and auto-fix documentation-code drift with 8 parallel auditors); `aiwg sdlc-accelerate` command (go from idea to construction-ready with a single command); 2 new skills (`doc-sync` and `sdlc-accelerate`, with natural language triggers); HashiCorp references removed (framework is now vendor-neutral for infrastructure tooling); CLI reference corrected (Co…).
- **v2026.2.10** (Low urgency, 2/15/2026)
  Full changelog: https://github.com/jmagly/aiwg/compare/v2026.2.9...v2026.2.10
- **v2026.2.9** (Low urgency, 2/15/2026)
  "Manifest Native" release, released 2026-02-15. Completes provider normalization around manifest-driven discovery so framework/addon deployment no longer depends on scattered provider-specific curation. Codex now receives the same research and media-curator deployment coverage as the other providers. Highlights: manifest-native provider deployment (framework artifacts are discovered from m…).
- **v2026.2.8** (Low urgency, 2/14/2026)
  Highlights: `aiwg use media-curator` (Media Curator framework now deployable standalone across all 8 providers); `aiwg use research` (Research Complete framework now deployable standalone across all 8 providers); complete provider list (all 8 providers shown in `aiwg help`; was 4); documentation audit (stale agent counts, deprecated CLI syntax, and missing framework refs all fixed).
- **v2026.2.7** (Low urgency, 2/14/2026)
  New media-curator framework for intelligent media archive management — 31 files providing the full pipeline from discography analysis through multi-platform export. 6 agents (discography analysis, source discovery, acquisition, quality assessment, metadata curation, completeness tracking); 9 commands + 9 skills (full …).
- **v2026.2.4** (Low urgency, 2/9/2026)
  Highlights: `/address-issues` command (issue-thread-driven ralph loops with 2-way human-AI collaboration via issue comments); context window budget (configure `AIWG_CONTEXT_WINDOW` to control parallel subagent limits on local/GPU systems); `--interactive` and `--guidance` (standard AIWG parameters for discovery prompts and upfront direction). `/address-issues` — issue-driven Ralph loop (#333): turns your …
- **v2026.2.2** (Low urgency, 2/9/2026)
  Fixes: glob dependency updated from 11.x to 13.x to resolve a deprecation warning and security vulnerabilities; automated npm publishing (CI now publishes to both Gitea and public npmjs.org on tag push, using a separate `NPMJS_TOKEN` granular access token that bypasses 2FA). Install: `npm install -g aiwg@2026.2.2`. See [CHANGELOG.md](https://github.com/jmagly/ai-writing-guide/blob/main/CHANGELOG.md) for details.
- **v2026.2.0** (Low urgency, 2/8/2026)
  "Universal Deploy" release — the largest AIWG release to date. Universal deployment ensures all 8 coding platforms receive all 4 artifact types (32 combinations). External Ralph loops enable crash-resilient multi-session task execution, spanning 95 commits. Highlights: universal deployment (all 8 providers now receive all 4 artifact types); external Ralph loop (crash-resilient iterat…).
- **v2026.1.7** (Low urgency, 1/14/2026)
  Install: `npm install -g aiwg`. Published to npm with provenance attestation. Full changelog: https://github.com/jmagly/ai-writing-guide/compare/v2026.1.6...v2026.1.7
- **v2026.1.6** (Low urgency, 1/14/2026)
  Highlights: complete addon discovery (all providers now properly deploy Ralph and other addons); 8 providers fixed (Claude, Codex, Copilot, Cursor, Factory, OpenCode, Warp, Windsurf); Ralph loop commands (new iterative task execution with `/ralph`); issue management (new `/issue-create`, `/issue-list`, `/issue-update` commands); Smith generators (new `/smith-agenticdef`, `/smith-mcpdef`, `/smith-sysdef`).
- **v2026.01.4** (Low urgency, 1/14/2026)
  Install: `npm install -g aiwg`. Published to npm with provenance attestation. Full changelog: https://github.com/jmagly/ai-writing-guide/compare/v2026.01.3...v2026.01.4
- **v2026.01.3** (Low urgency, 1/13/2026)
  "Ralph Loop & Issue Management" release. Highlights: Ralph Loop (iterative AI task execution — "iteration beats perfection" methodology); `--interactive` & `--guidance` (all commands now support interactive mode and custom guidance); unified issue management (create, update, list, and sync issues across Gitea/GitHub/Jira/Linear or local files); token security patterns (secure token loading via env vars …).
- **v2024.12.4** (Low urgency, 12/12/2025)
  Published to npm with provenance attestation. Changes: OpenAI Codex CLI integration by @jmagly in https://github.com/jmagly/ai-writing-guide/pull/62; Cursor IDE integration by @jmagly in https://github.com/jmagly/ai-writing-guide/pull/63; OpenCode provider integration by @jmagly in https://github.com/jmagly/ai-writing…
- **v2024.12.3** (Low urgency, 12/11/2025)
  Published to npm with provenance attestation. Changes: Feature/2024.12.3 "it just works" by @jmagly in https://github.com/jmagly/ai-writing-guide/pull/61. Full changelog: https://github.com/jmagly/ai-writing-guide/compare/v2024.12.2...v2024.12.3
- **v2024.12.2** (Low urgency, 12/11/2025)
  Skill Seekers integration and usability improvements: Skill Seekers community integration with two new addons (PRs #206, #207, #208 to the Skill Seekers repo), workspace health guidance for transition points, and standardized command usability across all flow commands. Added: doc-intelligence addon (intelligent documentation analysis and generation); skill-factory addon (automated skill generation from natural language de…).


Similar Packages

- **skiller-desktop-skills-manager** (v0.2.11) — AI agent skills manager for Claude Code, Cursor, Codex and more: install, sync, and manage skills from one desktop app.
- **skillfoundry** (v2.0.61) — AI engineering framework with quality gates, persistent memory, and multi-platform support. Works inside Claude Code, Cursor, Copilot, Codex, and Gemini.
- **@neyugn/agent-kits** (v0.5.8) — Universal AI agent toolkit: skills, agents, and workflows for any AI coding assistant.
- **agentic-config** (v0.3.0-alpha) — Project-agnostic, composable AI workflow automation via pi packages and Claude Code plugins.
- **MCAF** (v1.2.64) — A framework for building software products together with AI coding agents.