PDD (Prompt-Driven Development) is a toolkit for AI-powered code generation and maintenance.
Getting started is simple:
```bash
# Install and run
uv tool install pdd-cli
pdd setup
pdd connect
```

This launches a web interface at localhost:9876 where you can:
- Implement GitHub issues automatically
- Generate and test code from prompts
- Manage your PDD projects visually
For CLI users, PDD also offers powerful agentic commands that implement GitHub issues automatically:
- `pdd change <issue-url>` - Implement feature requests (12-step workflow)
- `pdd bug <issue-url>` - Create failing tests for bugs
- `pdd fix <issue-url>` - Fix the failing tests
- `pdd generate <issue-url>` - Generate architecture.json from a PRD issue (11-step workflow)
- `pdd test <issue-url>` - Generate UI tests from issue descriptions (18-step workflow with exploratory testing, contract validation, accessibility audits)
For prompt-based workflows, the sync command automates the complete development cycle with intelligent decision-making, real-time visual feedback, and sophisticated state management.
For a detailed explanation of the concepts, architecture, and benefits of Prompt-Driven Development, please refer to our full whitepaper. This document provides an in-depth look at the PDD philosophy, its advantages over traditional development, and includes benchmarks and case studies.
Read the Full Whitepaper with Benchmarks
Also see the Prompt-Driven Development Doctrine for core principles and practices: docs/prompt-driven-development-doctrine.md
On macOS, you'll need to install some prerequisites before installing PDD:
- Install Xcode Command Line Tools (required for Python compilation):

  ```bash
  xcode-select --install
  ```
- Install Homebrew (recommended package manager for macOS):

  ```bash
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  ```

  After installation, add Homebrew to your PATH:

  ```bash
  echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile && eval "$(/opt/homebrew/bin/brew shellenv)"
  ```
- Install Python (if not already installed):

  ```bash
  # Check if Python is installed
  python3 --version

  # If Python is not found, install it via Homebrew
  brew install python
  ```

  Note: Recent versions of macOS no longer ship with Python pre-installed. PDD requires Python 3.8 or higher. The `brew install python` command installs the latest Python 3 version.
We recommend installing PDD using the uv package manager for better dependency management and automatic environment configuration:
```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install PDD using uv tool install
uv tool install pdd-cli
```

This installation method ensures:
- Faster installations with optimized dependency resolution
- Automatic environment setup without manual configuration
- Proper handling of the PDD_PATH environment variable
- Better isolation from other Python packages
The PDD CLI will be available immediately after installation without requiring any additional environment configuration.
Verify installation:

```bash
pdd --version
```

With the CLI on your PATH, continue with:

```bash
pdd setup
```

The command detects agentic CLI tools, scans for API keys, configures models, and seeds local configuration files.
If you postpone this step, the CLI detects the missing setup artifacts the first time you run another command and shows a reminder banner so you can complete it later (the banner is suppressed once ~/.pdd/api-env exists or when your project already provides credentials via .env or .pdd/).
If you prefer using pip, you can install PDD with:
```bash
pip install pdd-cli
```

Or, inside a virtual environment:

```bash
# Create virtual environment
python -m venv pdd-env

# Activate environment
# On Windows:
pdd-env\Scripts\activate

# On Unix/macOS:
source pdd-env/bin/activate

# Install PDD
pip install pdd-cli
```

The easiest way to use PDD is through the web interface:
```bash
# 1. Install PDD
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install pdd-cli

# 2. Run setup (API keys, shell completion)
pdd setup

# 3. Launch the web interface
pdd connect
```

This opens a browser-based interface where you can:
- Run Commands: Execute `pdd change`, `pdd bug`, `pdd fix`, `pdd sync`, etc. visually
- Browse Files: View and edit prompts, code, and tests in your project
- Remote Access: Access your session from any browser via PDD Cloud (use `--local-only` to disable)
For CLI enthusiasts, implement GitHub issues directly:
Prerequisites:
- GitHub CLI - Required for issue access:

  ```bash
  brew install gh && gh auth login
  ```

- One Agentic CLI - Required to run the workflows (install at least one):
  - Claude Code: `npm install -g @anthropic-ai/claude-code` (requires `ANTHROPIC_API_KEY`)
  - Gemini CLI: `npm install -g @google/gemini-cli` (requires `GOOGLE_API_KEY` or `GEMINI_API_KEY`)
  - Codex CLI: `npm install -g @openai/codex` (requires `OPENAI_API_KEY`)
Usage:
```bash
# Implement a feature request
pdd change https://github.com/owner/repo/issues/123

# Or fix a bug
pdd bug https://github.com/owner/repo/issues/456
pdd fix https://github.com/owner/repo/issues/456
```

For learning PDD fundamentals or working with existing prompt files:

```bash
cd your-project
pdd sync module_name  # Full automated workflow
```

See the Hello Example below for a step-by-step introduction.
If you want to understand PDD fundamentals, follow this manual example to see it in action.
- Install prerequisites (macOS/Linux):

  ```bash
  xcode-select --install  # macOS only
  curl -LsSf https://astral.sh/uv/install.sh | sh
  uv tool install pdd-cli
  pdd --version
  ```

- Clone the repo:

  ```bash
  # Clone the repository (if not already done)
  git clone https://github.com/promptdriven/pdd.git
  cd pdd/examples/hello
  ```

- Set one API key (choose your provider):

  ```bash
  export GEMINI_API_KEY="your-gemini-key"
  # OR
  export OPENAI_API_KEY="your-openai-key"
  ```
Run the comprehensive setup wizard:

```bash
pdd setup
```

The setup wizard runs these steps:

- Detects agentic CLI tools (Claude, Gemini, Codex) and offers installation and API key configuration if needed
- Scans for API keys across `.env`, `~/.pdd/api-env.*`, and the shell environment; prompts to add one if none are found
- Configures models from a reference CSV `data/llm_model.csv` of top models (ELO ≥ 1400) across all LiteLLM-supported providers based on your available keys
- Optionally creates a `.pddrc` project config
- Tests the first available model with a real LLM call
- Prints a structured summary (CLIs, keys, models, test result)
The wizard can be re-run at any time to update keys, add providers, or reconfigure settings.
Important: After setup completes, source the API environment file so your keys take effect in the current terminal session:

```bash
source ~/.pdd/api-env.zsh  # or api-env.bash, depending on your shell
```

New terminal windows will load keys automatically.
If you skip this step, the first regular pdd command you run will detect the missing setup files and print a reminder banner so you can finish onboarding later.
- Run Hello:

  ```bash
  cd ../hello
  pdd --force generate hello_python.prompt
  python3 hello.py
  ```

  Expected output:

  ```
  hello
  ```
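For reference, a minimal `hello_python.prompt` might read as follows (an illustrative sketch; the actual prompt shipped in `examples/hello` may differ):

```text
Write a Python module named hello.py whose only behavior is to print
the word "hello" (lowercase, followed by a newline) when executed.
```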
PDD commands can be run either in the cloud or locally. By default, all commands run in cloud mode, which provides several advantages:
- No need to manage API keys locally
- Access to more powerful models
- Shared examples and improvements across the PDD community
- Automatic updates and improvements
- Better cost optimization
When running in cloud mode (default), PDD uses GitHub Single Sign-On (SSO) for authentication. On first use, you'll be prompted to authenticate:
- PDD will open your default browser to the GitHub login page
- Log in with your GitHub account
- Authorize PDD Cloud to access your GitHub profile
- Once authenticated, you can return to your terminal to continue using PDD
The authentication token is securely stored locally and automatically refreshed as needed.
When running in local mode with the --local flag, you'll need to set up API keys for the language models:
```bash
# For OpenAI
export OPENAI_API_KEY=your_api_key_here

# For Anthropic
export ANTHROPIC_API_KEY=your_api_key_here

# For other supported providers (LiteLLM supports multiple LLM providers)
export PROVIDER_API_KEY=your_api_key_here
```

Add these to your .bashrc, .zshrc, or equivalent for persistence.
PDD's local mode uses LiteLLM (version 1.75.5 or higher) for interacting with language models, providing:
- Support for multiple model providers (OpenAI, Anthropic, Google/Vertex AI, and more)
- Automatic model selection based on strength settings
- Response caching for improved performance
- Smart token usage tracking and cost estimation
- Interactive API key acquisition when keys are missing
When keys are missing, PDD will prompt for them interactively and securely store them in your local .env file.
PDD uses a CSV file to configure model selection and capabilities. This configuration is loaded from:
- User-specific configuration: `~/.pdd/llm_model.csv` (takes precedence if it exists)
- Project-specific configuration: `<PROJECT_ROOT>/.pdd/llm_model.csv`
- Package default: Bundled with PDD installation (fallback when no local configurations exist)
The CSV includes columns for:
- `provider`: The LLM provider (e.g., "openai", "anthropic", "google")
- `model`: The LiteLLM model identifier (e.g., "gpt-4", "claude-3-opus-20240229")
- `input`/`output`: Costs per million tokens
- `coding_arena_elo`: ELO rating for coding ability
- `api_key`: The environment variable name for the required API key
- `structured_output`: Whether the model supports structured JSON output
- `reasoning_type`: Support for reasoning capabilities ("none", "budget", or "effort")
For a concrete, up-to-date reference of supported models and example rows, see the bundled CSV in this repository: pdd/data/llm_model.csv.
For proper model identifiers to use in your custom configuration, refer to the LiteLLM Model List documentation. LiteLLM typically uses model identifiers in the format provider/model_name (e.g., "openai/gpt-4", "anthropic/claude-3-opus-20240229").
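As an illustrative sketch only (these rows and values are invented; `pdd/data/llm_model.csv` is authoritative for the exact header and real entries), a custom configuration might contain rows like:

```csv
provider,model,input,output,coding_arena_elo,api_key,structured_output,reasoning_type
openai,openai/gpt-4,30.0,60.0,1400,OPENAI_API_KEY,true,none
anthropic,anthropic/claude-3-opus-20240229,15.0,75.0,1410,ANTHROPIC_API_KEY,true,budget
```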
- Command not found

  ```bash
  # Add to PATH if needed
  export PATH="$HOME/.local/bin:$PATH"
  ```
- Permission errors

  ```bash
  # Install with user permissions
  pip install --user pdd-cli
  ```
- macOS-specific issues
  - Xcode Command Line Tools not found: Run `xcode-select --install` to install the required development tools
  - Homebrew not found: Install Homebrew using the command in the prerequisites section above
  - Python not found or wrong version: Install Python 3 via Homebrew: `brew install python`
  - Permission denied during compilation: Ensure Xcode Command Line Tools are properly installed and you have write permissions to the installation directory
  - uv installation fails: Try installing uv through Homebrew: `brew install uv`
  - Python version conflicts: If you have multiple Python versions, ensure `python3` points to Python 3.8+: `which python3 && python3 --version`
Current version: 0.0.179
To check your installed version, run:
pdd --version
PDD includes an auto-update feature to ensure you always have access to the latest features and security patches. You can control this behavior using an environment variable (see "Auto-Update Control" section below).
PDD supports a wide range of programming languages, including but not limited to:
- Python
- JavaScript
- TypeScript
- Java
- C++
- Ruby
- Go
The specific language is often determined by the prompt file's naming convention or specified in the command options.
Prompt files in PDD follow this specific naming format:
<basename>_<language>.prompt
Where:
- `<basename>` is the base name of the file or project
- `<language>` is the programming language or context of the prompt file
Examples:
- `factorial_calculator_python.prompt` (basename: factorial_calculator, language: python)
- `responsive_layout_css.prompt` (basename: responsive_layout, language: css)
- `data_processing_pipeline_python.prompt` (basename: data_processing_pipeline, language: python)
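The convention can be parsed mechanically. As a hedged sketch (this helper is illustrative, not part of the PDD package), splitting the stem on its last underscore recovers the basename and language:

```python
def parse_prompt_filename(filename):
    """Split '<basename>_<language>.prompt' into (basename, language).

    Illustrative helper only; not part of the PDD API.
    """
    stem, ext = filename.rsplit(".", 1)
    if ext != "prompt" or "_" not in stem:
        raise ValueError(f"not a PDD prompt filename: {filename}")
    basename, language = stem.rsplit("_", 1)
    return basename, language

print(parse_prompt_filename("factorial_calculator_python.prompt"))
# → ('factorial_calculator', 'python')
```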
Prompt-Driven Development (PDD) inverts traditional software development by treating prompts as the primary artifact - not code. This paradigm shift has profound implications:
- Prompts as Source of Truth: In traditional development, source code is the ground truth that defines system behavior. In PDD, the prompts are authoritative, with code being a generated artifact.
- Natural Language Over Code: Prompts are written primarily in natural language, making them more accessible to non-programmers and clearer in expressing intent.
- Regenerative Development: When changes are needed, you modify the prompt and regenerate code, rather than directly editing the code. This maintains the conceptual integrity between requirements and implementation.
- Intent Preservation: Prompts capture the "why" behind code in addition to the "what" - preserving design rationale in a way that comments often fail to do.
To work effectively with PDD, adopt these mental shifts:
- Prompt-First Thinking: Always start by defining what you want in a prompt before generating any code.
- Bidirectional Flow:
  - Prompt → Code: The primary direction (generation)
  - Code → Prompt: Secondary but crucial (keeping prompts in sync with code changes)
- Modular Prompts: Just as you modularize code, you should modularize prompts into self-contained units that can be composed.
- Integration via Examples: Modules integrate through their examples, which serve as interfaces, allowing for token-efficient references.
Each workflow in PDD addresses a fundamental development need:
- Initial Development Workflow
  - Purpose: Creating functionality from scratch
  - Conceptual Flow: Define dependencies → Generate implementation → Create interfaces → Ensure runtime functionality → Verify correctness

  This workflow embodies the prompt-to-code pipeline, moving from concept to tested implementation.

- Code-to-Prompt Update Workflow
  - Purpose: Maintaining prompt as source of truth when code changes
  - Conceptual Flow: Sync code changes to prompt → Identify impacts → Propagate changes

  This workflow ensures the information flow from code back to prompts, preserving prompts as the source of truth.

- Debugging Workflows
  - Purpose: Resolving different types of issues
  - Conceptual Types:
    - Context Issues: Addressing misunderstandings in prompt interpretation
    - Runtime Issues: Fixing execution failures
    - Logical Issues: Correcting incorrect behavior
    - Traceability Issues: Connecting code problems back to prompt sections

  These workflows recognize that different errors require different resolution approaches.

- Refactoring Workflow
  - Purpose: Improving prompt organization and reusability
  - Conceptual Flow: Extract functionality → Ensure dependencies → Create interfaces

  This workflow parallels code refactoring but operates at the prompt level.

- Multi-Prompt Architecture Workflow
  - Purpose: Coordinating systems with multiple prompts
  - Conceptual Flow: Detect conflicts → Resolve incompatibilities → Regenerate code → Update interfaces → Verify system

  This workflow addresses the complexity of managing multiple interdependent prompts.
The choice of workflow should be guided by your current development phase:
- Creation Phase: Use Initial Development when building new functionality.
- Maintenance Phase: Use Code-to-Prompt Update when existing code changes.
- Problem-Solving Phase: Choose the appropriate Debugging workflow based on the issue type:
  - Preprocess → Generate for prompt interpretation issues
  - Crash for runtime errors
  - Bug → Fix for logical errors
  - Trace for locating problematic prompt sections
- Restructuring Phase: Use Refactoring when prompts grow too large or complex.
- System Design Phase: Use Multi-Prompt Architecture when coordinating multiple components.
- Enhancement Phase: Use Feature Enhancement when adding capabilities to existing modules.
Effective PDD employs these recurring patterns:
- Dependency Injection via Auto-deps: Automatically including relevant dependencies in prompts.
- Interface Extraction via Example: Creating minimal reference implementations for reuse.
- Bidirectional Traceability: Maintaining connections between prompt sections and generated code.
- Test-Driven Prompt Fixing: Using tests to guide prompt improvements when fixing issues.
- Hierarchical Prompt Organization: Structuring prompts from high-level architecture to detailed implementations.
pdd [GLOBAL OPTIONS] COMMAND [OPTIONS] [ARGS]...
Here is a brief overview of the main commands provided by PDD. Click the command name to jump to its detailed section:
The following diagram shows how PDD commands interact:
```mermaid
graph TB
    subgraph Entry Points
        connect["pdd connect (Web UI - Recommended)"]
        cli["Direct CLI"]
        ghapp["GitHub App"]
    end
    gen_url["pdd generate <url>"]
    subgraph sync workflow
        sync["pdd sync"]
        s_deps["auto-deps"]
        s_gen["generate"]
        s_example["example"]
        s_crash["crash"]
        s_verify["verify"]
        s_test["test"]
        s_fix["fix"]
        s_update["update"]
    end
    checkup["pdd checkup <url>"]
    test_url["pdd test <url>"]
    bug_url["pdd bug <url>"]
    fix_url["pdd fix <url>"]
    change["pdd change <url>"]
    sync_url["pdd sync <url>"]
    connect --> gen_url
    cli --> gen_url
    ghapp --> gen_url
    gen_url --> sync
    sync --> s_deps
    s_deps --> s_gen
    s_gen --> s_example
    s_example --> s_crash
    s_crash --> s_verify
    s_verify --> s_test
    s_test --> s_fix
    s_fix --> s_update
    sync --> checkup
    checkup --> test_url
    checkup --> bug_url
    checkup --> change
    test_url --> fix_url
    bug_url --> fix_url
    change --> sync_url
    sync_url -.-> sync
```
Key concepts:
- Entry points: `pdd connect` (web UI), direct CLI, or the GitHub App
- Start: `pdd generate <url>` scaffolds architecture, prompts, and `.pddrc` from a PRD GitHub issue
- Core loop: `pdd sync` runs the full auto-deps → generate → example → crash → verify → test → fix → update cycle for each module
- Health check: `pdd checkup <url>` identifies what needs attention next
- Defect path: `test <url>` or `bug <url>` surfaces failing tests → `fix <url>` resolves them
- Feature path: `change <url>` implements the feature → `sync <url>` re-runs sync across affected modules
- `connect`: [RECOMMENDED] Launch web interface for visual PDD interaction
- `setup`: Configure API keys and shell completion
- `change`: Implement feature requests from GitHub issues (12-step workflow)
- `bug`: Analyze bugs and create failing tests from GitHub issues
- `checkup`: Run automated project health check from a GitHub issue (8-step workflow)
- `fix`: Fix failing tests (supports issue-driven and manual modes)
- `sync`: Multi-module parallel sync from a GitHub issue (when passed a URL instead of a basename)
- `test`: Generate UI tests from GitHub issues (18-step workflow in agentic mode)
- `sync`: [PRIMARY FOR PROMPT WORKFLOWS] Automated prompt-to-code cycle
- `generate`: Creates runnable code from a prompt file; supports parameterized prompts via `-e`/`--env`
- `example`: Generates a compact example showing how to use functionality defined in a prompt
- `test`: Generates or enhances unit tests for a code file and its prompt
- `update`: Updates the original prompt file based on modified code
- `verify`: Verifies functional correctness by running a program and judging output against intent
- `crash`: Fixes errors in a code module and its calling program that caused a crash
- `preprocess`: Preprocesses prompt files, handling includes, comments, and other directives
- `split`: Splits large prompt files into smaller, more manageable ones
- `extracts prune`: Garbage-collect orphaned extracts cache entries
- `auto-deps`: Analyzes and inserts needed dependencies into a prompt file
- `detect`: Analyzes prompts to determine which ones need changes based on a description
- `conflicts`: Finds and suggests resolutions for conflicts between two prompt files
- `trace`: Finds the corresponding line number in a prompt file for a given code line
PDD can validate prompt changes against user stories stored as Markdown files. This uses detect under the hood: a story passes when detect returns no required prompt changes.
Defaults:
- Stories live in `user_stories/` and match `story__*.md`.
- Prompts are loaded from `prompts/` (excluding `*_llm.prompt` by default).
Overrides:
- `PDD_USER_STORIES_DIR` sets the stories directory.
- `PDD_PROMPTS_DIR` sets the prompts directory.
Commands:
- `pdd detect --stories` runs the validation suite.
- `pdd change` runs story validation after prompt modifications and fails if any story fails.
- `pdd fix user_stories/story__*.md` applies a single story to prompts and re-validates it.
- `pdd test <prompt_1.prompt> [prompt_2.prompt ...]` generates a `story__*.md` file and links those prompts.
- `pdd test user_stories/story__*.md` updates prompt links for an existing story file.
Story prompt linkage:
- Stories may include optional metadata to scope validation to a subset of prompts:
  `<!-- pdd-story-prompts: prompts/a_python.prompt, prompts/b_python.prompt -->`
- If metadata is missing, `pdd detect --stories` validates against the full prompt set.
- In `--stories` mode, when `detect` identifies impacted prompts, PDD caches links back into the story metadata for future deterministic runs.
Template:
- See `user_stories/story__template.md` for a starter format.
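As an illustrative sketch (the template in `user_stories/story__template.md` is authoritative; this file name, prompt path, and wording are invented), a story file might look like:

```markdown
<!-- pdd-story-prompts: prompts/calculator_python.prompt -->

# Story: Divide-by-zero handling

As a user, when I divide by zero, I want a clear error message
instead of a crash, so I can correct my input.

## Acceptance criteria

- Dividing by zero raises a documented, user-facing error
- All other divisions return the exact quotient
```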
These options can be used with any command:
- `--force`: Skip all interactive prompts (file overwrites, API key requests). Useful for CI/automation.
- `--strength FLOAT`: Set the strength of the AI model (0.0 to 1.0, default is 0.5).
  - 0.0: Cheapest available model
  - 0.5: Default base model
  - 1.0: Most powerful model (highest ELO rating)
- `--time FLOAT`: Controls the reasoning allocation for LLM models supporting reasoning capabilities (0.0 to 1.0, default is 0.25).
  - For models with specific reasoning token limits (e.g., 64k), a value of `1.0` utilizes the maximum available tokens.
  - For models with discrete effort levels, `1.0` corresponds to the highest effort level.
  - Values between 0.0 and 1.0 scale the allocation proportionally.
- `--temperature FLOAT`: Set the temperature of the AI model (default is 0.0).
- `--verbose`: Increase output verbosity for more detailed information. Includes token count and context window usage for each LLM call.
- `--quiet`: Decrease output verbosity for minimal information.
- `--output-cost PATH_TO_CSV_FILE`: Enable cost tracking and output a CSV file with usage details.
- `--review-examples`: Review and optionally exclude few-shot examples before command execution.
- `--local`: Run commands locally instead of in the cloud.
- `--core-dump`: Capture a debug bundle for this run so it can be replayed and analyzed later.
- `report-core`: Report a bug by creating a GitHub issue with the core dump file.
- `--context CONTEXT_NAME`: Override automatic context detection and use the specified context from `.pddrc`.
- `--list-contexts`: List all available contexts defined in `.pddrc` and exit.
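The `--time` scaling described above can be sketched as follows (an illustrative model of the documented behavior, not PDD's actual implementation):

```python
def reasoning_budget(time, max_tokens):
    """Scale a model's reasoning-token limit by the --time value (0.0-1.0).

    Illustrative sketch of the proportional scaling described above;
    not PDD's actual implementation.
    """
    if not 0.0 <= time <= 1.0:
        raise ValueError("--time must be between 0.0 and 1.0")
    return round(time * max_tokens)

def effort_level(time, levels):
    """Map --time onto discrete effort levels; 1.0 picks the highest."""
    index = min(int(time * len(levels)), len(levels) - 1)
    return levels[index]

print(reasoning_budget(0.25, 64_000))                # → 16000
print(effort_level(1.0, ["low", "medium", "high"]))  # → high
```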
If something goes wrong and you want the PDD team to be able to reproduce it, you can run any command with a core dump enabled:
```bash
pdd --core-dump sync factorial_calculator
pdd --core-dump crash prompts/calc_python.prompt src/calc.py examples/run_calc.py crash_errors.log
```

When `--core-dump` is set, PDD:
- Captures the full CLI command and arguments
- Records relevant logs and internal trace information for that run
- Bundles the prompt(s), generated code, and key metadata needed to replay the issue
At the end of the run, PDD prints the path to the core dump bundle.
Attach that bundle when you open a GitHub issue or send a bug report so maintainers can quickly reproduce and diagnose your problem.
The report-core command helps you report a bug by creating a GitHub issue with the core dump file. It simplifies the reporting process by automatically collecting relevant files and information.
Usage:
```bash
pdd report-core [OPTIONS] [CORE_FILE]
```

Arguments:

- `CORE_FILE`: The path to the core dump file (e.g., `.pdd/core_dumps/pdd-core-....json`). If omitted, the most recent core dump is used.
Options:
- `--api`: Create the issue directly via the GitHub API instead of opening a browser. This enables automatic Gist creation for attached files.
- `--repo OWNER/REPO`: Override the target repository (default: `promptdriven/pdd`).
- `--description`, `-d TEXT`: A short description of what went wrong.
Authentication:
To use the --api flag, you need to be authenticated with GitHub. PDD checks for credentials in the following order:
- GitHub CLI: `gh auth token` (recommended)
- Environment Variables: `GITHUB_TOKEN` or `GH_TOKEN`
- Legacy: `PDD_GITHUB_TOKEN`
File Tracking & Gists:
When using --api, PDD will:
- Collect all relevant files (prompts, code, tests, configs, meta files).
- Create a private GitHub Gist containing these files.
- Link the Gist in the created issue.
This ensures that all necessary context is available for debugging while keeping the issue body clean. If you don't use --api, files will be truncated to fit within the URL length limits of the browser-based submission.
- `--list-contexts` reads the nearest `.pddrc` (searching upward from the current directory), prints the available contexts one per line, and exits immediately with status 0. No auto-update checks or subcommands run when this flag is present.
- `--context CONTEXT_NAME` is validated early against the same `.pddrc` source of truth. If the name is unknown, the CLI raises a `UsageError` and exits with code 2 before running auto-update or subcommands.
- Precedence for configuration is: CLI options > `.pddrc` context > environment variables > defaults. See Configuration for details.
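The precedence order above can be sketched as a simple lookup (illustrative only; PDD's real configuration loader is more involved, and the function name here is invented):

```python
import os

def resolve_setting(name, cli_options, pddrc_context, default=None):
    """Return the first value found in: CLI > .pddrc context > env > default.

    Illustrative sketch of the documented precedence; not PDD's actual loader.
    """
    if name in cli_options:
        return cli_options[name]
    if name in pddrc_context:
        return pddrc_context[name]
    env_value = os.environ.get(name.upper())
    if env_value is not None:
        return env_value
    return default

# A CLI option wins over the .pddrc context value:
print(resolve_setting("strength", {"strength": 0.9}, {"strength": 0.5}, 0.5))
# → 0.9
```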
PDD automatically updates itself to ensure you have the latest features and security patches. However, you can control this behavior using the PDD_AUTO_UPDATE environment variable:
```bash
# Disable auto-updates
export PDD_AUTO_UPDATE=false

# Enable auto-updates (default behavior)
export PDD_AUTO_UPDATE=true
```

For persistent settings, add this environment variable to your shell's configuration file (e.g., .bashrc or .zshrc).
This is particularly useful in:
- Production environments where version stability is crucial
- CI/CD pipelines where consistent behavior is required
- Version-sensitive projects that require specific PDD versions
PDD uses a large language model to generate and manipulate code. The --strength and --temperature options allow you to control the model's output:
- Strength: Determines how powerful (and expensive) a model is used. Higher values (closer to 1.0) select high-performance models with better capabilities (chosen by ELO rating), while lower values (closer to 0.0) select more cost-effective models.
- Temperature: Controls the randomness of the output. Higher values increase diversity but may lead to less coherent results, while lower values produce more focused and deterministic outputs.
- Time: (Optional, controlled by `--time FLOAT`) For models supporting reasoning, this scales the allocated reasoning resources (e.g., tokens or effort level) between minimum (0.0) and maximum (1.0), with a default of 0.25.
When running in local mode, PDD uses LiteLLM to select and interact with language models based on a configuration file that includes:
- Input and output costs per million tokens
- ELO ratings for coding ability
- Required API key environment variables
- Structured output capability flags
- Reasoning capabilities (budget-based or effort-based)
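As a hedged sketch of the selection idea (the dict fields mirror the llm_model.csv columns described earlier, but the rows, values, and two-way split are illustrative, not PDD's actual algorithm): higher strength favors the highest-ELO model, lower strength favors the cheapest.

```python
def pick_model(models, strength):
    """Pick a model by --strength: low values favor cost, high values favor ELO.

    `models` is a list of dicts with llm_model.csv-style fields
    ('model', 'input', 'output', 'coding_arena_elo'). Illustrative only.
    """
    if strength >= 0.5:
        # Favor capability: highest coding-arena ELO wins.
        return max(models, key=lambda m: m["coding_arena_elo"])
    # Favor cost: lowest combined per-million-token price wins.
    return min(models, key=lambda m: m["input"] + m["output"])

models = [
    {"model": "cheap-model", "input": 0.5, "output": 1.5, "coding_arena_elo": 1200},
    {"model": "strong-model", "input": 15.0, "output": 75.0, "coding_arena_elo": 1450},
]
print(pick_model(models, 1.0)["model"])  # → strong-model
print(pick_model(models, 0.0)["model"])  # → cheap-model
```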
PDD includes a feature for tracking and reporting the cost of operations. When enabled, it generates a CSV file with usage details for each command execution.
To enable cost tracking, use the --output-cost option with any command:
```bash
pdd --output-cost PATH_TO_CSV_FILE [COMMAND] [OPTIONS] [ARGS]...
```
The PATH_TO_CSV_FILE should be the desired location and filename for the CSV output.
PDD calculates costs based on the AI model usage for each operation. Costs are presented in USD (United States Dollars) and are calculated using the following factors:
- Model strength: Higher strength settings generally result in higher costs.
- Input size: Larger inputs (e.g., longer prompts or code files) typically incur higher costs.
- Operation complexity: Some operations (like `fix` and `crash` with multiple iterations) may be more costly than simpler operations.
The exact cost per operation is determined by the LiteLLM integration using the provider's current pricing model. PDD uses an internal pricing table that is regularly updated to reflect the most current rates.
The generated CSV file includes the following columns:
- timestamp: The date and time of the command execution
- model: The AI model used for the operation
- command: The PDD command that was executed
- cost: The estimated cost of the operation in USD (e.g., 0.05 for 5 cents). This will be zero for local models or operations that do not use a LLM.
- input_files: A list of input files involved in the operation
- output_files: A list of output files generated or modified by the operation
This comprehensive output allows for detailed tracking of not only the cost and type of operations but also the specific files involved in each PDD command execution.
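As an illustrative sketch, the cost CSV can be summarized with the standard library (the column names follow the list above; the sample rows and values are invented):

```python
import csv
import io

# Sample rows shaped like PDD's cost CSV (values invented for illustration).
sample = """timestamp,model,command,cost,input_files,output_files
2025-01-01T10:00:00,gpt-4,generate,0.05,factorial_python.prompt,factorial.py
2025-01-01T10:05:00,gpt-4,test,0.03,factorial.py,test_factorial.py
"""

def total_cost(csv_text):
    """Sum the USD cost column across all recorded operations."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["cost"]) for row in reader)

print(f"total: ${total_cost(sample):.2f}")  # → total: $0.08
```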
| Version | Changes | Urgency | Date |
|---|---|---|---|
| main@2026-04-21 | Latest activity on main branch | High | 4/21/2026 |
| v0.0.201 | Latest release: v0.0.201 | High | 4/8/2026 |