PDD (Prompt-Driven Development) Command Line Interface


Introduction

PDD (Prompt-Driven Development) is a toolkit for AI-powered code generation and maintenance.

Getting started is simple:

# Install and run
uv tool install pdd-cli
pdd setup
pdd connect

This launches a web interface at localhost:9876 where you can:

  • Implement GitHub issues automatically
  • Generate and test code from prompts
  • Manage your PDD projects visually

PDD Handpaint Demo

For CLI users, PDD also offers powerful agentic commands that implement GitHub issues automatically:

  • pdd change <issue-url> - Implement feature requests (12-step workflow)
  • pdd bug <issue-url> - Create failing tests for bugs
  • pdd fix <issue-url> - Fix the failing tests
  • pdd generate <issue-url> - Generate architecture.json from a PRD issue (11-step workflow)
  • pdd test <issue-url> - Generate UI tests from issue descriptions (18-step workflow with exploratory testing, contract validation, accessibility audits)

For prompt-based workflows, the sync command automates the complete development cycle with intelligent decision-making, real-time visual feedback, and sophisticated state management.

Whitepaper

For a detailed explanation of the concepts, architecture, and benefits of Prompt-Driven Development, please refer to our full whitepaper. This document provides an in-depth look at the PDD philosophy, its advantages over traditional development, and includes benchmarks and case studies.

Read the Full Whitepaper with Benchmarks

Also see the Prompt-Driven Development Doctrine for core principles and practices: docs/prompt-driven-development-doctrine.md

Installation

Prerequisites for macOS

On macOS, you'll need to install some prerequisites before installing PDD:

  1. Install Xcode Command Line Tools (required for Python compilation):

    xcode-select --install
  2. Install Homebrew (recommended package manager for macOS):

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    After installation, add Homebrew to your PATH:

    echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile && eval "$(/opt/homebrew/bin/brew shellenv)"
  3. Install Python (if not already installed):

    # Check if Python is installed
    python3 --version
    
    # If Python is not found, install it via Homebrew
    brew install python

    Note: Recent versions of macOS no longer ship with Python pre-installed. PDD requires Python 3.8 or higher. The brew install python command installs the latest Python 3 version.

Recommended Method: uv

We recommend installing PDD using the uv package manager for better dependency management and automatic environment configuration:

# Install uv if you haven't already 
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install PDD using uv tool install
uv tool install pdd-cli

This installation method ensures:

  • Faster installations with optimized dependency resolution
  • Automatic environment setup without manual configuration
  • Proper handling of the PDD_PATH environment variable
  • Better isolation from other Python packages

The PDD CLI will be available immediately after installation without requiring any additional environment configuration.

Verify installation:

pdd --version

With the CLI on your PATH, continue with:

pdd setup

The command detects agentic CLI tools, scans for API keys, configures models, and seeds local configuration files. If you postpone this step, the CLI detects the missing setup artifacts the first time you run another command and shows a reminder banner so you can complete it later (the banner is suppressed once ~/.pdd/api-env exists or when your project already provides credentials via .env or .pdd/).

Alternative: pip Installation

If you prefer using pip, you can install PDD with:

pip install pdd-cli

Advanced Installation Options

Virtual Environment Installation

# Create virtual environment
python -m venv pdd-env

# Activate environment
# On Windows:
pdd-env\Scripts\activate
# On Unix/macOS:
source pdd-env/bin/activate

# Install PDD
pip install pdd-cli

Getting Started

Option 1: Web Interface (Recommended)

The easiest way to use PDD is through the web interface:

# 1. Install PDD
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install pdd-cli

# 2. Run setup (API keys, shell completion)
pdd setup

# 3. Launch the web interface
pdd connect

This opens a browser-based interface where you can:

  • Run Commands: Execute pdd change, pdd bug, pdd fix, pdd sync, etc. visually
  • Browse Files: View and edit prompts, code, and tests in your project
  • Remote Access: Access your session from any browser via PDD Cloud (use --local-only to disable)

Option 2: Issue-Driven CLI

For CLI enthusiasts, implement GitHub issues directly:

Prerequisites:

  1. GitHub CLI - Required for issue access:

    brew install gh && gh auth login
  2. One Agentic CLI - Required to run the workflows (install at least one):

    • Claude Code: npm install -g @anthropic-ai/claude-code (requires ANTHROPIC_API_KEY)
    • Gemini CLI: npm install -g @google/gemini-cli (requires GOOGLE_API_KEY or GEMINI_API_KEY)
    • Codex CLI: npm install -g @openai/codex (requires OPENAI_API_KEY)

Usage:

# Implement a feature request
pdd change https://github.com/owner/repo/issues/123

# Or fix a bug
pdd bug https://github.com/owner/repo/issues/456
pdd fix https://github.com/owner/repo/issues/456

Option 3: Manual Prompt Workflow

For learning PDD fundamentals or working with existing prompt files:

cd your-project
pdd sync module_name  # Full automated workflow

See the Hello Example below for a step-by-step introduction.


🚀 Quickstart (Hello Example)

If you want to understand PDD fundamentals, follow this manual example to see it in action.

  1. Install prerequisites (macOS/Linux):

    xcode-select --install      # macOS only
    curl -LsSf https://astral.sh/uv/install.sh | sh
    uv tool install pdd-cli
    pdd --version
  2. Clone the repository and enter the Hello example:

    git clone https://github.com/promptdriven/pdd.git
    cd pdd/examples/hello
  3. Set one API key (choose your provider):

    export GEMINI_API_KEY="your-gemini-key"
    # OR
    export OPENAI_API_KEY="your-openai-key"

Post-Installation Setup (Required first step after installation)

Run the comprehensive setup wizard:

pdd setup

The setup wizard runs these steps:

  1. Detects agentic CLI tools (Claude, Gemini, Codex) and offers installation and API key configuration if needed
  2. Scans for API keys across .env, ~/.pdd/api-env.*, and the shell environment; prompts you to add one if none are found
  3. Configures models from a reference CSV (data/llm_model.csv) of top models (ELO ≥ 1400) across all LiteLLM-supported providers, based on your available keys
  4. Optionally creates a .pddrc project config
  5. Tests the first available model with a real LLM call
  6. Prints a structured summary (CLIs, keys, models, test result)

The wizard can be re-run at any time to update keys, add providers, or reconfigure settings.

Important: After setup completes, source the API environment file so your keys take effect in the current terminal session:

source ~/.pdd/api-env.zsh   # or api-env.bash, depending on your shell

New terminal windows will load keys automatically.

If you skip this step, the first regular pdd command you run will detect the missing setup files and print a reminder banner so you can finish onboarding later.

  4. Run Hello (from pdd/examples/hello):

    pdd --force generate hello_python.prompt
    python3 hello.py

    ✅ Expected output:

    hello
    

Cloud vs Local Execution

PDD commands can be run either in the cloud or locally. By default, all commands run in cloud mode, which provides several advantages:

  • No need to manage API keys locally
  • Access to more powerful models
  • Shared examples and improvements across the PDD community
  • Automatic updates and improvements
  • Better cost optimization

Cloud Authentication

When running in cloud mode (default), PDD uses GitHub Single Sign-On (SSO) for authentication. On first use, you'll be prompted to authenticate:

  1. PDD will open your default browser to the GitHub login page
  2. Log in with your GitHub account
  3. Authorize PDD Cloud to access your GitHub profile
  4. Once authenticated, you can return to your terminal to continue using PDD

The authentication token is securely stored locally and automatically refreshed as needed.

Local Mode Requirements

When running in local mode with the --local flag, you'll need to set up API keys for the language models:

# For OpenAI
export OPENAI_API_KEY=your_api_key_here

# For Anthropic
export ANTHROPIC_API_KEY=your_api_key_here

# For other supported providers (LiteLLM supports multiple LLM providers)
export PROVIDER_API_KEY=your_api_key_here

Add these to your .bashrc, .zshrc, or equivalent for persistence.

PDD's local mode uses LiteLLM (version 1.75.5 or higher) for interacting with language models, providing:

  • Support for multiple model providers (OpenAI, Anthropic, Google/Vertex AI, and more)
  • Automatic model selection based on strength settings
  • Response caching for improved performance
  • Smart token usage tracking and cost estimation
  • Interactive API key acquisition when keys are missing

When keys are missing, PDD will prompt for them interactively and securely store them in your local .env file.
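As an illustration of that key-discovery step, here is a minimal Python sketch. The provider-to-variable mapping below is an example only; PDD's authoritative list comes from the api_key column of its model CSV.

```python
import os

# Illustrative mapping of providers to key variables; PDD reads the real
# list from its model CSV, so treat these entries as examples.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GEMINI_API_KEY",
}

def available_providers(env=None):
    """Return providers whose API key variable is set and non-empty."""
    env = os.environ if env is None else env
    return sorted(p for p, var in PROVIDER_KEYS.items() if env.get(var))
```

Running `available_providers()` in your shell environment shows which of these illustrative providers already have keys exported.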

Local Model Configuration

PDD uses a CSV file to configure model selection and capabilities. This configuration is loaded from:

  1. User-specific configuration: ~/.pdd/llm_model.csv (takes precedence if it exists)
  2. Project-specific configuration: <PROJECT_ROOT>/.pdd/llm_model.csv
  3. Package default: Bundled with PDD installation (fallback when no local configurations exist)

The CSV includes columns for:

  • provider: The LLM provider (e.g., "openai", "anthropic", "google")
  • model: The LiteLLM model identifier (e.g., "gpt-4", "claude-3-opus-20240229")
  • input/output: Costs per million tokens
  • coding_arena_elo: ELO rating for coding ability
  • api_key: The environment variable name for the required API key
  • structured_output: Whether the model supports structured JSON output
  • reasoning_type: Support for reasoning capabilities ("none", "budget", or "effort")

For a concrete, up-to-date reference of supported models and example rows, see the bundled CSV in this repository: pdd/data/llm_model.csv.

For proper model identifiers to use in your custom configuration, refer to the LiteLLM Model List documentation. LiteLLM typically uses model identifiers in the format provider/model_name (e.g., "openai/gpt-4", "anthropic/claude-3-opus-20240229").
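To make the selection mechanics concrete, here is a hedged Python sketch using the columns listed above. The rule "highest ELO among models whose key is set" is an assumption for illustration, not necessarily PDD's exact algorithm, and the sample rows are made up.

```python
import csv
import io

def pick_model(csv_text, env):
    """Pick the highest-ELO model whose api_key variable is set in env."""
    rows = csv.DictReader(io.StringIO(csv_text))
    usable = [r for r in rows if env.get(r["api_key"])]
    if not usable:
        return None
    return max(usable, key=lambda r: float(r["coding_arena_elo"]))["model"]

# Made-up sample rows following the documented column layout.
SAMPLE = """provider,model,input,output,coding_arena_elo,api_key,structured_output,reasoning_type
openai,gpt-4,30,60,1350,OPENAI_API_KEY,True,none
anthropic,claude-3-opus-20240229,15,75,1410,ANTHROPIC_API_KEY,True,budget
"""
```

With both keys set this sketch would pick the Anthropic row (higher ELO); with only OPENAI_API_KEY set it falls back to the OpenAI row.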

Troubleshooting Common Installation Issues

  1. Command not found

    # Add to PATH if needed
    export PATH="$HOME/.local/bin:$PATH"
  2. Permission errors

    # Install with user permissions
    pip install --user pdd-cli
  3. macOS-specific issues

    • Xcode Command Line Tools not found: Run xcode-select --install to install the required development tools
    • Homebrew not found: Install Homebrew using the command in the prerequisites section above
    • Python not found or wrong version: Install Python 3 via Homebrew: brew install python
    • Permission denied during compilation: Ensure Xcode Command Line Tools are properly installed and you have write permissions to the installation directory
    • uv installation fails: Try installing uv through Homebrew: brew install uv
    • Python version conflicts: If you have multiple Python versions, ensure python3 points to Python 3.8+: which python3 && python3 --version

Version

Current version: 0.0.179

To check your installed version, run:

pdd --version

PDD includes an auto-update feature to ensure you always have access to the latest features and security patches. You can control this behavior using an environment variable (see "Auto-Update Control" section below).

Supported Programming Languages

PDD supports a wide range of programming languages, including but not limited to:

  • Python
  • JavaScript
  • TypeScript
  • Java
  • C++
  • Ruby
  • Go

The specific language is often determined by the prompt file's naming convention or specified in the command options.

Prompt File Naming Convention

Prompt files in PDD follow this specific naming format:

<basename>_<language>.prompt

Where:

  • <basename> is the base name of the file or project
  • <language> is the programming language or context of the prompt file

Examples:

  • factorial_calculator_python.prompt (basename: factorial_calculator, language: python)
  • responsive_layout_css.prompt (basename: responsive_layout, language: css)
  • data_processing_pipeline_python.prompt (basename: data_processing_pipeline, language: python)
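The convention above can be sketched as a small Python helper; the function is illustrative, not part of PDD's API. The last underscore-separated token is taken as the language, everything before it as the basename.

```python
import os

def parse_prompt_name(filename):
    """Split '<basename>_<language>.prompt' into (basename, language)."""
    stem, ext = os.path.splitext(os.path.basename(filename))
    if ext != ".prompt" or "_" not in stem:
        raise ValueError(f"not a PDD prompt filename: {filename}")
    # The language is the final underscore-separated segment.
    basename, _, language = stem.rpartition("_")
    return basename, language
```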

Prompt-Driven Development Philosophy

Core Concepts

Prompt-Driven Development (PDD) inverts traditional software development by treating prompts as the primary artifact - not code. This paradigm shift has profound implications:

  1. Prompts as Source of Truth: In traditional development, source code is the ground truth that defines system behavior. In PDD, the prompts are authoritative, with code being a generated artifact.

  2. Natural Language Over Code: Prompts are written primarily in natural language, making them more accessible to non-programmers and clearer in expressing intent.

  3. Regenerative Development: When changes are needed, you modify the prompt and regenerate code, rather than directly editing the code. This maintains the conceptual integrity between requirements and implementation.

  4. Intent Preservation: Prompts capture the "why" behind code in addition to the "what" - preserving design rationale in a way that comments often fail to do.

Mental Model

To work effectively with PDD, adopt these mental shifts:

  1. Prompt-First Thinking: Always start by defining what you want in a prompt before generating any code.

  2. Bidirectional Flow:

    • Prompt → Code: The primary direction (generation)
    • Code → Prompt: Secondary but crucial (keeping prompts in sync with code changes)
  3. Modular Prompts: Just as you modularize code, you should modularize prompts into self-contained units that can be composed.

  4. Integration via Examples: Modules integrate through their examples, which serve as interfaces, allowing for token-efficient references.

PDD Workflows: Conceptual Understanding

Each workflow in PDD addresses a fundamental development need:

  1. Initial Development Workflow

    • Purpose: Creating functionality from scratch
    • Conceptual Flow: Define dependencies → Generate implementation → Create interfaces → Ensure runtime functionality → Verify correctness

    This workflow embodies the prompt-to-code pipeline, moving from concept to tested implementation.

  2. Code-to-Prompt Update Workflow

    • Purpose: Maintaining prompt as source of truth when code changes
    • Conceptual Flow: Sync code changes to prompt → Identify impacts → Propagate changes

    This workflow ensures the information flow from code back to prompts, preserving prompts as the source of truth.

  3. Debugging Workflows

    • Purpose: Resolving different types of issues
    • Conceptual Types:
      • Context Issues: Addressing misunderstandings in prompt interpretation
      • Runtime Issues: Fixing execution failures
      • Logical Issues: Correcting incorrect behavior
      • Traceability Issues: Connecting code problems back to prompt sections

    These workflows recognize that different errors require different resolution approaches.

  4. Refactoring Workflow

    • Purpose: Improving prompt organization and reusability
    • Conceptual Flow: Extract functionality → Ensure dependencies → Create interfaces

    This workflow parallels code refactoring but operates at the prompt level.

  5. Multi-Prompt Architecture Workflow

    • Purpose: Coordinating systems with multiple prompts
    • Conceptual Flow: Detect conflicts → Resolve incompatibilities → Regenerate code → Update interfaces → Verify system

    This workflow addresses the complexity of managing multiple interdependent prompts.


Workflow Selection Principles

The choice of workflow should be guided by your current development phase:

  1. Creation Phase: Use Initial Development when building new functionality.

  2. Maintenance Phase: Use Code-to-Prompt Update when existing code changes.

  3. Problem-Solving Phase: Choose the appropriate Debugging workflow based on the issue type:

    • Preprocess → Generate for prompt interpretation issues
    • Crash for runtime errors
    • Bug → Fix for logical errors
    • Trace for locating problematic prompt sections
  4. Restructuring Phase: Use Refactoring when prompts grow too large or complex.

  5. System Design Phase: Use Multi-Prompt Architecture when coordinating multiple components.

  6. Enhancement Phase: Use Feature Enhancement when adding capabilities to existing modules.

PDD Design Patterns

Effective PDD employs these recurring patterns:

  1. Dependency Injection via Auto-deps: Automatically including relevant dependencies in prompts.

  2. Interface Extraction via Example: Creating minimal reference implementations for reuse.

  3. Bidirectional Traceability: Maintaining connections between prompt sections and generated code.

  4. Test-Driven Prompt Fixing: Using tests to guide prompt improvements when fixing issues.

  5. Hierarchical Prompt Organization: Structuring prompts from high-level architecture to detailed implementations.

Basic Usage

pdd [GLOBAL OPTIONS] COMMAND [OPTIONS] [ARGS]...

Command Overview

Here is a brief overview of the main commands provided by PDD; each is described in a detailed section below.

Command Relationships

The following diagram shows how PDD commands interact:

graph TB
    subgraph Entry Points
        connect["pdd connect (Web UI - Recommended)"]
        cli["Direct CLI"]
        ghapp["GitHub App"]
    end

    gen_url["pdd generate &lt;url&gt;"]

    subgraph sync workflow
        sync["pdd sync"]
        s_deps["auto-deps"]
        s_gen["generate"]
        s_example["example"]
        s_crash["crash"]
        s_verify["verify"]
        s_test["test"]
        s_fix["fix"]
        s_update["update"]
    end

    checkup["pdd checkup &lt;url&gt;"]
    test_url["pdd test &lt;url&gt;"]
    bug_url["pdd bug &lt;url&gt;"]
    fix_url["pdd fix &lt;url&gt;"]
    change["pdd change &lt;url&gt;"]
    sync_url["pdd sync &lt;url&gt;"]

    connect --> gen_url
    cli --> gen_url
    ghapp --> gen_url
    gen_url --> sync
    sync --> s_deps
    s_deps --> s_gen
    s_gen --> s_example
    s_example --> s_crash
    s_crash --> s_verify
    s_verify --> s_test
    s_test --> s_fix
    s_fix --> s_update
    sync --> checkup
    checkup --> test_url
    checkup --> bug_url
    checkup --> change
    test_url --> fix_url
    bug_url --> fix_url
    change --> sync_url
    sync_url -.-> sync

Key concepts:

  • Entry points: pdd connect (web UI), direct CLI, or the GitHub App
  • Start: pdd generate <url> scaffolds architecture, prompts, and .pddrc from a PRD GitHub issue
  • Core loop: pdd sync runs the full auto-deps → generate → example → crash → verify → test → fix → update cycle for each module
  • Health check: pdd checkup <url> identifies what needs attention next
  • Defect path: test <url> or bug <url> surfaces failing tests → fix <url> resolves them
  • Feature path: change <url> implements the feature → sync <url> re-runs sync across affected modules

Getting Started

  • connect: [RECOMMENDED] Launch web interface for visual PDD interaction
  • setup: Configure API keys and shell completion

Agentic Commands (Issue-Driven)

  • change: Implement feature requests from GitHub issues (12-step workflow)
  • bug: Analyze bugs and create failing tests from GitHub issues
  • checkup: Run automated project health check from a GitHub issue (8-step workflow)
  • fix: Fix failing tests (supports issue-driven and manual modes)
  • sync: Multi-module parallel sync from a GitHub issue (when passed a URL instead of basename)
  • test: Generate UI tests from GitHub issues (18-step workflow in agentic mode)

Core Commands (Prompt-Based)

  • sync: [PRIMARY FOR PROMPT WORKFLOWS] Automated prompt-to-code cycle
  • generate: Creates runnable code from a prompt file; supports parameterized prompts via -e/--env
  • example: Generates a compact example showing how to use functionality defined in a prompt
  • test: Generates or enhances unit tests for a code file and its prompt
  • update: Updates the original prompt file based on modified code
  • verify: Verifies functional correctness by running a program and judging output against intent
  • crash: Fixes errors in a code module and its calling program that caused a crash

Prompt Management

  • preprocess: Preprocesses prompt files, handling includes, comments, and other directives
  • split: Splits large prompt files into smaller, more manageable ones
  • extracts prune: Garbage-collect orphaned extracts cache entries
  • auto-deps: Analyzes and inserts needed dependencies into a prompt file
  • detect: Analyzes prompts to determine which ones need changes based on a description
  • conflicts: Finds and suggests resolutions for conflicts between two prompt files
  • trace: Finds the corresponding line number in a prompt file for a given code line

Utility Commands

  • auth: Manages authentication with PDD Cloud
  • sessions: Manage remote sessions for connect

User Story Prompt Tests

PDD can validate prompt changes against user stories stored as Markdown files. This uses detect under the hood: a story passes when detect returns no required prompt changes.

Defaults:

  • Stories live in user_stories/ and match story__*.md.
  • Prompts are loaded from prompts/ (excluding *_llm.prompt by default).

Overrides:

  • PDD_USER_STORIES_DIR sets the stories directory.
  • PDD_PROMPTS_DIR sets the prompts directory.

Commands:

  • pdd detect --stories runs the validation suite.
  • pdd change runs story validation after prompt modifications and fails if any story fails.
  • pdd fix user_stories/story__*.md applies a single story to prompts and re-validates it.
  • pdd test <prompt_1.prompt> [prompt_2.prompt ...] generates a story__*.md file and links those prompts.
  • pdd test user_stories/story__*.md updates prompt links for an existing story file.

Story prompt linkage:

  • Stories may include optional metadata to scope validation to a subset of prompts: <!-- pdd-story-prompts: prompts/a_python.prompt, prompts/b_python.prompt -->
  • If metadata is missing, pdd detect --stories validates against the full prompt set.
  • In --stories mode, when detect identifies impacted prompts, PDD caches links back into the story metadata for future deterministic runs.
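A hedged Python sketch of reading that optional metadata comment from a story file; the helper name is hypothetical and PDD's actual parser may differ, but the comment format matches the example above.

```python
import re

# Matches the optional <!-- pdd-story-prompts: ... --> metadata comment.
META_RE = re.compile(r"<!--\s*pdd-story-prompts:\s*(.*?)\s*-->")

def linked_prompts(story_markdown):
    """Return prompt paths from the metadata comment, or [] if absent."""
    m = META_RE.search(story_markdown)
    if not m:
        return []
    return [p.strip() for p in m.group(1).split(",") if p.strip()]
```

An empty result corresponds to the fallback behavior described above: validation runs against the full prompt set.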

Template:

  • See user_stories/story__template.md for a starter format.

Global Options

These options can be used with any command:

  • --force: Skip all interactive prompts (file overwrites, API key requests). Useful for CI/automation.
  • --strength FLOAT: Set the strength of the AI model (0.0 to 1.0, default is 0.5).
    • 0.0: Cheapest available model
    • 0.5: Default base model
    • 1.0: Most powerful model (highest ELO rating)
  • --time FLOAT: Controls the reasoning allocation for LLM models supporting reasoning capabilities (0.0 to 1.0, default is 0.25).
    • For models with specific reasoning token limits (e.g., 64k), a value of 1.0 utilizes the maximum available tokens.
    • For models with discrete effort levels, 1.0 corresponds to the highest effort level.
    • Values between 0.0 and 1.0 scale the allocation proportionally.
  • --temperature FLOAT: Set the temperature of the AI model (default is 0.0).
  • --verbose: Increase output verbosity for more detailed information. Includes token count and context window usage for each LLM call.
  • --quiet: Decrease output verbosity for minimal information.
  • --output-cost PATH_TO_CSV_FILE: Enable cost tracking and output a CSV file with usage details.
  • --review-examples: Review and optionally exclude few-shot examples before command execution.
  • --local: Run commands locally instead of in the cloud.
  • --core-dump: Capture a debug bundle for this run so it can be replayed and analyzed later.
  • report-core: A command (not a flag) that reports a bug by creating a GitHub issue with the core dump file; see the report-core section below.
  • --context CONTEXT_NAME: Override automatic context detection and use the specified context from .pddrc.
  • --list-contexts: List all available contexts defined in .pddrc and exit.
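As a worked example of the --time scaling described above, the linear rule for token-budget models can be sketched as follows; the helper is illustrative, not PDD's internal code.

```python
def reasoning_tokens(time, max_budget_tokens):
    """Scale a model's reasoning-token budget linearly by --time (0.0-1.0)."""
    if not 0.0 <= time <= 1.0:
        raise ValueError("--time must be between 0.0 and 1.0")
    return int(time * max_budget_tokens)
```

For a model with a 64k reasoning budget, the default --time of 0.25 allocates a quarter of that budget, and 1.0 allocates all of it.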

Core Dump Debug Bundles

If something goes wrong and you want the PDD team to be able to reproduce it, you can run any command with a core dump enabled:

pdd --core-dump sync factorial_calculator
pdd --core-dump crash prompts/calc_python.prompt src/calc.py examples/run_calc.py crash_errors.log

When --core-dump is set, PDD:

  • Captures the full CLI command and arguments
  • Records relevant logs and internal trace information for that run
  • Bundles the prompt(s), generated code, and key metadata needed to replay the issue

At the end of the run, PDD prints the path to the core dump bundle.
Attach that bundle when you open a GitHub issue or send a bug report so maintainers can quickly reproduce and diagnose your problem.

report-core Command

The report-core command helps you report a bug by creating a GitHub issue with the core dump file. It simplifies the reporting process by automatically collecting relevant files and information.

Usage:

pdd report-core [OPTIONS] [CORE_FILE]

Arguments:

  • CORE_FILE: The path to the core dump file (e.g., .pdd/core_dumps/pdd-core-....json). If omitted, the most recent core dump is used.

Options:

  • --api: Create the issue directly via the GitHub API instead of opening a browser. This enables automatic Gist creation for attached files.
  • --repo OWNER/REPO: Override the target repository (default: promptdriven/pdd).
  • --description, -d TEXT: A short description of what went wrong.

Authentication:

To use the --api flag, you need to be authenticated with GitHub. PDD checks for credentials in the following order:

  1. GitHub CLI: gh auth token (recommended)
  2. Environment Variables: GITHUB_TOKEN or GH_TOKEN
  3. Legacy: PDD_GITHUB_TOKEN

File Tracking & Gists:

When using --api, PDD will:

  1. Collect all relevant files (prompts, code, tests, configs, meta files).
  2. Create a private GitHub Gist containing these files.
  3. Link the Gist in the created issue.

This ensures that all necessary context is available for debugging while keeping the issue body clean. If you don't use --api, files will be truncated to fit within the URL length limits of the browser-based submission.


Context Selection Flags

  • --list-contexts reads the nearest .pddrc (searching upward from the current directory), prints the available contexts one per line, and exits immediately with status 0. No auto-update checks or subcommands run when this flag is present.
  • --context CONTEXT_NAME is validated early against the same .pddrc source of truth. If the name is unknown, the CLI raises a UsageError and exits with code 2 before running auto-update or subcommands.
  • Precedence for configuration is: CLI options > .pddrc context > environment variables > defaults. See Configuration for details.
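The stated precedence chain can be sketched in Python with collections.ChainMap, where earlier maps win; the setting names below are illustrative.

```python
from collections import ChainMap

def effective_config(cli_opts, pddrc_context, env_vars, defaults):
    """Merge settings with the documented precedence:
    CLI options > .pddrc context > environment variables > defaults."""
    return dict(ChainMap(cli_opts, pddrc_context, env_vars, defaults))
```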

Auto-Update Control

PDD automatically updates itself to ensure you have the latest features and security patches. However, you can control this behavior using the PDD_AUTO_UPDATE environment variable:

# Disable auto-updates
export PDD_AUTO_UPDATE=false

# Enable auto-updates (default behavior)
export PDD_AUTO_UPDATE=true

For persistent settings, add this environment variable to your shell's configuration file (e.g., .bashrc or .zshrc).

This is particularly useful in:

  • Production environments where version stability is crucial
  • CI/CD pipelines where consistent behavior is required
  • Version-sensitive projects that require specific PDD versions
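A sketch of how such a boolean environment variable is typically interpreted; the document only shows "true" and "false", so accepting additional "off" spellings is an assumption.

```python
import os

def auto_update_enabled(env=None):
    """Interpret PDD_AUTO_UPDATE; updates stay on unless explicitly disabled."""
    env = os.environ if env is None else env
    value = env.get("PDD_AUTO_UPDATE", "true").strip().lower()
    # "false"/"0"/"no" are assumed disable spellings; unset defaults to on.
    return value not in ("false", "0", "no")
```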

AI Model Information

PDD uses a large language model to generate and manipulate code. The --strength and --temperature options allow you to control the model's output:

  • Strength: Determines how powerful/expensive a model should be used. Higher values (closer to 1.0) result in high performance models with better capabilities (selected by ELO rating), while lower values (closer to 0.0) select more cost-effective models.
  • Temperature: Controls the randomness of the output. Higher values increase diversity but may lead to less coherent results, while lower values produce more focused and deterministic outputs.
  • Time: (Optional, controlled by --time FLOAT) For models supporting reasoning, this scales the allocated reasoning resources (e.g., tokens or effort level) between minimum (0.0) and maximum (1.0), with a default of 0.25.

When running in local mode, PDD uses LiteLLM to select and interact with language models based on a configuration file that includes:

  • Input and output costs per million tokens
  • ELO ratings for coding ability
  • Required API key environment variables
  • Structured output capability flags
  • Reasoning capabilities (budget-based or effort-based)

Output Cost Tracking

PDD includes a feature for tracking and reporting the cost of operations. When enabled, it generates a CSV file with usage details for each command execution.

Usage

To enable cost tracking, use the --output-cost option with any command:

pdd --output-cost PATH_TO_CSV_FILE [COMMAND] [OPTIONS] [ARGS]...

The PATH_TO_CSV_FILE should be the desired location and filename for the CSV output.

Cost Calculation and Presentation

PDD calculates costs based on the AI model usage for each operation. Costs are presented in USD (United States Dollars) and are calculated using the following factors:

  1. Model strength: Higher strength settings generally result in higher costs.
  2. Input size: Larger inputs (e.g., longer prompts or code files) typically incur higher costs.
  3. Operation complexity: Some operations (like fix and crash with multiple iterations) may be more costly than simpler operations.

The exact cost per operation is determined by the LiteLLM integration using the provider's current pricing model. PDD uses an internal pricing table that is regularly updated to reflect the most current rates.
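The per-call arithmetic implied by per-million-token pricing can be sketched as below; the rates and token counts in the usage note are made-up figures, not real provider prices.

```python
def call_cost_usd(input_tokens, output_tokens, input_per_m, output_per_m):
    """Cost of one LLM call from per-million-token rates (as in the model CSV)."""
    return (input_tokens / 1_000_000 * input_per_m
            + output_tokens / 1_000_000 * output_per_m)
```

For example, 10,000 input tokens at a hypothetical $30/M plus 2,000 output tokens at $60/M comes to about $0.42.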

CSV Output

The generated CSV file includes the following columns:

  • timestamp: The date and time of the command execution
  • model: The AI model used for the operation
  • command: The PDD command that was executed
  • cost: The estimated cost of the operation in USD (e.g., 0.05 for 5 cents). This will be zero for local models or operations that do not use an LLM.
  • input_files: A list of input files involved in the operation
  • output_files: A list of output files generated or modified by the operation

This comprehensive output allows for detailed tracking of not only the cost and type of operations but also the specific files involved in each PDD command execution.
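A minimal Python sketch of producing a CSV with these columns (illustrative only; PDD writes this file itself when --output-cost is used, and the row values below are made up):

```python
import csv
import io

COLUMNS = ["timestamp", "model", "command", "cost", "input_files", "output_files"]

def append_cost_row(buffer, row):
    """Append one usage row, writing the header first if the file is empty."""
    writer = csv.DictWriter(buffer, fieldnames=COLUMNS)
    if buffer.tell() == 0:
        writer.writeheader()
    writer.writerow(row)

buf = io.StringIO()
append_cost_row(buf, {
    "timestamp": "2026-04-21T12:00:00",
    "model": "openai/gpt-4",
    "command": "generate",
    "cost": 0.05,
    "input_files": "prompts/hello_python.prompt",
    "output_files": "hello.py",
})
```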
