Patent Pending | MIT License | Patent Details
Talk to ComfyUI like a colleague. It talks back.
You describe what you want in plain English. The agent loads workflows, swaps models, tweaks parameters, installs missing nodes, runs generations, analyzes outputs, and learns what works for you -- all without you touching JSON or hunting through menus. It doesn't ask permission -- it makes the change, reports what it did, and every change is undoable.
```mermaid
graph LR
You([You]) -->|"make it dreamier"| Agent[Comfy Cozy]
Agent -->|loads, patches, runs| ComfyUI[ComfyUI]
ComfyUI -->|image| Agent
Agent -->|"Done. Lowered CFG to 5,<br/>switched to DPM++ 2M Karras.<br/>Here's your render."| You
style You fill:#0066FF,color:#fff
style Agent fill:#8b5cf6,color:#fff
style ComfyUI fill:#ef4444,color:#fff
```
Session 1 is a capable tool.
Session 100 is a capable tool that knows your style.
| You say | What happens |
|---|---|
| "Load my portrait workflow and make it dreamier" | Loads the file, lowers CFG, switches sampler, saves with full undo |
| "I want to use Flux" | Searches CivitAI + HuggingFace, downloads the model, wires it into your workflow |
| "Repair this workflow" | Finds missing nodes, installs the packs, fixes connections, migrates deprecated nodes |
| "Run this with 30 steps" | Patches the workflow, validates it, queues it to ComfyUI, shows progress |
| "Analyze this output" | Uses Vision AI to diagnose issues and suggest parameter changes |
| "What model should I use for anime?" | Searches CivitAI + HuggingFace + your local models, recommends the best fit |
| "Optimize this for speed" | Profiles GPU usage, checks TensorRT eligibility, applies optimizations |
| "Repair and run this" | Finds missing nodes, installs them, validates, executes -- no confirmation needed |
You need three things. That's it.
| # | What | Where to get it |
|---|---|---|
| 1 | Python 3.11+ | python.org/downloads |
| 2 | ComfyUI running | github.com/comfyanonymous/ComfyUI |
| 3 | One LLM backend | API key (Anthropic / OpenAI / Google) OR Ollama (free, local) |
Got all three? Four steps:
```
git clone https://github.com/JosephOIbrahim/Comfy-Cozy.git
cd Comfy-Cozy
pip install -e .
```
Done. That's the only install command you need.
Optional installs (click to expand)
```
pip install -e ".[dev]"        # + test suite (3579 passing tests)
pip install -e ".[dev,stage]"  # + USD stage subsystem (~200MB, most users skip this)
```
Copy the example config:
```
cp .env.example .env
```
Open `.env`, paste your key:
```
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
Using a different LLM? (click to expand)
```
# OpenAI (requires: pip install openai)
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here

# Gemini (requires: pip install google-genai)
LLM_PROVIDER=gemini
GEMINI_API_KEY=your-key-here

# Ollama (no API key needed)
LLM_PROVIDER=ollama
AGENT_MODEL=llama3.1
```
Non-default ComfyUI location? Add this line too:
```
COMFYUI_DATABASE=C:/path/to/your/ComfyUI
```
```
agent run
```
Type what you want. Type `quit` when you're done.
The agent also lives inside ComfyUI as a native sidebar panel. To enable it, create two symlinks from ComfyUI's custom_nodes/ folder to Comfy-Cozy:
Windows (run as Administrator):
```
cd C:\path\to\ComfyUI\custom_nodes
mklink /D comfy-cozy-panel C:\path\to\Comfy-Cozy\panel
mklink /D comfy-cozy-ui C:\path\to\Comfy-Cozy\ui
```
Linux / macOS:
```
cd /path/to/ComfyUI/custom_nodes
ln -s /path/to/Comfy-Cozy/panel comfy-cozy-panel
ln -s /path/to/Comfy-Cozy/ui comfy-cozy-ui
```
Restart ComfyUI. The Comfy Cozy chat panel appears in the left sidebar.
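If you'd rather script the links, here is a minimal cross-platform sketch in Python (the paths are placeholders; on Windows, creating symlinks still requires Administrator rights or Developer Mode):

```python
import os
from pathlib import Path

def link_panels(cozy_root: str, comfy_root: str) -> list:
    """Create the two symlinks Comfy Cozy needs inside custom_nodes/."""
    custom_nodes = Path(comfy_root) / "custom_nodes"
    links = []
    for target, name in [("panel", "comfy-cozy-panel"), ("ui", "comfy-cozy-ui")]:
        link = custom_nodes / name
        if not link.exists():
            # target_is_directory matters on Windows; ignored on POSIX
            os.symlink(Path(cozy_root) / target, link, target_is_directory=True)
        links.append(link)
    return links

# Example (adjust both paths to your install):
# link_panels("/path/to/Comfy-Cozy", "/path/to/ComfyUI")
```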
```mermaid
graph LR
CN["ComfyUI/custom_nodes/"] --> P["comfy-cozy-panel/ (symlink)"]
CN --> U["comfy-cozy-ui/ (symlink)"]
P -->|"canvas sync (headless)"| Panel["panel/__init__.py"]
U -->|"sidebar + chat"| UI["ui/__init__.py"]
style CN fill:#ef4444,color:#fff
style P fill:#8b5cf6,color:#fff
style U fill:#0066FF,color:#fff
```
Both symlinks are required:
- `comfy-cozy-panel` -- Canvas sync bridge (runs headlessly -- keeps the agent in sync with your live graph)
- `comfy-cozy-ui` -- The visible sidebar: chat window, quick actions, status
Comfy Cozy is provider-agnostic. Same 113 tools, same streaming, same vision analysis -- swap one env var.
```
# .env
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Run
agent run
```
Ships as the default. No extra install. Supports prompt caching for lower costs on long sessions.
```
# Install the SDK (one time)
pip install openai

# .env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
AGENT_MODEL=gpt-4o   # or gpt-4o-mini for faster/cheaper

# Run
agent run
```
Full tool-use support with streaming. Works with any OpenAI-compatible endpoint.
```
# Install the SDK (one time)
pip install google-genai

# .env
LLM_PROVIDER=gemini
GEMINI_API_KEY=your-key-here
AGENT_MODEL=gemini-2.5-flash   # or gemini-2.5-pro

# Run
agent run
```
Function declarations mapped automatically. Supports Gemini's thinking mode.
```
# Install Ollama: https://ollama.com

# Pull a model
ollama pull llama3.1

# .env
LLM_PROVIDER=ollama
AGENT_MODEL=llama3.1   # or any model you've pulled

# Run (no API key needed)
agent run
```
Uses Ollama's OpenAI-compatible endpoint at `localhost:11434`. Override with `OLLAMA_BASE_URL` if running remotely. No data leaves your machine.
All four providers share the same abstraction layer (agent/llm/):
```mermaid
graph LR
Agent[Agent Loop<br/>113 tools] --> LLM{LLM_PROVIDER}
LLM -->|anthropic| A["Claude<br/>Streaming + Cache"]
LLM -->|openai| B["GPT-4o<br/>Tool Calls"]
LLM -->|gemini| C["Gemini<br/>Function Decl."]
LLM -->|ollama| D["Ollama<br/>Local + Private"]
style Agent fill:#8b5cf6,color:#fff
style A fill:#d97706,color:#fff
style B fill:#10b981,color:#fff
style C fill:#3b82f6,color:#fff
style D fill:#ef4444,color:#fff
```
Common types (TextBlock, ToolUseBlock, LLMResponse), unified error hierarchy, provider-specific format conversion handled internally. Switch providers with one env var -- no code changes.
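As an illustration of what that conversion entails, here is a sketch of normalizing two provider wire formats into common types. The dataclass fields are assumptions for illustration, not the actual `agent/llm/` source; the input payload shapes follow the public Anthropic Messages and OpenAI Chat Completions formats:

```python
import json
from dataclasses import dataclass

@dataclass
class TextBlock:
    text: str

@dataclass
class ToolUseBlock:
    name: str
    input: dict

@dataclass
class LLMResponse:
    blocks: list          # mixed TextBlock / ToolUseBlock
    stop_reason: str

def normalize_anthropic(raw: dict) -> LLMResponse:
    # Anthropic returns a list of typed content blocks
    blocks = []
    for b in raw["content"]:
        if b["type"] == "text":
            blocks.append(TextBlock(b["text"]))
        elif b["type"] == "tool_use":
            blocks.append(ToolUseBlock(b["name"], b["input"]))
    return LLMResponse(blocks, raw["stop_reason"])

def normalize_openai(raw: dict) -> LLMResponse:
    # OpenAI returns one message with optional tool_calls
    choice = raw["choices"][0]
    msg = choice["message"]
    blocks = []
    if msg.get("content"):
        blocks.append(TextBlock(msg["content"]))
    for tc in msg.get("tool_calls") or []:
        blocks.append(ToolUseBlock(tc["function"]["name"],
                                   json.loads(tc["function"]["arguments"])))
    return LLMResponse(blocks, choice["finish_reason"])
```

The agent loop only ever sees `LLMResponse`, which is why swapping `LLM_PROVIDER` requires no code changes.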
The agent runs as an MCP server -- Claude can use all 113 tools directly.
Add this to your Claude Code or Claude Desktop MCP config:
```json
{
  "mcpServers": {
    "comfyui-agent": {
      "command": "agent",
      "args": ["mcp"]
    }
  }
}
```
Now talk to Claude about your ComfyUI workflows. It has full access.
```
agent run                         # Start a conversation
agent run --session my-project    # Auto-saves so you can pick up later
agent run --verbose               # See what's happening under the hood
```
If you use the ComfyUI CLI launcher (`ComfyUI CLI.lnk`), Comfy Cozy is the default mode:
```
[ 1 ] STABLE         Balanced. Works with everything.
[ 2 ] DETERMINISTIC  Same prompt = same pixels.
[ 3 ] FAST           Sage attention + async offload.
[ 4 ] COMFY COZY *   Talk to your workflow. (auto-selects in 10s)
```
Select 4 (or wait 10 seconds) -- ComfyUI starts in a background window, then the agent launches ready to talk.
```
agent inspect           # See your installed models and nodes
agent parse workflow.json   # Analyze a workflow file
agent sessions          # List your saved sessions
```
The agent ships with built-in knowledge of how each model family actually behaves. It won't use SD 1.5 settings on a Flux workflow.
| Model | Resolution | CFG | Notes |
|---|---|---|---|
| SD 1.5 | 512x512 | 7-12 | Huge LoRA ecosystem. Negative prompts matter. |
| SDXL | 1024x1024 | 5-9 | Better anatomy. Tag-based prompts work best. |
| Flux | 512-1024 | ~1.0 (guidance) | No negative prompts. Needs FluxGuidance node + T5 encoder. |
| SD3 | 1024x1024 | 5-7 | Triple text encoder (CLIP-G, CLIP-L, T5). |
| LTX-2 (video) | 768x512 | ~25 | 121 steps. Frame count must be (N*8)+1. |
| WAN 2.x (video) | 832x480 | 1-3.5 | Dual-noise architecture. 4-20 steps. |
The agent will never mix model families -- no SD 1.5 LoRAs on SDXL checkpoints, no Flux ControlNets on SD3.
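Constraints like these are straightforward to encode. A hedged sketch whose rules mirror the table above -- the function and dictionary names are illustrative, not the agent's internal API:

```python
# Per-family defaults drawn from the table above (subset)
FAMILY_RULES = {
    "SD1.5": {"resolution": (512, 512),   "cfg": (7, 12)},
    "SDXL":  {"resolution": (1024, 1024), "cfg": (5, 9)},
    "SD3":   {"resolution": (1024, 1024), "cfg": (5, 7)},
}

def compatible(asset_family: str, checkpoint_family: str) -> bool:
    """Never mix families: no SD 1.5 LoRAs on SDXL, no Flux ControlNets on SD3."""
    return asset_family == checkpoint_family

def valid_ltx_frames(n_frames: int) -> bool:
    """LTX-2 frame count must be (N*8)+1 for some integer N >= 0."""
    return n_frames >= 1 and (n_frames - 1) % 8 == 0

valid_ltx_frames(121)  # True: 121 = 15*8 + 1
valid_ltx_frames(120)  # False: not of the form (N*8)+1
```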
| You say | What the agent adjusts |
|---|---|
| "dreamier" or "softer" | Lower CFG (5-7), more steps, DPM++ 2M Karras |
| "sharper" or "crisper" | Higher CFG (8-12), Euler or DPM++ SDE |
| "more photorealistic" | CFG 7-10, realistic checkpoint, negative: "cartoon, anime" |
| "more stylized" | Lower CFG (4-6), artistic checkpoint or LoRA |
| "faster" | Fewer steps (15-20), LCM/Lightning/Turbo, smaller resolution |
| "higher quality" | More steps (30-50), hires fix, upscaler |
| "more variation" | Higher denoise, different seed, lower CFG |
| "less variation" | Lower denoise, same seed, higher CFG |
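The adjective-to-parameter mapping above can be sketched as a small lookup plus a clamp. This is an illustrative reduction of the table, not the agent's internal representation; the sampler identifiers are ComfyUI's usual short names:

```python
# Style adjectives mapped to parameter deltas (mirrors the table above)
STYLE_DELTAS = {
    "dreamier": {"cfg": (5, 7),  "sampler": "dpmpp_2m_karras"},
    "sharper":  {"cfg": (8, 12), "sampler": "euler"},
}

def apply_style(params: dict, adjective: str) -> dict:
    """Return a new parameter dict with the adjective's deltas applied."""
    delta = STYLE_DELTAS[adjective]
    out = dict(params)
    if "cfg" in delta:
        lo, hi = delta["cfg"]
        out["cfg"] = min(max(out.get("cfg", 7), lo), hi)  # clamp into range
    if "sampler" in delta:
        out["sampler"] = delta["sampler"]
    return out

apply_style({"cfg": 9, "sampler": "euler"}, "dreamier")
# cfg clamps down into the 5-7 band; sampler switches to DPM++ 2M Karras
```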
```mermaid
graph TB
subgraph Browser ["ComfyUI Browser"]
Sidebar["Comfy Cozy Sidebar<br/>Native left panel -- Chat -- Quick Actions"]
end
subgraph Backend ["Agent Backend (Python)"]
Routes["49 REST Routes<br/>+ WebSocket"]
Tools["113 Tools<br/>workflow -- models -- vision -- session -- provision"]
Cog["Cognitive Engine<br/>LIVRPS delta stack -- CWM -- experience"]
end
subgraph ComfyUI ["ComfyUI"]
API["/prompt -- /history -- /ws"]
Canvas["Live Canvas"]
end
subgraph Disk ["Persistence"]
EXP[("experience.jsonl<br/>cross-session learning")]
Sessions[("sessions/<br/>workflow state")]
end
Sidebar <-->|"WebSocket + REST"| Routes
Sidebar <-->|"canvas sync"| Canvas
Routes --> Tools
Tools --> Cog
Tools -->|httpx| API
Cog --> EXP
Tools --> Sessions
style Browser fill:#1a1a2e,color:#F0F0F0,stroke:#0066FF
style Backend fill:#1a1a2e,color:#F0F0F0,stroke:#8b5cf6
style ComfyUI fill:#1a1a2e,color:#F0F0F0,stroke:#ef4444
style Disk fill:#1a1a2e,color:#F0F0F0,stroke:#10b981
```
```mermaid
graph LR
You([You]) --> Agent[113 Tools]
Agent --> Understand[UNDERSTAND<br/>What do you have?]
Understand --> Discover[DISCOVER<br/>What do you need?]
Discover --> Pilot[PILOT<br/>Make the changes]
Pilot --> Verify[VERIFY<br/>Did it work?]
Verify -->|learn| Agent
style You fill:#0066FF,color:#fff
style Understand fill:#3b82f6,color:#fff
style Discover fill:#d97706,color:#fff
style Pilot fill:#8b5cf6,color:#fff
style Verify fill:#10b981,color:#fff
```
Four phases, always in order:
- UNDERSTAND -- Reads your workflow, scans your models, checks what's installed
- DISCOVER -- Searches CivitAI, HuggingFace, ComfyUI Manager (31k+ nodes)
- PILOT -- Makes changes through safe, reversible delta layers (never edits your original)
- VERIFY -- Runs the workflow, checks the output, records what worked
When validation finds errors, the agent auto-repairs. One continuous flow, no stopping to ask:
```mermaid
flowchart TD
Run(["You: 'run this'"]) --> Validate["validate_before_execute"]
Validate --> Check{"Errors?"}
Check -->|No| Execute["execute_workflow"]
Check -->|"Missing nodes"| Repair["repair_workflow<br/>auto_install=true"]
Check -->|"Missing inputs"| SetInput["set_input<br/>fill required fields"]
Check -->|"Wrong model name"| Discover["discover<br/>find correct model"]
Repair --> Revalidate["re-validate"]
SetInput --> Revalidate
Discover --> SetInput
Revalidate --> Check2{"Still errors?"}
Check2 -->|No| Execute
Check2 -->|Yes| Report["Report unfixable<br/>issue + ask"]
Execute --> Done(["Done -- image ready"])
style Run fill:#0066FF,color:#fff
style Validate fill:#3b82f6,color:#fff
style Repair fill:#8b5cf6,color:#fff
style SetInput fill:#8b5cf6,color:#fff
style Discover fill:#d97706,color:#fff
style Execute fill:#10b981,color:#fff
style Done fill:#10b981,color:#fff
style Report fill:#ef4444,color:#fff
```
Every change is undoable. Every generation teaches the agent something. The agent is a doer, not a describer -- say "wire the model" and it wires the model. Say "repair this" and it finds the missing nodes, installs them, and validates. Say "run it" and it validates, fixes anything broken, then executes. No confirmation dialogs, no "would you like me to..." -- it acts, then tells you what it did.
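The validate-repair-execute loop reduces to a few lines of control flow. A sketch with stubbed tool calls -- the real tools are `validate_before_execute`, `repair_workflow`, `set_input`, and `execute_workflow`; the `tools` dict and error shapes here are illustrative:

```python
def run_with_autorepair(workflow: dict, tools: dict, max_passes: int = 3):
    """Validate, auto-repair what's fixable, then execute -- no confirmation."""
    errors = []
    for _ in range(max_passes):
        errors = tools["validate"](workflow)
        if not errors:
            return tools["execute"](workflow)
        for err in errors:
            if err["kind"] == "missing_node":
                tools["repair"](workflow, auto_install=True)
            elif err["kind"] == "missing_input":
                tools["set_input"](workflow, err["field"])
    # Still broken after the repair passes: report instead of guessing
    return {"status": "unfixable", "errors": errors}
```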
Write a creative intent. Hit go. No workflow file needed, no parameters to tune -- the agent composes a workflow, runs it on ComfyUI, scores the result, and learns from it automatically.
```mermaid
flowchart TD
You(["Creative Intent<br/>'cinematic portrait, golden hour'"]) --> INTENT["INTENT<br/>Parse + validate"]
INTENT --> COMPOSE["COMPOSE<br/>Load template<br/>Blend with experience"]
COMPOSE --> PREDICT["PREDICT<br/>CognitiveWorldModel<br/>estimates quality"]
PREDICT --> GATE{"GATE<br/>Arbiter:<br/>proceed?"}
GATE -->|Yes| EXECUTE["EXECUTE<br/>Post to ComfyUI<br/>Monitor WebSocket"]
GATE -->|Interrupt| STOP(["Interrupted<br/>+ reason"])
EXECUTE --> EVALUATE["EVALUATE<br/>Score the output"]
EVALUATE --> LEARN["LEARN<br/>Record to accumulator<br/>Calibrate CWM"]
LEARN --> DONE(["Complete<br/>Experience recorded"])
EVALUATE -->|"score < threshold"| COMPOSE
style You fill:#0066FF,color:#fff
style GATE fill:#d97706,color:#fff
style EXECUTE fill:#ef4444,color:#fff
style LEARN fill:#8b5cf6,color:#fff
style DONE fill:#10b981,color:#fff
style STOP fill:#6b7280,color:#fff
```
Use from Python:
```python
from cognitive.pipeline import create_default_pipeline, PipelineConfig

pipeline = create_default_pipeline()  # fresh accumulator, CWM, arbiter
result = pipeline.run(PipelineConfig(
    intent="cinematic portrait, golden hour",
    model_family="SD1.5",  # optional -- agent detects from intent
))

print(result.success, result.quality.overall, result.stage.value)
if result.warnings:
    print("warnings:", result.warnings)  # e.g. template family fallback
```
- No executor required. The pipeline calls ComfyUI directly via the real `execute_workflow` implementation.
- No evaluator required. Rule-based scoring (success = 0.7, failure = 0.1) enables CWM calibration from day one.
- Template library. Workflows are loaded from `agent/templates/` (SD 1.5 / SDXL / img2img / LoRA), with a hardcoded 7-node SD 1.5 fallback if no template matches.
- Experience persists across sessions, crash-safe. Every run is saved atomically (write-to-tmp then `os.replace()`). After 30+ runs, the agent starts using your personal history to bias parameter selection.
- Pipeline failures are graceful. CWM exceptions return `PipelineStage.FAILED` cleanly; template mismatches populate `result.warnings`.
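The crash-safe save is the classic write-to-temp-then-rename pattern. A minimal sketch, assuming a single JSONL file of run records (the file layout is an assumption; the atomicity pattern is the point):

```python
import json
import os
import tempfile

def append_experience(path: str, record: dict) -> None:
    """Append one JSONL record atomically: write the full file to a tmp, then rename."""
    lines = []
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            lines = f.readlines()
    lines.append(json.dumps(record) + "\n")
    # Write the new contents to a temp file in the same directory...
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.writelines(lines)
    # ...then swap it into place. os.replace() is atomic on POSIX and Windows,
    # so a crash mid-write can never leave a half-written experience file.
    os.replace(tmp, path)
```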
```mermaid
graph LR
subgraph Session1 ["Session 1"]
I1["Intent"] --> C1["Compose"] --> E1["Execute"] --> S1["Score"]
end
subgraph Session2 ["Session 2+"]
I2["Intent"] --> C2["Compose<br/>(+prior runs)"] --> E2["Execute"] --> S2["Score"]
end
S1 -->|"atomic save"| JSONL[("experience.jsonl<br/>crash-safe")]
JSONL -->|"load on startup"| C2
S2 -->|"atomic save -- cumulative"| JSONL
style JSONL fill:#8b5cf6,color:#fff
style C2 fill:#10b981,color:#fff
```
A typography-forward chat panel in ComfyUI's native left sidebar. No floating buttons, no separate windows. Uses ComfyUI's own CSS variables -- adapts to any theme automatically.
```mermaid
graph TB
subgraph ComfyUI_App ["ComfyUI"]
subgraph Sidebar ["Left Sidebar"]
Tab["Comfy Cozy Tab<br/>registerSidebarTab()"]
Chat["Chat Window<br/>WebSocket -- streaming -- rich text"]
QA["Quick Actions<br/>Run -- Validate -- Repair -- Optimize -- Undo"]
end
Canvas["Canvas"]
end
subgraph Bridge ["Bidirectional Canvas Bridge"]
C2A["Canvas --> Agent<br/>Auto-sync on change"]
A2C["Agent --> Canvas<br/>Push mutations + highlights"]
end
Tab --> Chat
Tab --> QA
Sidebar <--> Bridge
Bridge <--> Canvas
style ComfyUI_App fill:#1a1a2e,color:#F0F0F0,stroke:#ef4444
style Sidebar fill:#1a1a2e,color:#F0F0F0,stroke:#0066FF
style Bridge fill:#1a1a2e,color:#F0F0F0,stroke:#8b5cf6
```
What you get:
- Native sidebar tab -- `app.extensionManager.registerSidebarTab()`, sits alongside ComfyUI's built-in panels
- Design system v3 -- Inter + JetBrains Mono, ComfyUI CSS variables, Pentagram-inspired: hairline borders, generous whitespace, 2px radii, zero ornamentation
- Chat -- Auto-growing textarea, streaming responses, rich text (code blocks, bold, inline code), collapsible tool results
- Node pills -- Clickable inline node references, color-coded by slot type. Click = select + center on canvas.
- Quick actions -- Context-aware chips: Run, Validate, Repair, Optimize, Undo
- Canvas bridge -- Agent changes sync to canvas live with node highlighting; canvas re-syncs after each execution
- Self-healing -- Missing node warnings with one-click repair, deprecated node migration
49 panel routes expose the full tool surface: discovery, provisioning, repair, sessions, execution.
Every request passes through a three-layer security chain:
```mermaid
flowchart TD
REST([REST Request]) --> Guard["_guard(request, category)"]
WS([WebSocket /ws]) --> Guard
Guard --> Auth{check_auth}
Auth -->|"no token configured"| Rate{check_rate_limit}
Auth -->|"bearer matches"| Rate
Auth -->|"missing / wrong"| R401(["401 Unauthorized"])
Rate -->|"tokens available"| Size{check_size}
Rate -->|"bucket empty"| R429(["429 -- Retry-After: 1s"])
Size -->|"Content-Length OK"| Handler(["Route handler"])
Size -->|"> 10 MB"| R413(["413 Too Large"])
Size -->|"chunked -- no length"| R411(["411 Length Required"])
style R401 fill:#ef4444,color:#fff
style R429 fill:#d97706,color:#fff
style R413 fill:#ef4444,color:#fff
style R411 fill:#d97706,color:#fff
style Handler fill:#10b981,color:#fff
style Guard fill:#8b5cf6,color:#fff
```
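The chain is three sequential checks, each with an early-exit status. A sketch under stated assumptions: the 10 MB cap and status codes come from the diagram, while the token-bucket parameters and function shape are illustrative, not the actual `_guard` implementation:

```python
import time

class TokenBucket:
    """Simple token bucket: refills at `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def take(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

MAX_BODY = 10 * 1024 * 1024  # 10 MB

def guard(headers: dict, token, bucket: TokenBucket) -> int:
    """Return an HTTP status code; 0 means 'pass through to the route handler'."""
    # 1. Auth: only enforced when a token is configured
    if token and headers.get("Authorization") != f"Bearer {token}":
        return 401                      # missing / wrong bearer
    # 2. Rate limit
    if not bucket.take():
        return 429                      # bucket empty -- Retry-After: 1s
    # 3. Size: require an explicit Content-Length, reject oversized bodies
    length = headers.get("Content-Length")
    if length is None:
        return 411                      # chunked -- no length
    if int(length) > MAX_BODY:
        return 413                      # too large
    return 0
```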
The agent handles the entire pipeline from "I want Flux" to a wired workflow:
```mermaid
flowchart LR
Search["Search<br/>CivitAI + HF + Registry"] --> Download["Download<br/>to correct folder"]
Download --> Verify["Verify<br/>family + compat"]
Verify --> Wire["Auto-Wire<br/>find loader -- set input"]
Wire --> Ready["Ready to<br/>Queue"]
style Search fill:#3b82f6,color:#fff
style Download fill:#d97706,color:#fff
style Verify fill:#ef4444,color:#fff
style Wire fill:#8b5cf6,color:#fff
style Ready fill:#10b981,color:#fff
```
`provision_model` -- one tool call that discovers, downloads, verifies compatibility, finds the right loader node in your workflow, and wires the model in.
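The real tool is `provision_model`; the sketch below only shows the sequence it collapses, with every step stubbed out (the `tools` dict and its keys are illustrative, not the agent's API):

```python
def provision_model_sketch(query: str, workflow: dict, tools: dict) -> dict:
    """Search -> download -> verify -> wire, collapsed into one call (stubs)."""
    hit = tools["search"](query)                 # CivitAI + HF + registry
    path = tools["download"](hit)                # lands in the correct models folder
    if not tools["verify"](path, workflow):      # family + compatibility check
        return {"status": "incompatible", "model": hit["name"]}
    node = tools["find_loader"](workflow, hit["family"])
    tools["set_input"](workflow, node, path)     # wire the model in
    return {"status": "ready", "node": node, "path": path}
```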
Architecture Deep Dive (click to expand)
The agent is built on seven architectural subsystems. Each one degrades independently -- if one breaks, the rest keep working.
```mermaid
graph TB
subgraph Foundation ["Foundation Layer"]
DAG["Workflow Intelligence DAG<br/>6 pure computation nodes"]
OBS["Time-Sampled State<br/>Monotonic step index"]
CAP["Capability Registry<br/>113 tools indexed"]
end
subgraph Safety ["Safety Layer"]
GATE["Pre-Dispatch Gate<br/>5 checks, default-deny"]
BRIDGE["Mutation Bridge<br/>LIVRPS composition + audit"]
end
subgraph Integration ["Integration Layer"]
ADAPT["Inter-Module Adapters<br/>Pure-function translators"]
DEGRADE["Degradation Manager<br/>Per-subsystem fallbacks"]
end
Foundation --> Safety --> Integration
style Foundation fill:#1a1a2e,color:#F0F0F0,stroke:#3b82f6
style Safety fill:#1a1a2e,color:#F0F0F0,stroke:#ef4444
style Integration fill:#1a1a2e,color:#F0F0F0,stroke:#10b981
```
Before any workflow runs, a DAG of pure functions analyzes it:
```mermaid
graph LR
C[Complexity<br/>TRIVIAL to EXTREME] --> M[Model Requirements<br/>VRAM, family, LoRAs]
M --> O[Optimization<br/>TensorRT, batching]
O --> R[Risk<br/>SAFE to BLOCKED]
R --> RD[Readiness<br/>go / no-go]
style C fill:#3b82f6,color:#fff
style R fill:#ef4444,color:#fff
style RD fill:#10b981,color:#fff
```
Every tool call passes through a default-deny gate. Read-only tools bypass it (zero overhead). Destructive tools are always locked. The gate auto-detects loaded workflows and USD stages: if either kind of workspace state exists for the current connection, mutation tools are allowed without explicit session context. Stage tools (`stage_write`, `stage_add_delta`) are recognized separately from workflow tools -- a USD stage can exist independently of any loaded workflow.
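The default-deny logic can be sketched in a few lines. The tool names and category sets below are hypothetical placeholders; only the decision order (read-only bypass, destructive lock, stage vs. workflow state) comes from the description above:

```python
# Illustrative category sets -- not the agent's actual tool registry
READ_ONLY   = {"inspect", "parse", "list_sessions"}   # bypass the gate entirely
DESTRUCTIVE = {"delete_model", "wipe_session"}        # always locked
STAGE_TOOLS = {"stage_write", "stage_add_delta"}      # keyed to USD stage state

def gate(tool: str, has_workflow: bool, has_stage: bool) -> bool:
    """Default-deny: a tool runs only if a rule explicitly allows it."""
    if tool in READ_ONLY:
        return True
    if tool in DESTRUCTIVE:
        return False
    if tool in STAGE_TOOLS:
        return has_stage              # a stage can exist without a workflow
    # Generic mutation tools: any workspace state unlocks them
    return has_workflow or has_stage
```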
| Version | Changes | Urgency | Date |
|---|---|---|---|
| v4.0.0 | The Native Co-Pilot. The agent moved in: it lives inside ComfyUI now. v3.0.0 gave the agent a brain; v4.0.0 gave it a body -- a native sidebar panel inside ComfyUI's own UI, a fully wired autonomous pipeline, and a personality shift from "helpful describer" to "doer who gets things done." 127 commits, 72 hardening cycles. Tests: 2,350 → 3,579. Tools: 108 → 113. | High | 4/10/2026 |

