freshcrate

connectonion

The Best AI Agent Framework for Agent Collaboration.


README

πŸ§… ConnectOnion


🌟 Philosophy: "Keep simple things simple, make complicated things possible"

This is the core principle that drives every design decision in ConnectOnion.

🎯 Living Our Philosophy

Step 1: Simple - Create and Use

from connectonion import Agent

agent = Agent(name="assistant")
agent.input("Hello!")  # That's it!

Step 2: Add Your Tools

def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"

agent = Agent(name="assistant", tools=[search])
agent.input("Search for Python tutorials")

Step 3: Debug Your Agent

agent = Agent(name="assistant", tools=[search])
agent.auto_debug()  # Interactive debugging session

Step 4: Production Ready

agent = Agent(
    name="production",
    model="gpt-5",                    # Latest models
    tools=[search, analyze, execute], # Your functions as tools
    system_prompt=company_prompt,     # Custom behavior
    max_iterations=10,                # Safety controls
    trust="prompt"                    # Multi-agent ready
)
agent.input("Complex production task")

Step 5: Multi-Agent - Make it Remotely Callable

from connectonion import host
host(agent)  # HTTP server + P2P relay - other agents can now discover and call this agent

✨ Why ConnectOnion?

Most frameworks give you a way to call LLMs. ConnectOnion gives you everything around it β€” so you only write prompt and tools.

Built-in AI Programmer

co ai   # Opens a chat interface with an AI that deeply understands ConnectOnion

co ai is an AI coding assistant built with ConnectOnion. It writes working agent code because it knows the framework inside out. Fully open-source β€” inspect it, modify it, build your own.

Built-in Frontend & Backend β€” Just Write Prompt and Tools

Traditional path: write agent logic β†’ build FastAPI backend β†’ build React frontend β†’ wire APIs β†’ deploy.

ConnectOnion path: write prompt and tools β†’ deploy.

  • Backend: framework handles the API layer
  • Frontend: chat.openonion.ai β€” ready-to-use chat interface
  • All open-source, customizable, but you don't start from zero

Ready-to-Use Tool Ecosystem

Import and use β€” no schema writing, no interface wiring:

from connectonion import bash, Shell                                    # Command execution
from connectonion.useful_tools import FileTools                         # File system (with safety tracking)
from connectonion.useful_tools.browser_tools import BrowserAutomation   # Natural language browser automation

from connectonion import Gmail, Outlook              # Email
from connectonion import GoogleCalendar              # Calendar
from connectonion import Memory                      # Persistent memory
from connectonion import TodoList                    # Task tracking

Need to customize? Copy the source into your project:

co copy Gmail     # Copies Gmail tool source code to your project for modification

Built-in Approval System

Dangerous operations (bash commands, file deletion) automatically trigger approval β€” no permission logic needed from you.

from connectonion.useful_plugins import tool_approval, shell_approval

agent = Agent("assistant", tools=[bash], plugins=[shell_approval])
# Shell commands now require approval before execution

Plugin-based: turn it off, customize it, or replace it entirely.

Skills System β€” Auto-Discovery, Claude Code Compatible

Reusable workflows with automatic permission scoping:

from connectonion.useful_plugins import skills

agent = Agent("assistant", tools=[file_tools], plugins=[skills])

# User types /commit β†’ skill loads β†’ git commands auto-approved β†’ permission cleared after execution

Three-level auto-discovery (project β†’ user β†’ built-in):

.co/skills/skill-name/SKILL.md      # Project-level (highest priority)
~/.co/skills/skill-name/SKILL.md    # User-level
builtin/skill-name/SKILL.md         # Built-in

Automatically loads Claude Code skills from .claude/skills/ β€” no conversion needed.
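The three-level lookup above can be sketched in plain Python. This is an illustrative sketch only (the function name and the injectable `exists` predicate are assumptions, not ConnectOnion internals); it shows how project-level skills shadow user-level and built-in ones:

```python
from pathlib import Path

# Hypothetical sketch of three-level skill discovery: return the first
# SKILL.md found, checking project, then user, then built-in locations.
def resolve_skill(name, exists=Path.exists):
    candidates = [
        Path(".co/skills") / name / "SKILL.md",          # project level (highest priority)
        Path.home() / ".co/skills" / name / "SKILL.md",  # user level
        Path("builtin") / name / "SKILL.md",             # built-in (lowest priority)
    ]
    for path in candidates:
        if exists(path):
            return path
    return None
```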

12 Lifecycle Hooks + Plugin System

Inject logic at any point in the agent execution cycle:

from connectonion import Agent, after_tools, llm_do
from connectonion.useful_plugins import re_act, eval, auto_compact, subagents, ulw

# Built-in plugins β€” same capabilities as Claude Code, open to any agent
agent = Agent("researcher", tools=[search], plugins=[
    re_act,         # Reflect + plan after each tool call
    auto_compact,   # Auto-compress context at 90% capacity
    subagents,      # Spawn sub-agents with independent tools and prompts
    ulw,            # Ultra Light Work β€” fully autonomous mode
])

These plugins mirror Claude Code's internals: auto_compact, subagents, and ulw correspond directly to its context compression, sub-agent spawning, and autonomous work mode. ConnectOnion makes the same capabilities available to any agent you build.

Hooks: after_user_input, before_iteration, before_llm, after_llm, before_tools, before_each_tool, after_each_tool, after_tools, on_error, after_iteration, on_stop_signal, on_complete

Plugins are just lists of event handlers β€” visible, modifiable, co copy-able.
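The "plugin is just a list of event handlers" idea can be shown with a toy dispatcher. All names here are illustrative assumptions, not ConnectOnion's real hook wrappers; the point is only that a plugin is plain data you can inspect and modify:

```python
# Hypothetical sketch: handlers are plain functions, a plugin is a plain
# list pairing each handler with the lifecycle event it listens to.
def log_llm_call(agent):
    agent["log"].append("before_llm")

def log_tool_result(agent):
    agent["log"].append("after_each_tool")

timing_plugin = [("before_llm", log_llm_call), ("after_each_tool", log_tool_result)]

def fire(plugins, event, agent):
    """Run every handler registered for `event`, in plugin order."""
    for plugin in plugins:
        for hook, handler in plugin:
            if hook == event:
                handler(agent)
```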

Multi-Agent Trust System (Fast Rules)

When agents call each other, trust decisions happen before LLM involvement β€” zero token cost for 90% of cases:

agent = Agent(
    name="production",
    trust="careful"    # whitelist β†’ allow, unknown β†’ ask LLM, blocked β†’ deny
)

Three presets: open (dev), careful (staging), strict (production).
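The fast-rules idea is simple enough to sketch: pure set lookups run before any LLM call, so only unknown callers cost tokens. This is a minimal sketch with assumed names, not the framework's actual trust engine:

```python
# Hypothetical sketch of fast trust rules: whitelist and blocklist are
# checked first, so most decisions never reach the LLM.
def fast_trust_decision(caller, whitelist, blocklist):
    if caller in blocklist:
        return "deny"     # blocked -> deny immediately, zero tokens
    if caller in whitelist:
        return "allow"    # known-good -> allow immediately, zero tokens
    return "ask_llm"      # unknown -> escalate to the LLM
```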


πŸ’¬ Join the Community

Discord: Get help, share agents, and discuss with 1000+ builders in our active community.


Installation

pip install connectonion

Quickest Start - Use the CLI

# Create a new agent project with one command
co create my-agent

# Navigate and run
cd my-agent
python agent.py

The CLI guides you through API key setup automatically. No manual .env editing needed!

Manual Usage

import os  
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: eval is unsafe with untrusted input; use a math parser in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials") 
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())

πŸ” Interactive Debugging with @xray

Debug your agents like you debug code - pause at breakpoints, inspect variables, and test edge cases:

from connectonion import Agent
from connectonion.decorators import xray

# Mark tools you want to debug with @xray
@xray
def search_database(query: str) -> str:
    """Search for information."""
    return f"Found 3 results for '{query}'"

@xray
def send_email(to: str, subject: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

# Create agent with @xray tools
agent = Agent(
    name="debug_demo",
    tools=[search_database, send_email]
)

# Launch interactive debugging session
agent.auto_debug()

# Or debug a specific task
agent.auto_debug("Search for Python tutorials and email the results")

What happens at each @xray breakpoint:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
@xray BREAKPOINT: search_database

Local Variables:
  query = "Python tutorials"
  result = "Found 3 results for 'Python tutorials'"

What do you want to do?
  β†’ Continue execution πŸš€       [c or Enter]
    Edit values πŸ”             [e]
    Quit debugging 🚫          [q]

πŸ’‘ Use arrow keys to navigate or type shortcuts
>

Key features:

  • Pause at breakpoints: Tools decorated with @xray pause execution
  • Inspect state: See all local variables and execution context
  • Edit variables: Modify results to test "what if" scenarios
  • Full Python REPL: Run any code to explore agent behavior
  • See next action: Preview what the LLM plans to do next

Perfect for:

  • Understanding why agents make certain decisions
  • Testing edge cases without modifying code
  • Exploring agent behavior interactively
  • Debugging complex multi-tool workflows

Learn more in the auto_debug guide

πŸ”Œ Plugin System

Package reusable capabilities as plugins and use them across multiple agents:

from connectonion import Agent, after_tools, llm_do

# Define a reflection plugin
def add_reflection(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution' and trace['status'] == 'success':
        result = trace['result']
        reflection = llm_do(
            f"Result: {result[:200]}\n\nWhat did we learn?",
            system_prompt="Be concise.",
            temperature=0.3
        )
        agent.current_session['messages'].append({
            'role': 'assistant',
            'content': f"πŸ€” {reflection}"
        })

# Plugin is just a list of event handlers
reflection = [after_tools(add_reflection)]  # after_tools fires once after all tools

# Use across multiple agents
researcher = Agent("researcher", tools=[search], plugins=[reflection])
analyst = Agent("analyst", tools=[analyze], plugins=[reflection])

What plugins provide:

  • Reusable capabilities: Package event handlers into bundles
  • Simple pattern: A plugin is just a list of event handlers
  • Easy composition: Combine multiple plugins together
  • Built-in plugins: re_act, eval, system_reminder, image_result_formatter, and more

Built-in plugins are ready to use:

from connectonion.useful_plugins import re_act, system_reminder

agent = Agent("assistant", tools=[search], plugins=[re_act, system_reminder])

Learn more about plugins | Built-in plugins

πŸ”§ Core Concepts

Agent

The main class that orchestrates LLM calls and tool usage. Each agent:

  • Has a unique name for tracking purposes
  • Can be given a custom personality via system_prompt
  • Automatically converts functions to tools
  • Records all behavior to JSON files

Function-Based Tools

NEW: Just write regular Python functions! ConnectOnion automatically converts them to tools:

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])

Key features:

  • Automatic Schema Generation: Type hints become OpenAI function schemas
  • Docstring Integration: First line becomes tool description
  • Parameter Handling: Supports required and optional parameters
  • Type Conversion: Handles different return types automatically
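The schema-generation step can be sketched to show what "type hints become OpenAI function schemas" means in practice. This is a simplified stand-in, not ConnectOnion's actual converter, and the `PY_TO_JSON` mapping and `to_schema` name are assumptions for illustration:

```python
import inspect

# Hypothetical sketch: map Python annotations to JSON Schema types and
# use the docstring's first line as the tool description.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_schema(fn):
    doc = (fn.__doc__ or "").strip()
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required parameter
    return {
        "name": fn.__name__,
        "description": doc.splitlines()[0] if doc else "",
        "parameters": {"type": "object", "properties": props, "required": required},
    }
```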

System Prompts

Define your agent's personality and behavior with flexible input options:

# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)

# 4. None for default prompt
agent = Agent("basic_agent")  # Uses default: "You are a helpful assistant..."

Example prompt file (prompts/customer_support.md):

# Customer Support Agent

You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting

## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions

Logging

Automatic logging of all agent activities including:

  • User inputs and agent responses
  • LLM calls with timing
  • Tool executions with parameters and results
  • Default storage in .co/logs/{name}.log (human-readable format)

🎯 Example Tools

You can still use the traditional Tool class approach, but the new functional approach is much simpler:

Traditional Tool Classes (Still Supported)

from connectonion.tools import Calculator, CurrentTime, ReadFile

agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])

New Function-Based Approach (Recommended)

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: eval is unsafe with untrusted input; use a math parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])

The function-based approach is simpler, more Pythonic, and easier to test!

🎨 CLI Templates

ConnectOnion CLI provides templates to get you started quickly:

# Create a minimal agent (default)
co create my-agent

# Create with specific template
co create my-playwright-bot --template playwright

# Initialize in existing directory
co init  # Adds .co folder only
co init --template playwright  # Adds full template

Available Templates:

  • minimal (default) - Simple agent starter
  • playwright - Web automation with browser tools
  • meta-agent - Development assistant with docs search
  • web-research - Web research and data extraction

Each template includes:

  • Pre-configured agent ready to run
  • Automatic API key setup
  • Embedded ConnectOnion documentation
  • Git-ready .gitignore

Learn more in the CLI Documentation and Templates Guide.

πŸ”¨ Creating Custom Tools

The simplest way is to use functions (recommended):

def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22Β°C"

# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])

Or use the Tool class for more control:

from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )
    
    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22Β°C"
    
    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])

πŸ“ Project Structure

connectonion/
β”œβ”€β”€ connectonion/
β”‚   β”œβ”€β”€ __init__.py         # Main exports
β”‚   β”œβ”€β”€ agent.py            # Agent class
β”‚   β”œβ”€β”€ tools.py            # Tool interface and built-ins
β”‚   β”œβ”€β”€ llm.py              # LLM interface and OpenAI implementation
β”‚   β”œβ”€β”€ console.py          # Terminal output and logging
β”‚   └── cli/                # CLI module
β”‚       β”œβ”€β”€ main.py         # CLI commands
β”‚       β”œβ”€β”€ docs.md         # Embedded documentation
β”‚       └── templates/      # Agent templates
β”‚           β”œβ”€β”€ basic_agent.py
β”‚           β”œβ”€β”€ chat_agent.py
β”‚           β”œβ”€β”€ data_agent.py
β”‚           └── *.md        # Prompt templates
β”œβ”€β”€ docs/                   # Documentation
β”‚   β”œβ”€β”€ quickstart.md
β”‚   β”œβ”€β”€ concepts/           # Core concepts
β”‚   β”œβ”€β”€ cli/                # CLI commands
β”‚   β”œβ”€β”€ templates/          # Project templates
β”‚   └── ...
β”œβ”€β”€ examples/
β”‚   └── basic_example.py
β”œβ”€β”€ tests/
β”‚   └── test_agent.py
└── pyproject.toml

πŸ§ͺ Running Tests

python -m pytest tests/

Or run individual test files:

python -m unittest tests.test_agent

πŸ“Š Automatic Logging

All agent activities are automatically logged to:

.co/logs/{agent_name}.log  # Default location

Each log entry includes:

  • Timestamp
  • User input
  • LLM calls with timing
  • Tool executions with parameters and results
  • Final responses

Control logging behavior:

# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")

# Log to current directory
agent = Agent("assistant", log=True)  # β†’ assistant.log

# Disable logging
agent = Agent("assistant", log=False)

# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")

πŸ”‘ Configuration

OpenAI API Key

Set your API key via environment variable:

export OPENAI_API_KEY="your-api-key-here"

Or pass directly to agent:

agent = Agent(name="test", api_key="your-api-key-here")

Model Selection

agent = Agent(name="test", model="gpt-5")  # Default: gpt-5-mini

Iteration Control

Control how many tool calling iterations an agent can perform:

# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher", 
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc", 
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze every module and produce a full report",
    max_iterations=30  # Override the agent's default for this one request
)
    Release History
VersionChangesUrgencyDate
v0.9.1## Bug Fixes - **Fix `host(agent)` crash on deployed agents** β€” `host()` now accepts an `Agent` instance directly (wraps it in a factory with a warning). Previously, deployed agents using `host(agent, trust="strict")` crashed immediately with `TypeError: 'Agent' object is not callable`. - **Remove dead `import jwt` from `co status`** β€” unused import leftover from a refactor, now cleaned up. ## Migration If you see this warning after upgrading: ``` Warning: host(agent) β€” pass a factory functioHigh4/16/2026
v0.8.9## What's Changed ### New Features - **cc_prompt template**: Added 250 system prompt reference files to `co copy` - run `co copy cc_prompt` to get the full collection organized by category (system, agents, tools, skills, reminders, data) - **Browser tools**: Added hover, mouse_click, right_click, double_click actions - **Browser tools**: Added ambiguous element detection for smarter element finding - **Browser tools**: Added get_system_info for platform-aware keyboard shortcuts ### ImprovementMedium4/1/2026
v0.8.8## What's Changed - **Shadow DOM support** for browser element extraction β€” fixes LinkedIn modal interactions - **Contenteditable detection** β€” `contenteditable="true"` elements now detected as interactive - **Post-click DOM wait** β€” 1s wait after clicks for modals to render - **Coordinate-based clicking for shadow DOM** elements ### Root Cause LinkedIn renders its post modal inside a Shadow DOM. `document.querySelectorAll('*')` cannot pierce shadow boundaries. The fix traverses all open shadMedium3/24/2026
v0.8.7## What's Changed - **Change `co ai` default model** to `co/gemini-3-flash-preview` (from `co/claude-opus-4-5`) for better price-performance - **Add `gemini-3.1-flash-lite-preview` model support** in oo-api with pricing and documentation ## Installation ```bash pip install --upgrade connectonion ``` **Full Changelog**: https://github.com/openonion/connectonion/compare/v0.8.6...v0.8.7Medium3/24/2026
v0.8.6## What's Changed - **System prompt reduced 53%** β€” workflow, ConnectOnion index, and examples now load on-demand via intent detection (`is_build: true`) instead of being baked into the base prompt (25k β†’ 12k chars) - **Browser tools added to co ai** β€” `BrowserAutomation` + `image_result_formatter` plugin (44 tools total) - **Lazy browser init** β€” browser no longer auto-launches on construction, fixing async compatibility with uvicorn server - **Updated `co copy` command** β€” improved copy commaMedium3/23/2026
v0.8.5## Highlights WebSocket protocol refactored with CONNECT/INPUT split for cleaner auth flow. Skills plugin now discovers Claude Code skills alongside ConnectOnion skills. Bash permission matching validates full subcommands with fnmatch patterns. ## What's Changed ### Protocol - WebSocket protocol: split INIT into CONNECT (auth) + INPUT (prompt) for cleaner separation - Connect client updated for CONNECT β†’ CONNECTED β†’ INPUT handshake - Session registry lifecycle: `executing` β†’ `connected` β†’ `suLow3/22/2026
v0.8.4## What's Changed - Add session checkpoint for approval wait recovery - Add command field to tool_blocked event for UI display - Fix test pollution and update tests for current API - Simplify session cleanup to 10min idle for all non-running sessions - Rename useAgent to useAgentForHuman in docs - Update README with non-obvious advantages ## Installation ```bash pip install connectonion==0.8.4 ``` **Full Changelog**: https://github.com/openonion/connectonion/compare/v0.8.3...v0.8.4Low3/20/2026
v0.8.3# Release v0.8.3 ## Highlights This patch release adds **multimodal input support** (images and files) and completes the **unified permissions system** refactoring. The tool approval plugin now has comprehensive documentation and modular architecture, making it easier to configure and extend. ## What's Changed ### ✨ Features - **Multimodal Input Support** (PR-116): `Agent.input()` now accepts `images` and `files` parameters - File upload API with size limits and validation - Images and filesLow3/13/2026
v0.8.2# Release v0.8.2 ## Highlights This release adds a new `on_stop_signal` event hook that fires when operations are interrupted by plugins (like tool approval). It enables proper cleanup, state saving, and resource management when users reject or interrupt agent operations. ## What's Changed ### ✨ Features - **New `on_stop_signal` event** (#111) - Fires when `stop_signal` is set in the session, enabling cleanup of interrupted operations - Supports rollback of partial changes - Save checkpoLow3/12/2026
v0.8.0# ConnectOnion v0.8.0 ## ✨ Features - Add file upload configuration for agent tools - Improved .env file loading with better fallback logic ## πŸ”§ Improvements - Enhanced environment variable handling - Better configuration defaults ## πŸ“¦ Installation ```bash pip install --upgrade connectonion ``` ## πŸ”— Links - [PyPI Package](https://pypi.org/project/connectonion/0.8.0/) - [Documentation](https://docs.connectonion.com) **Full Changelog**: https://github.com/openonion/connectonion/compare/v0Low3/11/2026
v0.7.6## What's Changed ### ✨ Features - **File upload configuration**: Control upload limits via - : 10MB per file (configurable) - : 10 files max (configurable) - All limits shown in MB for easy configuration - **Smart .env loading**: Fallback chain with visibility - Priority: local β†’ global - Shows which file was loaded: `[env] /path/to/file` - **Host banner improvements**: Shows absolute paths for config and logs ### ♻️ Refactoring - Refactor file tools into `FileTools` class with sLow3/8/2026
v0.7.5## What's Changed ### πŸ› Bug Fixes & Improvements - Fix co ai plan mode: rename `plan_mode.md` β†’ `enter_plan_mode.md` so assembler actually loads it (tool name must match filename) - Fix 3 multiline `${has_tool()}` blocks in `main.md` that were output as raw template syntax to LLM - Fix `co create` crashing with 500 error when already authenticated β€” skip re-auth if `OPENONION_API_KEY` already in `~/.co/keys.env` ### ✨ Improvements - Plan mode system reminder now injects agent spec format + `cLow3/2/2026
v0.7.4# Release v0.7.4 ## Highlights Bug fix release improving `co ai` command and browser agent screenshot API consistency. ## What's Changed ### πŸ› Bug Fixes - **Fix co ai chat URL mismatch** - Browser now opens with the correct agent address that matches the running agent. Previously, browser used `~/.co/` address while agent ran in `cwd/.co/`, causing offline status display. - **Rename take_screenshot parameter** - Changed `filename` to `path` for clarity and consistency across the codebase. Low2/27/2026
v0.7.3# Release v0.7.3 ## Highlights Critical bug fix for browser automation. The `image_result_formatter` plugin was causing 500 Internal Server Error when processing screenshots. This release fixes the message sequence to comply with LLM API requirements. ## What's Changed ### πŸ› Bug Fixes - **Fixed 500 Internal Server Error in image_result_formatter plugin** - Changed image message role from 'assistant' to 'user' - LLM APIs (OpenAI, Anthropic, Gemini) require alternating user/assistant messLow2/27/2026
v0.7.2# Release v0.7.2 ## Highlights This patch release fixes a critical relay reconnection bug and adds OpenRouter LLM support. ## What's Changed ### πŸ› Bug Fixes - Fix relay reconnection timestamp bug (552902f) ### ✨ Features - Add OpenRouter support to LLM providers (#84) ## Installation ```bash pip install connectonion==0.7.2 ``` ## Breaking Changes None **Full Changelog**: https://github.com/openonion/connectonion/compare/v0.7.1...v0.7.2 Low2/27/2026
v0.7.1# Release v0.7.1 ## Highlights This patch release adds `before_iteration`/`after_iteration` lifecycle events for fine-grained agent loop control, a new `receive_all()` method for mode polling, and various documentation and build improvements. ## What's Changed ### ✨ Features - Add `before_iteration`/`after_iteration` events and `receive_all()` for mode polling ### πŸ› Bug Fixes - Remove force-include for docs in wheel build ### πŸ“š Documentation - Add LLM-Note headers to files missing documeLow2/18/2026
v0.7.0# Release v0.7.0 ## Highlights ConnectOnion v0.7.0 improves developer experience with **comprehensive code documentation** and fixes **WebSocket image support** for oo-chat integration. ## What's Changed ### πŸ› Bug Fixes - **Image support over WebSocket**: Fixed base64 image transmission and display in oo-chat (#82) - Enhanced `image_result_formatter` plugin with WebSocket support - Added `io.send_image()` method for sending base64-encoded images - Improved image display with better foLow2/15/2026
v0.6.9## What's New ### ✨ New LLM Providers - **Groq** (`groq/` prefix): Fast inference on Groq hardware - Example: `Agent("bot", model="groq/llama-3.3-70b-versatile")` - Requires `GROQ_API_KEY` environment variable - **Grok** (`grok/` prefix): xAI's Grok models - Example: `Agent("bot", model="grok/grok-4")` - Requires `XAI_API_KEY` environment variable - **OpenRouter** (`openrouter/` prefix): Multi-provider gateway - Example: `Agent("bot", model="openrouter/openai/gpt-4o-mini")` - RequLow2/14/2026
v0.6.8## Highlights Image support for multimodal input and CLI improvements. ## What's Changed ### ✨ Features - Add image support for multimodal user input (`agent.input()` now accepts `images` parameter) - Auto-open chat UI when running `co ai` - Add one-shot mode to `co ai` command - Add `co_dir` parameter for configurable .co folder location ### πŸ› Bug Fixes - Host TUI improvements and TrustAgent serialization fixes - Improve chat link visibility in host banner ### πŸ§ͺ Tests - Add tests for imaLow2/14/2026
v0.6.7## What's Changed ### Host TUI Redesign - Redesigned host banner with `[host]` prefix for clear layer separation - Agent info shown by Agent banner, host info shown by host banner - Simplified relay logs with `[host]` prefix ### Relay Improvements - Changed heartbeat interval to 5 minutes (was 1 minute) - Added β™₯ icon for heartbeat indicator - Cleaner relay connection messages ### Other Changes - Prefer write tool plugin for file operations - Various bug fixes and improvements ## InstallatioLow2/10/2026
v0.6.6## Highlights Added `reject_explain` approval mode for beginner-friendly tool explanations. When users click "Explain" on a tool approval, the agent explains the action like teaching a 15-year-old. ## What's Changed ### ✨ Features - **reject_explain mode**: Asks the agent to explain: 1. CONTEXT: What you're trying to accomplish overall 2. CONCEPT: What this type of action is (e.g., "bash is like giving text instructions to your computer") 3. THIS STEP: What specifically this will do, usLow2/7/2026
v0.6.5## Highlights - **New TrustAgent class** for managing client trust levels and onboarding - **Fast rules engine** for YAML-based policy parsing without LLM calls - **Payment verification** via oo-api for onboarding - **CLI trust commands** (`co trust show`, `co trust list`) - **WebSocket onboarding flow** with payment address support ## What's Changed ### Trust System - New `TrustAgent` class with promote/demote/block methods - Fast rules engine parses YAML policies without LLM calls - PaymentLow2/5/2026
v0.6.4## What's Changed ### Network - Restructured network module into `asgi/`, `host/`, `io/`, `trust/` submodules - Added new IO patterns for agent communication - Improved trust verification system ### New Tools - `ask_user`, `bash`, `edit`, `glob_files`, `grep_files`, `multi_edit`, `read_file`, `write_file` - `tool_approval` plugin for interactive tool approval ### CLI - Added `co_ai` module for AI-powered CLI interactions - Improved browser agent ### Refactoring - Improved code organization aLow1/31/2026
v0.6.3## What's Changed ### ✨ Features - **system_reminder plugin** - Injects contextual guidance into tool results to nudge agent behavior (like Claude Code's system reminders) - **TUI Input improvements** - Add Shift+Tab support and `on_special_key` callback - **useful_prompts module** - Add Coding Agent Prompt template (`co copy coding_agent`) ### πŸ”§ Improvements - Use hidden directory for browser agent profile (cleaner user directories) ## Installation ```bash pip install connectonion==0.6.3 `Low1/30/2026
v0.6.2## What's Changed ### πŸ› Bug Fixes - Fix Chrome profile copy to handle locked files gracefully (when Chrome is running) ### ✨ Improvements - Update CLI browser_agent with new element finder approach (LLM selects from indexed list instead of generating CSS) - Sync browser-agent with consolidated scroll module (~100 lines AI + fallback strategy) ## Installation ```bash pip install --upgrade connectonion ``` **Full Changelog**: https://github.com/openonion/connectonion/compare/v0.6.1...v0.6.2Low12/26/2025
v0.6.1# Release v0.6.1 ## Highlights This release fixes structured output support for Anthropic Claude models via managed keys and adds comprehensive documentation for model compatibility. ## What's Changed ### Bug Fixes - **Fixed structured output for Anthropic models via managed keys** - `llm_do(..., output=Model)` now works correctly with Claude 4.5/4.1 models by using Anthropic's native structured outputs API with the `output_format` parameter and `structured-outputs-2025-11-13` beta header #Low12/25/2025
v0.6.0## What's New in v0.6.0 ### Code Reorganization - Reorganized core modules into `connectonion/core/` folder (agent, llm, events, tool_executor, tool_factory, tool_registry, usage) - All imports remain the same - no breaking changes ### Chat TUI Component - New Textual-based Chat component for building terminal chat interfaces - Thinking indicator with elapsed timer: `β Ή Thinking... 5s (usually 3-10s)` - Tool execution display with tree-style connector: ``` β Ή Search emails in inbox └─ seLow12/21/2025
v0.5.10# Release v0.5.10 ## Highlights Worker isolation for hosted agents - each request now gets a fresh deep copy of the agent, ensuring complete isolation between concurrent requests. This enables stateful tools (like browser automation) to work correctly without interference. ## What's Changed ### Worker Isolation - **Deep copy per request**: Each HTTP/WebSocket request gets its own isolated agent instance - **Stateful tools support**: Browser tools, file handles, and other stateful resources Low12/19/2025
# v0.5.9 (12/18/2025)

## What's New

### New CLI Command: `co copy`
Copy built-in tools and plugins to your project for customization.

```bash
# List available items
co copy --list

# Copy a tool
co copy Gmail

# Copy a plugin
co copy re_act

# Copy multiple items
co copy Gmail Shell WebFetch
```

Copied files are placed in `./tools/` or `./plugins/` directories automatically.

### Documentation
- Added "Customizing" sections to all tool and plugin documentation
- Updated docs-site with copy command examples
# v0.5.2 (12/14/2025)

## What's New

### Major Features
- **`host(agent)` function** - New way to serve agents with full ASGI implementation
- **Interactive API docs** at `/docs` endpoint (Swagger-like UI)
- **Rich endpoint support**: `/input`, `/sessions`, `/health`, `/info`, `/ws`
- **Raw ASGI** for maximum performance (no Flask/Starlette overhead)
- **Trust system** with open/careful/strict levels

### Breaking Changes
- Removed `serve()` function (replaced by `host()`)
- Removed `agent.serve()` method (use `host(agent)`)
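The open/careful/strict trust levels mentioned above can be pictured as a simple gate on incoming calls. The policy details below are assumed for illustration; the release notes only name the three levels, not their exact semantics.

```python
def allow_request(trust: str, verified: bool, allowlisted: bool) -> bool:
    """Gate an incoming agent call by trust level (policy details assumed, not ConnectOnion's actual logic)."""
    if trust == "open":        # accept any caller
        return True
    if trust == "careful":     # require a verified caller identity
        return verified
    if trust == "strict":      # require verification plus an explicit allowlist entry
        return verified and allowlisted
    raise ValueError(f"unknown trust level: {trust!r}")
```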
# v0.5.1 (12/7/2025)

Mark as Production/Stable on PyPI.
# v0.5.0 (12/7/2025)

## Highlights
ConnectOnion 0.5.0 brings **Microsoft Outlook/Calendar integration**, a new **session logging system**, and **modern Python packaging**. The CLI has been migrated from argparse to Typer for a better developer experience.

## What's Changed

### New Features
- Add Microsoft Outlook and Calendar integration for email automation
- Add session logging system with Logger facade for debugging and evaluation
- Add extra_content support for Gemini 3 thought_signature
# v0.4.9 (12/1/2025)

## Highlights
This release introduces `ToolRegistry` - a cleaner API for accessing agent tools with attribute access, plus 93 new unit tests for improved reliability.

## What's Changed

### Refactoring
- Refactored `tool_map` dict to `ToolRegistry` class with attribute access
- New API: `agent.tools.tool_name.run()` instead of `agent.tool_map["name"]()`
- Class instances accessible via `agent.tools.gmail`, `agent.tools.calendar`
- O(1) tool lookup with conflict detection for duplicate names
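Attribute-style lookup with duplicate-name detection can be sketched in a few lines. `MiniToolRegistry` is a toy model, not ConnectOnion's actual `ToolRegistry` implementation.

```python
class MiniToolRegistry:
    """Toy registry: attribute access over a dict, with duplicate-name detection
    (illustrative only, not ConnectOnion's actual ToolRegistry)."""
    def __init__(self, tools):
        self._tools = {}
        for tool in tools:
            name = tool.__name__
            if name in self._tools:          # conflict detection for duplicate names
                raise ValueError(f"duplicate tool name: {name}")
            self._tools[name] = tool

    def __getattr__(self, name):
        try:
            return self._tools[name]         # O(1) dict lookup behind attribute access
        except KeyError:
            raise AttributeError(name)

def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"

tools = MiniToolRegistry([search])
```

With this shape, `tools.search("Python")` replaces the old `tool_map["search"]("Python")` style, and registering two tools with the same name fails loudly at construction time.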
# v0.4.8 (11/29/2025)

## πŸ“¦ Packaging Modernization
This release modernizes ConnectOnion's packaging infrastructure:
- **pyproject.toml + hatchling** - Modern Python packaging standard
- **uv.lock** - Reproducible dependency installs (97 packages locked)
- **Python 3.10+** - Dropped Python 3.9 (EOL October 2025)

## ✨ Benefits
- **Faster installs** - Use `uv pip install connectonion` for 10-100x faster installs
- **Reproducible builds** - Lock file ensures consistent dependency versions
- **Modern tooling**
v0.4.7## πŸ› Bug Fix - **Added missing `beautifulsoup4` dependency** - The `WebFetch` tool requires BeautifulSoup4 for HTML parsing, but it wasn't declared in the package dependencies. ## πŸ“¦ Installation ```bash pip install --upgrade connectonion ```Low11/27/2025
# v0.4.6 (11/27/2025)

## ✨ New Features
- **Token Usage Tracking** - New `usage.py` module with cost calculation per LLM call
  - Model pricing data for OpenAI, Anthropic, and Gemini
  - New agent properties: `total_cost`, `last_usage`, `context_percent`
- **GoogleCalendar Tool** - Full Google Calendar integration
  - List, create, update, delete events
  - Create Google Meet meetings
  - Find free time slots
- **SlashCommand System** - Load commands from markdown files
  - YAML frontmatter for metadata
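Per-call cost calculation of the kind described above boils down to token counts times a per-model price table. The prices below are placeholder values for illustration, not ConnectOnion's actual pricing data.

```python
# Placeholder prices for illustration only, not real model pricing.
PRICING = {  # USD per 1M tokens: (input, output)
    "example-model": (0.15, 0.60),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call from token counts and a per-model price table."""
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```

An agent's `total_cost` would then just be the running sum of `call_cost` across its LLM calls.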
# v0.4.5 (11/25/2025)

## Highlights
- Fix Google OAuth account switching - running `co auth google` again now properly lets you switch accounts
- New TodoList tool for agent task management
- New shell_approval plugin for command approval workflow

## What's Changed

### πŸ› Bug Fixes
- Google OAuth: Revoke old connection before new OAuth flow (fixes account switching)

### ✨ Features
- Add TodoList tool for agent task management
- Add shell_approval plugin for command approval workflow
- Events: add `on_error` event
# ConnectOnion v0.4.4 (11/24/2025)

## πŸ› Critical Bug Fix
- **Gmail dependencies now included** - Fixes ImportError when using Gmail class
  - Added `google-auth>=2.0.0`
  - Added `google-api-python-client>=2.0.0`
  - Added `httpx>=0.24.0`

## ✨ New Features
- **DiffWriter tool** - Human-in-the-loop file writing with interactive approval
- **pick() utility** - Arrow key navigation for selecting from lists
- **yes_no() utility** - Interactive yes/no prompts with keyboard controls
# Release v0.4.3 (11/21/2025)

## Highlights
This patch release improves documentation for Google OAuth integration and enhances the overall README presentation with better formatting and engagement.

## What's Changed

### πŸ“š Documentation
- Enhanced README with professional formatting and better engagement
- Improved philosophy section with clear step-by-step progression
- Added comprehensive Google OAuth documentation for `co auth google` command
- Renamed cli-auth-google.md to gmail-calendar-integration
# Release v0.4.2 (11/16/2025)

## Highlights
This release introduces a powerful plugin system that makes ConnectOnion agents more composable and reusable. Plugins allow you to bundle event handlers together and share them across agents. We've included three built-in plugins to get you started: reflection, ReAct reasoning, and automatic image formatting for vision-enabled workflows.

## What's Changed

### ✨ Features
- **Plugin System**: New `plugins` parameter on Agent allows bundling event handlers
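"A plugin bundles event handlers" can be modeled in a few lines of plain Python. The structure below is assumed for illustration; `Plugin`, `DemoAgent`, and `emit` are hypothetical stand-ins, not ConnectOnion classes.

```python
class Plugin:
    """A plugin is just a named bundle of event handlers (structure assumed for illustration)."""
    def __init__(self, name, handlers):
        self.name = name
        self.handlers = handlers          # {event_name: [callables]}

class DemoAgent:
    """Merges each plugin's handlers into one event table (hypothetical stand-in)."""
    def __init__(self, plugins=()):
        self.handlers = {}
        for plugin in plugins:
            for event, fns in plugin.handlers.items():
                self.handlers.setdefault(event, []).extend(fns)

    def emit(self, event, payload):
        for fn in self.handlers.get(event, []):
            payload = fn(payload)         # each handler may transform the payload
        return payload

seen = []
def record(response):
    seen.append(response)
    return response

reflection = Plugin("reflection", {"after_llm": [record]})
agent = DemoAgent(plugins=[reflection])
agent.emit("after_llm", "draft answer")
```

The point of the bundling is reuse: the same `reflection` plugin object can be passed to any number of agents instead of wiring its handlers up one by one.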
# Release v0.4.1 (11/12/2025)

## Highlights
This patch release fixes critical email API endpoint issues and enhances CLI documentation with comprehensive help system improvements.

## What's Changed

### πŸ› Bug Fixes
- Fix email API paths to use `/api/v1/` for client-side functions (d209098)
  - Ensures proper routing for email-related API calls

### πŸ“š Documentation
- Add comprehensive help system following CLI best practices (8c33072)
  - Improved `co --help` output with better formatting
# Release v0.4.0 (11/5/2025)

## Highlights
This release significantly simplifies ConnectOnion's architecture by introducing **automatic `.env` file loading**. Users no longer need to manually call `load_dotenv()` in their projects - the framework handles it automatically. We've also removed over 30 lines of complex configuration logic, making the codebase cleaner and easier to maintain.

## What's Changed

### ✨ Features
- **Automatic .env Loading**: Framework now auto-loads `.env` files from the current working directory
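What ".env loading" does can be sketched with a hand-rolled parser. This is illustrative only; the framework most likely delegates to `python-dotenv` rather than parsing files itself, and `CO_EXAMPLE_KEY` is a made-up variable name.

```python
import os

def load_env_text(text):
    """Parse KEY=VALUE lines and set any that aren't already in the environment."""
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: real environment variables win over .env entries
        os.environ.setdefault(key.strip(), value.strip())

load_env_text("CO_EXAMPLE_KEY=abc\n# a comment\n")
```

Auto-loading at import time is what removes the boilerplate `load_dotenv()` call from user code.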
# Release v0.3.9 (11/4/2025)

Hook into your agent's lifecycle with the new event system! πŸŽ‰

The new event system (`on_events` parameter) lets you monitor, log, and control your agent's execution at every step. Perfect for debugging, performance monitoring, reflection, and custom behavior injection.

- **6 Event Types**: Hook into any point in the agent lifecycle
  - `after_user_input`: Fires once per turn, add context or timestamps
  - `before_llm`: Before each LLM call, modify messages
  - `after_llm`: After each LLM call
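A `before_llm`-style hook that "modifies messages" might look like the sketch below. The handler signature (a list of message dicts in, a list out) is assumed for illustration, not taken from the ConnectOnion API.

```python
from datetime import datetime, timezone

def add_timestamp(messages):
    """A before_llm-style hook: prepend a system message with the current UTC time.
    (Handler signature is assumed for illustration.)"""
    stamp = datetime.now(timezone.utc).isoformat()
    return [{"role": "system", "content": f"Current time: {stamp}"}] + messages

msgs = add_timestamp([{"role": "user", "content": "Hello"}])
```

Because the hook runs before every LLM call, the model always sees a fresh timestamp without the user code touching the message list.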
# Release v0.3.8 (11/4/2025)

## Highlights
This release brings significant improvements to testing infrastructure, adds network features for agent-to-agent communication, and includes important bug fixes. The entire test suite has been migrated from unittest to pytest, adopting FastAPI-style testing best practices.

## What's Changed

### ✨ New Features
- **Network Features**: Added `agent.serve()` and `agent.connect()` for agent-to-agent communication over websockets
- **CI Workflow**: Integrated GitHub Actions
# v0.3.7 (10/31/2025)

## πŸ”„ Changes

### Fixed
- Migrated GeminiLLM to OpenAI-compatible endpoint (simplified from ~300 lines to 56 lines)
- Removed google-generativeai dependency - now uses openai SDK for all providers
- Fixed test import errors by removing outdated test_cli_browser.py
- Added mock_helpers.py test utilities for better test organization
- Removed try-except blocks following 'let it crash' philosophy
- Fixed GitHub URLs in setup.py (connectonion β†’ openonion)

### Improved
- GeminiLLM now uses Google's OpenAI-compatible endpoint
# v0.3.6 (10/29/2025)

## πŸŽ‰ What's New

### Post-Execution Analysis
- **Structured feedback after agent execution**: Get comprehensive analysis of agent performance, including task completion status, quality rating, identified problems, and actionable improvement suggestions
- **Automatic display**: Analysis automatically shown after auto_debug() completes
- **Pydantic-validated output**: Structured ExecutionAnalysis model ensures consistent, parseable results

### Debug Explainer Enhancements
# ConnectOnion v0.3.5 (10/26/2025)

## Highlights
Simplified email error handling following the "let it crash" philosophy. Email functions now raise clear exceptions instead of silently returning empty results, making debugging significantly easier.

## What's Changed

### ♻️ Refactoring
- Email functions (`get_emails`, `mark_read`, `mark_unread`) now raise `ValueError` for missing `OPENONION_API_KEY`
- Removed config.toml dependency; now use `load_dotenv()` like `send_email()`
- Removed all try-except blocks
# Release v0.3.4 (10/24/2025)

## Highlights
This release adds two new powerful CLI commands for better account management: `co status` for checking your account balance and usage at any time, and `co reset` for resetting your account and starting fresh with a new keypair.

## What's Changed

### ✨ Features
- **New `co status` command**: Check account balance, usage, and details without re-authenticating
- **New `co reset` command**: Reset account and create a new one with fresh Ed25519 keypair
# Release v0.3.3 (10/21/2025)

## Highlights
This patch release includes documentation improvements to better communicate feature status in the auto_debug documentation.

## What's Changed

### πŸ“š Documentation
- Add feature status markers to auto_debug documentation

## Installation

```bash
pip install connectonion==0.3.3
```

## Breaking Changes
None

**Full Changelog**: https://github.com/openonion/connectonion/compare/v0.3.2...v0.3.3
# Release v0.3.2 (10/20/2025)

## Highlights
This patch release enhances the interactive debugger with comprehensive Python docstrings and improved source code inspection, making debugging sessions more informative and professional.

## What's Changed

### πŸ“š Documentation
- **Enhanced Debugger Documentation**: Added comprehensive Python docstrings to all debugger UI functions following Google-style conventions
- **Function Documentation Coverage**: 14+ functions now have detailed Args, Returns, and behavior sections


Similar Packages

- **arag** (v0.1.0): A-RAG: Agentic Retrieval-Augmented Generation via Hierarchical Retrieval Interfaces. State-of-the-art RAG framework with keyword, semantic, and chunk read tools for multi-hop QA.
- **Ollama-Terminal-Agent** (main@2026-04-21): Automate shell tasks using a local Ollama model that plans, executes, and fixes commands without cloud or API dependencies.
- **KohakuTerrarium** (v1.1.0): A general-purpose AI agent framework and batteries-included app for building, running, and composing self-contained agents and multi-agent teams, with built-in tools and sub-agents.
- **AGI-Alpha-Agent-v0** (main@2026-04-18): META‑AGENTIC α‑AGI πŸ‘οΈβœ¨ β€” Mission 🎯 End‑to‑end: Identify πŸ” β†’ Out‑Learn πŸ“š β†’ Out‑Think 🧠 β†’ Out‑Design 🎨 β†’ Out‑Strategise β™ŸοΈ β†’ Out‑Execute ⚑
- **opentulpa** (main@2026-04-17): Self-hosted personal AI agent that lives in your DMs. Describe any workflow: triage Gmail, pull a Giphy feed, build a Slack bot, monitor markets. It writes the code, runs it, schedules it, and saves it.