šŸ€ Wee-Orchestrator

One platform. Every AI. Any channel.

Python 3.10+ License: MIT

Wee-Orchestrator is a unified AI agent platform that lets you chat with any AI CLI runtime — GitHub Copilot, Claude Code, OpenCode, Google Gemini, or OpenAI Codex — from Telegram, WebEx, or a beautiful browser-based Web UI. Switch models, agents, and runtimes on the fly with slash commands. Schedule recurring AI tasks. Send files and images. All from one place.

Wee-Orchestrator Architecture


✨ Why Wee-Orchestrator?

| Problem | Wee-Orchestrator Solution |
| --- | --- |
| Juggling multiple AI tools and CLIs | One unified interface across 5 runtimes and 17+ models |
| AI is stuck in the terminal | Chat from anywhere — Telegram, WebEx, or the Web UI |
| No memory between sessions | Persistent sessions with full conversation history |
| Can't automate AI tasks | Built-in task scheduler with cron-like scheduling |
| One-size-fits-all agents | Multi-agent architecture — switch agents per task |
| Complex setup | Zero-config bot creation with the Starter Kit |

📸 Screenshots

[Screenshots: Chat Interface, Task Scheduler, Secure Pairing Login, and Architecture Overview]

🚀 Key Features

  • 🔀 5 AI Runtimes — GitHub Copilot CLI, Claude Code, OpenCode, Google Gemini, OpenAI Codex
  • 💬 3 Channels — Telegram bot, WebEx bot (via RabbitMQ), glassmorphism Web UI with SSE streaming
  • 🤖 Multi-Agent — Define specialized agents in agents.json, switch with /agent; hot-reload on change (no restart needed)
  • 🔄 Live Model Switching — Change models mid-conversation with /model
  • 📅 Task Scheduler — Schedule recurring AI jobs with natural language (every day at 9am)
  • 📁 File & Image Support — Upload, download, and inline images across all channels
  • 🎤 Audio Transcription — Voice messages auto-transcribed via Whisper (OpenAI or local)
  • 🔐 Secure Auth — Pairing-code login, per-user ACLs, agent/model pinning, yolo/restricted modes
  • 📜 Session History — Full conversation persistence with search and resume
  • ⚡ Background Tasks — Delegate long-running work to background agents with in-thread status updates
  • 🔔 In-Thread Notifications — Real-time task lifecycle updates (queued → running → complete) in your conversation
  • 🔌 Extensible Skills — Plugin architecture for adding capabilities (Cisco Meraki, Home Assistant, etc.)
  • ⚙️ Slash Command Registry — Pure-server commands that bypass the LLM for reduced latency; auto-registers with Telegram BotFather for autocomplete; includes a built-in /secret command for secure credential management

šŸ—ļø Architecture

  Telegram ──► TelegramConnector ──┐
                                   │
  WebEx ─────► WebEXConnector ─────┼──► SessionManager ──► AI CLI Runtimes
                                   │       │                (Copilot, Claude,
  Browser ───► FastAPI /api/v1 ────┘       │                 OpenCode, Gemini,
                                           │                 Codex)
                                    TaskScheduler

Each inbound message flows through a channel connector, into the shared SessionManager (which handles slash commands, session state, and agent routing), and out to the selected AI CLI runtime as a subprocess. Responses stream back in real time.

For the full component diagram, sequence diagrams, and deployment topology, see ARCHITECTURE.md.
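As a rough illustration of the dispatch step described above, the "runtime as a subprocess, responses streamed back" pattern can be sketched as follows (a minimal sketch with hypothetical names; the actual SessionManager and runtime invocations live in the project's source and may differ):

```python
import subprocess

# Hypothetical runtime-to-command table; the real CLI invocations,
# flags, and streaming protocol are defined by Wee-Orchestrator itself.
RUNTIME_COMMANDS = {
    "claude": ["claude", "-p"],
    "gemini": ["gemini", "-p"],
}

def run_turn(runtime: str, prompt: str):
    """Spawn the selected AI CLI as a subprocess and stream its output."""
    proc = subprocess.Popen(
        RUNTIME_COMMANDS[runtime] + [prompt],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:  # yield output lines as they arrive
        yield line.rstrip("\n")
    proc.wait()
```

In the real system a channel connector would consume this stream and forward each chunk to Telegram, WebEx, or the Web UI's SSE endpoint.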


📋 Overview

Wee-Orchestrator provides a flexible framework to:

  • Chat with AI agents from Telegram, WebEx, or the browser-based Web UI
  • Call AI CLIs (Copilot, OpenCode, Claude Code, Gemini, Codex) from N8N workflows
  • Maintain session affinity across multiple conversation turns
  • Switch between different agent repositories dynamically
  • Configure agents via JSON config files instead of hardcoding
  • Support multiple AI models and runtimes
  • Schedule recurring AI tasks with the built-in Task Scheduler
  • Execute bash commands directly with ! prefix
  • Send and receive files and images over Telegram and WebEx
  • Enforce per-user agent pinning, model pinning, and yolo/restricted mode ACLs

For release history and feature documentation see CHANGELOG.md and RELEASE_NOTES.md.

⚡ Quick Start

# 1. Clone the repo
git clone https://github.com/leprachuan/Wee-Orchestrator.git
cd Wee-Orchestrator

# 2. Install dependencies
pip install -r requirements.txt

# 3. Configure your environment
cp .env.example .env    # Edit with your API keys and bot tokens

# 4. Define your agents
vi agents.json           # Add your agent definitions

# 5. Start the API server
python3 agent_manager.py --api

# 6. (Optional) Start channel connectors
python3 telegram_connector.py   # Telegram bot
python3 webex_connector.py      # WebEx bot

Then open http://localhost:8000/ui in your browser and pair via Telegram or WebEx.

🚀 Want to create your own bot? Use the Wee-Orchestrator Starter Kit to scaffold one in minutes.


💬 Slash Commands

| Command | Description |
| --- | --- |
| /agent <name> | Switch to a different agent |
| /model <model> | Change AI model mid-conversation |
| /runtime <runtime> | Switch AI runtime (copilot, claude, gemini, opencode, codex, devin) |
| /timeout <seconds> | Adjust execution timeout |
| /status | Check running task status |
| /cancel | Cancel the current running task |
| /schedule list | List all scheduled jobs |
| /schedule add <name> \| <schedule> \| <task> | Create a scheduled job |
| /help | Show all available commands |
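A pure-server command registry of this kind (commands handled without touching the LLM) can be sketched in a few lines. Names here are hypothetical; the actual registry in Wee-Orchestrator may differ:

```python
# Illustrative slash-command dispatcher: commands resolve server-side,
# anything else falls through to the AI runtime.
COMMANDS = {}

def command(name):
    """Decorator that registers a handler under a slash-command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("/agent")
def set_agent(session, arg):
    session["agent"] = arg
    return f"Switched to agent: {arg}"

@command("/model")
def set_model(session, arg):
    session["model"] = arg
    return f"Model set to: {arg}"

def dispatch(session, text):
    """Return a server-side reply for slash commands, or None to let the
    message continue on to the selected AI CLI runtime."""
    if not text.startswith("/"):
        return None
    name, _, arg = text.partition(" ")
    handler = COMMANDS.get(name)
    return handler(session, arg) if handler else f"Unknown command: {name}"
```

Because the handler never invokes an LLM, commands like /agent and /model return instantly regardless of runtime load.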

Bot Setup Guide

Wee-Orchestrator enables you to create custom bots — specialized AI agents with their own configuration, knowledge base, and capabilities. Each bot is a self-contained repository that can be integrated with Wee-Orchestrator.

🚀 New here? Use the Wee-Orchestrator Starter Kit to scaffold a new bot in minutes — includes AGENTS.md, skill management with security scanning, memory structure, and setup scripts.

What is a Bot?

A bot is a Git repository containing:

  1. Core Configuration — An AGENTS.md file defining agent behavior, preferences, and runtime configurations
  2. Knowledge Base — A memory/ directory using the PARA methodology (Projects, Areas, Resources, Archive) for organizing operational knowledge
  3. Focus Areas — Organized folders for specific domains (e.g., email_triage/, smart_home/, infrastructure/)
  4. Skills Integration — References to specialized skills from pot-o-skills or custom skills
  5. Documentation — README, guides, and workflow documentation

Example Bot Structure

my-bot/
├── README.md                  # Bot overview & usage
├── AGENTS.md                  # Agent behavior & configuration
├── .env                       # Credentials (git-ignored)
├── .gitignore                 # Protect secrets
│
├── memory/                    # Knowledge base (PARA methodology)
│   ├── projects/              # Active multi-step initiatives
│   ├── areas/                 # Ongoing responsibility areas
│   ├── resources/             # Reference material & best practices
│   └── archive/               # Completed/deprecated items
│
├── skills/                    # Custom skill implementations
│   ├── custom-skill-1/
│   └── custom-skill-2/
│
└── domain-folders/            # Domain-specific organization
    ├── email/                 # Email processing
    ├── home-automation/       # Smart home tasks
    └── infrastructure/        # Infrastructure management

Key Components

AGENTS.md

Defines the bot's behavior, preferences, and runtime configuration:

  • Agent name, purpose, and timezone
  • Preferred models and runtimes (Claude, Copilot, Gemini)
  • Tool permissions and access control
  • Sub-agent delegation rules
  • Skill definitions and repository locations
  • Security and credential management

Example excerpt:

---
name: my-bot
runtime: copilot
model: gpt-5-sonnet
timezone: EST/EDT
---

## Behavior

- Preferred AI runtime: Claude > Copilot > Gemini
- Task routing: Delegate to specialized sub-agents for domain expertise
- Notification channel: Telegram
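The excerpt above pairs YAML-style frontmatter with a markdown body. A minimal loader for that shape could look like this (a sketch only: it handles flat key: value pairs, not nested YAML, and the project's real parser may differ):

```python
def parse_agents_md(text: str):
    """Split an AGENTS.md file into frontmatter fields and a markdown body.

    Assumes the file starts with a `---` ... `---` block of flat
    `key: value` lines, as in the example excerpt.
    """
    meta, body = {}, text
    if text.startswith("---"):
        _, frontmatter, body = text.split("---", 2)
        for line in frontmatter.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.lstrip()
```

A real implementation would likely delegate the frontmatter to a proper YAML parser so that lists and nested mappings also work.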

Memory Structure (PARA)

Organize knowledge for long-term retention and reuse:

  • Projects/ — Active multi-step work (e.g., home-automation-setup.md)
  • Areas/ — Ongoing responsibilities (e.g., orchestration.md, security.md)
  • Resources/ — Reference material (e.g., best-practices.md, api-docs.md)
  • Archive/ — Completed or deprecated knowledge

Skills

Skills extend your bot's capabilities by providing pre-built integrations with external APIs and services. Skills should be sourced from reputable, official repositories to minimize security risks.

Recommended Skill Sources
  1. pot-o-skills — Community skills for cloud networking and security

    • Repository: https://github.com/leprachuan/pot-o-skills
    • Skills: Cisco Meraki, Cisco Security Cloud Control, and more
    • Status: Public, open-source, actively maintained
    • Usage: Clone and link into your bot's skills/ directory
  2. Anthropic Official Skills — Official skills from Anthropic

    • Repository: https://github.com/anthropics/skills
    • Status: Official, production-ready
    • Security: Vetted and maintained by Anthropic team
    • Best for: Claude AI integration, code generation, analysis
  3. Custom Skills — Implement your own domain-specific skills

    • Location: ./skills/ directory in your bot repository
    • Documentation: Must include SKILL.md, README, and examples
    • Security: You control the code and updates
āš ļø Skills Security Guidelines

Skills have full access to your system — they can execute commands, read files, and call APIs. Follow these practices:

  • āœ… Only use official skills from original software/service authors

    • Example: Use Cisco's official Meraki skill, not community forks
    • Example: Use Anthropic's official skills, not third-party versions
  • āœ… Validate before installation

    • Review the source code in the skill repository
    • Check for hardcoded credentials or suspicious patterns
    • Verify the repository is actively maintained
    • Look for security issues reported in GitHub Issues
  • āœ… Use trusted repositories

    • Official repos (Anthropic, GitHub, etc.)
    • Long-standing community projects with active maintainers
    • Projects with security policies and issue tracking
    • Avoid random GitHub repos without documentation or maintenance
  • āš ļø Audit custom skills carefully

    • Never trust a skill without reviewing its code first
    • Check for unintended API calls or data exfiltration
    • Validate input sanitization
    • Ensure credentials are handled safely
  • āœ… Keep skills updated

    • Periodically review and update to latest versions
    • Subscribe to security advisories from skill repositories
    • Remove unused skills to reduce attack surface
Using Skills in Your Bot
# Link public skills from pot-o-skills (verified, open-source)
ln -s /opt/pot-o-skills/cisco-meraki ./skills/
ln -s /opt/pot-o-skills/cisco-security-cloud-control ./skills/

# Link Anthropic official skills (verified, official)
ln -s /opt/anthropic-skills/code-analysis ./skills/
ln -s /opt/anthropic-skills/file-operations ./skills/

# Or implement custom skills in skills/ directory
mkdir skills/my-custom-skill
Discovering Skills
  • pot-o-skills: https://github.com/leprachuan/pot-o-skills

    cd /opt && git clone https://github.com/leprachuan/pot-o-skills.git
  • Anthropic Skills: https://github.com/anthropics/skills

    cd /opt && git clone https://github.com/anthropics/skills.git
  • Custom Community Skills: Search GitHub for topic:agent-skills with verification:

    • ✅ Active maintenance (recent commits)
    • ✅ Clear documentation
    • ✅ Security policy file
    • ✅ Public issue tracking

Domain Folders

Organize bot work by area of focus:

  • Keep related scripts, templates, and documentation together
  • Example: email/ for email processing, home/ for automation tasks
  • Each folder can have its own README with domain-specific guidance

Getting Started

💡 Recommended: Fork the Wee-Orchestrator Starter Kit instead of starting from scratch — it includes everything below pre-configured with best practices, security scanning, and setup scripts.

  1. Create your bot repository:

    mkdir my-bot && cd my-bot
    git init
    git remote add origin https://github.com/username/my-bot.git
  2. Add AGENTS.md: Copy and customize the AGENTS.md template from Wee-Orchestrator with your bot's preferences

  3. Create memory directory:

    mkdir -p memory/{projects,areas,resources,archive}
    echo "# Knowledge Base" > memory/INDEX.md
  4. Add .env and .gitignore:

    cp /opt/n8n-copilot-shim-dev/.env.example .env
    echo ".env" >> .gitignore
    echo "*.key" >> .gitignore
    echo "secrets.json" >> .gitignore
  5. Link or implement skills:

    mkdir skills
    ln -s /opt/pot-o-skills skills/cisco-meraki
  6. Register with Wee-Orchestrator: Update Wee-Orchestrator's agents.json to include your bot:

    {
      "agents": [
        {
          "name": "my-bot",
          "path": "/opt/my-bot",
          "enabled": true
        }
      ]
    }

Best Practices

  • Secrets First: Store all credentials in .env (git-ignored), never commit secrets
  • Document Decisions: Use memory/areas/ to record architectural decisions and conventions
  • Skill Reuse: Leverage pot-o-skills before building custom skills
  • Domain Organization: Group related work into focused folders for maintainability
  • README Clarity: Each folder should have clear purpose and examples

Requirements

This project requires one or more of the following AI CLI tools to be installed:

Claude Code CLI

Prerequisites:

  • Node.js 18+ (for npm installation) OR native binary support
  • Anthropic API key for authentication

Installation:

Native binary (recommended):

curl -fsSL https://claude.ai/install.sh | bash

Or via npm:

npm install -g @anthropic-ai/claude-code

Supported Systems: macOS 10.15+, Linux (Ubuntu 20.04+/Debian 10+, Alpine), Windows 10+ (via WSL)

Reference: Claude Code Quickstart Documentation

GitHub Copilot CLI

Prerequisites:

  • Node.js 22 or higher
  • Active GitHub Copilot subscription (Pro, Pro+, Business, or Enterprise plan)
  • GitHub account for authentication

Installation:

npm install -g @github/copilot
copilot  # Launch and authenticate

For authentication, use the /login command or set GH_TOKEN environment variable with a fine-grained PAT.

Supported Systems: macOS, Linux, Windows (via WSL)

Reference: GitHub Copilot CLI Installation Guide

OpenCode CLI

Prerequisites:

  • Node.js or compatible runtime

Installation (Recommended):

curl -fsSL https://opencode.ai/install | bash

Or via npm:

npm i -g opencode-ai@latest

Alternative package managers:

  • Homebrew: brew install opencode
  • Scoop (Windows): scoop bucket add extras && scoop install extras/opencode
  • Arch Linux: paru -S opencode-bin

Supported Systems: Windows, macOS, Linux

Reference: OpenCode Documentation

Google Gemini CLI

Prerequisites:

  • Python 3.7 or higher
  • Google Cloud account with Gemini API access
  • Google API key for authentication

Installation:

pip install google-generativeai
# Or using the CLI wrapper
pip install gemini-cli

Authentication:

Set your API key as an environment variable:

export GOOGLE_API_KEY='your-api-key-here'

Or configure it in your shell profile:

echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc

Supported Systems: Windows, macOS, Linux

Reference: Google Gemini API Documentation

Tool Permissions & Access Control

All AI runtimes in this system are configured with full tool access to enable read, write, and execute operations without approval prompts. This provides maximum automation capabilities.

Permission Configuration by Runtime

GitHub Copilot CLI

  • Flags Used: --allow-all-tools --allow-all-paths
  • Enables:
    • All MCP tools and shell commands without approval
    • Read/write/execute permissions for all files and directories
  • Security Note: Gives Copilot the same permissions as your user account

Claude Code CLI

  • Flags Used: --permission-mode bypassPermissions
  • Enables:
    • Auto-approve all file edits, writes, and reads
    • Execute shell commands without approval
    • Access web/network tools without prompts
  • Also Known As: YOLO mode or dontAsk mode

OpenCode CLI

  • Configuration: Uses opencode.json file for permission settings
  • Required Setup:
    1. Copy the example config: cp opencode.example.json opencode.json
    2. Place opencode.json in your agent directories or project root
  • Permissions Enabled:
    • edit: allow
    • write: allow
    • bash: allow
    • read: allow
    • webfetch: allow
  • Reference: OpenCode Permissions Documentation

Google Gemini CLI

  • Flags Used: --yolo
  • Enables:
    • Read/write file operations without confirmation
    • Shell command execution without approval
    • All built-in tools with unrestricted access
  • Built-in Tools: read_file, write_file, run_shell_command

OpenAI Codex CLI

  • Flags Used: --dangerously-bypass-approvals-and-sandbox
  • Enables:
    • Disables all approval prompts
    • Removes sandbox restrictions (full file system access)
    • Allows all shell commands and tools without confirmation
  • Security Note: Only use in trusted, controlled environments
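To make the mapping concrete, the per-runtime permission flags listed above could be assembled into a command line like this (the flag lists come from this section; the runtime binary names and argument order are assumptions for illustration):

```python
# Flag lists as documented per runtime; OpenCode is configured through
# opencode.json rather than command-line flags.
PERMISSION_FLAGS = {
    "copilot": ["--allow-all-tools", "--allow-all-paths"],
    "claude": ["--permission-mode", "bypassPermissions"],
    "gemini": ["--yolo"],
    "codex": ["--dangerously-bypass-approvals-and-sandbox"],
    "opencode": [],
}

def build_argv(runtime: str, prompt: str) -> list[str]:
    """Assemble a hypothetical argv for the given runtime and prompt."""
    return [runtime, *PERMISSION_FLAGS[runtime], prompt]
```

Centralizing the flags in one table also makes it easy to audit exactly which approval bypasses each runtime is granted.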

Security Considerations

āš ļø Warning: These configurations grant AI agents extensive system access:

  • Full file system access: Can read, modify, or delete any file your user can access
  • Command execution: Can run any shell command with your user privileges
  • No safety prompts: All operations execute automatically without confirmation

Best Practices:

  1. Use in controlled environments: Development containers, VMs, or sandboxed systems
  2. Regular backups: Maintain backups of critical files and directories
  3. Code review: Review AI-generated changes before committing to production
  4. Limit agent scope: Configure agents to work in specific project directories
  5. Monitor activity: Review session logs and agent outputs regularly

Recommended Use Cases:

  • ✅ Development and testing environments
  • ✅ Automated CI/CD pipelines in isolated containers
  • ✅ Personal projects with version control
  • ❌ Production systems without review
  • ❌ Shared systems with sensitive data
  • ❌ Public or untrusted environments

Configuration

Agent Configuration

The system loads agents from agents.json or a custom config file. Each agent represents a repository context where the AI CLI will operate.

Config Format:

{
  "agents": [
    {
      "name": "devops",
      "description": "DevOps and infrastructure management",
      "path": "/path/to/MyHomeDevops"
    },
    {
      "name": "projects",
      "description": "Software development projects",
      "path": "/path/to/projects"
    }
  ]
}

Configuration Fields:

  • name (required): Short identifier for the agent (used in /agent set commands)
  • description (required): Brief human-readable description of the agent
  • path (required): Full path to the repository or project directory
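A loader that enforces these three required fields might look like the following (an illustrative sketch; the actual loader in agent_manager.py may behave differently):

```python
import json
from pathlib import Path

REQUIRED_FIELDS = ("name", "description", "path")

def load_agents(config_file: str = "agents.json") -> dict:
    """Load agents.json and index entries by name, rejecting any entry
    that lacks one of the required fields described above."""
    config = json.loads(Path(config_file).read_text())
    agents = {}
    for entry in config.get("agents", []):
        missing = [field for field in REQUIRED_FIELDS if field not in entry]
        if missing:
            raise ValueError(f"agent entry missing fields: {missing}")
        agents[entry["name"]] = entry
    return agents
```

Failing fast on a malformed entry keeps a typo in agents.json from surfacing later as a confusing runtime error mid-conversation.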

Environment Configuration

āš ļø API_HOST Security Warning Never set API_HOST=0.0.0.0 — this exposes the server on every network interface including your LAN and any public NIC. Always bind to specific trusted interfaces (e.g. 127.0.0.1,<tailscale-ip>). See Network Binding & Secure Access.

The default agent, model, and runtime can be customized via environment variables. This is useful for:

  • Different users having different defaults
  • Docker container configuration
  • CI/CD pipeline customization
  • Development vs. production setups

Available Environment Variables:

# Default agent for new sessions
COPILOT_DEFAULT_AGENT=orchestrator        # Default: orchestrator

# Default model for new sessions  
COPILOT_DEFAULT_MODEL=gpt-5-mini          # Default: gpt-5-mini

# Default runtime for new sessions
COPILOT_DEFAULT_RUNTIME=copilot           # Default: copilot

Usage Examples:

# Set orchestrator as default
export COPILOT_DEFAULT_AGENT=orchestrator
export COPILOT_DEFAULT_RUNTIME=copilot

# Or set family agent with Claude runtime
export COPILOT_DEFAULT_AGENT=family
export COPILOT_DEFAULT_MODEL=claude-sonnet
export COPILOT_DEFAULT_RUNTIME=claude

# Run the agent
python3 agent_manager.py "Your prompt" "session_id"

Docker Example:

ENV COPILOT_DEFAULT_AGENT=orchestrator
ENV COPILOT_DEFAULT_MODEL=gpt-5-mini
ENV COPILOT_DEFAULT_RUNTIME=copilot

Reference Configuration:

Copy .env.example to .env and customize:

cp .env.example .env
# Edit .env with your defaults

When environment variables are not set, the system uses these hardcoded defaults:

  • Agent: orchestrator
  • Model: gpt-5-mini
  • Runtime: copilot
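The resolution order above (environment variable first, hardcoded fallback second) can be expressed as a small helper (a sketch; the variable names match this section, the function is hypothetical):

```python
import os

# Hardcoded fallbacks used when the environment variable is unset.
DEFAULTS = {
    "COPILOT_DEFAULT_AGENT": "orchestrator",
    "COPILOT_DEFAULT_MODEL": "gpt-5-mini",
    "COPILOT_DEFAULT_RUNTIME": "copilot",
}

def resolve_default(var: str) -> str:
    """Return the environment override if set, else the documented default."""
    return os.environ.get(var, DEFAULTS[var])
```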

Setup

  1. Copy the agent manager script:

    cp agent_manager.py /usr/local/bin/agent-manager
    chmod +x /usr/local/bin/agent-manager
  2. Configure your agents:

    • Copy agents.example.json to agents.json
    • Edit agents.json with your actual repository paths
    • Place agents.json in the same directory as the script or current working directory
  3. Optional: Specify config location via environment variable

    export AGENTS_CONFIG=/path/to/custom/agents.json

Usage

Command Line

The agent manager supports both positional arguments (for backwards compatibility) and named options for more flexibility.

Basic Usage (Positional Arguments)

python agent_manager.py "<prompt>" [session_id] [config_file]

Arguments:

  • prompt: The prompt/command to send to the AI CLI
  • session_id (optional): N8N session identifier for tracking conversations (default: "default")
  • config_file (optional): Path to agents.json config file

Examples:

# Basic usage
python agent_manager.py "List all files in the current directory"

# With session ID
python agent_manager.py "Continue debugging the issue" "session-123"

# With custom config file
python agent_manager.py "Deploy the app" "session-456" "/etc/agents.json"

Advanced Usage (Named Arguments)

python agent_manager.py [options] "<prompt>" [session_id]

Options:

Agent Options:

  • --agent NAME - Set the agent to use (e.g., devops, family, projects)
  • --list-agents - List all available agents and exit

Model Options:

  • --model NAME - Set the model to use (e.g., gpt-5, sonnet, gemini-1.5-pro)
  • --list-models - List all available models for current runtime and exit

Runtime Options:

  • --runtime NAME - Set the runtime to use (choices: copilot, opencode, claude, gemini, codex, devin)
  • --list-runtimes - List all available runtimes and exit

Configuration:

  • --config FILE or -c FILE - Path to agents.json configuration file

Examples:

# List available agents
python agent_manager.py --list-agents

# List available agents with custom config
python agent_manager.py --list-agents --config my-agents.json

# List available runtimes
python agent_manager.py --list-runtimes

# List available models
python agent_manager.py --list-models

# Set agent via CLI
python agent_manager.py --agent devops "Check server status"

# Set runtime and model via CLI
python agent_manager.py --runtime gemini --model gemini-1.5-pro "Analyze this code"

# Combine multiple options
python agent_manager.py --agent family --runtime claude --model sonnet "Find recipes for dinner"

# Use custom configuration file
python agent_manager.py --config /etc/my-agents.json --agent projects "Review pull requests"

# All options together
python agent_manager.py --config my-agents.json --agent devops --runtime claude --model haiku "Deploy to production" "session-123"

Getting Help:

python agent_manager.py --help

Slash Commands

Interact with the agent manager using slash commands:

Bash Commands

!<command>                 # Execute bash command directly (e.g., !pwd, !ls -la)

Examples:

!pwd                       # Show current working directory
!echo "Hello World"        # Echo a message
!ls -lh                    # List files with details
!date                      # Show current date/time
!git status                # Run git commands
!python3 --version         # Check installed versions

Features:

  • Commands execute directly without hitting any AI runtime
  • 10-second timeout for safety
  • Runs in current working directory
  • Supports pipes, redirects, and command chaining (&&, ||, |)
  • Returns stdout/stderr output
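The behavior listed above (direct execution, 10-second timeout, shell features, combined stdout/stderr) can be sketched as follows; this is a hypothetical helper, not the project's actual implementation:

```python
import subprocess

def run_bang(text: str):
    """Execute a `!`-prefixed message as a shell command.

    Returns None for non-bang messages so they continue on to the AI
    runtime. shell=True enables pipes, redirects, and &&/||/| chaining;
    timeout=10 mirrors the documented 10-second safety limit.
    """
    if not text.startswith("!"):
        return None
    result = subprocess.run(
        text[1:], shell=True, capture_output=True, text=True, timeout=10
    )
    return result.stdout + result.stderr
```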

Runtime Management

/runtime list              # Show available runtimes (copilot, opencode, claude, gemini)
/runtime set <runtime>     # Switch runtime (e.g., /runtime set gemini)
/runtime current           # Show current runtime

Model Management

/model list                # Show available models for current runtime
/model set "<model>"       # Switch model (e.g., /model set "claude-opus-4.5")
/model current             # Show current model

Agent Management

/agent list                # Show all available agents with descriptions
/agent set "<agent>"       # Switch to an agent (e.g., /agent set "projects")
/agent current             # Show current agent and its context
