One platform. Every AI. Any channel.
Wee-Orchestrator is a unified AI agent platform that lets you chat with any AI CLI runtime (GitHub Copilot, Claude Code, OpenCode, Google Gemini, or OpenAI Codex) from Telegram, WebEx, or a beautiful browser-based Web UI. Switch models, agents, and runtimes on the fly with slash commands. Schedule recurring AI tasks. Send files and images. All from one place.
| Problem | Wee-Orchestrator Solution |
|---|---|
| Juggling multiple AI tools and CLIs | One unified interface across 5 runtimes and 17+ models |
| AI is stuck in the terminal | Chat from anywhere: Telegram, WebEx, or the Web UI |
| No memory between sessions | Persistent sessions with full conversation history |
| Can't automate AI tasks | Built-in task scheduler with cron-like scheduling |
| One-size-fits-all agents | Multi-agent architecture: switch agents per task |
| Complex setup | Zero-config bot creation with the Starter Kit |
| Chat Interface | Task Scheduler |
|---|---|
| *(screenshot)* | *(screenshot)* |

| Secure Pairing Login | Architecture Overview |
|---|---|
| *(screenshot)* | *(screenshot)* |
- **5 AI Runtimes**: GitHub Copilot CLI, Claude Code, OpenCode, Google Gemini, OpenAI Codex
- **3 Channels**: Telegram bot, WebEx bot (via RabbitMQ), glassmorphism Web UI with SSE streaming
- **Multi-Agent**: Define specialized agents in `agents.json` and switch with `/agent`; hot-reload on change (no restart needed)
- **Live Model Switching**: Change models mid-conversation with `/model`
- **Task Scheduler**: Schedule recurring AI jobs with natural language (`every day at 9am`)
- **File & Image Support**: Upload, download, and inline images across all channels
- **Audio Transcription**: Voice messages auto-transcribed via Whisper (OpenAI or local)
- **Secure Auth**: Pairing-code login, per-user ACLs, agent/model pinning, yolo/restricted modes
- **Session History**: Full conversation persistence with search and resume
- **Background Tasks**: Delegate long-running work to background agents with in-thread status updates
- **In-Thread Notifications**: Real-time task lifecycle updates (queued → running → complete) in your conversation
- **Extensible Skills**: Plugin architecture for adding capabilities (Cisco Meraki, Home Assistant, etc.)
- **Slash Command Registry**: Pure-server commands that bypass the LLM for reduced latency; auto-registers with Telegram BotFather for autocomplete; built-in `/secret` command for secure credential management
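The scheduler's natural-language format (`every day at 9am`) can be illustrated with a tiny parser. The function below is a hypothetical sketch of the idea, not the project's actual parser, and handles only the daily-at-a-time phrase shape:

```python
import re

def parse_daily_schedule(text: str):
    """Parse a minimal 'every day at <H>(:<M>)?(am|pm)' phrase into
    (hour, minute) in 24-hour time; returns None for anything else.
    Illustration only -- the real scheduler understands more shapes."""
    m = re.fullmatch(r"every day at (\d{1,2})(?::(\d{2}))?\s*(am|pm)?",
                     text.strip().lower())
    if not m:
        return None
    hour = int(m.group(1))
    minute = int(m.group(2) or 0)
    meridiem = m.group(3)
    if meridiem == "pm" and hour != 12:
        hour += 12
    elif meridiem == "am" and hour == 12:
        hour = 0
    return hour, minute

print(parse_daily_schedule("every day at 9am"))      # (9, 0)
print(parse_daily_schedule("every day at 12:30pm"))  # (12, 30)
```

A parsed (hour, minute) pair maps directly onto a cron-style daily trigger.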
Telegram ──▶ TelegramConnector ──┐
                                 │
WebEx ─────▶ WebEXConnector ─────┼──▶ SessionManager ──▶ AI CLI Runtimes
                                 │         │             (Copilot, Claude,
Browser ───▶ FastAPI /api/v1 ────┘         │              OpenCode, Gemini,
                                           │              Codex)
                                     TaskScheduler
Each inbound message flows through a channel connector, into the shared SessionManager (which handles slash commands, session state, and agent routing), and out to the selected AI CLI runtime as a subprocess. Responses stream back in real time.
For the full component diagram, sequence diagrams, and deployment topology, see ARCHITECTURE.md.
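The flow above can be sketched in a few lines. The class and function names here are illustrative, not the project's actual code, and the runtime commands are assumptions standing in for the real CLI invocations:

```python
# Hypothetical runtime registry: each runtime is invoked as a CLI subprocess.
RUNTIME_COMMANDS = {
    "copilot": ["copilot", "-p"],
    "claude": ["claude", "-p"],
}

class SessionManagerSketch:
    """Minimal sketch of the shared dispatch layer: slash commands are
    handled in-process; everything else goes to the selected runtime CLI."""

    def __init__(self):
        self.sessions = {}  # session_id -> {"runtime": ..., "agent": ...}

    def handle(self, session_id: str, text: str) -> str:
        state = self.sessions.setdefault(
            session_id, {"runtime": "copilot", "agent": "orchestrator"})
        if text.startswith("/runtime set "):
            state["runtime"] = text.split()[-1]
            return f"runtime -> {state['runtime']}"
        # Non-command messages would be forwarded to the runtime subprocess:
        cmd = RUNTIME_COMMANDS[state["runtime"]] + [text]
        return f"would exec: {' '.join(cmd)}"

mgr = SessionManagerSketch()
print(mgr.handle("s1", "/runtime set claude"))  # runtime -> claude
print(mgr.handle("s1", "hello"))                # would exec: claude -p hello
```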
Wee-Orchestrator provides a flexible framework to:
- Chat with AI agents from Telegram, WebEx, or the browser-based Web UI
- Call AI CLIs (Copilot, OpenCode, Claude Code, Gemini, Codex) from N8N workflows
- Maintain session affinity across multiple conversation turns
- Switch between different agent repositories dynamically
- Configure agents via JSON config files instead of hardcoding
- Support multiple AI models and runtimes
- Schedule recurring AI tasks with the built-in Task Scheduler
- Execute bash commands directly with the `!` prefix
- Send and receive files and images over Telegram and WebEx
- Enforce per-user agent pinning, model pinning, and yolo/restricted mode ACLs
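Session affinity across turns can be pictured as a keyed history store. This toy version keeps everything in memory, whereas the real system persists sessions to disk; the class name is illustrative:

```python
from collections import defaultdict

class SessionStore:
    """Toy session store: each session_id keeps its own ordered history,
    so multi-turn conversations stay isolated from one another."""

    def __init__(self):
        self._history = defaultdict(list)

    def append(self, session_id: str, role: str, text: str) -> None:
        self._history[session_id].append((role, text))

    def turns(self, session_id: str):
        return list(self._history[session_id])

store = SessionStore()
store.append("session-123", "user", "Continue debugging the issue")
store.append("session-123", "assistant", "Looking at the stack trace...")
store.append("session-456", "user", "Deploy the app")

print(len(store.turns("session-123")))  # 2
print(len(store.turns("session-456")))  # 1
```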
For release history and feature documentation see CHANGELOG.md and RELEASE_NOTES.md.
# 1. Clone the repo
git clone https://github.com/leprachuan/Wee-Orchestrator.git
cd Wee-Orchestrator
# 2. Install dependencies
pip install -r requirements.txt
# 3. Configure your environment
cp .env.example .env # Edit with your API keys and bot tokens
# 4. Define your agents
vi agents.json # Add your agent definitions
# 5. Start the API server
python3 agent_manager.py --api
# 6. (Optional) Start channel connectors
python3 telegram_connector.py # Telegram bot
python3 webex_connector.py            # WebEx bot

Then open http://localhost:8000/ui in your browser and pair via Telegram or WebEx.
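Once the server is up, you can also drive it from a script. The route and payload shape below are assumptions made for illustration; verify them against the running server's FastAPI docs before relying on them:

```python
import json
import urllib.request

def build_chat_request(prompt: str, session_id: str = "default",
                       base_url: str = "http://localhost:8000"):
    """Build a POST to the (assumed) chat endpoint. The /api/v1/chat path
    and JSON fields are hypothetical; check the server's /docs page."""
    body = json.dumps({"prompt": prompt, "session_id": session_id}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/chat", data=body, method="POST",
        headers={"Content-Type": "application/json"})

req = build_chat_request("List all files", "session-123")
print(req.get_full_url())  # http://localhost:8000/api/v1/chat
# To actually send it: urllib.request.urlopen(req, timeout=30).read()
```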
Want to create your own bot? Use the Wee-Orchestrator Starter Kit to scaffold one in minutes.
| Command | Description |
|---|---|
| `/agent <name>` | Switch to a different agent |
| `/model <model>` | Change AI model mid-conversation |
| `/runtime <runtime>` | Switch AI runtime (copilot, claude, gemini, opencode, codex, devin) |
| `/timeout <seconds>` | Adjust execution timeout |
| `/status` | Check running task status |
| `/cancel` | Cancel the current running task |
| `/schedule list` | List all scheduled jobs |
| `/schedule add <name> \| <schedule> \| <task>` | Create a scheduled job |
| `/help` | Show all available commands |
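Commands like these can be dispatched without ever touching an LLM; a registry lookup answers them directly, which is what keeps latency low. The sketch below illustrates the pattern with made-up handlers (not the project's registry):

```python
def handle_status(args):   # placeholder handler for illustration
    return "no task running"

def handle_timeout(args):  # placeholder handler for illustration
    return f"timeout set to {int(args[0])}s"

# Command name -> handler; lookups here never reach the AI runtime.
COMMANDS = {"/status": handle_status, "/timeout": handle_timeout}

def dispatch(message: str):
    """Return a handler result for slash commands, or None to signal
    that the message should be forwarded to the AI runtime instead."""
    if not message.startswith("/"):
        return None
    name, *args = message.split()
    handler = COMMANDS.get(name)
    return handler(args) if handler else f"unknown command: {name}"

print(dispatch("/timeout 120"))  # timeout set to 120s
print(dispatch("hello"))         # None -> goes to the AI runtime
```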
Wee-Orchestrator enables you to create custom bots: specialized AI agents with their own configuration, knowledge base, and capabilities. Each bot is a self-contained repository that can be integrated with Wee-Orchestrator.
New here? Use the Wee-Orchestrator Starter Kit to scaffold a new bot in minutes; it includes `AGENTS.md`, skill management with security scanning, memory structure, and setup scripts.
A bot is a Git repository containing:
- **Core Configuration**: An `AGENTS.md` file defining agent behavior, preferences, and runtime configurations
- **Knowledge Base**: A `memory/` directory using the PARA methodology (Projects, Areas, Resources, Archive) for organizing operational knowledge
- **Focus Areas**: Organized folders for specific domains (e.g., `email_triage/`, `smart_home/`, `infrastructure/`)
- **Skills Integration**: References to specialized skills from pot-o-skills or custom skills
- **Documentation**: README, guides, and workflow documentation
my-bot/
├── README.md            # Bot overview & usage
├── AGENTS.md            # Agent behavior & configuration
├── .env                 # Credentials (git-ignored)
├── .gitignore           # Protect secrets
│
├── memory/              # Knowledge base (PARA methodology)
│   ├── projects/        # Active multi-step initiatives
│   ├── areas/           # Ongoing responsibility areas
│   ├── resources/       # Reference material & best practices
│   └── archive/         # Completed/deprecated items
│
├── skills/              # Custom skill implementations
│   ├── custom-skill-1/
│   └── custom-skill-2/
│
└── domain-folders/      # Domain-specific organization
    ├── email/           # Email processing
    ├── home-automation/ # Smart home tasks
    └── infrastructure/  # Infrastructure management
Defines the bot's behavior, preferences, and runtime configuration:
- Agent name, purpose, and timezone
- Preferred models and runtimes (Claude, Copilot, Gemini)
- Tool permissions and access control
- Sub-agent delegation rules
- Skill definitions and repository locations
- Security and credential management
Example excerpt:
---
name: my-bot
runtime: copilot
model: gpt-5-sonnet
timezone: EST/EDT
---
## Behavior
- Preferred AI runtime: Claude > Copilot > Gemini
- Task routing: Delegate to specialized sub-agents for domain expertise
- Notification channel: Telegram

Organize knowledge for long-term retention and reuse:
- **Projects/**: Active multi-step work (e.g., `home-automation-setup.md`)
- **Areas/**: Ongoing responsibilities (e.g., `orchestration.md`, `security.md`)
- **Resources/**: Reference material (e.g., `best-practices.md`, `api-docs.md`)
- **Archive/**: Completed or deprecated knowledge
Skills extend your bot's capabilities by providing pre-built integrations with external APIs and services. Skills should be sourced from reputable, official repositories to minimize security risks.
- **pot-o-skills**: Community skills for cloud networking and security
  - Repository: https://github.com/leprachuan/pot-o-skills
  - Skills: Cisco Meraki, Cisco Security Cloud Control, and more
  - Status: Public, open-source, actively maintained
  - Usage: Clone and link into your bot's `skills/` directory
- **Anthropic Official Skills**: Official skills from Anthropic
  - Repository: https://github.com/anthropics/skills
  - Status: Official, production-ready
  - Security: Vetted and maintained by the Anthropic team
  - Best for: Claude AI integration, code generation, analysis
- **Custom Skills**: Implement your own domain-specific skills
  - Location: `./skills/` directory in your bot repository
  - Documentation: Must include SKILL.md, README, and examples
  - Security: You control the code and updates
Skills have full access to your system: they can execute commands, read files, and call APIs. Follow these practices:
- ✅ **Only use official skills from original software/service authors**
  - Example: Use Cisco's official Meraki skill, not community forks
  - Example: Use Anthropic's official skills, not third-party versions
- ✅ **Validate before installation**
  - Review the source code in the skill repository
  - Check for hardcoded credentials or suspicious patterns
  - Verify the repository is actively maintained
  - Look for security issues reported in GitHub Issues
- ✅ **Use trusted repositories**
  - Official repos (Anthropic, GitHub, etc.)
  - Long-standing community projects with active maintainers
  - Projects with security policies and issue tracking
  - Avoid random GitHub repos without documentation or maintenance
- ⚠️ **Audit custom skills carefully**
  - Never trust a skill without reviewing its code first
  - Check for unintended API calls or data exfiltration
  - Validate input sanitization
  - Ensure credentials are handled safely
- ✅ **Keep skills updated**
  - Periodically review and update to the latest versions
  - Subscribe to security advisories from skill repositories
  - Remove unused skills to reduce attack surface
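A first pass at the "check for hardcoded credentials" step can be automated. This scanner is a rough sketch: the patterns are illustrative, will miss plenty, and are no substitute for actually reading the skill's code:

```python
import re

# Illustrative patterns only -- a real audit needs far more than this.
SUSPICIOUS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access-key-id shape
]

def scan_text(source: str):
    """Return matched snippets that look like hardcoded credentials."""
    hits = []
    for pattern in SUSPICIOUS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits

clean = "api_key = os.environ['MERAKI_KEY']"   # reads from the environment
dirty = "api_key = 'sk-live-1234567890abcdef'"  # literal secret in source
print(scan_text(clean))        # []
print(len(scan_text(dirty)))   # 1
```

Running a scan like this over every file in a skill directory before linking it in is cheap insurance.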
# Link public skills from pot-o-skills (verified, open-source)
ln -s /opt/pot-o-skills/cisco-meraki ./skills/
ln -s /opt/pot-o-skills/cisco-security-cloud-control ./skills/
# Link Anthropic official skills (verified, official)
ln -s /opt/anthropic-skills/code-analysis ./skills/
ln -s /opt/anthropic-skills/file-operations ./skills/
# Or implement custom skills in skills/ directory
mkdir skills/my-custom-skill

- **pot-o-skills**: https://github.com/leprachuan/pot-o-skills

  cd /opt && git clone https://github.com/leprachuan/pot-o-skills.git

- **Anthropic Skills**: https://github.com/anthropics/skills

  cd /opt && git clone https://github.com/anthropics/skills.git

- **Custom Community Skills**: Search GitHub for `topic:agent-skills` and verify:
  - ✅ Active maintenance (recent commits)
  - ✅ Clear documentation
  - ✅ Security policy file
  - ✅ Public issue tracking
Organize bot work by area of focus:
- Keep related scripts, templates, and documentation together
- Example: `email/` for email processing, `home/` for automation tasks
- Each folder can have its own README with domain-specific guidance
Recommended: Fork the Wee-Orchestrator Starter Kit instead of starting from scratch; it includes everything below pre-configured with best practices, security scanning, and setup scripts.
1. **Create your bot repository:**

   mkdir my-bot && cd my-bot
   git init
   git remote add origin https://github.com/username/my-bot.git

2. **Add AGENTS.md:** Copy and customize the AGENTS.md template from Wee-Orchestrator with your bot's preferences.

3. **Create the memory directory:**

   mkdir -p memory/{projects,areas,resources,archive}
   echo "# Knowledge Base" > memory/INDEX.md

4. **Add .env and .gitignore:**

   cp /opt/n8n-copilot-shim-dev/.env.example .env
   echo ".env" >> .gitignore
   echo "*.key" >> .gitignore
   echo "secrets.json" >> .gitignore

5. **Link or implement skills:**

   mkdir skills
   ln -s /opt/pot-o-skills/cisco-meraki skills/

6. **Register with Wee-Orchestrator:** Update Wee-Orchestrator's `agents.json` to include your bot:

   {
     "agents": [
       { "name": "my-bot", "path": "/opt/my-bot", "enabled": true }
     ]
   }
- **Secrets First**: Store all credentials in `.env` (git-ignored), never commit secrets
- **Document Decisions**: Use `memory/areas/` to record architectural decisions and conventions
- **Skill Reuse**: Leverage pot-o-skills before building custom skills
- **Domain Organization**: Group related work into focused folders for maintainability
- **README Clarity**: Each folder should have a clear purpose and examples
- Wee-Orchestrator: https://github.com/leprachuan/Wee-Orchestrator
- pot-o-skills: https://github.com/leprachuan/pot-o-skills (Cisco Meraki, SCC, and more)
- AGENTS.md Template: See ./AGENTS.md for full configuration reference
This project requires one or more of the following AI CLI tools to be installed:
Prerequisites:
- Node.js 18+ (for npm installation) OR native binary support
- Anthropic API key for authentication
Installation:
Native binary (recommended):
curl -fsSL https://claude.ai/install.sh | bash

Or via npm:

npm install -g @anthropic-ai/claude-code

Supported Systems: macOS 10.15+, Linux (Ubuntu 20.04+/Debian 10+, Alpine), Windows 10+ (via WSL)
Reference: Claude Code Quickstart Documentation
Prerequisites:
- Node.js 22 or higher
- Active GitHub Copilot subscription (Pro, Pro+, Business, or Enterprise plan)
- GitHub account for authentication
Installation:
npm install -g @github/copilot
copilot                  # Launch and authenticate

For authentication, use the `/login` command or set the `GH_TOKEN` environment variable with a fine-grained PAT.
Supported Systems: macOS, Linux, Windows (via WSL)
Reference: GitHub Copilot CLI Installation Guide
Prerequisites:
- Node.js or compatible runtime
Installation (Recommended):
curl -fsSL https://opencode.ai/install | bash

Or via npm:

npm i -g opencode-ai@latest

Alternative package managers:
- Homebrew: `brew install opencode`
- Scoop (Windows): `scoop bucket add extras && scoop install extras/opencode`
- Arch Linux: `paru -S opencode-bin`
Supported Systems: Windows, macOS, Linux
Reference: OpenCode Documentation
Prerequisites:
- Python 3.7 or higher
- Google Cloud account with Gemini API access
- Google API key for authentication
Installation:
pip install google-generativeai
# Or using the CLI wrapper
pip install gemini-cli

Authentication:
Set your API key as an environment variable:

export GOOGLE_API_KEY='your-api-key-here'

Or configure it in your shell profile:

echo 'export GOOGLE_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc

Supported Systems: Windows, macOS, Linux
Reference: Google Gemini API Documentation
All AI runtimes in this system are configured with full tool access to enable read, write, and execute operations without approval prompts. This provides maximum automation capabilities.
- Flags Used: `--allow-all-tools --allow-all-paths`
- Enables:
  - All MCP tools and shell commands without approval
  - Read/write/execute permissions for all files and directories
- Security Note: Gives Copilot the same permissions as your user account
- Flags Used: `--permission-mode bypassPermissions`
- Enables:
  - Auto-approve all file edits, writes, and reads
  - Execute shell commands without approval
  - Access web/network tools without prompts
- Also Known As: YOLO mode or dontAsk mode
- Configuration: Uses an `opencode.json` file for permission settings
- Required Setup:
  - Copy the example config: `cp opencode.example.json opencode.json`
  - Place `opencode.json` in your agent directories or project root
- Permissions Enabled: `edit: allow`, `write: allow`, `bash: allow`, `read: allow`, `webfetch: allow`
- Reference: OpenCode Permissions Documentation
- Flags Used: `--yolo`
- Enables:
  - Read/write file operations without confirmation
  - Shell command execution without approval
  - All built-in tools with unrestricted access
- Built-in Tools: `read_file`, `write_file`, `run_shell_command`
- Flags Used: `--dangerously-bypass-approvals-and-sandbox`
- Enables:
  - Disables all approval prompts
  - Removes sandbox restrictions (full file system access)
  - Allows all shell commands and tools without confirmation
- Security Note: Only use in trusted, controlled environments
- Full file system access: Can read, modify, or delete any file your user can access
- Command execution: Can run any shell command with your user privileges
- No safety prompts: All operations execute automatically without confirmation
Best Practices:
- Use in controlled environments: Development containers, VMs, or sandboxed systems
- Regular backups: Maintain backups of critical files and directories
- Code review: Review AI-generated changes before committing to production
- Limit agent scope: Configure agents to work in specific project directories
- Monitor activity: Review session logs and agent outputs regularly
Recommended Use Cases:
- ✅ Development and testing environments
- ✅ Automated CI/CD pipelines in isolated containers
- ✅ Personal projects with version control
- ❌ Production systems without review
- ❌ Shared systems with sensitive data
- ❌ Public or untrusted environments
The system loads agents from agents.json or a custom config file. Each agent represents a repository context where the AI CLI will operate.
Config Format:
{
"agents": [
{
"name": "devops",
"description": "DevOps and infrastructure management",
"path": "/path/to/MyHomeDevops"
},
{
"name": "projects",
"description": "Software development projects",
"path": "/path/to/projects"
}
]
}

Configuration Fields:

- `name` (required): Short identifier for the agent (used in `/agent set` commands)
- `description` (required): Brief human-readable description of the agent
- `path` (required): Full path to the repository or project directory
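Loading and validating this file is straightforward; the helper below sketches the checks implied by the required fields. It is an illustration, not the project's actual loader:

```python
import json

REQUIRED_FIELDS = ("name", "description", "path")

def load_agents(config_text: str):
    """Parse an agents.json document and verify each agent entry carries
    the required fields; raises ValueError on the first bad entry."""
    config = json.loads(config_text)
    agents = config.get("agents", [])
    for i, agent in enumerate(agents):
        missing = [f for f in REQUIRED_FIELDS if f not in agent]
        if missing:
            raise ValueError(f"agent #{i} missing fields: {missing}")
    return {a["name"]: a for a in agents}

doc = '''{"agents": [
  {"name": "devops", "description": "Infra", "path": "/path/to/MyHomeDevops"}
]}'''
agents = load_agents(doc)
print(agents["devops"]["path"])  # /path/to/MyHomeDevops
```

Failing fast on a malformed entry beats discovering a missing `path` mid-conversation.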
⚠️ **`API_HOST` Security Warning**: Never set `API_HOST=0.0.0.0`; this exposes the server on every network interface, including your LAN and any public NIC. Always bind to specific trusted interfaces (e.g. `127.0.0.1`, `<tailscale-ip>`). See Network Binding & Secure Access.
The default agent, model, and runtime can be customized via environment variables. This is useful for:
- Different users having different defaults
- Docker container configuration
- CI/CD pipeline customization
- Development vs. production setups
Available Environment Variables:
# Default agent for new sessions
COPILOT_DEFAULT_AGENT=orchestrator # Default: orchestrator
# Default model for new sessions
COPILOT_DEFAULT_MODEL=gpt-5-mini # Default: gpt-5-mini
# Default runtime for new sessions
COPILOT_DEFAULT_RUNTIME=copilot      # Default: copilot

Usage Examples:
# Set orchestrator as default
export COPILOT_DEFAULT_AGENT=orchestrator
export COPILOT_DEFAULT_RUNTIME=copilot
# Or set family agent with Claude runtime
export COPILOT_DEFAULT_AGENT=family
export COPILOT_DEFAULT_MODEL=claude-sonnet
export COPILOT_DEFAULT_RUNTIME=claude
# Run the agent
python3 agent_manager.py "Your prompt" "session_id"

Docker Example:
ENV COPILOT_DEFAULT_AGENT=orchestrator
ENV COPILOT_DEFAULT_MODEL=gpt-5-mini
ENV COPILOT_DEFAULT_RUNTIME=copilot

Reference Configuration:
Copy `.env.example` to `.env` and customize:
cp .env.example .env
# Edit .env with your defaults

When environment variables are not set, the system uses these hardcoded defaults:
- Agent: `orchestrator`
- Model: `gpt-5-mini`
- Runtime: `copilot`
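The fallback order described above (environment variable first, then the hardcoded default) can be sketched as follows; the helper function name is illustrative:

```python
import os

# The documented hardcoded defaults, used when the env var is unset.
HARDCODED_DEFAULTS = {
    "COPILOT_DEFAULT_AGENT": "orchestrator",
    "COPILOT_DEFAULT_MODEL": "gpt-5-mini",
    "COPILOT_DEFAULT_RUNTIME": "copilot",
}

def resolve_default(var: str) -> str:
    """Environment variable wins; otherwise fall back to the hardcoded value."""
    return os.environ.get(var, HARDCODED_DEFAULTS[var])

os.environ.pop("COPILOT_DEFAULT_MODEL", None)
print(resolve_default("COPILOT_DEFAULT_MODEL"))  # gpt-5-mini

os.environ["COPILOT_DEFAULT_MODEL"] = "claude-sonnet"
print(resolve_default("COPILOT_DEFAULT_MODEL"))  # claude-sonnet
```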
1. **Copy the agent manager script:**

   cp agent_manager.py /usr/local/bin/agent-manager
   chmod +x /usr/local/bin/agent-manager

2. **Configure your agents:**
   - Copy `agents.example.json` to `agents.json`
   - Edit `agents.json` with your actual repository paths
   - Place `agents.json` in the same directory as the script or the current working directory

3. **Optional: specify the config location via an environment variable:**

   export AGENTS_CONFIG=/path/to/custom/agents.json
The agent manager supports both positional arguments (for backwards compatibility) and named options for more flexibility.
python agent_manager.py "<prompt>" [session_id] [config_file]

Arguments:
- `prompt`: The prompt/command to send to the AI CLI
- `session_id` (optional): N8N session identifier for tracking conversations (default: "default")
- `config_file` (optional): Path to the agents.json config file
Examples:
# Basic usage
python agent_manager.py "List all files in the current directory"
# With session ID
python agent_manager.py "Continue debugging the issue" "session-123"
# With custom config file
python agent_manager.py "Deploy the app" "session-456" "/etc/agents.json"

python agent_manager.py [options] "<prompt>" [session_id]

Options:
Agent Options:
- `--agent NAME`: Set the agent to use (e.g., devops, family, projects)
- `--list-agents`: List all available agents and exit
Model Options:
- `--model NAME`: Set the model to use (e.g., gpt-5, sonnet, gemini-1.5-pro)
- `--list-models`: List all available models for the current runtime and exit
Runtime Options:
- `--runtime NAME`: Set the runtime to use (choices: copilot, opencode, claude, gemini, codex, devin)
- `--list-runtimes`: List all available runtimes and exit
Configuration:
- `--config FILE` or `-c FILE`: Path to the agents.json configuration file
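Supporting the legacy positional form and the named options side by side fits in a single argparse parser. This is an illustrative sketch, not the script's actual argument handling:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="agent manager (sketch)")
    # Named options
    parser.add_argument("--agent")
    parser.add_argument("--model")
    parser.add_argument("--runtime",
                        choices=["copilot", "opencode", "claude",
                                 "gemini", "codex", "devin"])
    parser.add_argument("--config", "-c")
    # Positional arguments keep the legacy call shape working
    parser.add_argument("prompt")
    parser.add_argument("session_id", nargs="?", default="default")
    return parser

args = build_parser().parse_args(
    ["--runtime", "gemini", "--model", "gemini-1.5-pro", "Analyze this code"])
print(args.runtime, args.session_id)  # gemini default
```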
Examples:
# List available agents
python agent_manager.py --list-agents
# List available agents with custom config
python agent_manager.py --list-agents --config my-agents.json
# List available runtimes
python agent_manager.py --list-runtimes
# List available models
python agent_manager.py --list-models
# Set agent via CLI
python agent_manager.py --agent devops "Check server status"
# Set runtime and model via CLI
python agent_manager.py --runtime gemini --model gemini-1.5-pro "Analyze this code"
# Combine multiple options
python agent_manager.py --agent family --runtime claude --model sonnet "Find recipes for dinner"
# Use custom configuration file
python agent_manager.py --config /etc/my-agents.json --agent projects "Review pull requests"
# All options together
python agent_manager.py --config my-agents.json --agent devops --runtime claude --model haiku "Deploy to production" "session-123"

Getting Help:

python agent_manager.py --help

Interact with the agent manager using slash commands:
!<command> # Execute bash command directly (e.g., !pwd, !ls -la)
Examples:
!pwd # Show current working directory
!echo "Hello World" # Echo a message
!ls -lh # List files with details
!date # Show current date/time
!git status # Run git commands
!python3 --version   # Check installed versions

Features:
- Commands execute directly without hitting any AI runtime
- 10-second timeout for safety
- Runs in current working directory
- Supports pipes, redirects, and command chaining (&&, ||, |)
- Returns stdout/stderr output
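The behavior listed above maps naturally onto `subprocess.run` with a timeout; here is a minimal sketch of the idea (not the project's actual implementation):

```python
import subprocess

def run_bang_command(line: str, timeout: float = 10.0) -> str:
    """Execute a '!'-prefixed line through the shell, capturing output.
    shell=True is what allows pipes, redirects, and && / || chaining;
    TimeoutExpired enforces the safety timeout."""
    assert line.startswith("!")
    try:
        result = subprocess.run(
            line[1:], shell=True, capture_output=True, text=True,
            timeout=timeout)
    except subprocess.TimeoutExpired:
        return "error: command timed out"
    return (result.stdout + result.stderr).strip()

print(run_bang_command('!echo "Hello World"'))  # Hello World
```

Because the line goes through the shell with the bot process's privileges, this path deserves the same caution as yolo mode.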
/runtime list # Show available runtimes (copilot, opencode, claude, gemini)
/runtime set <runtime> # Switch runtime (e.g., /runtime set gemini)
/runtime current # Show current runtime
/model list # Show available models for current runtime
/model set "<model>" # Switch model (e.g., /model set "claude-opus-4.5")
/model current # Show current model
/agent list # Show all available agents with descriptions
/agent set "<agent>" # Switch to an agent (e.g., /agent set "projects")
/agent current # Show current agent and its context