A self-hosted AI workspace with chat, code execution, parallel multi-agent orchestration, cross-machine agent connection, and a skill marketplace. Mix different AI providers in the same agent team — OpenAI-compatible APIs, Claude Code CLI, and Codex CLI. Connect agents across machines on your network so distributed teams can collaborate in real time. Connect external MCP servers to extend the AI's toolbox. Ships with 16 built-in tools and is designed for long-running sessions with smart context compression and checkpoint recovery.
- Per-Project Agent Mode Override — each project can override the global sub-agent mode (Auto Spawn, Auto Create, Manual, Realtime, Auto Swarm) and pick its own YAML config, architecture type, agent count, and connection protocols. The active override is shown as a clickable purple tag in the project header and chat banner.
- Auto Architecture — AI-Decided Settings — new "Auto (AI decides)" option for architecture type and agent count (3–8 default). Connection protocols are now multi-select toggle buttons instead of a single dropdown.
- Full Chat Log with Agent Reasoning — every chat session now records a complete log file capturing user messages, tool calls (with arguments), sub-agent reasoning text, and final responses. A new Log button next to Activity opens a live-updating panel; a new Export button downloads the log as `.txt`.
- Finished Tasks History — the Tasks page now shows the last 100 completed/cancelled/errored tasks with status, duration, agents used, and tools called. An Open Chat button on each finished task jumps directly to that session.
- Project List Sorting — sort projects by A–Z or Recent (most recently updated). Sort preference persists across reloads.
- Sub-Agent Reasoning in Chat Log — orchestrators and worker agents stream their intermediate thinking text to the chat log between tool calls, giving full visibility into the decision-making chain.
- AsyncLocalStorage Settings Override — project agent overrides now propagate correctly through every async call in the backend, ensuring `getSettings()` returns the project-scoped configuration throughout the entire chat lifecycle.
Warning: This app executes AI-generated code and shell commands. Run it inside Docker or a sandboxed environment. See Security & Docker Setup.
AI Chat with tool-calling — generates React/Recharts visualizations rendered in the output panel.
Visual Agent Editor — drag-and-drop multi-agent design with mesh networking and YAML export.
Minecraft Task Monitor — live pixel-art agents with speech bubbles, walking animations, and inter-agent interactions.
- AI Chat with 16 Built-in Tools — web search, Python, React, shell, files, skills, sub-agents
- Mix Any Model per Agent — assign different AI providers per agent (API, Claude Code CLI, Codex CLI)
- Parallel Multi-Agent System — 7 orchestration topologies, 4 communication protocols, P2P swarm governance
- Cross-Machine Agent Connection — connect agents running on different machines over the network, enabling distributed multi-agent collaboration across your infrastructure
- Minecraft Task Monitor — live pixel-art characters (Steve, Creeper, Enderman, etc.) with speech bubbles showing agent activity, walking animations when agents interact
- Long-Running Session Stability — sliding window compression, smart tool result handling, checkpoint recovery
- MCP Integration — connect any Model Context Protocol server (Stdio, SSE, StreamableHTTP)
- Output Panel — renders React components, charts, HTML, PDF, Word, Excel, images, and Markdown
- Skills & ClawHub — install AI skills from the marketplace or build your own
- Projects — dedicated workspaces with memory, skill selection, and file browser
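The long-running-session stability listed above rests on context compression. As a rough illustration of the sliding-window idea only — the app's actual strategy also includes smart tool-result handling and checkpoint recovery — older messages can be collapsed while recent ones stay verbatim:

```javascript
// Illustrative sketch, not Tiger CoWork's actual algorithm:
// keep the most recent `windowSize` messages verbatim and collapse
// everything older into a single summary stub.
function compressHistory(messages, windowSize = 4) {
  if (messages.length <= windowSize) return messages;
  const older = messages.slice(0, messages.length - windowSize);
  const recent = messages.slice(-windowSize);
  const summary = {
    role: 'system',
    content: `[${older.length} earlier messages compressed]`,
  };
  return [summary, ...recent];
}
```

In a real system the stub would be an LLM-generated summary rather than a placeholder, but the shape is the same: bounded context growth regardless of session length.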
Mac:
- Download `TigerCoWork.zip`
- Unzip, right-click `TigerCoWork.app`, and select Open
Windows:
- Download `TigerCoWorkInstaller.zip`
- Unzip and run `TigerCoWorkInstaller.bat`
Prerequisite: Docker Desktop must be installed and running.
| | Mac | Windows |
|---|---|---|
| Start | Double-click `TigerCoWork.app` | Double-click `TigerCoWorkStart.bat` |
| Stop | Docker Desktop → Containers → Stop | Double-click `TigerCoWorkStop.bat` |
Mac/Linux:

```bash
curl -fsSL https://raw.githubusercontent.com/Sompote/tiger_cowork/main/install.sh | bash
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/Sompote/tiger_cowork/main/install.ps1 | iex
```

Log in to your Linux server directly or via SSH:

```bash
ssh root@<your-server-ip>
```
⚠️ Security Warning: AI agents can execute arbitrary code and shell commands that may modify or delete files on the host system. It is strongly recommended to run Tiger CoWork on a VPS or dedicated machine that contains no important data. Do not run it on a machine with sensitive or irreplaceable information.
Prerequisites: Node.js >= 18, npm, Python 3 (optional)
```bash
git clone https://github.com/Sompote/tiger_cowork.git
cd tiger_cowork
bash setup.sh                # installs deps, prompts for ClawHub token
npm run build && npm start   # → http://localhost:3001
```

Running in the background (recommended): use PM2 to keep Tiger CoWork running after you close the terminal.

```bash
npm install -g pm2                            # install PM2 globally
npm run build                                 # build production bundle
pm2 start npm --name "tiger-cowork" -- start  # start in background
pm2 save                                      # save process list for auto-restart
pm2 startup                                   # enable auto-start on system boot
```

Useful PM2 commands:

```bash
pm2 status                # check running processes
pm2 logs tiger-cowork     # view logs
pm2 restart tiger-cowork  # restart
pm2 stop tiger-cowork     # stop
```
First-time token: the default token shown in the UI is `your-secret-token-here`. For security, change it later in `.env`.
- Open `http://localhost:3001`
- Go to Settings → enter your API Key, API URL, and Model
- Click Test Connection to verify
- Start chatting — the AI can search the web, run code, generate charts, and more
| Document | Description |
|---|---|
| Technical Documentation | Architecture, agent system, communication protocols, orchestration topologies, MCP setup, CLI agents, API endpoints, configuration |
| Changelog | Full version history and release notes |
This project is licensed under the MIT License.




