Run multiple AI agents on one repo. Zero merge conflicts. Zero duplicate work.
Quick Start • How Is This Different? • Features • API • CLI • Contributing • Full Docs
When multiple AI agents (Claude, GPT, Gemini, Cursor, Copilot, Ollama) work on the same codebase, they step on each other: duplicate tasks, merge conflicts, lost context. Agent Coordinator is a lightweight HTTP server that gives your agents atomic task claiming, lease-based file locks, a message bus, and a real-time dashboard so they can work together without chaos.
Works with any LLM. Works on any project. No framework lock-in: it's just an HTTP API.
Real-time dashboard: agent roster, Kanban task board, file locks, streaming output
git clone https://github.com/mkalkere/agent-coordinator.git
cd agent-coordinator
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
agent-os init --name my-project
agent-os serve

Or with Docker: docker compose up -d
Then create agents and start them:
agent-os agent create dev-1 --preset developer
agent-os agent create rev-1 --preset reviewer
agent-os claude start --agent dev-1 # Terminal 2
agent-os claude start --agent rev-1    # Terminal 3

The developer claims tasks, writes code, creates PRs. The reviewer auto-reviews. All coordination happens through the HTTP API: agents don't need to know about each other.
Most AI agent frameworks handle orchestration (what agents do). Agent Coordinator handles coordination (how agents share resources without conflicts).
| | Agent Coordinator | CrewAI | LangGraph | gstack |
|---|---|---|---|---|
| Independent agents, shared filesystem | Core purpose | Different model | Different model | No |
| Atomic task claiming | Yes | No | No | No |
| File lock leases | Yes | No | No | No |
| Agent health monitoring | Yes (auto-reclaim) | No | No | No |
| Provider agnostic | Any LLM | Mostly | Yes (via LangChain) | Claude Code |
| Framework lock-in | None (HTTP API) | Yes | Yes (LangChain) | Yes |
| Dashboard | Built-in | Paid add-on | LangSmith (separate) | No |
Agent Coordinator is infrastructure, not a framework. Any tool that can make HTTP requests can be a coordinated agent.
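Since coordination is plain HTTP, a minimal "agent" can be sketched with nothing but the Python standard library. Note that the `/tasks/claim` path, the payload shape, and the stub server standing in for the coordinator are all assumptions made for this illustration, not the real Agent Coordinator API (the actual routes are listed in the Swagger explorer at `/docs`):

```python
# Sketch: any HTTP client can act as a coordinated agent.
# Endpoint path and payload are hypothetical; a tiny stub server
# stands in for the coordinator so the example is self-contained.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubCoordinator(BaseHTTPRequestHandler):
    """In-process stand-in for a task-claim endpoint with one task."""
    claimed = False  # crude single-task state for the demo

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if self.path == "/tasks/claim" and not StubCoordinator.claimed:
            StubCoordinator.claimed = True
            reply = {"task_id": 1, "agent": body["agent"], "status": "claimed"}
        else:
            reply = {"status": "nothing-to-claim"}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request logging
        pass

def claim_task(base_url: str, agent: str) -> dict:
    """POST a claim request: this is all an 'agent' needs to do."""
    req = urllib.request.Request(
        f"{base_url}/tasks/claim",
        data=json.dumps({"agent": agent}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), StubCoordinator)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}"

first = claim_task(url, "dev-1")   # wins the only task
second = claim_task(url, "rev-1")  # nothing left to claim
server.shutdown()
print(first["status"], second["status"])  # claimed nothing-to-claim
```

The point of the sketch: the agent side is one POST request, so a Claude wrapper, a cron job, or a shell script with curl are all equally valid "agents."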
- Atomic task claiming: `INSERT...SELECT` ensures no two agents grab the same work
- Lease-based file locks: auto-expire, no deadlocks, even if an agent crashes
- Message bus: priority levels, acknowledgments, TTL, event subscriptions
- Health monitoring: stale agents detected at 30 min, resources reclaimed at 60 min
- Git worktree per agent: automatic workspace isolation, no merge conflicts
- Hierarchical memory: L1 (agent-local), L2 (shared), L3 (cross-project)
- 41 built-in skills: markdown-based, loaded on demand, provider-agnostic
- 5 agent presets: developer, reviewer, investigator, analyst, research
- Cost tracking: per-agent budgets with auto-model-downgrade
- FastAPI + SQLite (WAL mode): no external database, single-file deployment
- 17 API routers: agents, tasks, locks, messages, memory, teams, dashboard, and more
- Real-time dashboard: agent status, Kanban board, locks, streaming output
- Swagger docs: interactive API explorer at `/docs`
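As a rough illustration of the first bullet, here is how an `INSERT ... SELECT` into a uniquely-constrained claims table makes claiming atomic: only one agent's insert can succeed per task. The table and column names are invented for this sketch, not the coordinator's actual schema:

```python
# Sketch of atomic task claiming with SQLite: a UNIQUE constraint
# plus INSERT ... SELECT lets exactly one claimant win.
# Schema is illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tasks  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE claims (task_id INTEGER UNIQUE, agent TEXT);
    INSERT INTO tasks VALUES (1, 'fix flaky test');
""")

def claim(agent: str, task_id: int) -> bool:
    """Return True iff this agent won the claim."""
    try:
        db.execute(
            """INSERT INTO claims (task_id, agent)
               SELECT id, ? FROM tasks WHERE id = ?""",
            (agent, task_id),
        )
        db.commit()
        return True
    except sqlite3.IntegrityError:  # task already claimed
        return False

won_first = claim("dev-1", 1)    # True: first claimant wins
won_second = claim("rev-1", 1)   # False: UNIQUE constraint rejects it
print(won_first, won_second)     # True False
```

Because the check and the write happen in one statement, there is no window where two agents can both see the task as unclaimed.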
┌──────────┐                                         ┌──────────┐
│  Claude  │◄── HTTP ──►┌───────────────┐◄── HTTP ──►│   GPT    │
└──────────┘            │  Coordinator  │            └──────────┘
┌──────────┐            │     :9889     │            ┌──────────┐
│  Gemini  │◄── HTTP ──►│               │◄── HTTP ──►│  Ollama  │
└──────────┘            │     Tasks     │            └──────────┘
                        │     Locks     │
                        │    Messages   │
                        │     Memory    │
                        │   Dashboard   │
                        │               │
                        │  SQLite WAL   │
                        └───────────────┘
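The Locks service shown above is lease-based: each lock row carries an expiry time, so a crashed agent's lock simply lapses instead of deadlocking the repo. A minimal sketch of that idea, with an invented schema and a deliberately tiny TTL so it runs fast:

```python
# Sketch of lease-based file locks: acquiring reaps expired leases
# first, so a dead agent's lock never blocks forever.
# Table layout and TTL are assumptions for illustration.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE locks (path TEXT PRIMARY KEY, agent TEXT, expires REAL)")

LEASE_SECONDS = 0.05  # tiny TTL so the demo finishes quickly

def acquire(path: str, agent: str) -> bool:
    now = time.time()
    db.execute("DELETE FROM locks WHERE expires < ?", (now,))  # reap stale leases
    try:
        db.execute("INSERT INTO locks VALUES (?, ?, ?)",
                   (path, agent, now + LEASE_SECONDS))
        return True
    except sqlite3.IntegrityError:  # a live lease is held by someone else
        return False

a = acquire("src/app.py", "dev-1")   # fresh lock: granted
b = acquire("src/app.py", "rev-1")   # lease still live: refused
time.sleep(LEASE_SECONDS * 2)        # simulate dev-1 crashing and the lease expiring
c = acquire("src/app.py", "rev-1")   # granted, with no explicit unlock ever sent
print(a, b, c)  # True False True
```

A real implementation would also let the holder renew its lease while it is still working; the key property is that forward progress never depends on the crashed holder cleaning up.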
agent-coordinator/
├── src/agent_os/server/   # FastAPI server (17 routers)
├── src/agent_os/          # CLI, agent runner, memory, workspace
├── .os/skills/            # 41 built-in skills
├── tests/                 # 4900+ tests
└── docs/                  # Documentation
Contributions welcome! The short version:
git checkout -b feat/your-feature
# write code + tests
python3 -m pytest tests/ -v
pre-commit run --all-files
gh pr create

Every PR goes through a 2-round review before merge. We use Conventional Commits.
See CONTRIBUTING.md for the full guide: architecture, code quality rules, and PR process.
Report a bug | Request a feature
If you're an AI agent in a workspace with .os/, read .os/README.md for your operating protocol.
MIT © 2026 Mallikarjuna Kalkere

