OpenACM is a self-hosted autonomous AI agent that runs on your PC. It controls your local environment, writes and executes code, navigates the web, and connects to any MCP server, all through a modern web dashboard.
No subscriptions. No cloud dependency. Your data stays local.
Created and maintained by Jeison Hernandez / JsonProductions.
If you use or build on OpenACM, a credit or a star goes a long way.
- Run commands & code: executes shell commands and stateful Python (Jupyter kernel)
- Browse the web: Playwright-powered browser automation for login, scraping, and screenshots
- MCP server support: connect to any Model Context Protocol server (unity-mcp, filesystem, custom tools, etc.)
- Multi-channel: chat via Web, Telegram, or Console, all sharing the same AI brain
- Skills system: define reusable Markdown-based skills the AI triggers automatically
- Sub-agents: spawn specialized agents that work on tasks in parallel
- RAG memory: ChromaDB-backed long-term memory that persists across conversations
- Local intent router: hybrid local/cloud architecture that skips the LLM for simple commands (~5 ms, no tokens spent)
- Loop trace debugger: inspect every iteration (context size, tool calls, LLM timing, truncations)
- Python 3.12+
- Node.js 20+
- An API key from any supported LLM provider
Windows:

```
git clone https://github.com/Json55Hdz/OpenACM.git
cd OpenACM
.\setup.bat   # first time: installs everything and launches OpenACM
```

Next time, just run:

```
.\run.bat
```

Linux / macOS:

```
git clone https://github.com/Json55Hdz/OpenACM.git
cd OpenACM
chmod +x setup.sh run.sh
./setup.sh   # first time: installs everything and launches OpenACM
```

Next time, just run:

```
./run.sh
```

Docker:

```
docker-compose up -d --build
docker logs openacm   # your dashboard token is printed here
```

Open http://localhost:47821, paste the token, done.
- The console prints your Dashboard Token; copy it
- Open http://localhost:47821
- Paste the token to log in
- Go to Configuration and add your LLM API key
OpenACM uses LiteLLM internally, so any provider LiteLLM supports works:
| Provider | Example model |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini |
| Anthropic | claude-opus-4-5, claude-sonnet-4-5 |
| Google Gemini | gemini/gemini-2.0-flash |
| Groq | groq/llama-3.3-70b-versatile |
| Ollama (local) | ollama/llama3.2 |
| Any OpenAI-compatible API | configure custom base URL in settings |
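As the table shows, LiteLLM routes on a provider prefix in the model string, and a bare name like `gpt-4o` is treated as OpenAI. The helper below is an invented illustration of that convention, not part of LiteLLM's API:

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model name).

    Illustrative helper (not a LiteLLM function): strings without a
    "provider/" prefix default to OpenAI, matching the table above.
    """
    provider, sep, name = model.partition("/")
    if not sep:
        return "openai", model
    return provider, name
```

So `"ollama/llama3.2"` targets a local Ollama instance, while `"gpt-4o-mini"` goes to OpenAI.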
Connect to any MCP server from the MCP Servers dashboard page:
| Mode | When to use |
|---|---|
| Remote HTTP (modern) | unity-mcp, most modern servers; just paste the URL |
| Remote SSE (legacy) | older SSE-based MCP servers |
| Local stdio | run a local process (npx @modelcontextprotocol/server-filesystem, etc.) |
Once connected, the AI automatically sees and uses those tools.
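As a hedged sketch of what the three connection modes boil down to, here are illustrative connection descriptors (the field names are invented for this example, not OpenACM's actual schema):

```python
# Illustrative descriptors for the three MCP modes in the table above.
mcp_servers = [
    {   # Remote HTTP (modern): just a URL
        "name": "unity-mcp",
        "mode": "http",
        "url": "http://localhost:8080/mcp",
    },
    {   # Remote SSE (legacy): older event-stream servers
        "name": "legacy-tools",
        "mode": "sse",
        "url": "http://localhost:9000/sse",
    },
    {   # Local stdio: spawn a process and speak MCP over stdin/stdout
        "name": "filesystem",
        "mode": "stdio",
        "command": ["npx", "@modelcontextprotocol/server-filesystem", "/home/me"],
    },
]
```

Remote modes need only an endpoint; the stdio mode needs a command line to launch the server process.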
| Page | What it does |
|---|---|
| Dashboard | Real-time stats, activity, live events |
| Chat | Multi-channel conversations with tool call visibility |
| Tools | Browse available tools and execution history |
| Skills | Create and manage Markdown-based AI skills |
| Agents | Manage sub-agents |
| MCP Servers | Connect to external tool servers |
| Traces | Per-request debugger: context size, tool timings, errors |
| Configuration | LLM model, API keys, channels, preferences |
```
OpenACM/
├── frontend/               # React + Next.js dashboard
│   ├── app/                # Page routes
│   ├── components/         # UI components
│   ├── hooks/              # API and WebSocket hooks
│   └── stores/             # Zustand state
├── src/openacm/
│   ├── core/
│   │   ├── brain.py        # Agentic loop + trace system
│   │   ├── llm_router.py   # LiteLLM interface + retries
│   │   ├── local_router.py # Local intent classifier
│   │   ├── memory.py       # Conversation memory
│   │   └── rag.py          # Vector memory (ChromaDB)
│   ├── tools/
│   │   ├── mcp_client.py   # MCP server manager
│   │   ├── registry.py     # Tool registry
│   │   └── ...             # Built-in tools
│   └── web/
│       └── server.py       # FastAPI server + WebSocket
├── skills/                 # Built-in skill definitions
├── config/                 # Local config (not committed)
├── setup.bat / setup.sh
└── run.bat / run.sh
```
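`brain.py` implements the agentic loop. The sketch below is a minimal, hedged illustration of that pattern with a stubbed LLM and tool table; it is not OpenACM's actual code:

```python
from typing import Callable

def agent_loop(llm: Callable[[list], dict],
               tools: dict[str, Callable[[str], str]],
               user_message: str,
               max_iters: int = 10) -> str:
    """Minimal agentic loop: ask the LLM, run any requested tool,
    feed the result back, and stop when the LLM answers in plain text."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_iters):
        reply = llm(messages)                      # one LLM step
        if "tool" in reply:                        # model requested a tool call
            result = tools[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                                      # plain answer: done
            return reply["content"]
    return "max iterations reached"

# Stub LLM for demonstration: first asks for the "echo" tool, then answers.
def fake_llm(messages: list) -> dict:
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "echo", "args": "hello"}
    return {"content": "tool said: " + messages[-1]["content"]}
```

The real loop adds context truncation, retries, and the trace instrumentation surfaced on the Traces page, but the shape is the same: LLM step, tool step, repeat.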
OpenACM is fully self-hosted. The only outbound traffic is what you explicitly trigger:
- LLM API calls to the provider you configured
- Telegram/Discord messages if you connect those channels
- Browser requests when you ask it to visit a site
Everything else (conversations, API keys, files, memory) lives in data/ and config/ on your machine. Use Ollama for a fully offline setup.
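For the fully offline route, inference points at a local Ollama instance instead of a cloud provider. The fragment below is illustrative only (the field names are invented, not OpenACM's settings schema); `11434` is Ollama's default port:

```python
# Illustrative offline configuration: all inference stays on localhost.
offline_settings = {
    "model": "ollama/llama3.2",           # LiteLLM's Ollama prefix, per the provider table
    "api_base": "http://localhost:11434", # default local Ollama endpoint
    "api_key": None,                      # no cloud key needed
}
```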
| | Minimum | Recommended |
|---|---|---|
| OS | Windows 10 / Ubuntu 20.04 / macOS 12 | Windows 11 / Ubuntu 22.04 |
| RAM | 8 GB | 16 GB |
| Storage | 5 GB | 10 GB |
| Python | 3.12+ | 3.12+ |
| Node.js | 20+ | 20+ |
Contributions are welcome. See CONTRIBUTING.md.
MIT: free to use, modify, and distribute.
Copyright (c) 2026 Jeison David Hernandez Pena (JsonProductions). All copies and derivatives must include the original copyright notice.

