A lean replacement for OpenClaw.
Single binary. 22 tools. Three-tier memory. Telegram + Discord + MCP.
7.5 MB binary · 14 MB RAM · 5,918 lines · 97% tool accuracy · 0% hallucination
Quick Start · Features · Benchmark · Architecture · Roadmap
🌐 繁體中文 · 简体中文 · 日本語 · 한국어 · Español · Português
The idea started with a simple observation: someone rewrote OpenClaw in Go and cut memory usage from 1 GB+ down to 35 MB. That was impressive. But we asked: could we go further?
Most people don't need 430,000 lines of TypeScript. They need an agent that talks to Telegram, reads their files, runs their code, and opens a GitHub PR when something breaks. That's it.
RustClaw is the 80/20 version of OpenClaw: the features that matter, in a single `cargo build`.
| | RustClaw | OpenClaw |
|---|---|---|
| 📦 Binary | 7.5 MB static | requires Node.js 24 + npm |
| 💾 Idle RAM | 14 MB | 1 GB+ |
| ⚡ Startup | < 100 ms | 5–10 s |
| 📏 Code | 5,918 lines | ~430,000 lines |
| 🧠 Memory | Three-tier (vector + graph + history) | Basic session |
| 🔧 Tools | 22 built-in + MCP | Plugin system |
| 🤖 LLM | Anthropic, OpenAI, Ollama, Gemini | OpenAI |
| 📱 Channels | Telegram, Discord, WebSocket | Web UI |
> **Note**
> RustClaw is not trying to replace OpenClaw. It's proof that the core of what makes an AI agent useful doesn't require a gigabyte of RAM. It requires good architecture, the right language, and the willingness to start over with clearer constraints.
Built entirely with Claude Code by Ad Huang. Zero human-written code.
🪶 **Runs anywhere** – 7.5 MB binary, 14 MB RAM. Raspberry Pi, a $5 VPS, or your laptop. No Node.js, no Python, no Docker required.

🧠 **Remembers everything** – three-tier memory (vector + graph + history) with mixed-mode scoping. Tell the bot your name in Telegram and it remembers in Discord. Facts are auto-extracted; contradictions are auto-resolved.

🛡️ **Safe by design** – 14 dangerous command patterns blocked, tool output truncated, patch files verified before modification, error retry with auto-recovery, and a 120 s timeout with graceful fallback.

🔧 **Actually does things** – 97% tool accuracy on a 500-question benchmark with a 0% hallucination rate. The bot reads your files, runs your commands, and creates PRs; it doesn't just describe what it would do.

🔌 **MCP-ready** – connect any MCP server. Tools are auto-discovered and routed transparently; your LLM sees one unified tool list, local and remote alike.

📊 **Benchmarked and proven** – a 500-question professional benchmark covering daily ops, coding, system administration, and adversarial prompts. v3 → v5 improvement: 81% → 97%, with zero timeouts.

⚙️ **Claude Code inspired** – understand-first tool ordering, history compression, workspace context loading, error retry hints. The same patterns that make Claude Code effective, applied to an open-source agent.
| Requirement | Install |
|---|---|
| Rust 1.85+ | `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \| sh` |
| LLM backend | Ollama, OpenAI, Anthropic, or Gemini |
```shell
git clone https://github.com/Adaimade/RustClaw.git && cd RustClaw
cargo build --release
# → target/release/rustclaw (7.5 MB)

mkdir -p ~/.rustclaw
cp config.example.toml ~/.rustclaw/config.toml
```

**Ollama (local)**

```toml
[agent]
provider = "openai"
api_key = "ollama"
base_url = "http://127.0.0.1:11434"
model = "qwen2.5:32b"
```

**Anthropic**

```toml
[agent]
provider = "anthropic"
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
```

**Gemini**

```toml
[agent]
provider = "openai"
api_key = "your-key"
base_url = "https://generativelanguage.googleapis.com/v1beta/openai"
model = "gemini-2.5-flash"
```
> **Security:** RustClaw binds to `0.0.0.0` by default for cloud deploy. Never put API keys in code – use `~/.rustclaw/config.toml` (gitignored) or environment variables (`RUSTCLAW__AGENT__API_KEY`).
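The double-underscore environment variables map onto nested config keys (`RUSTCLAW__AGENT__API_KEY` → `[agent] api_key`). A minimal sketch of that naming convention; this parser is illustrative only, not RustClaw's actual loader, which presumably uses a config crate:

```rust
/// Illustrative sketch: split an env var name like RUSTCLAW__AGENT__API_KEY
/// into a TOML section and key. Not RustClaw's real config code.
fn env_to_toml_path(var: &str) -> Option<(String, String)> {
    let rest = var.strip_prefix("RUSTCLAW__")?; // only our own namespace
    let mut parts = rest.splitn(2, "__");       // section, then the rest as key
    let section = parts.next()?.to_lowercase();
    let key = parts.next()?.to_lowercase();
    Some((section, key))
}

fn main() {
    let (section, key) = env_to_toml_path("RUSTCLAW__AGENT__API_KEY").unwrap();
    println!("[{section}] {key}"); // prints: [agent] api_key
}
```

The same convention would let any `[agent]` field in the TOML example above be overridden from the environment without touching the file.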
```shell
# Start everything (gateway + channels + cron + memory)
rustclaw gateway

# One-shot agent call with tool access
rustclaw agent "List all .rs files and count total lines of code"

# GitHub operations
rustclaw github scan
rustclaw github fix 123
```

22 built-in tools with autonomous execution. Supports Anthropic and OpenAI function calling. Max 10 iterations per request.
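The "max 10 iterations per request" loop can be sketched as below. The types and callbacks are hypothetical stand-ins; RustClaw's actual runner lives in `agent/runner.rs` and is not reproduced here:

```rust
// Hypothetical sketch of an agentic loop capped at 10 iterations.
// `llm_step` and `run_tool` stand in for the real LLM call and tool executor.
enum Step {
    ToolCall(String), // model asked for a tool run with these args
    Final(String),    // model produced a final answer
}

fn run_agent(
    mut llm_step: impl FnMut(&str) -> Step,
    mut run_tool: impl FnMut(&str) -> String,
) -> String {
    let mut context = String::from("user request");
    for _ in 0..10 {                          // hard cap: 10 iterations per request
        match llm_step(&context) {
            Step::Final(answer) => return answer,
            Step::ToolCall(args) => {
                let output = run_tool(&args);
                context.push_str(&output);    // feed tool output back to the model
            }
        }
    }
    "iteration limit reached".to_string()     // graceful fallback, never an infinite loop
}

fn main() {
    let mut calls = 0;
    let answer = run_agent(
        |_ctx| {
            calls += 1;
            if calls < 3 { Step::ToolCall("list_dir src".into()) } else { Step::Final("done".into()) }
        },
        |_args| "dir listing".into(),
    );
    println!("{answer}"); // prints: done
}
```

The cap is what keeps a confused model from burning tokens forever: after ten tool rounds the loop returns whatever it has.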
Layered tool loading – understand first, then act, then check:

```
🗂️ Understand             ⚡ Act                🔍 Check
├── read_file             ├── run_command      ├── process_check
├── list_dir              ├── write_file       ├── docker_status
└── search_code           └── patch_file       ├── system_stats
                                               ├── http_ping
💬 Discord (on-demand)    📧 Email (on-demand) ├── pm2_status
├── create/delete channel ├── fetch_inbox      └── process_list
├── create_role/set_topic ├── read_email
└── kick/ban_member       └── send_email
```
Safety: 14 dangerous patterns blocked · output truncated to 4,000 chars · patch verification · error retry hints · 120 s graceful timeout
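A minimal sketch of the first two guardrails named above. The pattern list here is a three-entry stand-in, not RustClaw's actual 14-entry deny list:

```rust
// Illustrative guardrails: substring deny list + output truncation.
// The real deny list has 14 patterns; these three are examples only.
const DENY: &[&str] = &["rm -rf /", "mkfs", ":(){ :|:& };:"];
const MAX_OUTPUT: usize = 4000;

/// Block a shell command if it contains any dangerous pattern.
fn is_blocked(cmd: &str) -> bool {
    DENY.iter().any(|p| cmd.contains(p))
}

/// Cap tool output so one noisy command can't flood the LLM context.
fn truncate_output(s: &str) -> String {
    if s.chars().count() <= MAX_OUTPUT {
        s.to_string()
    } else {
        let head: String = s.chars().take(MAX_OUTPUT).collect();
        format!("{head}\n[truncated]")
    }
}

fn main() {
    println!("blocked: {}", is_blocked("sudo rm -rf / --no-preserve-root")); // prints: blocked: true
    println!("{}", truncate_output("short output"));                         // prints: short output
}
```

Truncation matters as much as blocking: without it, a single `cat` of a large file would evict the rest of the conversation from the model's context window.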
Powered by R-Mem architecture.
```
├─ 📜 Short-term ── conversation history (SQLite)
├─ 📦 Long-term ─── LLM fact extraction → dedup → ADD/UPDATE/DELETE/NONE
│       └── Integer ID mapping · contradiction detection · semantic dedup
└─ 🕸️ Graph ─────── entity + relation extraction with soft-delete
```
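The extraction step above maps each candidate fact to one of four operations against an integer-keyed store. A hedged sketch of how such decisions could be applied; in RustClaw the ADD/UPDATE/DELETE/NONE decision itself comes from the LLM, which is mocked out here:

```rust
use std::collections::BTreeMap;

// Illustrative sketch: apply fact-extraction decisions to an
// integer-ID fact store. The enum mirrors ADD/UPDATE/DELETE/NONE.
enum Op {
    Add(String),
    Update(u64, String), // overwrite the fact at this ID (e.g. a contradiction resolved)
    Delete(u64),
    None,                // duplicate or irrelevant: keep store unchanged
}

fn apply(store: &mut BTreeMap<u64, String>, next_id: &mut u64, op: Op) {
    match op {
        Op::Add(fact) => {
            store.insert(*next_id, fact);
            *next_id += 1; // integer IDs are assigned sequentially
        }
        Op::Update(id, fact) => { store.insert(id, fact); }
        Op::Delete(id) => { store.remove(&id); }
        Op::None => {}
    }
}

fn main() {
    let mut store = BTreeMap::new();
    let mut next_id = 0u64;
    apply(&mut store, &mut next_id, Op::Add("user's name is Ad".into()));
    apply(&mut store, &mut next_id, Op::Update(0, "user's name is Ad Huang".into()));
    println!("{:?}", store.get(&0)); // prints: Some("user's name is Ad Huang")
}
```

Small integer IDs (rather than raw fact text) are what let the LLM reference an existing memory cheaply when deciding to update or delete it.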
Mixed-mode recall – three scopes merged:
| Scope | Example | Shared across |
|---|---|---|
| Local | `telegram:-100xxx` | Single group |
| User | `user:12345` | All channels for one person |
| Global | `global:system` | Everyone |
| Channel | Features |
|---|---|
| Telegram | Long polling · streaming edit · ACL · session history |
| Discord | @mention · server management · `scan` / `fix issue #N` / `pr status` |
| Gateway | OpenClaw-compatible WebSocket on `:18789/ws` |
```toml
[mcp]
servers = [
  { name = "fs", command = "npx @modelcontextprotocol/server-filesystem /tmp" },
]
```

Auto-scan repos · auto-PR from issues · system monitoring alerts · email classification – all scheduled via cron, with notifications to Discord.
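"Routed transparently" means the LLM sees one flat tool list while the agent remembers which tools belong to which MCP server. A sketch of that routing table; all names here (including the discovered tool) are hypothetical:

```rust
use std::collections::HashMap;

// Illustrative sketch of transparent MCP routing: built-in and remote
// tools share one flat name list, and a routing table records which
// MCP server (if any) owns each tool.
enum Origin {
    BuiltIn,
    Mcp { server: String },
}

fn build_routes() -> HashMap<String, Origin> {
    let mut routes = HashMap::new();
    routes.insert("read_file".to_string(), Origin::BuiltIn);
    // Hypothetical tool discovered from the "fs" server in the config above:
    routes.insert("fs_read_text_file".to_string(), Origin::Mcp { server: "fs".into() });
    routes
}

fn dispatch(routes: &HashMap<String, Origin>, tool: &str) -> String {
    match routes.get(tool) {
        Some(Origin::BuiltIn) => format!("run {tool} locally"),
        Some(Origin::Mcp { server }) => format!("forward {tool} to MCP server '{server}'"),
        None => format!("unknown tool {tool}"),
    }
}

fn main() {
    let routes = build_routes();
    println!("{}", dispatch(&routes, "read_file"));         // prints: run read_file locally
    println!("{}", dispatch(&routes, "fs_read_text_file")); // prints: forward fs_read_text_file to MCP server 'fs'
}
```

From the model's point of view the two tools are indistinguishable; only the dispatcher knows one call crosses a process boundary.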
500-question tool calling benchmark, qwen2.5:32b (local Ollama):
| Version | Accuracy | Timeouts | Speed |
|---|---|---|---|
| v3 baseline | 81% | 74 | 44s/q |
| v4 timeout fix | 85% | 3 | 36s/q |
| v5 optimized | 97% | 0 | 38s/q |
| Category | v5 Score |
|---|---|
| Core operations | 92% |
| Basic tools | 95% |
| Medium tasks | 100% |
| Advanced reasoning | 98% |
| Hallucination traps | 100% |
| Multi-step chains | 99% |
```
src/
├── main.rs          CLI dispatch + startup
├── cli/mod.rs       clap subcommands
├── config.rs        TOML + env config
├── gateway/         WebSocket server + protocol + handshake
├── agent/runner.rs  LLM streaming + agentic loop + history compression
├── channels/        Telegram (teloxide) + Discord (serenity)
├── tools/           22 tools: fs, shell, search, discord, email, system, github, mcp
├── session/         MemoryManager + SQLite store + graph + embedding + extraction
└── cron/            Scheduled jobs (system, email, GitHub)
```
30 files · 5,918 lines · 7.5 MB binary · Zero external services
| Status | Feature |
|---|---|
| ✅ | Tool calling (22 tools + agentic loop) |
| ✅ | Three-tier memory (vector + graph + mixed scope) |
| ✅ | Telegram + Discord channels |
| ✅ | MCP client (transparent tool routing) |
| ✅ | GitHub integration (scan + auto-PR) |
| ✅ | System monitoring + cron alerts |
| ✅ | Email (IMAP + SMTP) |
| ✅ | SQLite persistence |
| 🔲 | Web UI dashboard |
| 🔲 | Slack / LINE channels |
| 🔲 | RAG (document search) |
| 🔲 | Multi-agent routing |
| 🔲 | WASM plugin system |
| 🔲 | Prometheus metrics |
Community contributions welcome – open an issue or PR.
MIT License ยท v0.4.0
Created by Ad Huang with Claude Code
The framework is there. The rest is up to the community.
