
carapace

A secure, stable Rust alternative to openclaw/moltbot/clawdbot


README

carapace

Stable release available. Carapace is ready for real use on its verified stable paths; partial and in-progress areas are called out explicitly in the docs.

A security-focused, open-source personal AI assistant. Runs on your machine. Works through Signal, Telegram, Discord, Slack, webhooks, and console. Supports Anthropic, OpenAI, Codex, Ollama, Gemini, Vertex AI, Bedrock, and Venice AI. Extensible via WASM plugins and guarded filesystem tools. Written in Rust.

A hardened alternative to openclaw / clawdbot — for when your assistant needs a hard shell.

Features

  • Multi-provider LLM engine — Anthropic, OpenAI (API key), Codex (subscription login), Ollama, Google Gemini, Vertex AI, AWS Bedrock, and Venice AI, with streaming, tool dispatch, and cancellation
  • Multi-channel messaging — Signal, Telegram, Discord, Slack, console, and webhooks
  • Channel activity framework — per-channel typing indicators and after-response read receipts, with Signal as the first activity-enabled built-in channel
  • Tooling and local workspace access — built-in agent tools, guarded filesystem tools for explicit roots, and channel-specific tool schemas
  • Signed plugin runtime — plugins are signature-verified and run with strict permissions and resource limits
  • Secure defaults — local-first binding, locked-down auth behavior, encrypted secret storage, guarded tool execution, root-scoped filesystem access, and OS-level subprocess sandboxing for protected paths
  • Infrastructure — TLS, mTLS, mDNS discovery, config hot-reload, Tailscale integration, Prometheus metrics, and audit logging; multi-node clustering is partially implemented
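
The guarded filesystem tools above confine file access to explicitly configured roots. A minimal sketch of that pattern in Rust — the function name and error handling here are illustrative assumptions, not carapace's actual API:

```rust
use std::io::{Error, ErrorKind};
use std::path::{Path, PathBuf};

// Resolve a requested path against a configured workspace root and reject
// anything that escapes it once symlinks and `..` segments are expanded.
// Hypothetical sketch of the guarded-root pattern; carapace's real guard
// logic is not shown here.
fn resolve_guarded(root: &Path, requested: &str) -> std::io::Result<PathBuf> {
    let root = root.canonicalize()?; // resolve the trusted root once
    let candidate = root.join(requested).canonicalize()?; // resolve the request
    if candidate.starts_with(&root) {
        Ok(candidate)
    } else {
        Err(Error::new(
            ErrorKind::PermissionDenied,
            "path escapes workspace root",
        ))
    }
}
```

Canonicalizing both sides before the prefix check is what makes the guard robust against `..` traversal and symlink tricks.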

Expectations vs OpenClaw

Carapace focuses on a hardened core first. If you're coming from openclaw, the following are planned but not yet on par:

  • Broader channel coverage (e.g., WhatsApp/iMessage/Teams/Matrix/WebChat)
  • Companion apps / nodes (macOS + iOS/Android clients)
  • Browser control and live canvas/A2UI experiences
  • Skills/onboarding UX and multi-agent routing
  • Automatic model/provider failover

Security

Carapace is designed to address the major vulnerability classes reported in the January 2026 openclaw security disclosures:

  • Unauthenticated access — denied by default when credentials are configured; CSRF-protected control endpoints
  • Exposed network ports — localhost-only binding (127.0.0.1)
  • Plaintext secret storage — OS credential store (Keychain / Keyutils / Credential Manager) with AES-256-GCM fallback
  • Skills supply chain — Ed25519 signatures, WASM capability sandbox, and resource limits
  • Prompt injection — prompt guard, inbound classifier, exec approval flow, and tool policies
  • No process sandboxing — OS-level subprocess sandboxing on macOS/Linux/Windows for sandbox-required paths; unsupported paths fail closed
  • SSRF / DNS rebinding — private IP blocking and post-resolution validation
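
The SSRF defense hinges on validating addresses after DNS resolution and connecting to the validated address itself, so a rebinding server cannot swap in a private IP between check and use. A sketch of such a check using only the Rust standard library — this is an assumption about the general technique, not carapace's actual policy, which would also cover ranges like CGNAT and cloud metadata endpoints:

```rust
use std::net::IpAddr;

// Decide whether a *resolved* address is safe to connect to. Runs after DNS
// resolution; the caller must connect to this exact address, not re-resolve
// the hostname. Illustrative sketch only.
fn is_public(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => !(v4.is_private() // 10/8, 172.16/12, 192.168/16
            || v4.is_loopback()             // 127/8
            || v4.is_link_local()           // 169.254/16
            || v4.is_broadcast()
            || v4.is_unspecified()),
        IpAddr::V6(v6) => !(v6.is_loopback()
            || v6.is_unspecified()
            || (v6.segments()[0] & 0xfe00) == 0xfc00   // fc00::/7 unique local
            || (v6.segments()[0] & 0xffc0) == 0xfe80), // fe80::/10 link local
    }
}
```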

See docs/security.md for the full security model. See docs/security-comparison.md for a threat-by-threat comparison with OpenClaw. See docs/feature-status.yaml and docs/feature-evidence.yaml for verified-vs-partial implementation status.

Quick Start

  1. Install cara from the latest release (Linux/macOS/Windows):
  2. Run guided setup:
    cara setup
  3. Start the assistant:
    cara
  4. Verify first-run outcome:
    cara verify --outcome auto --port 18789
  5. Start local interactive chat:
    cara chat

Use /help in chat for REPL commands (/new, /exit, /quit).

If you use cloud models, finish one provider onboarding path before launching:

  • Set one provider key (for example ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, or VENICE_API_KEY), or
  • Use Codex sign-in through cara setup --provider codex or the Control UI, or
  • Use Gemini Google sign-in through cara setup --provider gemini --auth-mode oauth or the Control UI.

Codex and Gemini Google sign-in both require CARAPACE_CONFIG_PASSWORD so the stored auth profile stays encrypted at rest. If you are not sure where to start, choose local-chat as your first outcome, begin with one provider, and add channels only after cara verify --outcome auto passes. If you want Cara to inspect one local project directory, enable the filesystem block for a single workspace root and start with the guarded local project assistant recipe.
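
As a sketch of the "set one provider key" step, the helper below shows how a launcher might detect which cloud provider is configured. The environment variable names are the ones listed above; the function itself is hypothetical and not part of carapace. Taking the lookup as a closure keeps it testable without touching the real process environment:

```rust
// Return the name of the first provider whose API key is present and
// non-empty. Hypothetical helper for illustration only.
fn detect_provider(get: impl Fn(&str) -> Option<String>) -> Option<&'static str> {
    const KEYS: [(&str, &str); 4] = [
        ("ANTHROPIC_API_KEY", "anthropic"),
        ("OPENAI_API_KEY", "openai"),
        ("GOOGLE_API_KEY", "gemini"),
        ("VENICE_API_KEY", "venice"),
    ];
    KEYS.iter()
        .find(|(var, _)| get(var).map_or(false, |v| !v.is_empty()))
        .map(|(_, name)| *name)
}
```

In real use the closure would be `|k| std::env::var(k).ok()`.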

Status

Carapace ships a stable release line. Core paths are tested and verified for routine use, while partial and in-progress areas remain explicitly documented.

  • Working now: setup wizard, local chat (cara chat), token auth enforcement, health/control endpoints (including durable task controls), control UI frontend foundation (status/channels/redacted config editor), Codex subscription onboarding, per-channel activity config with Signal typing/read-receipt flows, and OpenAI-compatible HTTP endpoints.
  • In progress: advanced Control UI flows (pairing/workflow UX), broader channel smoke evidence, and hardened internet-facing deployment guidance.

See docs/feature-status.yaml and docs/feature-evidence.yaml for the current source of truth.

Roadmap

  • Roadmap — what we're building now, next, and later
  • Up next: Anthropic subscription onboarding, guided Bedrock/Vertex onboarding, provider migration/import paths, and advanced Control UI flows
  • Recently shipped: first stable release, long-running assistant MVP (durable queue + autonomy verify), cross-platform subprocess sandboxing, guided setup (cara setup), first-run verifier (cara verify), Gemini onboarding (Google sign-in or API key via CLI and Control UI), Codex onboarding (OpenAI subscription login via CLI and Control UI), Vertex AI provider support, per-channel activity features with Signal typing indicators and after-response read receipts, and guarded filesystem tools for explicit workspace roots

Docs

Contributing

If you want to build from source or contribute, start here:

License

Apache-2.0 — see LICENSE.

Release History

v0.7.0 (urgency: High, 4/13/2026)

  • Added encrypted session artifacts at rest, including `.crypto-manifest` recovery metadata, stricter integrity handling, and fail-closed recovery behavior for encrypted session state.
  • Added named execution routes plus session-level route precedence so requests, sessions, agents, and defaults can target reusable backend definitions instead of repeating raw `provider:model` strings everywhere.
  • Unified shared OAuth onboarding and auth-profile persistence for Codex and Gemini flows, …

v0.6.0 (urgency: High, 4/6/2026)

  • Added guided onboarding and Control UI onboarding/status support for more provider paths, including Anthropic setup-token auth profiles, Bedrock validation, and Vertex setup guidance.
  • Standardized model routing on explicit `provider:model` syntax and expanded Vertex AI support to Anthropic, Meta, Mistral, and Nvidia third-party publishers via `streamRawPredict`.
  • Added migration/import flows for OpenClaw, OpenCode, Aider, and NemoClaw so existing provider configuration can be bro…


Similar Packages

  • ai-notes-api (master@2026-04-21) — No description
  • orbit (v2.6.6) — One API for 20+ LLM providers, your databases, and your files: self-hosted, open-source AI gateway with RAG, voice, and guardrails.
  • gloamy (v0.1.9) — Frontier self-improving AI intern / coworker
  • smg (v1.4.1) — Engine-agnostic LLM gateway in Rust. Full OpenAI & Anthropic API compatibility across SGLang, vLLM, TRT-LLM, OpenAI, Gemini & more. Industry-first gRPC pipeline, KV cache-aware routing, chat history, …
  • Wee-Orchestrator (main@2026-04-21) — 🍀 Self-hosted multi-agent AI orchestrator: chat with Claude, Gemini & Copilot CLI from Telegram, WebEx, or browser. 5 runtimes, 17+ models, task scheduling, skill plugins.