
Description

Declarative Agent Orchestration. Ship while you sleep.

README

Bernstein

Orchestrate any AI coding agent. Any model. One command.

Bernstein TUI — live task dashboard

CI · codecov · GitHub stars · PyPI · npm · VS Marketplace · Python 3.12+ · License · MCP Compatible · A2A Compatible

Documentation · Getting Started · Glossary · Limitations

Wall of fame

"lol, good luck, keep vibecoding shit that you have no idea about xD" โ€” PeaceFirePL, Reddit


Bernstein takes a goal, breaks it into tasks, assigns them to AI coding agents running in parallel, verifies the output, and merges the results. You come back to working code, passing tests, and a clean git history.

No framework to learn. No vendor lock-in. Agents are interchangeable workers — swap any agent, any model, any provider. The orchestrator itself is deterministic Python code. Zero LLM tokens on scheduling.

pip install bernstein
bernstein -g "Add JWT auth with refresh tokens, tests, and API docs"

Also available via pipx, uv tool install, brew, dnf copr, and npx bernstein-orchestrator. See install options.

Supported agents

Bernstein auto-discovers installed CLI agents. Mix them in the same run — cheap local models for boilerplate, heavy cloud models for architecture.

| Agent | Models | Install |
| --- | --- | --- |
| Claude Code | opus 4.6, sonnet 4.6, haiku 4.5 | `npm install -g @anthropic-ai/claude-code` |
| Codex CLI | gpt-5.4, o3, o4-mini | `npm install -g @openai/codex` |
| Gemini CLI | gemini-3-pro, 3-flash | `npm install -g @google/gemini-cli` |
| Cursor | sonnet 4.6, opus 4.6, gpt-5.4 | Cursor app |
| Aider | Any OpenAI/Anthropic-compatible | `pip install aider-chat` |
| Ollama + Aider | Local models (offline) | `brew install ollama` |
| Amp, Cody, Continue.dev, Goose, Kilo, Kiro, OpenCode, Qwen, Roo Code, Tabby | Various | See docs |
| Generic | Any CLI with `--prompt` | Built-in |
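Per-role agent mixing is configured in `bernstein.yaml`. The `role_model_policy` shape below follows the block shown in the v1.6.6 release notes; the role names, adapters, and models here are illustrative, not a documented default:

```yaml
# Illustrative per-role adapter mapping (shape from the v1.6.6 release notes;
# role names and model choices are assumptions)
role_model_policy:
  boilerplate:
    cli: ollama        # cheap local model for routine work
    model: qwen3-coder
  architecture:
    cli: claude        # heavy cloud model for design-critical tasks
    model: opus-4.6
```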

Tip

Run bernstein --headless for CI pipelines — no TUI, structured JSON output, non-zero exit on failure.

Quick start

cd your-project
bernstein init                    # creates .sdd/ workspace + bernstein.yaml
bernstein -g "Add rate limiting"  # agents spawn, work in parallel, verify, exit
bernstein live                    # watch progress in the TUI dashboard
bernstein stop                    # graceful shutdown with drain

For multi-stage projects, define a YAML plan:

bernstein run plan.yaml           # skips LLM planning, goes straight to execution
bernstein run --dry-run plan.yaml # preview tasks and estimated cost
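For illustration, a plan might look like the following. The schema is hypothetical — every field name here is invented for the sketch; check the documentation for the real plan format:

```yaml
# Hypothetical plan shape -- field names are illustrative, not Bernstein's real schema
goal: "Add rate limiting"
tasks:
  - id: middleware
    role: backend
    prompt: "Implement a token-bucket rate limiter middleware"
    files: [app/middleware.py]
    verify: "pytest tests/test_middleware.py"
  - id: docs
    role: docs
    prompt: "Document the rate-limit configuration"
    depends_on: [middleware]
```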

How it works

  1. Decompose — the manager breaks your goal into tasks with roles, owned files, and completion signals.
  2. Spawn — agents start in isolated git worktrees, one per task; the main branch stays clean.
  3. Verify — the janitor checks concrete signals: tests pass, files exist, lint is clean, types check.
  4. Merge — verified work lands in main; failed tasks are retried or routed to a different model.

The orchestrator is a Python scheduler, not an LLM. Scheduling decisions are deterministic, auditable, and reproducible.
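The deterministic-scheduling idea can be sketched in a few lines (illustrative only, not Bernstein's actual code): a task is runnable when its dependencies are done and its owned files don't overlap a running task's, and ties are broken by a stable sort so the same input always yields the same plan.

```python
# Illustrative deterministic scheduler sketch -- not Bernstein's real implementation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    files: frozenset            # files this task owns exclusively
    deps: frozenset = frozenset()

def schedule(tasks, done, running):
    """Return tasks that can start now, in a stable (deterministic) order."""
    busy = set().union(*(t.files for t in running)) if running else set()
    ready = [
        t for t in tasks
        if t.name not in done
        and t not in running
        and t.deps <= done          # all dependencies finished
        and not (t.files & busy)    # no file-ownership conflict
    ]
    return sorted(ready, key=lambda t: t.name)  # same input -> same plan

tasks = [
    Task("api", frozenset({"app/api.py"})),
    Task("docs", frozenset({"docs/auth.md"}), deps=frozenset({"api"})),
    Task("tests", frozenset({"tests/test_api.py"}), deps=frozenset({"api"})),
]
print([t.name for t in schedule(tasks, done=set(), running=[])])    # ['api']
print([t.name for t in schedule(tasks, done={"api"}, running=[])])  # ['docs', 'tests']
```

Because no LLM is consulted, the schedule can be replayed and audited after the fact.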

Capabilities

Core orchestration — parallel execution, git worktree isolation, janitor verification, quality gates (lint + types + PII scan), cross-model code review, circuit breaker for misbehaving agents, token growth monitoring with auto-intervention.
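A circuit breaker for a misbehaving agent can be as simple as a consecutive-failure counter; this sketch is illustrative, and the threshold and reset policy are assumptions rather than Bernstein's real configuration:

```python
# Minimal circuit-breaker sketch (illustrative; thresholds are assumptions).
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        # Once open, the orchestrator stops sending tasks to this agent.
        return self.failures >= self.max_failures

    def record(self, success):
        # Any success resets the streak; failures accumulate.
        self.failures = 0 if success else self.failures + 1

breaker = CircuitBreaker(max_failures=2)
breaker.record(success=False)
breaker.record(success=False)
print(breaker.open)  # True -> route further tasks to a different agent
```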

Intelligence — contextual bandit router learns optimal model/effort pairs over time. Knowledge graph for codebase impact analysis. Semantic caching saves tokens on repeated patterns. Cost anomaly detection with Z-score flagging.
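Z-score cost flagging means comparing a task's spend against the historical mean in units of standard deviation; a minimal sketch, with an illustrative threshold of 3σ:

```python
# Cost-anomaly sketch: flag spend more than z_max standard deviations above
# the historical mean. Threshold and history handling are illustrative.
from statistics import mean, stdev

def is_cost_anomaly(history, cost, z_max=3.0):
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return cost != mu
    return (cost - mu) / sigma > z_max

history = [0.9, 1.1, 1.0, 1.2, 0.8]   # past per-task spend in USD
print(is_cost_anomaly(history, 1.1))  # False -- within normal variation
print(is_cost_anomaly(history, 9.0))  # True  -- runaway spend, flag it
```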

Enterprise — HMAC-chained tamper-evident audit logs. Policy limits with fail-open defaults and multi-tenant isolation. PII output gating. OAuth 2.0 PKCE. SSO/SAML/OIDC auth. WAL crash recovery — no silent data loss.
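The idea behind an HMAC-chained audit log is that each record's MAC covers the previous record's MAC, so editing any entry breaks every later link. A sketch of the technique (key handling simplified; not Bernstein's actual log format):

```python
# HMAC-chained tamper-evident log sketch (illustrative record format).
import hashlib
import hmac
import json

def append(log, key, event):
    prev = log[-1]["mac"] if log else "genesis"
    msg = (prev + json.dumps(event, sort_keys=True)).encode()
    log.append({"event": event,
                "mac": hmac.new(key, msg, hashlib.sha256).hexdigest()})

def verify(log, key):
    prev = "genesis"
    for rec in log:
        msg = (prev + json.dumps(rec["event"], sort_keys=True)).encode()
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["mac"], expected):
            return False  # chain broken at this record
        prev = rec["mac"]
    return True

key = b"audit-key"
log = []
append(log, key, {"task": "t1", "action": "merge"})
append(log, key, {"task": "t2", "action": "retry"})
print(verify(log, key))             # True
log[0]["event"]["action"] = "skip"  # tamper with an early record
print(verify(log, key))             # False -- tampering is evident
```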

Observability — Prometheus /metrics, OTel exporter presets, Grafana dashboards. Per-model cost tracking (bernstein cost). Terminal TUI and web dashboard. Agent process visibility in ps.
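Per-model cost tracking is conceptually a roll-up over task spend records; a minimal sketch (the record shape here is an assumption, not the real `bernstein cost` schema):

```python
# Per-model cost roll-up sketch; record fields are illustrative.
from collections import defaultdict

def cost_by_model(records):
    totals = defaultdict(float)
    for r in records:
        totals[r["model"]] += r["usd"]
    return dict(totals)

records = [
    {"model": "sonnet-4.6", "usd": 0.5},
    {"model": "o4-mini", "usd": 0.125},
    {"model": "sonnet-4.6", "usd": 0.25},
]
print(cost_by_model(records))  # {'sonnet-4.6': 0.75, 'o4-mini': 0.125}
```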

Ecosystem — MCP server mode, A2A protocol support, GitHub App integration, pluggy-based plugin system, multi-repo workspaces, cluster mode for distributed execution, self-evolution via --evolve.

Full feature matrix: FEATURE_MATRIX.md

How it compares

| Feature | Bernstein | CrewAI | AutoGen | LangGraph |
| --- | --- | --- | --- | --- |
| Orchestrator | Deterministic code | LLM-driven | LLM-driven | Graph + LLM |
| Works with | Any CLI agent (18+) | Python SDK classes | Python agents | LangChain nodes |
| Git isolation | Worktrees per agent | No | No | No |
| Verification | Janitor + quality gates | No | No | Conditional edges |
| Cost tracking | Built-in | No | No | No |
| State model | File-based (.sdd/) | In-memory | In-memory | Checkpointer |
| Self-evolution | Built-in | No | No | No |
| Declarative plans (YAML) | Yes | Partial | No | Yes |
| Model routing per task | Yes | No | No | Manual |
| MCP support | Yes | No | No | No |
| Agent-to-agent chat | No | Yes | Yes | No |
| Web UI | No | Yes | Yes | Partial |
| Cloud hosted option | No | Yes | No | Yes |
| Built-in RAG/retrieval | No | Yes | Yes | Yes |

Last verified: 2026-04-07. See full comparison pages for detailed feature matrices.

Monitoring

bernstein live       # TUI dashboard
bernstein dashboard  # web dashboard
bernstein status     # task summary
bernstein ps         # running agents
bernstein cost       # spend by model/task
bernstein doctor     # pre-flight checks
bernstein recap      # post-run summary
bernstein trace <ID> # agent decision trace
bernstein explain <cmd>  # detailed help with examples
bernstein dry-run    # preview tasks without executing
bernstein dep-impact # API breakage + downstream caller impact
bernstein aliases    # show command shortcuts
bernstein config-path    # show config file locations
bernstein init-wizard    # interactive project setup
bernstein fingerprint build --corpus-dir ~/oss-corpus  # build local similarity index
bernstein fingerprint check src/foo.py                 # check generated code against the index
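Conceptually, similarity fingerprinting compares generated code against an indexed corpus; one common approach is Jaccard similarity over token shingles, sketched below. This is an illustration of the general technique, not Bernstein's actual `fingerprint` implementation:

```python
# Token-shingle Jaccard similarity sketch (illustrative technique only).
def shingles(text, k=3):
    toks = text.split()
    return {tuple(toks[i:i + k]) for i in range(max(len(toks) - k + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

corpus_snippet = "def add(a, b): return a + b"
generated      = "def add(a, b): return a + b"
print(round(jaccard(corpus_snippet, generated), 2))  # 1.0 -> likely copied
```

A real index would hash shingles from the whole corpus up front so each check is a lookup rather than a full scan.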

Install

| Method | Command |
| --- | --- |
| pip | `pip install bernstein` |
| pipx | `pipx install bernstein` |
| uv | `uv tool install bernstein` |
| Homebrew | `brew tap chernistry/bernstein && brew install bernstein` |
| Fedora / RHEL | `sudo dnf copr enable alexchernysh/bernstein && sudo dnf install bernstein` |
| npm (wrapper) | `npx bernstein-orchestrator` |

Editor extensions: VS Marketplace · Open VSX

Contributing

PRs welcome. See CONTRIBUTING.md for setup and code style.

Support

If Bernstein saves you time: GitHub Sponsors ยท Open Collective

License

Apache License 2.0


"To achieve great things, two things are needed: a plan and not quite enough time." โ€” Leonard Bernstein

Release History

v1.8.12 (High, 2026-04-19)

Bug fixes:
- **persistence:** handle Windows OSError in _pid_alive

Full changelog: https://github.com/chernistry/bernstein/compare/v1.8.11...v1.8.12

v1.8.8 (High, 2026-04-19)

CI / Infrastructure:
- **release:** strip internal ticket refs from generated notes

Chores:
- **deps:** bump docker/login-action from 3.7.0 to 4.1.0
- **deps:** bump actions/setup-python from 5.6.0 to 6.2.0
- **deps:** bump actions/create-github-app-token from 2.2.2 to 3.1.1
- **deps:** bump docker/build-push-action from 6.19.2 to 7.1.0
- **deps:** bump reviewdog/action-actionlint

Full changelog: https://github.com/chernistry/bernstein/compare/v1.8.7...v1.8.8

v1.7.4 (High, 2026-04-14)

Patch release. Changes since previous version:
- fc29ca1e chore: auto-bump to v1.7.4

v1.7.1 (High, 2026-04-14)

Patch release. Changes since previous version:
- 71b4b034 chore: auto-bump to v1.7.1
- 1092e9e0 Merge pull request #781 from chernistry/dependabot/npm_and_yarn/packages/vscode/npm_and_yarn-85af2c71bb
- 4c2445c8 chore(deps-dev): bump follow-redirects
- 8d82e03d ci: fix npm publish when version already matches tag

v1.6.6 (High, 2026-04-11)

Multi-adapter orchestration: Bernstein now runs with **any combination of CLI agents** — no Claude Code dependency required. Configure per-role adapters in `bernstein.yaml`:

```yaml
role_model_policy:
  backend:
    cli: qwen
    model: qwen3.6-plus
  security:
    cli: gemini
    model: gemini-3.1-pro-preview
```

The internal scheduler LLM also accepts any adapter (`internal_llm_provider: gemini`). A deep audit also found and fixed 20 severe orchestration bugs.

v1.5.4 (High, 2026-04-08)

Highlights:
- **Spawn error classification** — the spawner now categorizes failures (rate limit, missing adapter, permission denied, resource exhausted) and uses the category to decide retry strategy: fail-fast for permanent errors, fallback for transient ones. (#594)
- **EU AI Act compliance engine** — new compliance module with risk classification, conformity assessment templates, and evidence export for regulated environments.
- **WebSocket frontend + API versioning** — live WebSocket updates.


Similar Packages

- Enterprise-Multi-AI-Agent-Systems-🤖 (main@2026-04-21): Build and deploy scalable Multi-AI Agent systems with LangGraph and Groq LLMs to enhance intelligence across enterprise applications.
- mcp-audit (main@2026-04-21): 🌟 Track token consumption in real-time with MCP Audit. Diagnose context bloat and unexpected spikes across MCP servers and tools efficiently.
- mcp-rag-agent (main@2026-04-21): 🔍 Build a production-ready RAG system that combines LangGraph and MCP integration for precise, context-aware AI-driven question answering.
- solace-agent-mesh (1.18.40): An event-driven framework designed to build and orchestrate multi-agent AI systems. It enables seamless integration of AI agents with real-world data sources and systems, facilitating complex, multi-s…
- arifOS (v2026.04.07): Constitutional MCP kernel for governed AI execution. AAA architecture: Architect · Auditor · Agent. Built for the open-source agentic era.