Autonomous coding that closes the loop: plan, execute, validate, finish.
Grinta is an open-source, local-first autonomous coding agent built for real repository work. It reads code, plans multi-step execution, edits files, runs commands, validates results, and finishes only when completion criteria are satisfied.
Grinta focuses on task completion integrity, not just code generation. The runtime combines orchestration safeguards, local execution policy checks, and durable session state so long-running tasks can recover, self-correct, and stay within clear operating boundaries.
- Autonomous coding workflows and task completion gates
- Session orchestration, retries, stuck detection, and circuit breakers
- Local-first execution with policy-driven safety controls
- Model-agnostic provider routing (cloud and local)
- Context compaction and durable run-state recovery for long sessions
- Task completion, not just file edits.
- Local-first runtime with strong safety guardrails.
- Durable long-session behavior with event-oriented state and recovery.
- Model-agnostic inference with direct provider support and OpenAI-compatible routing.
- Strong stuck detection and circuit-breaker behavior to avoid silent runaway loops.
Grinta currently executes actions on the local host.
`hardened_local` adds stricter local execution policy checks. `hardened_local` is not sandboxing and not process isolation.
Use Grinta for trusted local workflows and repositories.
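For illustration, a `hardened_local`-style check can be thought of as deny-list filtering of command lines before execution. The names and rules below are hypothetical, not Grinta's actual policy API:

```python
import shlex

# Illustrative deny list; Grinta's real policy rules will differ.
BLOCKED_COMMANDS = {"rm", "sudo", "mkfs", "dd"}
BLOCKED_FLAGS = {"--no-preserve-root"}

def command_allowed(command_line: str) -> bool:
    """Return True if the command passes a simple local policy check.

    This is policy filtering on the local host, not sandboxing: an
    allowed command still runs with the user's full privileges.
    """
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # unparseable input is rejected outright
    if not tokens:
        return False
    if tokens[0] in BLOCKED_COMMANDS:
        return False
    return not any(tok in BLOCKED_FLAGS for tok in tokens[1:])
```

Because the check runs before the command ever reaches the shell, a rejection costs nothing; but nothing here contains a command once it is allowed to run.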
```mermaid
graph TB
    User([User]) --> CLI["CLI: backend.cli.entry"]
    CLI --> Orch[SessionOrchestrator]
    Orch --> Engine["Engine<br/>planning + tool intent"]
    Orch --> Pipe["Operation pipeline<br/>safety + validation"]
    Pipe --> Runtime["RuntimeExecutor<br/>local execution"]
    Runtime --> Obs[Observations]
    Obs --> Orch
    Orch --> Ledger["EventStream / durability"]
    Orch --> FinishGate["Task validation<br/>before finish"]
```
See docs/ARCHITECTURE.md for implementation details.
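The EventStream node in the diagram records session history durably. One common realization is an append-only, replayable event log; the sketch below is illustrative (class and method names are assumptions, not Grinta's actual types):

```python
import json
from dataclasses import dataclass, field

@dataclass
class EventStream:
    """Append-only event log; state is recovered by replaying events."""
    events: list = field(default_factory=list)

    def append(self, kind: str, payload: dict) -> None:
        self.events.append({"kind": kind, "payload": payload})

    def to_jsonl(self) -> str:
        # One JSON object per line, suitable for a durable log file.
        return "\n".join(json.dumps(e) for e in self.events)

    @classmethod
    def from_jsonl(cls, text: str) -> "EventStream":
        """Rebuild session state from a persisted log after a crash."""
        stream = cls()
        for line in text.splitlines():
            if line.strip():
                stream.events.append(json.loads(line))
        return stream

# Recovery round trip: persist, then replay.
stream = EventStream()
stream.append("plan", {"steps": 3})
stream.append("tool_run", {"command": "pytest"})
recovered = EventStream.from_jsonl(stream.to_jsonl())
```

Because the log is append-only, a crashed or interrupted run can be resumed from its last recorded event rather than restarted from scratch.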
```powershell
.\START_HERE.ps1
```

- Install dependencies:

  ```sh
  uv sync
  ```

- Create local settings:

  ```sh
  cp settings.template.json settings.json
  ```

- Start the CLI:

  ```sh
  uv run python -m backend.cli.entry
  ```

Windows:

```powershell
.\start_backend.ps1
```

Cross-platform:

```sh
uv run python -m backend.execution.action_execution_server 3000 --working-dir .
```

Main endpoints:

```sh
./docker_start.sh
```

Windows:

```powershell
.\DOCKER_START.ps1
```

Minimal config:

```json
{
  "llm_provider": "openai",
  "llm_model": "openai/gpt-4o-mini",
  "llm_api_key": "sk-...",
  "llm_base_url": ""
}
```

Common model ids:

- `openai/gpt-4o-mini`
- `anthropic/claude-sonnet-4-20250514`
- `google/gemini-2.5-pro`
- `ollama/llama3.2`
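Model ids follow a `provider/model` naming convention, which makes prefix-based routing straightforward. A hedged sketch of that idea (the default-URL table is illustrative; real values come from `settings.json` via `llm_base_url`):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split 'provider/model' into its parts.

    e.g. 'openai/gpt-4o-mini' -> ('openai', 'gpt-4o-mini')
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

# Illustrative provider -> OpenAI-compatible base URL mapping;
# an empty llm_base_url falls through to a per-provider default.
DEFAULT_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "ollama": "http://localhost:11434/v1",
}

def resolve_base_url(model_id: str, override: str = "") -> str:
    """Prefer the explicit override, then the provider default."""
    provider, _ = split_model_id(model_id)
    return override or DEFAULT_BASE_URLS.get(provider, "")
```

Keeping the provider prefix inside the model id means one settings key selects both the backend and the model, whether cloud or local.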
Plan -> execute -> observe -> validate -> finish.
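That loop can be sketched as follows (the step callables are placeholders standing in for Grinta's internals, and the iteration cap stands in for its richer safeguards):

```python
def run_task(plan_fn, execute_fn, observe_fn, validate_fn, max_iters: int = 10) -> str:
    """Plan -> execute -> observe -> validate -> finish, with a hard iteration cap."""
    for _ in range(max_iters):
        step = plan_fn()                  # decide the next action
        result = execute_fn(step)         # run it locally
        observation = observe_fn(result)  # collect output / diffs / errors
        if validate_fn(observation):      # finish only when criteria hold
            return "finished"
    return "stopped: iteration cap reached"
```

The important property is that "finished" is only reachable through validation; running out of budget produces an explicit stopped state instead of a silent success.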
Grinta uses compactor strategies to keep long sessions coherent under context limits.
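A compactor strategy can be as simple as keeping the task head plus the most recent tail of the transcript under a size budget. A sketch of that idea (the heuristic is illustrative, not Grinta's actual strategy):

```python
def compact(messages: list[str], budget: int, keep_head: int = 1) -> list[str]:
    """Drop middle messages until the total length fits the budget.

    Keeps the first `keep_head` messages (e.g. the task statement) and
    as many of the most recent messages as the budget allows, marking
    the elided middle so the model knows history was dropped.
    """
    if sum(len(m) for m in messages) <= budget:
        return messages
    head = messages[:keep_head]
    used = sum(len(m) for m in head)
    tail: list[str] = []
    for msg in reversed(messages[keep_head:]):
        if used + len(msg) > budget:
            break
        tail.insert(0, msg)
        used += len(msg)
    return head + ["[...compacted...]"] + tail
```

Real compactors often summarize the elided middle rather than drop it, but the shape is the same: preserve the task anchor and the recent working context.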
Stuck detection, retry/recovery flows, and circuit breakers are built into orchestration.
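For illustration, stuck detection and a circuit breaker can be reduced to two small primitives (the thresholds and names here are assumptions, not Grinta's implementation):

```python
class CircuitBreaker:
    """Trips after `threshold` consecutive failures; a success resets it."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def record(self, success: bool) -> None:
        self.failures = 0 if success else self.failures + 1

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

def looks_stuck(observations: list[str], window: int = 3) -> bool:
    """Flag a loop that keeps producing the identical observation."""
    tail = observations[-window:]
    return len(tail) == window and len(set(tail)) == 1
```

Together they bound two failure modes: repeated errors (the breaker opens) and repeated no-progress iterations (the stuck detector fires), so the orchestrator can retry differently or stop instead of looping forever.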
Task validation can block finish calls when tracked work is incomplete.
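Conceptually, the finish gate is a predicate over tracked work items; a minimal sketch (the `TaskItem` shape is an assumption):

```python
from dataclasses import dataclass

@dataclass
class TaskItem:
    description: str
    done: bool = False

def can_finish(items: list[TaskItem]) -> tuple[bool, list[str]]:
    """Block finish while any tracked item is incomplete.

    Returns (allowed, remaining descriptions) so the agent can report
    exactly what is still blocking completion.
    """
    remaining = [item.description for item in items if not item.done]
    return (not remaining, remaining)
```

Returning the remaining items, rather than a bare boolean, is what lets a blocked finish turn into the next planning step instead of a dead end.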
- User Guide
- Quick Start
- Troubleshooting
- Architecture
- Developer Guide
- Vocabulary
- The Book of Grinta
- API Reference
- Contributing
See CONTRIBUTING.md.
MIT — see LICENSE.

