AletheIA is an operating framework for AI-assisted work.
It helps teams coordinate tasks, context, memory, skills, governance, validation, and learnings without letting raw model output act directly on the system.
In simple terms:
model or agent output -> AletheIA -> execution
The name combines Aletheia (the idea of truth as something brought into the open rather than left hidden) with IA, signaling the framework's focus on AI-assisted work.
Conceptually, the name points to the framework's main intention:
- make reasoning more explicit
- make decisions more reviewable
- make validation and learnings less hidden
- keep AI work from moving straight from output to action without an operating layer
In that sense, AletheIA is not about treating AI output as truth. It is about creating the conditions for AI-assisted work to become more inspectable, governable, and revealable before execution.
Many AI workflows still follow a fragile pattern:
prompt -> output -> execution
That can be fast, but it is often weak in:
- traceability
- scope control
- policy enforcement
- quality gates
- reusable learnings
AletheIA introduces an explicit operating layer between output and action.
Its goal is not to slow teams down for the sake of ceremony.
Its goal is to make AI-assisted work:
- clearer
- safer
- more predictable
- more reusable across projects
AletheIA is:
- a framework
- provider-agnostic by design
- focused on safe and explainable AI-assisted work
- built to be reusable across projects
- designed for structured decision-making before execution
AletheIA is not:
- a chatbot
- an app
- a wrapper around a single LLM
- a product-specific toolkit
- a system that assumes automation is always the right answer
AletheIA works through a controlled loop:
intent -> context -> decision -> execution -> validation -> learning
This means the framework helps answer questions such as:
- What exactly is the task?
- What context is truly needed?
- What decision is being made?
- Why is this allowed, blocked, or escalated?
- What validation happened before closure?
- What should be learned from success or failure?
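The controlled loop above can be sketched in a few lines of Python. This is an illustrative sketch only: the class, function names, and decision actions below are assumptions for the sake of the example, not AletheIA's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A reviewable record of one pass through the loop (hypothetical shape)."""
    intent: str
    context: dict
    action: str            # "allow", "block", or "escalate"
    reason: str
    validated: bool = False
    learnings: list = field(default_factory=list)

def decide(intent: str, context: dict) -> Decision:
    """Turn an intent plus explicit context into a structured decision."""
    if not context.get("scope"):
        return Decision(intent, context, "block", "no scope defined")
    if context.get("risk") == "high":
        return Decision(intent, context, "escalate", "high risk requires human approval")
    return Decision(intent, context, "allow", "within scope and risk tolerance")

def run(intent: str, context: dict) -> Decision:
    """intent -> context -> decision -> execution -> validation -> learning."""
    decision = decide(intent, context)
    if decision.action == "allow":
        # execution would happen here; validation records the outcome
        decision.validated = True
    else:
        # anything short of execution still produces a reviewable trace
        decision.learnings.append(f"{decision.action}: {decision.reason}")
    return decision
```

The point of the sketch is that nothing moves straight from output to action: every path through the loop leaves an inspectable decision record.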
The current public alpha is meant to prove that:
- an input can become a structured decision
- governance can block unsafe or poorly framed closure
- failed validation can generate reusable learnings
- the framework can stay small, inspectable, and deterministic
- the core can be reused outside its original pilot project
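One of these goals, turning failed validation into reusable learnings, can be sketched as follows. The field names and function are invented for illustration and are not the framework's real learning schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Learning:
    """A reviewable learning record (hypothetical shape)."""
    task: str
    failure: str
    lesson: str
    recorded: str

def learning_from_failed_validation(task: str, check: str, passed: bool) -> Optional[Learning]:
    """A failed closure check becomes a reusable, reviewable learning."""
    if passed:
        return None
    return Learning(
        task=task,
        failure=f"validation check '{check}' failed before closure",
        lesson=f"add '{check}' to the pre-closure checklist for similar tasks",
        recorded=date.today().isoformat(),
    )
```

The design intent is that failure is not discarded: a blocked closure produces an artifact that later tasks can reuse.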
- engine/: minimal deterministic kernel, governance, and learnings helpers
- schemas/: JSON schemas for framework contracts
- policies/: governance packs and policy definitions
- examples/: canonical examples and golden fixtures
- tests/: contract, golden, e2e, and learning-oriented checks
- starter-pack/: reusable operating guides, checklists, and templates
- docs/: architecture, roadmap, pilot narrative, and migration notes
The alpha starts with a few small examples that make the framework tangible:
- hello-world: the smallest end-to-end path
- low-confidence-review: when ambiguity should stop direct execution
- high-risk-human-gate: when risk requires explicit human approval
- learning-from-failed-validation: when failed closure should also produce reusable learning
- governance: process-oriented rule evaluation using facts and a policy pack
- Clarity over speed
- Control over automation
- Consistency over convenience
- Reuse before duplication
- Learnings must be reviewable
- The framework should stay inspectable and debuggable
This repository is the first public alpha draft of AletheIA.
It was born from real operational work inside the Crisis Monitor project and then extracted into a standalone reusable framework.
Today, this public draft already contains:
- an Alpha 1 baseline for governance, token discipline, durable decisions, enforcement clarity, quality, and learnings
- an explicit Alpha 2 bridge for self-application, pilot conversion, and project extension, now reinforced by real-world Crisis Monitor validation
- an Alpha 3 adoption baseline for getting started, existing-project application, contribution guidance, and starter-pack reuse
- an Alpha 4 handoff baseline for model-agnostic restart packages, project conventions, and semi-automated handoff capture
- an Alpha 5 experimental baseline for structured risk inference in higher-risk work, including concrete example artifacts
- an Alpha 6 distribution baseline for presets, adapters, adoption modes, and cross-surface delivery mappings
- a current operational-composition baseline for work slices, risk-to-gate mapping, stronger restart-package examples, and optional filesystem context-routing experiments
- an early Alpha 7 bootstrap-and-delivery baseline for future tooling contracts that still remains outside the core
In practical terms, that currently includes:
- core contracts
- a minimal kernel
- a governance pack
- an explicit token policy
- a small executable governance baseline
- a lightweight durable decision discipline
- an explicit boundary between behavioral and technical enforcement
- a quality baseline
- a first learnings path
- a reusable starter-pack slice
- an advisory-only model-strategy guidance slice for task shape, capability profile, reasoning depth, and trust/hosting fit
- a first structured risk inference slice
- a first iterative-maintenance governance slice for round-based continuation, regression-aware gates, reusable learning across rounds, and a clearer proportional pattern of proof, contract, health, alert, investigation, and summary
- concrete example artifacts for bounded semantic-risk scenarios
- a first concrete distribution baseline for packaging shape, delivery surface, adoption depth, and meaning preservation across surfaces
- a lightweight operational-composition baseline for work slices, risk-to-gate mapping, compact handoff continuity, iterative maintenance guidance, and optional filesystem context-routing experiments
- a first Alpha 7 tooling baseline across principles, boundaries, output shapes, generator contract, and output contract
If this is your first time here, start with:
- docs/getting-started.md
- docs/00-overview.md
- docs/architecture.md
- docs/governance.md
- docs/token-policy.md
- starter-pack/guides/model-strategy-by-task.md
- docs/durable-decisions.md
- docs/enforcement-boundaries.md
- docs/quality.md
- docs/self-application.md
- docs/pilot-crisis-monitor.md
- docs/pilot-conversion.md
- examples/pilot-conversion/crisis-monitor-real-world-validation.md
- docs/project-extension-pattern.md
- docs/apply-to-existing-project.md
- CONTRIBUTING.md
- starter-pack/
- starter-pack/templates/project-extension-template.md
- starter-pack/templates/project-model-strategy-template.md
- docs/agent-handoffs.md
- starter-pack/guides/agent-handoff-generation.md
- starter-pack/templates/agent-handoff-template.md
- docs/project-handoff-conventions.md
- docs/handoff-capture-pattern.md
- docs/work-slice-pattern.md
- starter-pack/templates/work-slice-template.md
- starter-pack/guides/risk-to-gate-mapping.md
- docs/structured-risk-inference.md
- starter-pack/templates/inference-artifact-template.md
- starter-pack/guides/inference-trigger-guidance.md
- starter-pack/guides/inference-artifact-generation.md
- docs/inference-pilot-scenarios.md
- examples/work-slices/standard-slice/README.md
- examples/handoffs/compact-reviewable-handoff.md
- examples/handoffs/high-stakes-handoff.md
- examples/structured-risk-inference/README.md
- examples/model-strategy/README.md
- examples/
The next steps are:
- keep the current Alpha 6 baseline coherent without losing the Alpha 5 inference baseline, the Alpha 4 handoff baseline, or the Alpha 3 adoption gains
- keep validating the Crisis Monitor pilot and adjacent real slices
- keep converting pilot learnings into framework improvements with the Crisis Monitor pilot as the strongest current Alpha 2 evidence
- keep using AletheIA to improve AletheIA itself
- keep Alpha 5 proportional, selective, and grounded in bounded semantic-risk scenarios
- keep the operational-composition baseline lightweight, teachable, and clearly outside the core contracts
- keep the new model-strategy guidance advisory-only, provider-agnostic in the framework, and translated locally by each project
- keep iterative-maintenance guidance focused on round legibility, regression-aware continuation, reusable learning, and proportional proof/health reads rather than benchmark imitation
- keep experimental workspace context routing optional, inspectable, and clearly non-canonical
- keep Alpha 6 focused on distribution clarity rather than tooling promises
- keep Alpha 7 small, bounded, and tooling-light even as its baseline becomes clearer
- let future domain packs remain future-facing layers rather than near-term core work
- Alpha 1 established the governance and validation baseline.
- Alpha 2 established the pilot, self-application, and conversion bridge.
- Alpha 3 is making the framework easier to adopt and contribute to.
- Alpha 4 is making inter-agent continuity explicit, reusable, and more operational.
- Alpha 5 now provides an experimental baseline for decision-quality in higher-risk work.
- Alpha 6 now provides a first concrete distribution baseline for presets, adapters, adoption modes, and delivery mappings.
- Alpha 7 now includes bootstrap principles, delivery-tooling boundaries, output examples, a first future-facing generator contract, and a first future-facing delivery output contract for optional automation.
- A current operational-composition baseline now makes bounded work, restartable handoffs, risk-sensitive validation, iterative maintenance, and regression-aware continuation more tangible without expanding the core contracts.
- The starter-pack now also includes advisory-only model-strategy guidance for matching task shape, capability profile, reasoning depth, and trust/hosting constraints without turning model choice into framework enforcement.
The first explicit bridge into Alpha 2 is:
- docs/self-application.md
- docs/pilot-crisis-monitor.md
- docs/pilot-conversion.md
- docs/project-extension-pattern.md
Together, these documents explain how AletheIA should evolve itself, learn from pilots, and preserve a clear boundary between framework core and local project extensions.
The current Alpha 3 adoption baseline after this bridge is:
- docs/getting-started.md
- docs/apply-to-existing-project.md
- CONTRIBUTING.md
- starter-pack/templates/project-extension-template.md
The first Alpha 4 handoff baseline now adds:
- docs/agent-handoffs.md
- starter-pack/guides/agent-handoff-generation.md
- starter-pack/templates/agent-handoff-template.md
- docs/project-handoff-conventions.md
- docs/handoff-capture-pattern.md
The current Alpha 5 experimental baseline now adds:
- docs/structured-risk-inference.md
- starter-pack/templates/inference-artifact-template.md
- starter-pack/guides/inference-trigger-guidance.md
- starter-pack/guides/inference-artifact-generation.md
- docs/inference-pilot-scenarios.md
- examples/structured-risk-inference/README.md
The current and next layers should stay clearly separated by role:
- Alpha 5: an experimental baseline for decision-quality through structured, evidence-oriented inference in higher-risk work
- Alpha 6: a first concrete distribution baseline for packaging the same framework meaning across environments
- Alpha 7: optional bootstrap and delivery tooling after the distribution model is already stable
The current Alpha 6 baseline now adds:
- docs/distribution-presets-adapters.md
- docs/preset-taxonomy.md
- docs/adapter-taxonomy.md
- docs/adoption-mode-guidance.md
- docs/delivery-mapping-examples.md
A separate future track will also shape reusable domain governance packs. The first planned packs in that track are:
- Web App Security & Trust Boundaries Pack
- AI Agent Security & Prompt Injection Pack
These future phases and domain packs are about extending AletheIA's reach and delivery discipline. They are not about redefining the framework core.
The domain packs should sit between the reusable framework core and project-local rules:
AletheIA core -> domain governance pack -> project extension
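The layering above can be sketched as a simple override order, where later layers refine or relax earlier ones. The rule keys and pack contents below are invented for illustration; real policy packs live under policies/ and may be shaped differently.

```python
# Hypothetical layering sketch: project-local rules override a domain
# governance pack, which overrides the reusable framework core.
core_pack = {"require_validation": True, "max_risk": "medium"}
domain_pack = {"max_risk": "low", "require_threat_model": True}   # e.g. a security pack
project_extension = {"require_threat_model": False}               # explicit local exception

def effective_policy(core: dict, domain: dict, project: dict) -> dict:
    """Merge policy layers; later layers win: core -> domain pack -> project."""
    merged = dict(core)
    merged.update(domain)
    merged.update(project)
    return merged
```

The useful property of this shape is that every local deviation from the core is visible as an explicit override rather than a silent rewrite of the framework.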
See also:
- docs/structured-risk-inference.md
- starter-pack/templates/inference-artifact-template.md
- docs/distribution-presets-adapters.md
- docs/preset-taxonomy.md
- docs/adapter-taxonomy.md
- docs/adoption-mode-guidance.md
- docs/delivery-mapping-examples.md
- docs/bootstrap-principles.md
- docs/delivery-tooling-boundaries.md
- docs/bootstrap-output-examples.md
- docs/domain-governance-packs.md
- docs/web-app-security-trust-boundaries.md
- docs/ai-agent-security-prompt-injection.md
If you want to contribute to AletheIA, start with:
CONTRIBUTING.md
This is the fastest way to understand what kinds of changes belong in the framework core, what should stay local, and how to contribute without inflating the project.
Alpha 1 now includes a small executable governance check:
bash scripts/check-governance.sh
This is intentionally modest.
It does not try to be a heavy enforcement layer yet. It only proves that AletheIA can move from governance prose into a small technical baseline.
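In the spirit of that baseline, a process-oriented rule evaluation over facts and a policy pack might look like the sketch below. This is not the actual check script; the rule shape and fact keys are assumptions made for the example.

```python
def evaluate(facts: dict, policy_pack: list) -> list:
    """Return the ids of violated rules; an empty list means closure may proceed."""
    violations = []
    for rule in policy_pack:
        # a rule passes only when the declared fact matches the expected value
        if facts.get(rule["fact"]) != rule["expected"]:
            violations.append(rule["id"])
    return violations

# Hypothetical policy pack with two closure gates
policy_pack = [
    {"id": "G1-validation-ran", "fact": "validation_ran", "expected": True},
    {"id": "G2-scope-declared", "fact": "scope_declared", "expected": True},
]
```

Because evaluation is a pure function of facts plus rules, the same check stays deterministic and easy to replay when a closure decision is questioned later.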
