
ainativelang


Description

AINL helps turn AI from "a smart conversation" into "a structured worker." It is designed for teams building AI workflows that need multiple steps, state and memory, tool use, repeatable execution, validation and control, and lower dependence on long prompt loops. AINL is a compact, graph-canonical, AI-native programming system; see the README below.

README

AI Native Lang (AINL)

AINL logo

Badges: Python 3.10+ · Latest tag · Conformance status · Auto-sync (OpenClaw/NemoClaw/Clawflows/Agency-Agents) · License: Apache-2.0 · MCP v1 · ZeroClaw Skill: AINL · OpenClaw Skill: AINL · Hermes Agent: AINL · Graph-first deterministic IR · AI workflow language

AI-led co-development project, human-initiated by Steven Hooley (x.com/sbhooley, stevenhooley.com, linkedin.com/in/sbhooley). Attribution details: docs/PROJECT_ORIGIN_AND_ATTRIBUTION.md and tooling/project_provenance.json.


Start here — pick your path

Just want something working on your desktop in under 3 minutes?

ArmaraOS is the desktop agent OS built on AI Native Lang (AINL) — download once, install, and your agents are live with a full dashboard. No terminal, no config files, just plug in your API key.

Download ArmaraOS — ainativelang.com macOS · Windows · Linux — free to start

Autonomous agents, 7 pre-built Hands (researcher, lead gen, clip editor, and more), 40 channel adapters (Telegram, Discord, Slack, WhatsApp…), 27 LLM providers, 16 security layers — all in a single ~32 MB binary.


Already have an AI agent? Add AINL in one command.

AINL installs directly into OpenClaw, ZeroClaw, Hermes, Claude Code, and any MCP-compatible agent. After install your agent gets deterministic, reusable workflows — and you get real token savings immediately.

| Your agent | Install command | How-to guide |
| --- | --- | --- |
| OpenClaw | ainl install-mcp --host openclaw | ainativelang.com/install |
| ZeroClaw | zeroclaw skills install https://github.com/sbhooley/ainativelang/tree/main/skills/ainl | ainativelang.com/install |
| Hermes Agent | ainl install-mcp --host hermes | ainativelang.com/install |
| Claude Code | pip install 'ainativelang[mcp]' → add ainl-mcp to MCP config | ainativelang.com/mcp |
| Any MCP host | pip install 'ainativelang[mcp]' → run ainl-mcp (stdio) | ainativelang.com/mcp |

After install, ask your agent: "Use AINL to build this workflow" — it compiles once, runs many times without re-spending tokens on orchestration.
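For the "any MCP host" row, the stdio server entry usually looks like the sketch below. The exact config schema and file location vary by host, and the server name "ainl" is our own illustrative label, so treat this as a template rather than a definitive config:

```python
import json

# Illustrative MCP host config entry for the ainl-mcp stdio server.
# The surrounding "mcpServers" shape follows the common MCP host
# convention; check your host's docs for its actual schema and path.
config = {
    "mcpServers": {
        "ainl": {                   # server name: any label works
            "command": "ainl-mcp",  # stdio entry point from ainativelang[mcp]
            "args": [],
        }
    }
}

print(json.dumps(config, indent=2))
```

Paste the resulting JSON into your host's MCP configuration (merging with any existing servers).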

Token savings at a glance:

| Workload | Typical savings |
| --- | --- |
| Recurring monitors, digests, scheduled jobs | 90–95% fewer tokens vs prompt loops — field analysis (OpenClaw monitors, compile-once / run-many) |
| Multi-step automations and workflows | 2–5× reduction per task — Apollo cost report (OpenRouter lifetime usage, architectural efficiency) |
| Simple one-off tasks | Smaller but still positive — early OpenClaw operator field report (Plushify) |

The reason: AINL compiles your workflow once. The runtime executes it deterministically — no LLM re-generation on each run, no prompt bloat, no orchestration chatter. The model authors the graph once; the runtime runs it on every invocation.
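As a back-of-envelope sketch of the compile-once / run-many economics (every number below is invented for illustration; the linked benchmarks hold the measured figures):

```python
# Hypothetical illustration only: real savings depend on your prompts,
# model, and workflow. See BENCHMARK.md / docs/benchmarks.md for data.
ORCHESTRATION_TOKENS_PER_RUN = 4000  # prompt loop: LLM re-plans every run
AUTHORING_TOKENS_ONCE = 6000         # AINL: LLM authors the graph once
RUNTIME_TOKENS_PER_RUN = 200         # deterministic runtime; small residual LLM use

def prompt_loop_cost(runs: int) -> int:
    return ORCHESTRATION_TOKENS_PER_RUN * runs

def compile_once_cost(runs: int) -> int:
    return AUTHORING_TOKENS_ONCE + RUNTIME_TOKENS_PER_RUN * runs

runs = 300  # e.g. an hourly monitor over ~2 weeks
saved = 1 - compile_once_cost(runs) / prompt_loop_cost(runs)
print(f"{saved:.0%} fewer tokens over {runs} runs")
```

The fixed authoring cost amortizes quickly, which is why recurring monitors show the largest savings in the table above.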

Token savings breakdown and benchmarks →


Here for the programming language itself?

AINL is a compact, graph-canonical AI workflow language. You write programs in .ainl files, compile them to a deterministic IR graph, and execute them without prompt loops.

Jump to Get Started (3 minutes) ↓ · Docs → · Quick start → · What is AINL? →


This GitHub repo is the technical source of truth for AINL: compiler, runtime, canonical graph IR, CLI, HTTP runner, MCP server, docs, examples, and conformance suite. For the high-level product story, use cases, and commercial/enterprise paths, visit ainativelang.com.

Open-core boundary

| Area | Status | Notes |
| --- | --- | --- |
| Core DSL, compiler, runtime, ainl validate/check/inspect/visualize | Open (Apache-2.0) | Language legitimacy; essential tooling |
| MCP server / bridge (ainl-mcp, scripts/ainl_mcp_server.py) | Open & pluggable | Any MCP host; bring your own compliant LLMs |
| OpenSpace / Lead AI style flows | Open via BYO-LLM | Implemented via MCP; operators choose their models |
| Enterprise audit/policy packs, managed ops, deployment kits | Paid / optional | Governance, SLA-backed support, monitored hosted runtime |

Full boundary details: docs/OPEN_CORE_DECISION_SHEET.md

New in v1.4.4

  • PyPI ainativelang 1.4.4: version surfaces aligned (pyproject.toml, RUNTIME_VERSION, CITATION.cff, tooling/bot_bootstrap.json).
  • Solana client emit: emit_solana_client header now uses live RUNTIME_VERSION so generated clients never drift from the installed runtime.

New in v1.4.3

  • core.* builtins expanded: EQ/NEQ/GT/LT/GTE/LTE comparisons, TRIM/STRIP/LSTRIP/RSTRIP whitespace, STARTSWITH/ENDSWITH, KEYS/VALUES, STR/INT/FLOAT/BOOL coercions — all now implemented in runtime/adapters/builtins.py. These verbs were already in the validator contract; now they work at runtime too.
  • MCP ainl_compile returns frame_hints[]: list of {name, type, source} entries so agents can auto-construct the frame parameter before calling ainl_run. Add # frame: name: type comment lines to source for authoritative hints.
  • MCP per-workspace limits: place ainl_mcp_limits.json in the fs.root directory to tune max_steps/max_time_ms/max_adapter_calls per workspace without editing global server config.
  • MCP auto-cache: cache adapter is automatically registered when output/cache.json or cache.json exists in fs.root, making workspace caching zero-config.
  • MCP per-run adapters (from earlier in v1.4.3): ainl_run adapters parameter for scoped http, fs, cache, sqlite per call.
  • Limit alignment: _SERVER_DEFAULT_LIMITS in runtime_runner_service.py raised to match MCP defaults (max_steps: 500000, max_adapter_calls: 50000, max_time_ms: 900000).
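The frame_hints[] flow above can be sketched in a few lines. The {name, type, source} shape comes from the release note; the type-to-coercion mapping and the sample hints below are assumptions for illustration, so check the MCP contract docs for the authoritative type set:

```python
# Sketch: auto-construct the `frame` argument for ainl_run from the
# frame_hints[] entries returned by ainl_compile. The COERCE mapping is
# an illustrative assumption, not the documented contract.
COERCE = {
    "str": str,
    "int": int,
    "float": float,
    "bool": lambda v: str(v).lower() in ("1", "true", "yes"),
}

def build_frame(frame_hints, available):
    frame = {}
    for hint in frame_hints:
        name = hint["name"]
        if name in available:
            cast = COERCE.get(hint.get("type", "str"), str)
            frame[name] = cast(available[name])
    return frame

hints = [{"name": "level", "type": "str", "source": "comment"},
         {"name": "retries", "type": "int", "source": "comment"}]
print(build_frame(hints, {"level": "CRITICAL", "retries": "3"}))
# → {'level': 'CRITICAL', 'retries': 3}
```

An agent can run this after ainl_compile and pass the result straight to ainl_run.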

New in v1.4.2

  • Host adapter policy: AINL_ALLOW_IR_DECLARED_ADAPTERS relaxes env AINL_HOST_ADAPTER_ALLOWLIST when set; intelligence paths under intelligence/ opt in by default unless AINL_INTELLIGENCE_FORCE_HOST_POLICY=1.
  • CLI / MCP / runner: ainl run registers web, tiktok, queue; MCP ainl_run grants match the HTTP runner; capabilities document host security env (see docs/LLM_ADAPTER_USAGE.md).
  • Compiler + tooling: strict-mode fixes for label-jump (J) graphs; effect_analysis / adapter_manifest coverage for web, tiktok, svc, crm, and expanded core verbs; intelligence examples use R queue Put and valid prelude layout; demo/.ainl-library-skip excludes dev demos from ArmaraOS App Store listings.

New in v1.4.1

  • Offline LLM provider (offline): deterministic AbstractLLMAdapter for config.yaml + register_llm_adapters demos and CI; use in llm.fallback_chain without live API keys (see fixtures/llm_offline.yaml).
  • Wishlist examples + CI: 05b_unified_llm_offline_config.ainl — unified llm path vs llm_query mock in 05_route_then_llm_mock.ainl; parser-compat runs strict wishlist validation + no-network smoke for graphs 01 and 05b.
  • core.GET: real R target on CoreBuiltinAdapter (deep key/index reads via deep_get); strict entries in tooling/effect_analysis.py alongside llm.COMPLETION.
  • LLM runtime: LLMRuntimeAdapter normalizes verb casing so R llm.COMPLETION matches registry verbs (e.g. completion).

New in v1.4.0

  • ArmaraOS host pack (optional — no hard dependency on the ArmaraOS binary): ainl emit --target armaraos (hand package: HAND.toml, <stem>.ainl.json, security.json, README), ainl status --host armaraos (canonical ARMARAOS_* + legacy OPENFANG_* env), ainl install-mcp --host armaraos ([[mcp_servers]] in ~/.armaraos/config.toml, ~/.armaraos/bin/ainl-run, PATH hints).
  • Release / surfaces: version alignment across pyproject.toml, RUNTIME_VERSION, CITATION.cff, tooling/bot_bootstrap.json; ainl serve GET /health reports version from RUNTIME_VERSION.
  • Docs: docs/ARMARAOS_INTEGRATION.md, host hub docs/getting_started/HOST_MCP_INTEGRATIONS.md; integration tests and emitter import path fixes.

New in v1.3.4

  • Enhanced diagnostics (--enhanced-diagnostics): graph context + Mermaid snippets on compile errors.
  • Error highlighting (ainl visualize --highlight-errors): error nodes styled in Mermaid output.
  • Static cost estimates (--estimate on check, inspect, status): per-node token/USD estimates.
  • Audit trail adapter (--enable-adapter audit_trail --audit-sink file:///...): immutable JSONL compliance log (graph must invoke audit_trail.record).
  • Compact syntax preprocessor (ainl_preprocess.py): Python-like compact .ainl authoring alongside opcodes — same IR, fewer surface tokens; see examples/compact/ and AGENTS.md.

Tutorials: Debugging with the Visualizer · Production: Estimates & Audit

Full version history: docs/CHANGELOG.md · docs/RELEASE_NOTES.md


v1.3.3 — Native Solana + prediction markets: See docs/solana_quickstart.md for strict graphs, env vars, dry-run-first flows, and --emit solana-client / blockchain-client usage, plus examples/prediction_market_demo.ainl for a concrete resolution → conditional payout pattern.

v1.3.3 — Native Solana Support for Prediction Markets

  • Deterministic Solana agents: Strict AINL graphs for market creation, Pyth/Hermes resolution monitoring, and on-chain trades/payouts keep behavior explainable and testable.
  • Key verbs: DERIVE_PDA with single-quoted JSON seeds, GET_PYTH_PRICE (legacy + PriceUpdateV2), HERMES_FALLBACK, and INVOKE / TRANSFER_SPL with explicit priority fees (micro-lamports per CU).
  • Dry-run + emit: Full dry-run safety (AINL_DRY_RUN=1) and emitted standalone clients via --emit solana-client so you can rehearse flows before sending real transactions.
  • Start here: docs/solana_quickstart.md and examples/prediction_market_demo.ainl.

All Solana additions are additive-only and preserve full Hyperspace compatibility.

Recommended production stack: AINL graphs + AVM or general agent sandboxes

AINL provides the deterministic, capability-declared graph layer. Pair it with Hyperspace AVM (avmd) or general runtimes (Firecracker microVMs, gVisor, Kubernetes Agent Sandbox, E2B-style runtimes, AVM Codes platform) for stronger isolation.

  • New metadata: optional execution_requirements in compiled IR (avm_policy_fragment, isolation/resource hints).
  • New CLI: ainl generate-sandbox-config <file.ainl> [--target avm|firecracker|gvisor|k8s|general].
  • New unified shim: optional runtime/sandbox_shim.py auto-detects AVM/general sandbox endpoints and falls back cleanly.

It is designed for teams building AI workflows that need multiple steps, state and memory, tool use, repeatable execution, validation and control, and lower dependence on long prompt loops.

Compile-once, run-many: you author (or import) a graph once; the runtime executes it deterministically without re-spending LLM tokens on orchestration each time. Size economics are tracked with tiktoken cl100k_base; the viable subset (e.g. public_mixed) shows ~1.02× leverage for minimal_emit vs unstructured baselines — see BENCHMARK.md, docs/benchmarks.md, and docs/architecture/COMPILE_ONCE_RUN_MANY.md.

Performance & benchmarks (updated Mar 2026): Size results use tiktoken cl100k_base (billing-aligned for GPT-4o–class models). Reports separate viable subset rows from legacy-inclusive aggregates; minimal_emit fallback stub and emitter compaction (e.g. prisma / react_ts stubs) are documented in the transparency blocks. See BENCHMARK.md for tables; narrative hub docs/benchmarks.md (highlights, commands, CI). Runtime economics and optional reliability batches: tooling/benchmark_runtime_results.json via make benchmark / scripts/benchmark_runtime.py. For long-lived OpenClaw deployments, pair these static benchmarks with live token-budget observability and cost tracking via docs/operations/TOKEN_AND_USAGE_OBSERVABILITY.md and the monitoring components documented in docs/MONITORING_OPERATIONS.md.

CLI polish (OpenClaw, v1.3.3+)

These commands are implemented in cli/main.py and documented in docs/QUICKSTART_OPENCLAW.md. For bots and IDE tools, tooling/bot_bootstrap.json exposes the same surface under openclaw_commands (plus ai_native_lang_example_yml). Project lock example files: aiNativeLang.example.yml (repo root) and tooling/aiNativeLang.example.yml (packaged wheels).

| Command | What it does |
| --- | --- |
| ainl install openclaw --workspace PATH | Merges env.shellEnv into <workspace>/.openclaw/openclaw.json, bootstraps SQLite, registers three gold-standard crons, restarts gateway; use --dry-run to preview. |
| ainl status | Unified health: workspace, schema, weekly budget (memory_records fallback), crons, drift, 7d tokens, estimated cost avoided (7d), caps. --json (pretty), --json-summary (one-line JSON), --summary (one-line text for alerts). |
| ainl doctor --ainl | Validates OpenClaw + AINL integration (env, schema, cron names, bootstrap flag). |
| ainl cron add FILE.ainl | Wraps openclaw cron add with message ainl run <path>; --cron or --every; --dry-run prints argv only. |
| ainl dashboard | Runs scripts/serve_dashboard.py (emitted server under tests/emits/server — build first with scripts/run_tests_and_emit.py in a dev checkout); --port, --no-browser. |

The CLI fails fast if tests/emits/server/server.py is missing (typical for PyPI-only installs), with the same run_tests_and_emit hint.

Shell shortcut: scripts/setup_ainl_integration.sh delegates to ainl install openclaw (supports --dry-run, --workspace, --verbose).


Get Started (3 minutes)

Requires Python 3.10+. No git clone needed to try it.

# 1. Install the CLI
pip install ainativelang

# 2. Create a new project (generates main.ainl + README)
ainl init my-first-worker
cd my-first-worker

# 3. Check the program compiles cleanly (strict graph semantics)
ainl check main.ainl --strict

# 4. Run it
ainl run main.ainl

# 5. Visualise the control-flow graph
ainl visualize main.ainl --output -    # paste into https://mermaid.live

That's it. Edit main.ainl, add adapter calls (cache, HTTP, LLM, memory), revalidate, run again.

The ainl init command creates a clean, well-commented main.ainl designed for newcomers. It demonstrates core concepts — graph labels (L1: = control-flow node), requests (R cache get = read from the cache adapter), joins (J = return a value and finish the node), and branching — while remaining production-ready. Open main.ainl after scaffolding; the comments tell you exactly what each line does.

Compact syntax (new in v1.3.3)

AINL now supports a human-friendly compact syntax alongside the original opcodes. Both compile to the same IR. Compact is recommended for new code — 66% fewer tokens.

# examples/compact/hello_compact.ainl
adder:
  result = core.ADD 2 3
  out result
# Branching, inputs, adapter calls, cron — all work
classifier:
  in: level message
  severity = llm.classify level message
  if severity == "CRITICAL":
    http.POST ${SLACK_WEBHOOK} {text: message}
    out "alerted"
  out "logged"

See examples/compact/ for more, and AGENTS.md for the full compact syntax reference.

write → check → visualize → run loop

  1. Write — author in compact or opcode syntax, or have an LLM emit a .ainl program.
  2. Validate strict graph semantics: ainl validate your.ainl --strict (or the equivalent ainl check your.ainl --strict). Failures include structured diagnostics (line, suggestion, optional llm_repair_hint); use --json-diagnostics for CI/machine-readable output.
  3. Visualize control flow as Mermaid:
    ainl visualize your.ainl --output - > graph.mmd — paste into mermaid.live.
  4. Run locally: ainl run your.ainl
  5. Emit to other platforms: ainl emit your.ainl --target langgraph -o graph.py
  6. Serve as HTTP API: ainl serve --port 8080 (POST /validate, /compile, /run)
  7. Inspect canonical IR (for agent/meta-agent loops): ainl inspect your.ainl --strict
  8. Emit JSONL execution tape for grading/evolution: ainl run your.ainl --trace-jsonl run.trace.jsonl
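The JSONL tape from step 8 can be summarized with a few lines of Python for grading or evolution loops. The record fields used here ("node", "op") are hypothetical placeholders; inspect a real trace before relying on any field names:

```python
import json

# Sketch: summarize an execution tape produced by
#   ainl run your.ainl --trace-jsonl run.trace.jsonl
# In practice read the lines with open("run.trace.jsonl"); the two
# sample records below (and their fields) are invented for illustration.
sample_lines = [
    '{"node": "L1", "op": "R core.ADD"}',
    '{"node": "L1", "op": "J"}',
]

def summarize(lines):
    records = [json.loads(line) for line in lines if line.strip()]
    return {
        "steps": len(records),
        "nodes": sorted({r.get("node") for r in records}),
    }

summary = summarize(sample_lines)
print(summary)  # → {'steps': 2, 'nodes': ['L1']}
```

A grader can compare such summaries across candidate graphs, or feed them back into a mutation loop.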
Advanced: Contributing or Custom Build (clone + editable install + CI bootstrap)
# Clone and create an isolated env (match CI: Python 3.10)
git clone https://github.com/sbhooley/ainativelang.git
cd ainativelang

PYTHON_BIN=python3.10 VENV_DIR=.venv-py310 bash scripts/bootstrap.sh
source .venv-py310/bin/activate  # Windows: .venv-py310\Scripts\activate

# Install with dev + web extras
python -m pip install --upgrade pip
python -m pip install -e ".[dev,web]"

# Validate an example
ainl check examples/hello.ainl --strict

# Run the core test suite
python scripts/run_test_profiles.py --profile core

# Environment diagnostics
ainl doctor

# Full conformance matrix
make conformance

# Runner service (for API/orchestrator integration)
python scripts/runtime_runner_service.py
# GET http://localhost:8770/capabilities
# POST http://localhost:8770/run  {"code": "S app api /api\nL1:\nR core.ADD 2 3 ->sum\nJ sum"}

See docs/INSTALL.md and CONTRIBUTING.md for full contributor setup.

Research-loop quickstart (meta-agent)

For self-improving loops (generate -> inspect -> mutate -> evaluate), use:

  1. Inspect canonical IR
    • ainl inspect candidate.ainl --strict
  2. Diff two candidates
    • MCP tool: ainl_ir_diff(file1, file2, strict=true)
  3. Score fitness
    • MCP tool: ainl_fitness_report(file, runs=5, strict=true)
  4. Repair invalid outputs
    • MCP ainl_validate diagnostics include llm_repair_hint

Contract and stable payload fields: docs/operations/MCP_RESEARCH_CONTRACT.md.
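Structurally, the generate → inspect → mutate → evaluate loop looks like the sketch below. The three helpers are toy stand-ins for real calls to ainl inspect, ainl_ir_diff, and ainl_fitness_report; everything here is illustrative scaffolding, not the documented contract:

```python
import random

def inspect_ir(candidate: str) -> dict:
    # Stand-in for `ainl inspect --strict` / MCP ainl_validate:
    # a toy validity check (real validation is far richer).
    return {"valid": "J" in candidate}

def fitness(candidate: str) -> float:
    # Stand-in for ainl_fitness_report: toy score preferring smaller programs.
    return float(-len(candidate))

def mutate(candidate: str, rng: random.Random) -> str:
    # Stand-in for LLM-driven mutation/repair (llm_repair_hint flow).
    return candidate.replace("2 3", f"2 {rng.randint(1, 9)}")

rng = random.Random(0)
best = "S app api /api\nL1:\nR core.ADD 2 3 ->x\nJ x"
for _ in range(5):
    child = mutate(best, rng)
    if inspect_ir(child)["valid"] and fitness(child) >= fitness(best):
        best = child  # keep only valid, non-worse candidates
```

In a real loop, each stand-in becomes an MCP tool call and invalid children are repaired using the llm_repair_hint diagnostics before re-scoring.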

For the full conformance matrix in one command (tokenizer, IR canonicalization, strict validation, runtime parity, emitter stability):

make conformance (or SNAPSHOT_UPDATE=1 make conformance when intentionally updating snapshots).

Community & Growth (start here too)

Short primer for stakeholders: docs/WHAT_IS_AINL.md (canonical). WHAT_IS_AINL.md at repo root is a stub that points to the same content.

Getting started with includes

Pull in shared subgraphs from modules/ (paths resolve next to your source file, then CWD, then ./modules/):

include "modules/common/retry.ainl" as retry

L1: Call retry/ENTRY ->out J out

See Includes & modules below for timeout.ainl, strict rules, and the starter table.

AI agents: See AI_AGENT_QUICKSTART_OPENCLAW.md for a full agent onboarding guide, or docs/BOT_ONBOARDING.md for the machine-readable bootstrap path.

Deeper setup: See docs/INSTALL.md for platform-specific install, Docker, and pre-commit setup.


Ecosystem & agent hosts (OpenClaw, ZeroClaw, Hermes)

Import Clawflows-style WORKFLOW.md or Agency-Agents-style personality Markdown into a deterministic .ainl graph (cron trigger, sequential Call steps or agent gates, optional memory / queue hooks for OpenClaw-style bridges). If structured parsing cannot extract steps or agent fields, the importer falls back to a minimal_emit fallback stub (Phase‑1 style) that still compiles, so you always get valid, reviewable graph source.

Host quick links: OpenClaw · ZeroClaw skill (AINL) · Hermes Agent · ArmaraOS — AINL wiring for all four is in docs/getting_started/HOST_MCP_INTEGRATIONS.md (ainl install-mcp --host openclaw|zeroclaw|hermes|armaraos).

The same path is exposed over MCP as ainl_list_ecosystem, ainl_import_clawflow, ainl_import_agency_agent, and ainl_import_markdown (stdio ainl-mcp). Weekly auto-sync (.github/workflows/sync-ecosystem.yml) refreshes examples/ecosystem/ from upstream public Markdown; community additions use .github/PULL_REQUEST_TEMPLATE/ (workflow / agent templates).

ainl import markdown https://raw.githubusercontent.com/nikilster/clawflows/main/workflows/available/community/check-calendar/WORKFLOW.md \
  --type workflow -o morning.ainl
ainl compile morning.ainl

Agent imports support --personality "…" and optional --generate-soul (writes SOUL.md and IDENTITY.md next to -o). Use --no-openclaw-bridge to emit cache instead of memory / queue.

Shortcuts (fetch five samples from upstream into examples/ecosystem/ — requires network):

ainl import clawflows
ainl import agency-agents

Install AINL as a ZeroClaw skill

zeroclaw skills install https://github.com/sbhooley/ainativelang/tree/main/skills/ainl

This installs the AINL importer, runtime shim, and MCP tools directly into ZeroClaw.

Bootstrap (PyPI self-upgrade, ainl-mcp in ~/.zeroclaw/mcp.json, ~/.zeroclaw/bin/ainl-run, PATH hint): from the skill directory run ./install.sh, or run ainl install-mcp --host zeroclaw (use --dry-run / --verbose as needed). Alternative skill URL (standalone repo, when published): https://github.com/sbhooley/ainl-zeroclaw-skill.

Try in chat: “Import the morning briefing using AINL.” (Then point the agent at a Clawflows URL, a preset from ainl_list_ecosystem, or ainl import markdown ….)

Details: docs/ZEROCLAW_INTEGRATION.md · skill files: skills/ainl/README.md.

Install AINL as an OpenClaw skill

OpenClaw uses npm + openclaw onboard for the host CLI. AINL is added as a skill folder (not via zeroclaw skills install): copy skills/openclaw/ to ~/.openclaw/skills/ or <workspace>/skills/, or install from ClawHub when the skill is listed there.

Bootstrap (PyPI self-upgrade, mcp.servers.ainl in ~/.openclaw/openclaw.json, ~/.openclaw/bin/ainl-run, PATH hint): from the skill directory run ./install.sh, or run ainl install-mcp --host openclaw (use --dry-run / --verbose as needed). install.sh may run npm install -g openclaw@latest when npm is on PATH; set OPENCLAW_SKIP_NPM=1 to skip.

Once bootstrapped, the OpenClaw bridge automatically activates AINL's intelligence layer, including the cap auto-tuner and memory hydration/embedding pilot. This creates a self-managing runtime that continuously adjusts execution caps and prunes caches based on observed token usage — helping sustain 90–95% token savings on high-frequency monitors and digests with zero recurring LLM orchestration cost.

For restricted Python sandboxes (PEP 668 externally-managed environments, common on OpenClaw/Clawbot hosts), no-root install order is: venv first, then --user, then --break-system-packages only as a last resort. skills/ainl/install.sh runs a compatible fallback sequence automatically and continues with MCP setup once ainl is available.

Intelligence layer (after bootstrap): install-mcp / ./install.sh wires MCP and ainl-run; bridge cron, env profiles, and run_intelligence still need the one-time operator pass in docs/operations/OPENCLAW_AINL_GOLD_STANDARD.md. With that wiring, the OpenClaw bridge + intelligence path exposes AINL’s cap auto-tuner, rolling budget → monitor hydration, and optional embedding pilot—a self-managing resource & budget layer (not self-tuning workflow logic: compiled graphs stay deterministic) that adjusts caps and cache pressure from observed usage. See docs/INTELLIGENCE_PROGRAMS.md and docs/operations/TOKEN_AND_USAGE_OBSERVABILITY.md. On typical high-frequency monitoring and digest workloads, that stack helps sustain ~90–95% token savings versus prompt-loop orchestration for the same scheduled work, with near-zero recurring orchestration cost after compile—validate on your host (ainl bridge-sizing-probe, weekly trends).

If you need a clean uninstall in managed sandboxes:

pip3 uninstall -y ainl mcp aiohttp langgraph temporalio
rm -rf /tmp/ainl-repo /data/.openclaw/workspace/skills/ainl /data/.local/lib/python3.13/site-packages/*ainl*

Try in chat: “Import the morning briefing using AINL.”

Details: docs/OPENCLAW_INTEGRATION.md · skill files: skills/openclaw/README.md.

Standalone skill repo (optional, later): publish the contents of skills/openclaw/ as the root of github.com/sbhooley/ainl-openclaw-skill so users can clone or vendor a single-purpose tree (same three files: SKILL.md, install.sh, README.md).

Install AINL for Hermes Agent

Hermes Agent is a skill-native runtime with a closed learning loop. AINL pairs deterministic compiled graphs with Hermes via ainl-mcp (ainl_run) and --emit hermes-skill bundles (agentskills-style SKILL.md + workflow.ainl + ir.json).

Bootstrap: pip install 'ainativelang[mcp]' then ainl install-mcp --host hermes (alias ainl hermes-install), or run skills/hermes/install.sh. Emit skills to ~/.hermes/skills/ainl-imports/<name>/.

Details: docs/HERMES_INTEGRATION.md · hub one-pager: docs/integrations/hermes-agent.md · skill pack: skills/hermes/README.md.

Curated templates with original.md, converted.ainl, and notes: examples/ecosystem/README.md.

Weekly auto-sync: the repo can refresh those sample trees from upstream on a schedule via GitHub Actions — see .github/workflows/sync-ecosystem.yml (Monday 04:00 UTC, plus manual workflow_dispatch). PRs are opened only when examples/ecosystem/** conversions change. That workflow also creates the GitHub labels ecosystem, automation, workflow, and agent if missing (idempotent), then applies ecosystem + automation to the sync PR. If your org disallows GITHUB_TOKEN from opening PRs, set repository secret GH_PAT (PAT with repo + PR access to this repo only); see docs/ECOSYSTEM_OPENCLAW.md — no access to upstream Clawflows/Agency-Agents orgs is required (public raw fetches only).

Contributing workflows & agents

After you push a branch, open a pull request on GitHub and use the template dropdown to choose Submit Workflow (Clawflows-style) or Submit Agent (Agency-Agents-style) (see .github/PULL_REQUEST_TEMPLATE/).
You can also append ?quick_pull=1&template=workflow-submission.md or template=agent-submission.md to your compare URL (main...your-branch).

Templates: workflow-submission.md · agent-submission.md · default pull_request_template.md.


Ultra Cost-Efficient Mode — modules/efficient_styles.ainl

AINL ships a reusable output style module that routes user-facing responses through a dense, professional prose style and tool/agentic steps through structured JSON — eliminating redundancy at the LLM output layer.

include "modules/efficient_styles.ainl" as style

my_workflow:
  in: question
  # ... do work ...
  result = core.GET work_result
  if is_user_facing:
    out = style.human_dense_response result
  out = style.terse_structured result

human_dense_response — natural English, full paragraphs, examples where helpful. Strips hedging (I think, basically), redundancy, pleasantries. Code, numbers, steps remain 100% exact and detailed.

terse_structured — JSON-only output for tool steps and internal state. Never use for end-user responses.

When combined with the ArmaraOS Ultra Cost-Efficient Mode input compressor (heuristic input reduction in Rust, under 30 ms target; typical ~40–56% input savings in Balanced mode on conversational text), using this module on output as well stacks savings on both sides of the LLM. Total bill impact depends on your input/output token mix and model pricing (e.g. Claude Sonnet 4.6 list $3/M input · $15/M output).
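The bill-impact arithmetic can be sketched as follows. The prices are the list rates quoted above; the token mix and the savings rates plugged in (48% input, 30% output) are made-up illustration values, not measured results:

```python
# Hypothetical bill-impact arithmetic for stacked input + output savings.
# Prices: $3/M input, $15/M output (list rates quoted in the text above).
IN_PRICE, OUT_PRICE = 3 / 1e6, 15 / 1e6  # USD per token

def monthly_cost(tokens_in, tokens_out, in_savings=0.0, out_savings=0.0):
    return (tokens_in * (1 - in_savings) * IN_PRICE
            + tokens_out * (1 - out_savings) * OUT_PRICE)

base = monthly_cost(50e6, 10e6)  # no compression, illustrative token mix
eff = monthly_cost(50e6, 10e6, in_savings=0.48, out_savings=0.30)
print(f"${base:.0f} -> ${eff:.0f} ({1 - eff / base:.0%} saved)")
```

Because output tokens are priced 5× higher here, even modest output-side savings from efficient styles move the total noticeably.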

Cross-repo reference: docs/operations/EFFICIENT_MODE_ARMARAOS_BRIDGE.md — how CLI env, efficient_styles.ainl, and ArmaraOS docs/prompt-compression-efficient-mode.md relate.

CLI hint for ArmaraOS-hosted graphs (sets AINL_EFFICIENT_MODE env var — Rust compression is on the host):

ainl run my_workflow.ainl --efficient-mode balanced

Note: the --efficient-mode flag only signals the ArmaraOS kernel via environment. No compression runs in Python — implementation is in openfang-runtime (prompt_compressor).


Includes & modules

Reuse battle-tested patterns without copy-pasting whole graphs. Compile-time include merges each module’s labels under alias/LABEL keys (for example retry/ENTRY, retry/EXIT_OK, timeout/WORK).

Syntax (paths resolve next to the source file, then the current working directory, then ./modules/):

include "modules/common/retry.ainl" as retry
include modules/common/timeout.ainl as timeout

Quotes around the path are optional when unambiguous; as alias sets the prefix for every label from that file.

Prelude order: put every include line before the first top-level S or E (service / endpoint). Lines after S / E are not part of the include prelude; modules merged too late will not expose Call alias/… targets. See modules/common/README.md.

include "modules/common/retry.ainl" as retry
include "modules/common/timeout.ainl" as timeout

S app api /api
L1: Call retry/ENTRY ->r J r
L2: Call timeout/ENTRY ->t J t

Subgraphs must define LENTRY: (merged as alias/ENTRY) and at least one LEXIT_*: label in strict mode. Do not declare top-level E / S inside includes—endpoints and services belong in the main program. Use quoted jumps for string payloads (J "ok") so strict dataflow treats them as literals.

Shared modules live in a modules/ directory next to your files (with CWD / ./modules/ fallback—see compiler path resolution).

Starter modules in this repo

| Module | What it is |
| --- | --- |
| modules/common/retry.ainl | Minimal ENTRY → EXIT_OK / EXIT_FAIL pattern with sample core.ADD work — copy and extend with your own retry / backoff steps in the parent or module. |
| modules/common/timeout.ainl | Timeout / cancellation shape (ENTRY → Call WORK, plus LTIMEOUT for the failure branch). The strict build uses core.SLEEP / core.ECHO stand-ins until real timer adapters are allowlisted — swap the R lines when your runtime supports them. |
| modules/common/token_cost_memory.ainl | Shared deterministic memory helper for monitor workflows using the workflow namespace (WRITE + bounded LIST with metadata and filters). |
| modules/common/ops_memory.ainl | The same helper for the ops namespace (WRITE + bounded LIST with metadata and filters). |
| modules/common/generic_memory.ainl | Shared namespace-aware deterministic memory helper for workflows that write/list outside workflow/ops (for example session, long_term, intel). |
| modules/common/access_aware_memory.ainl | Opt-in access metadata on reads/lists/writes: bumps metadata.last_accessed and metadata.access_count via LACCESS_READ, LACCESS_WRITE, LACCESS_LIST, or graph-safe LACCESS_LIST_SAFE (recommended when using graph-preferred execution for list snapshots). See module header and modules/common/README.md. |

Starter Modules (in modules/common/)

  • retry.ainl — exponential backoff + max retries
  • timeout.ainl — timeout wrapper with placeholder delay (swap to real timer later)
include "modules/common/timeout.ainl" as timeout

L1:
  Call timeout/ENTRY ->out
  J out

More patterns coming soon (approval gate, circuit breaker, RAG retrieval, etc.). Contract (strict): included subgraphs expose LENTRY: (merged as alias/ENTRY) and at least one LEXIT_*: exit label; call them with Call alias/ENTRY ->out from the parent. Agents benefit from smaller, verified building blocks and stable qualified names (retry/n1, …) in the graph IR. Full behavior, path resolution, and tests: tests/test_includes.py, docs/architecture/GRAPH_INTROSPECTION.md.

See your includes in a diagram: ainl visualize main.ainl -o graph.mmd — each alias becomes a Mermaid subgraph cluster; synthetic Call → edges into alias/ENTRY are annotated in the output.

Reference bots

  • Apollo X promoter — production-shaped strict graph, HTTP executor gateway, OpenClaw/cron entrypoints, executor_request_builder, record_decision, and KV snapshot hooks: apollo-x-bot/README.md. Optional Growth Pack (v1.3) (follow/discovery/like/thread/awareness, env-gated) is documented in the same README under Optional Apollo-X Growth Pack.

Choose Your Path

AINL can be used through three integration surfaces. All three run the same compiler and runtime; they differ in how you connect.

External runtime option (opt-in): AINL can call PTC-Lisp through ptc_runner for reliability overlays and trace interoperability; see docs/adapters/PTC_RUNNER.md.

The canonical "hello world" workflow used below:

S app api /api
L1:
R core.ADD 2 3 ->x
J x

Path A — CLI only (fastest start)

# Validate and inspect IR
ainl-validate examples/hello.ainl --strict --emit ir

# Mermaid diagram (paste into mermaid.live)
ainl visualize examples/hello.ainl --output - > hello.mmd

# Run directly
ainl run examples/hello.ainl --json

No server needed. Good for local development, scripting, and CI.

Path B — HTTP runner (orchestrator integration)

# Start the runner service
ainl-runner-service
# default: http://localhost:8770

# Discover capabilities
curl http://localhost:8770/capabilities

# Execute a workflow
curl -X POST http://localhost:8770/run \
  -H "Content-Type: application/json" \
  -d '{"code": "S app api /api\nL1:\nR core.ADD 2 3 ->x\nJ x", "strict": true}'

Best for container deployments, sandbox controllers, and external orchestrators. See docs/operations/EXTERNAL_ORCHESTRATION_GUIDE.md.
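The curl call above can also be made from Python with just the standard library. A minimal sketch, assuming the service is listening on the default port; the request body matches the curl example, but the JSON response shape is an assumption, not a documented contract:

```python
import json
import urllib.request

RUNNER_URL = "http://localhost:8770/run"  # default from ainl-runner-service

def build_run_payload(code, strict=True):
    """Encode an AINL source string as the JSON body shown in the curl example."""
    return json.dumps({"code": code, "strict": strict}).encode("utf-8")

def run_workflow(code):
    """POST a workflow to the runner and decode the (assumed JSON) response."""
    req = urllib.request.Request(
        RUNNER_URL,
        data=build_run_payload(code),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Usage (with the service running):
#   hello = "S app api /api\nL1:\nR core.ADD 2 3 ->x\nJ x"
#   print(run_workflow(hello))
```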

Path C — MCP host (AI coding agents)

# Install with MCP extra
pip install -e ".[mcp]"

# Start the stdio MCP server
ainl-mcp

Then configure your MCP-compatible host (Gemini CLI, Claude Code, Codex, etc.) to use the ainl-mcp stdio transport. The host can call ainl_validate, ainl_compile, ainl_capabilities, ainl_security_report, and ainl_run.

MCP v1 runs with safe defaults (core-only adapters, conservative limits). Operators can scope which tools/resources are exposed via named exposure profiles (AINL_MCP_EXPOSURE_PROFILE) or env-var inclusion/exclusion lists. See section 9 of docs/operations/EXTERNAL_ORCHESTRATION_GUIDE.md for the full quickstart, exposure scoping, and gateway deployment guidance.
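Many MCP hosts register stdio servers through a JSON configuration. A minimal sketch for wiring in ainl-mcp; the mcpServers and command keys follow the common Claude Code convention, and the exact schema depends on your host, so check its documentation:

```json
{
  "mcpServers": {
    "ainl": {
      "command": "ainl-mcp",
      "env": {
        "AINL_MCP_EXPOSURE_PROFILE": "validate_only"
      }
    }
  }
}
```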

Three layers:

  • MCP exposure profile → what tools/resources the host can discover.
  • Security profile / capability grant → what execution is allowed.
  • Policy / limits → what each run is constrained by.

For Claude Code / Cowork / Dispatch users: Treat AINL as a scoped MCP tool provider and deterministic workflow/runtime layer underneath your host. Start with validate_only or inspect_only exposure profiles, and only enable safe_workflow after operators have reviewed security profiles, grants, policies, limits, and adapter exposure. AINL is not the host, not Cowork, not a gateway, and not a control plane. For a concrete validator → inspector → safe-runner walkthrough, see the End-to-end example section of docs/operations/EXTERNAL_ORCHESTRATION_GUIDE.md.

Core first, advanced later. The paths above use the core compiler and runtime only. Advanced/operator-only surfaces (agent coordination, memory migration, OpenClaw extensions) are documented separately under docs/advanced/ and are not the recommended starting point for new users.

For the full getting-started guide, see docs/getting_started/README.md.


Why AINL Exists

Modern AI systems are increasingly powerful at reasoning, but they are still unreliable and expensive when forced to orchestrate whole workflows through long prompt loops, which create:

  • Prompt bloat and rising token cost
  • Hidden state and brittle orchestration
  • Poor auditability and hard-to-debug tool behavior

AINL addresses this by moving orchestration out of the model and into a deterministic execution substrate:

  1. The model describes the workflow once
  2. The compiler validates it
  3. The runtime executes it deterministically
  4. State lives in variables, cache, memory, databases, and adapters — not in prompt history
flowchart LR
A[AINL Source] --> B[Compiler]
B --> C[Canonical Graph IR]
C --> D[Runtime Engine]
D --> E[Adapters / Tools / State]
D --> F[Optional LLM Calls]

The graph is the source of truth. The runtime is the orchestrator. The model is a bounded reasoning component inside the system.
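As a toy illustration of that division of labor, here is a minimal deterministic graph walker. The IR shape, node fields, and adapter registry are invented for this sketch and are not AINL's actual internals:

```python
# Toy illustration only: a deterministic walker over a pre-validated
# node list. State lives in variables, not in prompt history.
ADAPTERS = {"core.ADD": lambda a, b: a + b}

def run_graph(ir):
    """Execute nodes in order; same IR in, same result out."""
    state = {}
    for node in ir:
        if node["op"] == "call":
            # Resolve args from state when they name a variable,
            # otherwise treat them as literals.
            args = [state.get(a, a) for a in node["args"]]
            state[node["out"]] = ADAPTERS[node["fn"]](*args)
        elif node["op"] == "emit":
            return {"result": state[node["var"]]}
    return {"result": None}

# IR loosely mirroring the hello-world workflow: R core.ADD 2 3 ->x; J x
ir = [
    {"op": "call", "fn": "core.ADD", "args": [2, 3], "out": "x"},
    {"op": "emit", "var": "x"},
]
print(run_graph(ir))
```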


What Makes AINL Different

Release History

  • v1.4.6: Samples + OpenSpace harness (2026-04-11). PyPI / runtime: ainativelang 1.4.6 (RUNTIME_VERSION 1.4.6). Samples: apollo-x-bot/api-cost-monitor.ainl, demo/test_openspace_http.ainl, and root run_openspace_test.py (portable paths) for OpenSpace / promoter experiments. Install: pip install -U "ainativelang[mcp]>=1.4.6". Full details: docs/CHANGELOG.md (§ v1.4.6). Urgency: High. Date: 4/9/2026.
  • v1.4.4: PyPI packaging release; version surfaces aligned. emit_solana_client discoverability header uses RUNTIME_VERSION. Core builtins (v1.4.3 line): comparisons, trim/strip, startswith/endswith, keys/values, coercions. MCP: frame_hints, workspace limits file, auto-cache, docs. See docs/CHANGELOG.md and docs/RELEASE_NOTES.md. Urgency: High. Date: 4/8/2026.
