# guardian-agent

Security-first AI agent orchestration system. Built-in agents with predefined capabilities, strict guardrails on what they can and cannot do, and a four-layer defense system that enforces security at every stage of the message lifecycle.

GuardianAgent combines a daily-use Second Brain with guarded power-user surfaces for coding, workstation operations, automations, security, network, and cloud operations. The same assistant is available in web, CLI, and Telegram, with approvals and policy boundaries enforced by the runtime.
Second Brain (#/) is the default web home.
- **Today** centers the day around agenda, quick capture, priority tasks, briefs, notes, and routines
- **Calendar** combines synced and local events with assistant-aware planning and follow-up
- **Tasks** provides a lightweight board for priorities, due dates, and status tracking
- **Notes** keeps searchable, pinnable, and archivable notes in one place
- **Contacts**, **Library**, **Briefs**, and **Routines** round out the daily-use memory and upkeep workflow
- Keep daily context separate from the operator and workstation consoles
- Further reading: Second Brain As-Built Spec
Today is the default Second Brain landing view for agenda, capture, tasks, briefs, notes, and routines.
- **Performance** (`#/performance`) for workstation health, editable profiles, live processes, and reviewed cleanup. See Performance Management Spec.
- **Code** (`#/code`) for repo-scoped coding sessions with chat, Monaco editor, diffing, approvals, and terminals. See Coding Workspace Spec.
- **Automations** (`#/automations`) for saved and scheduled Guardian workflows and assistant tasks. See Automation Framework Spec.
- **Security**, **Network**, and **Cloud** for alerts, posture, diagnostics, and infrastructure oversight. Start with WebUI Design Spec and SECURITY.md.
- **Configuration** and **Reference Guide** for setup, integrations, policy, and operator guidance.
- Web, CLI, and Telegram all use the same guarded assistant model
- Local, managed-cloud, and frontier LLM providers are supported, including Ollama, Ollama Cloud, Anthropic, OpenAI, and others
- Built-in tools, integrations, memory, and automations stay behind approval and policy controls
- More detail: WebUI Design Spec, Tools Control Plane Spec
Second Brain screenshots are shown above in Product Overview. The gallery below covers the remaining major Guardian surfaces.
Security, Network, Cloud, Automations, Configuration, Coding Assistant, and Reference Guide.
- A daily-use Second Brain for planning, capture, retrieval, and personal context
- Power-user surfaces for performance management, coding, security, network, cloud, and automations
- A shared assistant across Web, CLI, and Telegram
- Multi-provider LLM support with guarded tools, approvals, and policy controls
- Search, integrations, and workflow automation without collapsing everything into raw shell access
- Specs and architecture docs for the deeper implementation detail when you need it
- `src/` — core application runtime, orchestration, tools, channels, prompts, and memory systems
- `web/public/` — dashboard UI, chat panel, code workspace UI, and browser-side assets
- `scripts/` — setup helpers, test harnesses, and verification scripts
- `docs/` — architecture notes, specs, guides, research, and supporting documentation
- `docs/plans/` — implementation roadmaps and status trackers
- `policies/` — rule and policy files
- `native/windows-helper/` — Windows native helper components
- `npm run dev` — start GuardianAgent in development mode
- `npm run build` — compile TypeScript into `dist/`
- `npm run check` — run TypeScript checking without emitting output
- `npm test` — run the Vitest suite
- `node scripts/test-code-ui-smoke.mjs` — run the web/code UI smoke harness
- `node scripts/test-coding-assistant.mjs` — run the coding assistant smoke harness
GuardianAgent enforces security at the Runtime level — agents cannot bypass it. Every message, LLM call, tool action, and response passes through mandatory chokepoints.
| Layer | When | What It Does |
|---|---|---|
| 1 — Admission | Before the agent sees input | Prompt injection detection, rate limiting, capability checks, secret/PII scanning, path blocking, SSRF protection |
| 1.5 — Process Sandbox | During tool execution | OS-level isolation via bwrap namespaces (Linux), native helper (Windows), or ulimit/env hardening fallback |
| 2 — Guardian Agent | Before tool execution | Inline LLM evaluates every non-read-only tool action; blocks high/critical risk. Fail-closed by default |
| 3 — Output Guardian | After execution, before delivery or reinjection | Scans LLM responses and tool results, classifies trust (trusted / low_trust / quarantined), redacts secrets/PII, and can suppress raw reinjection |
| 4 — Sentinel Audit | Retrospective (scheduled or on-demand) | Analyzes audit log for anomaly patterns: capability probing, volume spikes, repeated secret detections, error storms |
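The four layers in the table above form an ordered, fail-closed pipeline: an action proceeds only if every gate allows it. A minimal TypeScript sketch of that control flow, with toy gate checks (the `Gate`/`Verdict` types and the regex patterns are illustrative assumptions, not GuardianAgent's actual API):

```typescript
// Illustrative sketch of layered chokepoints: each gate may block an action,
// and a gate failure is treated as a denial (fail-closed).
type Verdict = { allowed: boolean; reason?: string };
type Gate = (action: string) => Verdict;

const admission: Gate = (a) =>
  /ignore previous instructions/i.test(a)
    ? { allowed: false, reason: "prompt injection pattern" }
    : { allowed: true };

const guardian: Gate = (a) =>
  /rm -rf|curl .*\| *sh/.test(a)
    ? { allowed: false, reason: "high-risk action" }
    : { allowed: true };

function runGates(action: string, gates: Gate[]): Verdict {
  for (const gate of gates) {
    let verdict: Verdict;
    try {
      verdict = gate(action);
    } catch {
      return { allowed: false, reason: "gate error (fail-closed)" }; // errors deny
    }
    if (!verdict.allowed) return verdict; // first denial wins
  }
  return { allowed: true };
}
```

The real layers are far richer (rate limiting, PII scanning, an inline LLM evaluator), but the ordering and fail-closed behavior are the point of the sketch.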
The built-in chat/planner loop runs in a brokered worker process with no network access. Tools, approvals, trust metadata, and LLM API calls are mediated through broker RPC.
Install-like public package-manager actions are also routed through a dedicated managed path. Guardian uses package_install to stage the requested top-level package artifacts, review them before execution, and surface caution or blocked findings in the unified security workflow instead of treating package installs as ordinary shell commands.
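The routing decision itself can be pictured as a small classifier: bare package-manager invocations are recognized and redirected to the managed path instead of being executed as shell. The patterns below are assumptions for illustration only, not GuardianAgent's real matching logic:

```typescript
// Hypothetical classifier: route recognizable package-manager installs to the
// managed package_install path instead of the general shell path.
function classifyCommand(cmd: string): "package_install" | "shell" {
  const installPattern = /^(npm|pnpm|yarn)\s+(install|add)\b|^pip3?\s+install\b/;
  return installPattern.test(cmd.trim()) ? "package_install" : "shell";
}
```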
For the full security architecture, threat model, and configuration details, see SECURITY.md.
- Node.js 20 or newer
- A local, managed-cloud, or frontier LLM provider (Ollama, Ollama Cloud, Anthropic, OpenAI, etc.)
SQLite-backed persistence and monitoring are enabled when the Node build includes node:sqlite. Otherwise, assistant memory and analytics run in-memory automatically.
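That capability check can be made without opening a database, since Node reports which built-in modules a build ships. A small sketch (`storageBackend` is an illustrative name, not GuardianAgent's API; `isBuiltin` is the standard `node:module` helper):

```typescript
import { isBuiltin } from "node:module";

// Does this Node build ship the built-in SQLite module?
function hasNodeSqlite(): boolean {
  return isBuiltin("node:sqlite");
}

// Pick the persistence backend accordingly, as described above.
function storageBackend(): "sqlite" | "in-memory" {
  return hasNodeSqlite() ? "sqlite" : "in-memory";
}
```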
Clone the repository and use the platform start script:
Windows:

```powershell
.\scripts\start-dev-windows.ps1
```

Linux / macOS:

```bash
bash scripts/start-dev-unix.sh
```

These scripts handle dependency installation, build, startup, and the initial configuration bootstrap.
After startup:
- Open the web UI and go to the Configuration Center (`#/config`, usually `http://localhost:3000`)
- Add your LLM provider — select Ollama for local models, or add an API key for Anthropic/OpenAI/etc.
- Open Second Brain at `#/` to confirm the default daily-home surface is live and the assistant is ready for task, note, calendar, and people workflows.
- Connect Google Workspace or Microsoft 365 if needed — use `Cloud > Connections` when you want provider-backed calendar and contacts synced into Second Brain.
- Review tool policy — defaults to `on-request`/`approve_each` for the main assistant, with a read-only shell allowlist. Mutating tools still require approval, and public package-manager installs should go through the managed `package_install` path instead of `shell_safe`.
- Enable optional channels — Telegram bot setup is in `Configuration > Integration System > Telegram Channel`
- Set web auth — web access defaults to bearer-protected mode; configure it in `Configuration > Integration System > Web Authentication` or with CLI `/auth ...`
- Open the Coding Assistant if needed — go to `#/code` for a project-scoped coding workspace with its own session history, terminals, approvals, and verification surfaces
Most configuration is done through the web UI or CLI commands (`/config`, `/providers`, `/auth`, `/tools`). Manual `config.yaml` editing is optional and intended for advanced use.
GuardianAgent is accessible through three channels:
| Channel | Access | Best For |
|---|---|---|
| Web | Browser at the configured port | Second Brain, dashboard/operator surfaces, configuration, monitoring, chat, and coding workspace |
| CLI | Terminal where GuardianAgent is running | Quick commands, scripting, and local development |
| Telegram | Telegram bot (requires setup) | Mobile access and notifications |
What you can do:
- Chat with the built-in AI assistant
- Use Second Brain as the default daily home for tasks, notes, people, routines, and calendar-aware planning
- Use Performance, Security, Network, Cloud, and Automations as dedicated operator surfaces instead of burying everything in chat
- Use the Coding Assistant for repository-scoped work with editor, diffing, approvals, checks, and terminals
- Run guarded tools, integrations, search, and automation workflows across the same assistant
Approvals and safety: Actions may run automatically, wait for approval, or be denied depending on policy, trust level, and tool risk. For the detailed behavior, see SECURITY.md and Tools Control Plane Spec.
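In other words, each tool action resolves to one of three outcomes. A toy decision function showing the shape of that resolution (the risk levels, policy names, and fail-closed ordering here are illustrative assumptions, not the actual policy engine):

```typescript
// Illustrative policy resolution: high-risk actions are denied outright,
// read-only actions run freely, and mutating actions depend on policy.
type Outcome = "run" | "await_approval" | "deny";
type Risk = "read_only" | "mutating" | "high";
type Policy = "auto-approve" | "on-request";

function decide(risk: Risk, policy: Policy): Outcome {
  if (risk === "high") return "deny";
  if (risk === "read_only") return "run";
  return policy === "auto-approve" ? "run" : "await_approval";
}
```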
The web Code page is a dedicated repo-scoped workspace with its own session context, editor, diffing, approvals, checks, and terminals.
Implementation detail and current limitations are documented in docs/specs/CODING-WORKSPACE-SPEC.md.
- Open Telegram, search for `@BotFather`, press Start, run `/newbot`
- Follow prompts for bot name and username (must end with `bot`), copy the bot token
- Add the token in `Configuration > Integration System > Telegram Channel` or through the CLI configuration flow
- Restrict access with allowed chat IDs
- Save the channel settings; Telegram changes hot-reload when the token or credential ref and allowlist are valid
For additional native subprocess isolation on Windows:
```bash
npm run portable:windows    # Portable zip with sandbox helper
npm run installer:windows   # Traditional installer
```

See INSTALLATION.md for the full list of Windows packaging options.
GuardianAgent supports 10 built-in provider families across local, managed-cloud, and frontier tiers:
| Provider | Type | Notes |
|---|---|---|
| Ollama | Local | Runs models locally through the native Ollama path |
| Ollama Cloud | Managed cloud | Ollama-native remote tier between local and frontier providers |
| Anthropic | Frontier hosted | Claude models with prompt caching |
| OpenAI | Frontier hosted | GPT models |
| Groq | Frontier hosted | Fast OpenAI-compatible inference |
| Mistral AI | Frontier hosted | Mistral hosted models |
| DeepSeek | Frontier hosted | DeepSeek hosted models |
| Together AI | Frontier hosted | Open-source model hosting |
| xAI (Grok) | Frontier hosted | Grok models |
| Google Gemini | Frontier hosted | Gemini models through the OpenAI-compatible endpoint |
When both local and external providers are configured, tools automatically route by category:
| Routes to Local model | Routes to External model |
|---|---|
| Filesystem, Shell, Network, System, Memory | Web, Browser, Workspace, Email, Contacts, Search, Automation |
Single-provider setups work without configuration. Smart routing can be toggled off in Configuration > Tools. Per-tool and per-category overrides are available via the LLM column dropdowns.
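The routing table above reduces to a category lookup with an override escape hatch. A sketch of that rule (the function name and the unknown-category default are assumptions; the category lists come from the table):

```typescript
type Tier = "local" | "external";

// Local-tier categories from the routing table above.
const LOCAL = new Set(["filesystem", "shell", "network", "system", "memory"]);

function routeCategory(category: string, override?: Tier): Tier {
  if (override) return override; // per-tool / per-category override wins
  // Assumption for this sketch: anything not in the local list goes external.
  return LOCAL.has(category.toLowerCase()) ? "local" : "external";
}
```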
Inside the external tier, Configuration > AI Providers controls whether Guardian prefers managed-cloud Ollama Cloud or frontier-hosted profiles, and the Model Auto Selection Policy can bind named Ollama Cloud profiles to general, direct, tool-loop, and coding roles.
Quality-based fallback: When the local model produces a degraded response (empty, refusal, or boilerplate), the system automatically retries through the fallback chain.
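A sketch of that retry loop: a cheap degradation heuristic gates each answer, and the next provider in the chain is tried until one passes. The heuristics and provider shape below are illustrative assumptions, not the real detector:

```typescript
// Toy degradation check: empty output or obvious refusal boilerplate.
function isDegraded(text: string): boolean {
  const t = text.trim();
  if (t.length === 0) return true;
  return /^i (can't|cannot|am unable)/i.test(t);
}

// Try each provider in order; return the first non-degraded answer,
// or the last attempt if everything in the chain is degraded.
async function generateWithFallback(
  prompt: string,
  chain: Array<(p: string) => Promise<string>>
): Promise<string> {
  let last = "";
  for (const call of chain) {
    last = await call(prompt);
    if (!isDegraded(last)) return last;
  }
  return last;
}
```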
Most users configure GuardianAgent through the web Configuration Center (`#/config`) or CLI commands. The `config.yaml` file at `~/.guardianagent/config.yaml` is created and updated automatically by those flows.
Three simplified top-level config aliases cover the most common settings:
```yaml
sandbox_mode: workspace-write   # off | workspace-write | strict
approval_policy: on-request     # on-request | auto-approve | autonomous
writable_roots:                 # merged into allowedPaths + sandbox writePaths
  - /home/user/projects
```

The default runtime stays brokered with a workspace-write sandbox profile and permissive enforcement. Set `sandbox_mode: strict` when you want risky subprocess-backed tools to fail closed unless a strong sandbox backend is available.
For detailed configuration documentation:
- SECURITY.md for the security model and trust boundaries
- WebUI Design Spec for page ownership and product-surface design
- Second Brain As-Built Spec for the daily-home experience
- Performance Management Spec for workstation operations
- Coding Workspace Spec for the repo-scoped IDE surface
- Automation Framework Spec for saved and scheduled automation behavior
- Configuration Center Spec for setup, integrations, and policy controls
- docs/ for the full architecture, specs, guides, proposals, and research set
```bash
npm test                             # Run all tests (vitest)
npm run test:verbose                 # Verbose test output
npm run test:coverage                # Run with v8 coverage
npx vitest run src/path/to.test.ts   # Run a single test file
npm run check                        # Type-check only (tsc --noEmit)
npm run build                        # TypeScript compilation → dist/
npm run dev                          # Run with tsx (starts CLI channel)
```

For local development, packaging, and platform-specific setup, use the scripts in `scripts/` and the architecture/spec documentation linked above.
This software is provided as-is, without warranty of any kind. GuardianAgent implements security controls designed to reduce risk in AI agent systems, but no software can guarantee complete security. The developers and contributors accept no liability for any damages, data loss, credential exposure, financial loss, or other harm arising from the use of this software.
By using GuardianAgent, you acknowledge that:
- AI systems are inherently unpredictable and may produce unexpected outputs
- Security patterns (secret scanning, prompt injection detection) rely on known signatures and heuristics, and may not catch novel or obfuscated attack vectors
- You are solely responsible for the configuration, deployment, and operation of this software in your environment
- You should independently evaluate whether the security controls are sufficient for your use case
- This software should not be used as a sole security control for systems handling sensitive data without additional safeguards
This project is not affiliated with any security certification body and makes no compliance claims.
Apache 2.0

