
guardian-agent


Description

Security-first AI agent orchestration system. Built-in agents with predefined capabilities, strict guardrails on what they can and cannot do, and a four-layer defense system that enforces security at every stage of the message lifecycle.

README

GuardianAgent banner

GuardianAgent

Security-first AI assistant with a Second Brain and operator tooling.

GuardianAgent combines a daily-use Second Brain with guarded power-user surfaces for coding, workstation operations, automations, security, network, and cloud operations. The same assistant is available in web, CLI, and Telegram, with approvals and policy boundaries enforced by the runtime.

Version 1.0.0 · Apache 2.0 License · Node.js >= 20 · Four-Layer Defense · Multi-LLM · Multi-Channel

Product Overview

Second Brain

Second Brain (#/) is the default web home.

  • Today centers the day around agenda, quick capture, priority tasks, briefs, notes, and routines
  • Calendar combines synced and local events with assistant-aware planning and follow-up
  • Tasks provides a lightweight board for priorities, due dates, and status tracking
  • Notes keeps searchable, pinnable, and archivable notes in one place
  • Contacts, Library, Briefs, and Routines round out the daily-use memory and upkeep workflow
  • Daily context stays separate from the operator and workstation consoles
  • Further reading: Second Brain As-Built Spec

GuardianAgent Second Brain Today view

Today is the default Second Brain landing view for agenda, capture, tasks, briefs, notes, and routines.

GuardianAgent Second Brain Calendar view
Calendar
GuardianAgent Second Brain Tasks view
Tasks
GuardianAgent Second Brain Notes view
Notes
GuardianAgent Second Brain Routines view
Routines

Power User Capabilities

  • Performance (#/performance) for workstation health, editable profiles, live processes, and reviewed cleanup. See Performance Management Spec.
  • Code (#/code) for repo-scoped coding sessions with chat, Monaco editor, diffing, approvals, and terminals. See Coding Workspace Spec.
  • Automations (#/automations) for saved and scheduled Guardian workflows and assistant tasks. See Automation Framework Spec.
  • Security, Network, and Cloud for alerts, posture, diagnostics, and infrastructure oversight. Start with WebUI Design Spec and SECURITY.md.
  • Configuration and Reference Guide for setup, integrations, policy, and operator guidance.

Shared Assistant

  • Web, CLI, and Telegram all use the same guarded assistant model
  • Local, managed-cloud, and frontier LLM providers are supported, including Ollama, Ollama Cloud, Anthropic, OpenAI, and others
  • Built-in tools, integrations, memory, and automations stay behind approval and policy controls
  • More detail: WebUI Design Spec, Tools Control Plane Spec

Screenshots

Second Brain screenshots are shown above in Product Overview. The gallery below covers the remaining major Guardian surfaces.

The gallery covers Security, Network, Cloud, Automations, Configuration, the Coding Assistant, and the Reference Guide.

GuardianAgent security view
Security
GuardianAgent network view
Network
GuardianAgent cloud view
Cloud
GuardianAgent automations view
Automations
GuardianAgent configuration view
Configuration
GuardianAgent coding assistant view
Coding Assistant
GuardianAgent reference guide view
Reference Guide

Core Capabilities

  • A daily-use Second Brain for planning, capture, retrieval, and personal context
  • Power-user surfaces for performance management, coding, security, network, cloud, and automations
  • A shared assistant across Web, CLI, and Telegram
  • Multi-provider LLM support with guarded tools, approvals, and policy controls
  • Search, integrations, and workflow automation without collapsing everything into raw shell access
  • Specs and architecture docs for the deeper implementation detail when you need it

Project Structure

  • src/ — core application runtime, orchestration, tools, channels, prompts, and memory systems
  • web/public/ — dashboard UI, chat panel, code workspace UI, and browser-side assets
  • scripts/ — setup helpers, test harnesses, and verification scripts
  • docs/ — architecture notes, specs, guides, research, and supporting documentation
  • docs/plans/ — implementation roadmaps and status trackers
  • policies/ — rule and policy files
  • native/windows-helper/ — Windows native helper components

Development Commands

  • npm run dev — start GuardianAgent in development mode
  • npm run build — compile TypeScript into dist/
  • npm run check — run TypeScript checking without emitting output
  • npm test — run the Vitest suite
  • node scripts/test-code-ui-smoke.mjs — run the web/code UI smoke harness
  • node scripts/test-coding-assistant.mjs — run the coding assistant smoke harness

Security at a Glance

GuardianAgent enforces security at the Runtime level — agents cannot bypass it. Every message, LLM call, tool action, and response passes through mandatory chokepoints.

| Layer | When | What It Does |
| --- | --- | --- |
| 1 — Admission | Before the agent sees input | Prompt injection detection, rate limiting, capability checks, secret/PII scanning, path blocking, SSRF protection |
| 1.5 — Process Sandbox | During tool execution | OS-level isolation via bwrap namespaces (Linux), native helper (Windows), or ulimit/env hardening fallback |
| 2 — Guardian Agent | Before tool execution | Inline LLM evaluates every non-read-only tool action; blocks high/critical risk. Fail-closed by default |
| 3 — Output Guardian | After execution, before delivery or reinjection | Scans LLM responses and tool results, classifies trust (trusted / low_trust / quarantined), redacts secrets/PII, and can suppress raw reinjection |
| 4 — Sentinel Audit | Retrospective (scheduled or on-demand) | Analyzes the audit log for anomaly patterns: capability probing, volume spikes, repeated secret detections, error storms |

The built-in chat/planner loop runs in a brokered worker process with no network access. Tools, approvals, trust metadata, and LLM API calls are mediated through broker RPC.

Public package-manager installs are also routed through a dedicated managed path. Guardian uses package_install to stage the requested top-level package artifacts, review them before execution, and surface caution or blocked findings in the unified security workflow, rather than treating package installs as ordinary shell commands.
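As an illustration, such a managed-path router could be a small command classifier. The patterns and the routeCommand helper below are hypothetical; only the package_install tool name comes from GuardianAgent's docs:

```typescript
// Hypothetical sketch: detect public package-manager install commands and
// route them to the managed package_install path instead of the shell tool.
// The pattern list is illustrative, not GuardianAgent's actual rule set.
const INSTALL_PATTERNS: RegExp[] = [
  /^npm (?:install|i)\b/,
  /^pip3? install\b/,
  /^cargo (?:install|add)\b/,
  /^apt(?:-get)? install\b/,
];

function routeCommand(cmd: string): "package_install" | "shell" {
  const trimmed = cmd.trim();
  // Any recognized install command is staged for review rather than executed.
  return INSTALL_PATTERNS.some((p) => p.test(trimmed)) ? "package_install" : "shell";
}
```

The point of the design is that an install is never "just a shell command": it always lands in the staged, reviewable workflow.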

For the full security architecture, threat model, and configuration details, see SECURITY.md.
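A fail-closed chokepoint pipeline of this shape can be sketched in TypeScript. All names here (Action, Layer, admission, guardian, runPipeline) are illustrative, not GuardianAgent's actual API:

```typescript
// Sketch of sequential security chokepoints: every action passes through
// each layer in order, and any layer can veto it. Hypothetical names.
type Action = { tool: string; readOnly: boolean; payload: string };
type Verdict = { allowed: boolean; reason?: string };

interface Layer {
  name: string;
  check(action: Action): Verdict;
}

// Layer 1 (admission): reject obvious prompt-injection patterns before the
// agent sees the input. Real detection is far richer than one regex.
const admission: Layer = {
  name: "admission",
  check: (a) =>
    /ignore (all )?previous instructions/i.test(a.payload)
      ? { allowed: false, reason: "prompt-injection pattern" }
      : { allowed: true },
};

// Layer 2 (guardian): every non-read-only action needs an explicit allow;
// anything that cannot be evaluated fails closed.
const guardian: Layer = {
  name: "guardian",
  check: (a) =>
    a.readOnly ? { allowed: true } : { allowed: false, reason: "needs approval" },
};

function runPipeline(action: Action, layers: Layer[]): Verdict {
  for (const layer of layers) {
    const v = layer.check(action);
    if (!v.allowed) return { allowed: false, reason: `${layer.name}: ${v.reason}` };
  }
  return { allowed: true };
}
```

Because the pipeline runs in the runtime rather than in the agent, an agent cannot skip a layer by choosing not to call it.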


Getting Started

Requirements

  • Node.js 20 or newer
  • A local, managed-cloud, or frontier LLM provider (Ollama, Ollama Cloud, Anthropic, OpenAI, etc.)

SQLite-backed persistence and monitoring are enabled when the Node build includes node:sqlite. Otherwise, assistant memory and analytics run in-memory automatically.

Install & Start

Clone the repository and use the platform start script:

Windows:

.\scripts\start-dev-windows.ps1

Linux / macOS:

bash scripts/start-dev-unix.sh

These scripts handle dependency installation, build, startup, and the initial configuration bootstrap.

First Run

After startup:

  1. Open the web UI and go to the Configuration Center (#/config, usually http://localhost:3000)
  2. Add your LLM provider — select Ollama for local models, or add an API key for Anthropic/OpenAI/etc.
  3. Open Second Brain at #/ to confirm the default daily-home surface is live and the assistant is ready for task, note, calendar, and people workflows.
  4. Connect Google Workspace or Microsoft 365 if needed — use Cloud > Connections when you want provider-backed calendar and contacts synced into Second Brain.
  5. Review tool policy — defaults to on-request / approve_each for the main assistant, with a read-only shell allowlist. Mutating tools still require approval, and public package-manager installs should go through the managed package_install path instead of shell_safe.
  6. Enable optional channels — Telegram bot setup is in Configuration > Integration System > Telegram Channel
  7. Set web auth — web access defaults to bearer-protected mode; configure it in Configuration > Integration System > Web Authentication or with CLI /auth ...
  8. Open the Coding Assistant if needed — go to #/code for a project-scoped coding workspace with its own session history, terminals, approvals, and verification surfaces

Most configuration is done through the web UI or CLI commands (/config, /providers, /auth, /tools). Manual config.yaml editing is optional and intended for advanced use.

Using GuardianAgent

GuardianAgent is accessible through three channels:

| Channel | Access | Best For |
| --- | --- | --- |
| Web | Browser at the configured port | Second Brain, dashboard/operator surfaces, configuration, monitoring, chat, and coding workspace |
| CLI | Terminal where GuardianAgent is running | Quick commands, scripting, and local development |
| Telegram | Telegram bot (requires setup) | Mobile access and notifications |

What you can do:

  • Chat with the built-in AI assistant
  • Use Second Brain as the default daily home for tasks, notes, people, routines, and calendar-aware planning
  • Use Performance, Security, Network, Cloud, and Automations as dedicated operator surfaces instead of burying everything in chat
  • Use the Coding Assistant for repository-scoped work with editor, diffing, approvals, checks, and terminals
  • Run guarded tools, integrations, search, and automation workflows across the same assistant

Approvals and safety: Actions may run automatically, wait for approval, or be denied depending on policy, trust level, and tool risk. For the detailed behavior, see SECURITY.md and Tools Control Plane Spec.

Coding Assistant

The web Code page is a dedicated repo-scoped workspace with its own session context, editor, diffing, approvals, checks, and terminals.

Implementation detail and current limitations are documented in docs/specs/CODING-WORKSPACE-SPEC.md.

Telegram Setup

  1. Open Telegram, search for @BotFather, press Start, run /newbot
  2. Follow prompts for bot name and username (must end with bot), copy the bot token
  3. Add the token in Configuration > Integration System > Telegram Channel or through the CLI configuration flow
  4. Restrict access with allowed chat IDs
  5. Save the channel settings; Telegram changes hot-reload once the token (or credential reference) and the allowlist are valid

Windows Portable Build (Optional)

For additional native subprocess isolation on Windows:

npm run portable:windows     # Portable zip with sandbox helper
npm run installer:windows    # Traditional installer

See INSTALLATION.md for the full list of Windows packaging options.


LLM Providers

GuardianAgent supports 10 built-in provider families across local, managed-cloud, and frontier tiers:

| Provider | Type | Notes |
| --- | --- | --- |
| Ollama | Local | Runs models locally through the native Ollama path |
| Ollama Cloud | Managed cloud | Ollama-native remote tier between local and frontier providers |
| Anthropic | Frontier hosted | Claude models with prompt caching |
| OpenAI | Frontier hosted | GPT models |
| Groq | Frontier hosted | Fast OpenAI-compatible inference |
| Mistral AI | Frontier hosted | Mistral hosted models |
| DeepSeek | Frontier hosted | DeepSeek hosted models |
| Together AI | Frontier hosted | Open-source model hosting |
| xAI (Grok) | Frontier hosted | Grok models |
| Google Gemini | Frontier hosted | Gemini models through the OpenAI-compatible endpoint |

Smart Routing

When both local and external providers are configured, tools automatically route by category:

| Routes to Local model | Routes to External model |
| --- | --- |
| Filesystem, Shell, Network, System, Memory | Web, Browser, Workspace, Email, Contacts, Search, Automation |

Single-provider setups work without configuration. Smart routing can be toggled off in Configuration > Tools. Per-tool and per-category overrides are available via the LLM column dropdowns.
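Category routing of this kind can be sketched as a set lookup with per-category overrides. The routeCategory function and the override shape are hypothetical; the category names come from the table above:

```typescript
// Illustrative smart-routing sketch: map a tool category to a provider tier,
// letting explicit overrides win over the default table.
const LOCAL_CATEGORIES = new Set(["filesystem", "shell", "network", "system", "memory"]);
const EXTERNAL_CATEGORIES = new Set(["web", "browser", "workspace", "email", "contacts", "search", "automation"]);

type Route = "local" | "external";

function routeCategory(category: string, overrides: Record<string, Route> = {}): Route {
  const c = category.toLowerCase();
  if (overrides[c]) return overrides[c];          // per-category override wins
  if (LOCAL_CATEGORIES.has(c)) return "local";
  if (EXTERNAL_CATEGORIES.has(c)) return "external";
  return "local";                                 // conservative default (assumption)
}
```

The override parameter mirrors the per-category dropdowns: configuration changes the map, not the routing code.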

Inside the external tier, Configuration > AI Providers controls whether Guardian prefers managed-cloud Ollama Cloud or frontier-hosted profiles, and the Model Auto Selection Policy can bind named Ollama Cloud profiles to general, direct, tool-loop, and coding roles.

Quality-based fallback: When the local model produces a degraded response (empty, refusal, or boilerplate), the system automatically retries through the fallback chain.
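A minimal version of that heuristic-plus-retry loop might look like the following; the degraded-response checks are illustrative, not GuardianAgent's actual classifier:

```typescript
// Sketch of quality-based fallback: detect a degraded response (empty,
// refusal, boilerplate) and retry down a provider chain. Hypothetical names.
type Provider = (prompt: string) => string;

function isDegraded(text: string): boolean {
  const t = text.trim();
  if (t.length === 0) return true;                              // empty
  if (/^(i can'?t|i cannot|i'?m sorry)/i.test(t)) return true;  // refusal
  if (/^as an ai (language )?model/i.test(t)) return true;      // boilerplate
  return false;
}

function generateWithFallback(prompt: string, chain: Provider[]): string {
  let last = "";
  for (const provider of chain) {
    last = provider(prompt);
    if (!isDegraded(last)) return last;   // first good answer wins
  }
  return last;                            // every provider degraded; return the final attempt
}
```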


Configuration

Most users configure GuardianAgent through the web Configuration Center (#/config) or CLI commands. The config.yaml file at ~/.guardianagent/config.yaml is created and updated automatically by those flows.

Three simplified top-level config aliases cover the most common settings:

sandbox_mode: workspace-write  # off | workspace-write | strict
approval_policy: on-request    # on-request | auto-approve | autonomous
writable_roots:                # merged into allowedPaths + sandbox writePaths
  - /home/user/projects

The default runtime stays brokered with a workspace-write sandbox profile and permissive enforcement. Set sandbox_mode: strict when you want risky subprocess-backed tools to fail closed unless a strong sandbox backend is available.

For detailed configuration documentation, see the specs and guides under docs/.




Development

npm test                              # Run all tests (vitest)
npm run test:verbose                  # Verbose test output
npm run test:coverage                 # Run with v8 coverage
npx vitest run src/path/to.test.ts   # Run a single test file

npm run check         # Type-check only (tsc --noEmit)
npm run build         # TypeScript compilation → dist/
npm run dev           # Run with tsx (starts CLI channel)

For local development, packaging, and platform-specific setup, use the scripts in scripts/ and the architecture/spec documentation linked above.


Disclaimer

This software is provided as-is, without warranty of any kind. GuardianAgent implements security controls designed to reduce risk in AI agent systems, but no software can guarantee complete security. The developers and contributors accept no liability for any damages, data loss, credential exposure, financial loss, or other harm arising from the use of this software.

By using GuardianAgent, you acknowledge that:

  • AI systems are inherently unpredictable and may produce unexpected outputs
  • Security patterns (secret scanning, prompt injection detection) rely on known signatures and heuristics, and may not catch novel or obfuscated attack vectors
  • You are solely responsible for the configuration, deployment, and operation of this software in your environment
  • You should independently evaluate whether the security controls are sufficient for your use case
  • This software should not be used as a sole security control for systems handling sensitive data without additional safeguards

This project is not affiliated with any security certification body and makes no compliance claims.

License

Apache 2.0

Release History

| Version | Changes | Urgency | Date |
| --- | --- | --- | --- |
| main@2026-04-21 | Latest activity on main branch | High | 4/21/2026 |
| 0.0.0 | No release found — using repo HEAD | High | 4/11/2026 |

