
agent-brain

Agent ReAct framework with cognitive planning engine β€” five-phase cognitive cycle with nested ReAct loops, dynamic skill acquisition, and interactive user input.

README

@biosbot/agent-brain


Agentic AI framework for building autonomous LLM agents with a human-like cognitive architecture. Features a five-phase cognitive cycle (PERCEIVE β†’ ASSESS β†’ PLAN β†’ EXECUTE β†’ REFLECT), nested ReAct loops, dynamic skill acquisition, four thinking modes, token budget management, and memory-augmented execution.


πŸ”Ή Core Features

| Feature | Description |
| --- | --- |
| Five-phase Cognitive Cycle | PERCEIVE β†’ ASSESS β†’ PLAN β†’ EXECUTE β†’ REFLECT |
| Adaptive Fast Path | PERCEIVE classifies complexity; simple tasks skip to EXECUTE via the Strategy pattern (2 LLM calls) |
| Nested ReAct | Outer task-planning loop plus inner per-step execution loops |
| Dynamic Skill Acquisition | Auto-search and install skills during execution |
| Interactive User Input | Pause and wait for user input via the ask_user tool |
| Four Thinking Modes | CREATIVE, LOGICAL, EMPATHETIC, STRUCTURAL |
| Token Budget | Context window optimization |
| Memory Integration | Context-aware execution |
| Security Sandbox | Rule-based permission guard (ALLOW / DENY / ASK) for all tool execution |

Overview

This framework models an agent's task processing as a five-phase cognitive cycle that simulates human thought processes:

PERCEIVE β†’ ASSESS β†’ PLAN β†’ EXECUTE β†’ REFLECT

While interpreting the user's input, the PERCEIVE phase also classifies task complexity and selects an execution strategy:

  • Simple tasks (e.g., "find stock-related skills"): FastPathStrategy skips to EXECUTE directly (2 LLM calls)
  • Complex tasks (e.g., "analyze server performance and generate report"): FullCycleStrategy runs the full ASSESS β†’ PLAN β†’ EXECUTE β†’ REFLECT cycle
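The selection step can be sketched in TypeScript. This is an illustration only: in the framework the classifier is an LLM call inside PERCEIVE, and the keyword heuristic below is a stand-in for it; only the two strategy names come from the package.

```typescript
type Strategy = 'FastPathStrategy' | 'FullCycleStrategy';

interface Perception {
  complexity: 'simple' | 'complex';
}

// Stand-in for the LLM-based complexity classifier: multi-step wording
// (conjunctions, analysis/report requests) is treated as "complex".
function perceive(task: string): Perception {
  const multiStep = /\band\b|analyze|report/i.test(task);
  return { complexity: multiStep ? 'complex' : 'simple' };
}

// Simple tasks skip ASSESS and PLAN and go straight to EXECUTE.
function selectStrategy(p: Perception): Strategy {
  return p.complexity === 'simple' ? 'FastPathStrategy' : 'FullCycleStrategy';
}
```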

Dual ReAct Architecture

The framework implements a nested ReAct architecture:

  • Outer ReAct: Five-phase cognitive loop (the brain's macro workflow)
  • Inner ReAct: Per-step execution loop within EXECUTE phase

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        Outer ReAct                         β”‚
β”‚        PERCEIVE β†’ ASSESS β†’ PLAN β†’ EXECUTE β†’ REFLECT        β”‚
β”‚                                      β”‚                     β”‚
β”‚                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”‚
β”‚                        β”‚   Inner ReAct (per step)    β”‚     β”‚
β”‚                        β”‚   Thought β†’ Action β†’ Obs    β”‚     β”‚
β”‚                        β”‚     ↑                 β”‚     β”‚     β”‚
β”‚                        β”‚     └───── loop β”€β”€β”€β”€β”€β”€β”˜     β”‚     β”‚
β”‚                        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
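The inner loop can be sketched in plain TypeScript. The shapes here are hypothetical: in the framework, the Thought step is an LLM call and the Action step dispatches a real tool; the stubs below exist only to make the loop structure concrete.

```typescript
interface Action { tool: string; input: string }
type Decision = Action | { finish: string };

// Stand-in for the LLM: finishes as soon as it has one observation.
function think(observations: string[]): Decision {
  if (observations.length > 0) {
    return { finish: observations[observations.length - 1] };
  }
  return { tool: 'echo', input: 'hello' };
}

// Stand-in for tool dispatch; returns the Observation for this step.
function act(a: Action): string {
  return `${a.tool}:${a.input}`;
}

// Thought β†’ Action β†’ Observation, repeated until done or maxSteps.
function reactLoop(maxSteps = 15): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const decision = think(observations);   // Thought
    if ('finish' in decision) return decision.finish;
    observations.push(act(decision));       // Action β†’ Observation
  }
  return 'max steps reached';
}
```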

Features

  • Five-phase cognitive cycle mimicking human thought processes
  • Per-step ReAct loops for granular execution control
  • Dynamic skill acquisition β€” agents can install new skills during execution
  • Interactive user input β€” agents can request user input during execution via ask_user tool
  • Four thinking modes: CREATIVE, LOGICAL, EMPATHETIC, STRUCTURAL
  • Token budget management for context window optimization
  • Memory integration for context-aware execution
  • Security sandbox with rule-based permission control (ALLOW / DENY / ASK) for all tool and skill execution
  • Extensible event system for observability

Installation

npm install @biosbot/agent-brain

Quick Start

import { AgentBrain, OpenAIClient } from '@biosbot/agent-brain';
import { SkillHub } from '@biosbot/agent-skills';
import { MemoryHub } from '@biosbot/agent-memory';

const model = new OpenAIClient({ apiKey: process.env.OPENAI_API_KEY });
const skills = new SkillHub();
const memory = new MemoryHub();

const agent = new AgentBrain({
  model,
  skills,
  memory,
  config: {
    systemPrompt: 'You are a helpful AI assistant.',
    modelContextSize: 128000,
  },
});

const result = await agent.run('Help me analyze the server performance data from last month');
console.log(result.finalAnswer);

Core Concepts

Five-Phase Cognitive Cycle

| Phase | Description | Output |
| --- | --- | --- |
| PERCEIVE | Understand user input, identify intent | Perception |
| ASSESS | Evaluate capabilities, identify gaps | Assessment |
| PLAN | Create execution plan | Plan |
| EXECUTE | Execute via per-step ReAct loops | ExecuteResult |
| REFLECT | Evaluate results, decide on replanning | Reflection |

Thinking Modes

The framework dynamically adjusts thinking mode weights for each cognitive phase:

  • CREATIVE: Generate novel ideas, make unexpected connections
  • LOGICAL: Reason causality, verify consistency
  • EMPATHETIC: Understand emotions, user needs
  • STRUCTURAL: Decompose tasks, manage dependencies
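One way phase-dependent weighting could work is sketched below. Everything here is invented for illustration: the weight values, the `phaseWeights` table, and the `dominantMode` helper are not the framework's API; only the four mode names come from the package.

```typescript
type Mode = 'CREATIVE' | 'LOGICAL' | 'EMPATHETIC' | 'STRUCTURAL';
type Weights = Record<Mode, number>;

// Made-up example weights: planning leans structural,
// perceiving the user's request leans empathetic.
const phaseWeights: Record<string, Weights> = {
  PERCEIVE: { CREATIVE: 0.2, LOGICAL: 0.2, EMPATHETIC: 0.4, STRUCTURAL: 0.2 },
  PLAN:     { CREATIVE: 0.2, LOGICAL: 0.3, EMPATHETIC: 0.1, STRUCTURAL: 0.4 },
};

// The highest-weighted mode dominates the phase's prompting style.
function dominantMode(phase: string): Mode {
  const weights = phaseWeights[phase];
  return (Object.keys(weights) as Mode[]).reduce((a, b) =>
    weights[a] >= weights[b] ? a : b,
  );
}
```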

Skills and Tools

  • Innate Tools: Built-in capabilities (skill management, knowledge CRUD)
  • Skill Packages: Domain-specific tools loaded on-demand

The agent can dynamically acquire new skills during execution using innate tools like skill_install and skill_load_main.

Knowledge Base Operations

The framework provides 5 knowledge base operations as innate tools:

| Tool | Description |
| --- | --- |
| knowledge_list | List all entries with filtering (category, limit, offset) |
| knowledge_add | Add new entry (title, content, category, tags, metadata) |
| knowledge_delete | Delete entry by ID (supports soft/hard delete) |
| knowledge_search | Semantic search (query, topK, category, tags, threshold) |
| knowledge_read | Read full entry content by ID |
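A rough in-memory model of the knowledge_list filtering semantics (category, limit, offset) is sketched below. This is not the package's implementation; the `Entry` shape and `knowledgeList` helper are assumptions for illustration.

```typescript
interface Entry { id: string; title: string; category: string }

// Filter by category first, then apply offset/limit pagination.
function knowledgeList(
  entries: Entry[],
  opts: { category?: string; limit?: number; offset?: number } = {},
): Entry[] {
  const filtered = opts.category
    ? entries.filter((e) => e.category === opts.category)
    : entries;
  const offset = opts.offset ?? 0;
  const end = opts.limit === undefined ? undefined : offset + opts.limit;
  return filtered.slice(offset, end);
}
```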

User Input During Execution

The agent can request user input during execution using the ask_user tool. Subscribe to the user:input-request event and call provideUserInput():

const agent = new AgentBrain({
  // ... config
  eventPublisher: {
    async publish(type, payload) {
      if (type === 'user:input-request') {
        const { question } = payload;
        const userResponse = await getUserInput(question); // your own prompt/UI helper
        agent.provideUserInput(userResponse);
      }
    },
  },
});

Use agent.isWaitingForUserInput() to check if the agent is currently waiting for input.

Security Sandbox

The framework provides a built-in SecuritySandbox that guards all tool execution with permission rules:

  • ALLOW: Execute without prompting
  • DENY: Reject immediately (returned to the model as an Observation, allowing fallback)
  • ASK: Prompt the user before executing (default)

Each innate tool self-declares its actionCategory (e.g., fs_read, cmd_exec, web_fetch) and permissionTargetArgs, so permission checks follow the Open/Closed principle: new tools declare their own categories and no hardcoded mappings are needed. Skill tools default to the skill_exec category.

const agent = new AgentBrain({
  model,
  skills,
  memory,
  sandbox: {
    workingDirectory: './agent-workspace',
    defaultPermission: 'ASK',
    rules: [
      { action: 'fs_read', pattern: '/safe/dir/**', permission: 'ALLOW' },
      { action: 'fs_delete', permission: 'DENY' },
      { action: 'web_fetch', pattern: 'https://api.example.com/*', permission: 'ALLOW' },
    ],
  },
  config: { systemPrompt: 'You are a helpful AI assistant.', modelContextSize: 128000 },
});

Rules are matched last-to-first (later rules take higher priority). Patterns support glob (*, **) and regex (/pattern/).
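The last-to-first matching can be illustrated with a standalone sketch. The helper names and glob translation below are assumptions for illustration; the real sandbox's glob and regex handling may differ.

```typescript
type Permission = 'ALLOW' | 'DENY' | 'ASK';
interface Rule { action: string; pattern?: string; permission: Permission }

// Translate a glob into a RegExp: '**' crosses path separators, '*' does not.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*\*/g, '\u0000')           // placeholder for '**'
    .replace(/\*/g, '[^/]*')
    .replace(/\u0000/g, '.*');
  return new RegExp(`^${escaped}$`);
}

// Walk rules last-to-first so later rules take priority;
// fall back to the default permission when nothing matches.
function resolve(
  rules: Rule[],
  action: string,
  target: string,
  fallback: Permission = 'ASK',
): Permission {
  for (let i = rules.length - 1; i >= 0; i--) {
    const r = rules[i];
    if (r.action !== action) continue;
    if (r.pattern && !globToRegExp(r.pattern).test(target)) continue;
    return r.permission;
  }
  return fallback;
}
```

A broad ALLOW rule followed by a narrower DENY rule therefore denies the narrow subset while allowing the rest.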

API Reference

AgentBrain

class AgentBrain {
  constructor(options: AgentBrainOptions);
  run(userInput: string): Promise<TaskResult>;
}

TaskResult

interface TaskResult {
  taskId: string;
  status: TaskStatus;
  finalAnswer?: string;
  terminationReason: TerminationReason;
  steps: StepLog[];
  durationMs: number;
  tokenUsage: TokenUsage;
  cognition: {
    perception: Perception;
    assessment: Assessment;
    plan: Plan;
    reflection?: Reflection;
  };
}

Configuration

| Option | Default | Description |
| --- | --- | --- |
| systemPrompt | β€” | System prompt for role definition |
| modelContextSize | β€” | Model context window size (tokens) |
| maxSteps | 15 | Max steps per ReAct loop |
| heartbeatTimeoutMs | 60000 | Heartbeat timeout threshold |
| maxConsecutiveFailures | 3 | Max consecutive failures before termination |
| maxReplans | 2 | Max replanning attempts in REFLECT phase |
| sandbox.workingDirectory | os.tmpdir()/.bios-agent | Default working directory for all tools |
| sandbox.defaultPermission | ASK | Default permission when no rule matches (ALLOW, DENY, ASK) |
| sandbox.rules | [] | Initial permission rules ({ action, pattern?, permission }) |

Requirements

  • Node.js >= 18.0.0
  • An LLM client implementing the IModelClient interface

License

MIT

Release History

| Version | Changes | Urgency | Date |
| --- | --- | --- | --- |
| v0.1.2 | Latest release: v0.1.2 | High | 4/13/2026 |
| 0.0.0 | No release found β€” using repo HEAD | High | 4/9/2026 |

