Agentic AI framework for building autonomous LLM agents with a human-like cognitive architecture. Features a five-phase cognitive cycle (PERCEIVE → ASSESS → PLAN → EXECUTE → REFLECT), nested ReAct loops, dynamic skill acquisition, four thinking modes, token budget management, and memory-augmented execution.
English | 中文
| Feature | Description |
|---|---|
| Five-phase Cognitive Cycle | PERCEIVE → ASSESS → PLAN → EXECUTE → REFLECT |
| Adaptive Fast Path | PERCEIVE classifies complexity; simple tasks skip to EXECUTE via Strategy pattern (2 LLM calls) |
| Nested ReAct | Outer task planning + inner per-step execution loops |
| Dynamic Skill Acquisition | Auto-search and install skills during execution |
| Interactive User Input | Pause and wait for user input via the `ask_user` tool |
| Four Thinking Modes | CREATIVE, LOGICAL, EMPATHETIC, STRUCTURAL |
| Token Budget | Context window optimization |
| Memory Integration | Context-aware execution |
| Security Sandbox | Rule-based permission guard (ALLOW / DENY / ASK) for all tool execution |
This framework models an agent's task processing as a five-phase cognitive cycle that simulates human thought processes:
PERCEIVE → ASSESS → PLAN → EXECUTE → REFLECT
The PERCEIVE phase simultaneously classifies task complexity and selects an execution strategy:

- Simple tasks (e.g., "find stock-related skills"): `FastPathStrategy` skips straight to EXECUTE (2 LLM calls)
- Complex tasks (e.g., "analyze server performance and generate report"): `FullCycleStrategy` runs the full ASSESS → PLAN → EXECUTE → REFLECT cycle
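A minimal sketch of this Strategy-pattern routing. The strategy names follow the README, but the `ExecutionStrategy` shape and the two-way `Complexity` type are illustrative assumptions, not the framework's actual API:

```typescript
// Strategy selection based on the complexity classified during PERCEIVE.
type Complexity = "simple" | "complex";

interface ExecutionStrategy {
  phases: string[];
}

const FastPathStrategy: ExecutionStrategy = {
  // PERCEIVE already ran, so only EXECUTE remains: 2 LLM calls total.
  phases: ["EXECUTE"],
};

const FullCycleStrategy: ExecutionStrategy = {
  phases: ["ASSESS", "PLAN", "EXECUTE", "REFLECT"],
};

function selectStrategy(complexity: Complexity): ExecutionStrategy {
  return complexity === "simple" ? FastPathStrategy : FullCycleStrategy;
}
```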
The framework implements a nested ReAct architecture:
- Outer ReAct: Five-phase cognitive loop (the brain's macro workflow)
- Inner ReAct: Per-step execution loop within EXECUTE phase
```
┌──────────────────────────────────────────────────────────────┐
│                        Outer ReAct                           │
│   PERCEIVE → ASSESS → PLAN → EXECUTE → REFLECT               │
│                            │                                 │
│              ┌─────────────┴─────────────┐                   │
│              │   Inner ReAct (per step)  │                   │
│              │  Thought → Action → Obs   │                   │
│              │     ↑               │     │                   │
│              │     └──── loop ─────┘     │                   │
│              └───────────────────────────┘                   │
└──────────────────────────────────────────────────────────────┘
```
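The inner loop in the diagram can be sketched as a plain Thought → Action → Observation cycle. The `think`/`act` callbacks and the `Observation` shape are hypothetical stand-ins for the framework's internals:

```typescript
// One inner ReAct loop: think, act, observe, until the step is done or
// maxSteps is exhausted (mirroring the README's default of 15).
interface Observation {
  done: boolean;
  content: string;
}

function runInnerReact(
  think: (history: Observation[]) => { tool: string; input: string },
  act: (tool: string, input: string) => Observation,
  maxSteps = 15,
): Observation[] {
  const history: Observation[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const action = think(history);               // Thought: choose next tool call
    const obs = act(action.tool, action.input);  // Action → Observation
    history.push(obs);
    if (obs.done) break;                         // step goal reached
  }
  return history;
}
```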
- Five-phase cognitive cycle mimicking human thought processes
- Per-step ReAct loops for granular execution control
- Dynamic skill acquisition: agents can install new skills during execution
- Interactive user input: agents can request user input during execution via the `ask_user` tool
- Four thinking modes: CREATIVE, LOGICAL, EMPATHETIC, STRUCTURAL
- Token budget management for context window optimization
- Memory integration for context-aware execution
- Security sandbox with rule-based permission control (ALLOW / DENY / ASK) for all tool and skill execution
- Extensible event system for observability
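The token-budget bullet above can be sketched as a trimming pass that drops the oldest non-system messages until the conversation fits the context window. The 4-characters-per-token estimate and the `Message` shape are assumptions for illustration, not the framework's API:

```typescript
// Trim a message list to fit an approximate token budget.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

const estimateTokens = (m: Message) => Math.ceil(m.content.length / 4);

function fitToBudget(messages: Message[], budget: number): Message[] {
  const out = [...messages];
  let total = out.reduce((sum, m) => sum + estimateTokens(m), 0);
  // Drop the oldest non-system message while over budget;
  // the system prompt always survives.
  while (total > budget) {
    const idx = out.findIndex((m) => m.role !== "system");
    if (idx === -1) break; // only system messages left
    total -= estimateTokens(out[idx]);
    out.splice(idx, 1);
  }
  return out;
}
```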
```bash
npm install @biosbot/agent-brain
```

```typescript
import { AgentBrain, OpenAIClient } from '@biosbot/agent-brain';
import { SkillHub } from '@biosbot/agent-skills';
import { MemoryHub } from '@biosbot/agent-memory';

const model = new OpenAIClient({ apiKey: process.env.OPENAI_API_KEY });
const skills = new SkillHub();
const memory = new MemoryHub();

const agent = new AgentBrain({
  model,
  skills,
  memory,
  config: {
    systemPrompt: 'You are a helpful AI assistant.',
    modelContextSize: 128000,
  },
});

const result = await agent.run('Help me analyze the server performance data from last month');
console.log(result.finalAnswer);
```

| Phase | Description | Output |
|---|---|---|
| PERCEIVE | Understand user input, identify intent | Perception |
| ASSESS | Evaluate capabilities, identify gaps | Assessment |
| PLAN | Create execution plan | Plan |
| EXECUTE | Execute via per-step ReAct loops | ExecuteResult |
| REFLECT | Evaluate results, decide on replanning | Reflection |
The framework dynamically adjusts thinking mode weights for each cognitive phase:
- CREATIVE: Generate novel ideas, make unexpected connections
- LOGICAL: Reason causality, verify consistency
- EMPATHETIC: Understand emotions, user needs
- STRUCTURAL: Decompose tasks, manage dependencies
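One way to picture per-phase mode weighting is a weight table keyed by phase. The numbers below are invented for illustration; the README only states that weights are adjusted per phase:

```typescript
// Illustrative per-phase thinking-mode weights.
type Mode = "CREATIVE" | "LOGICAL" | "EMPATHETIC" | "STRUCTURAL";
type Weights = Record<Mode, number>;

const phaseWeights: Record<string, Weights> = {
  PLAN:    { CREATIVE: 0.2, LOGICAL: 0.3, EMPATHETIC: 0.1, STRUCTURAL: 0.4 },
  REFLECT: { CREATIVE: 0.1, LOGICAL: 0.5, EMPATHETIC: 0.2, STRUCTURAL: 0.2 },
};

// Pick the dominant mode for a phase.
function dominantMode(phase: string): Mode {
  const w = phaseWeights[phase];
  return (Object.keys(w) as Mode[]).reduce((a, b) => (w[a] >= w[b] ? a : b));
}
```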
- Innate Tools: Built-in capabilities (skill management, knowledge CRUD)
- Skill Packages: Domain-specific tools loaded on-demand
The agent can dynamically acquire new skills during execution using innate tools like `skill_install` and `skill_load_main`.
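A minimal registry sketch showing the install-then-run flow in spirit. The real `SkillHub` API may differ; `MiniSkillRegistry` and its `Skill` shape are hypothetical:

```typescript
// A tiny skill registry: skills can be installed and invoked at runtime.
interface Skill {
  name: string;
  run: (input: string) => string;
}

class MiniSkillRegistry {
  private skills = new Map<string, Skill>();

  install(skill: Skill): void {
    this.skills.set(skill.name, skill);
  }

  has(name: string): boolean {
    return this.skills.has(name);
  }

  run(name: string, input: string): string {
    const skill = this.skills.get(name);
    if (!skill) throw new Error(`skill not installed: ${name}`);
    return skill.run(input);
  }
}
```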
The framework provides five knowledge base operations as innate tools:

| Tool | Description |
|---|---|
| `knowledge_list` | List all entries with filtering (category, limit, offset) |
| `knowledge_add` | Add a new entry (title, content, category, tags, metadata) |
| `knowledge_delete` | Delete an entry by ID (supports soft/hard delete) |
| `knowledge_search` | Semantic search (query, topK, category, tags, threshold) |
| `knowledge_read` | Read full entry content by ID |
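The `knowledge_search` semantics (topK plus a similarity threshold) can be sketched with cosine similarity over toy vectors. Real entries would be embedded by a model; the hand-written vectors and `Entry` shape here are assumptions:

```typescript
// Semantic search sketch: score by cosine similarity, filter by threshold,
// sort descending, keep the top K results.
interface Entry {
  id: string;
  vector: number[];
}

const cosine = (a: number[], b: number[]) => {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

function search(entries: Entry[], query: number[], topK: number, threshold: number) {
  return entries
    .map((e) => ({ id: e.id, score: cosine(e.vector, query) }))
    .filter((r) => r.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```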
The agent can request user input during execution using the `ask_user` tool. Subscribe to the `user:input-request` event and call `provideUserInput()`:
```typescript
const agent = new AgentBrain({
  // ... config
  eventPublisher: {
    async publish(type, payload) {
      if (type === 'user:input-request') {
        const { question } = payload;
        const userResponse = await getUserInput(question);
        agent.provideUserInput(userResponse);
      }
    },
  },
});
```

Use `agent.isWaitingForUserInput()` to check whether the agent is currently waiting for input.
The framework provides a built-in SecuritySandbox that guards all tool execution with permission rules:
- ALLOW: Execute without prompting
- DENY: Reject immediately (returned to the model as an Observation, allowing fallback)
- ASK: Prompt the user before executing (default)
Each innate tool self-declares its `actionCategory` (e.g., `fs_read`, `cmd_exec`, `web_fetch`) and `permissionTargetArgs`, enabling Open/Closed permission checks without hardcoded mappings. Skill tools default to the `skill_exec` category.
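A sketch of what self-declared categories could look like. The field names follow the README, but the `ToolMeta` shape and the example tool are illustrative assumptions:

```typescript
// Tools declare their own action category, so the sandbox can resolve
// permissions without a hardcoded tool-to-category map.
interface ToolMeta {
  name: string;
  actionCategory: string;          // e.g. "fs_read", "cmd_exec", "web_fetch"
  permissionTargetArgs?: string[]; // which args the rule pattern matches against
}

const readFileTool: ToolMeta = {
  name: "read_file",
  actionCategory: "fs_read",
  permissionTargetArgs: ["path"],
};

// Skill tools without a declared category fall back to skill_exec.
function categoryOf(tool: Partial<ToolMeta> & { name: string }): string {
  return tool.actionCategory ?? "skill_exec";
}
```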
```typescript
const agent = new AgentBrain({
  model,
  skills,
  memory,
  sandbox: {
    workingDirectory: './agent-workspace',
    defaultPermission: 'ASK',
    rules: [
      { action: 'fs_read', pattern: '/safe/dir/**', permission: 'ALLOW' },
      { action: 'fs_delete', permission: 'DENY' },
      { action: 'web_fetch', pattern: 'https://api.example.com/*', permission: 'ALLOW' },
    ],
  },
  config: { systemPrompt: 'You are a helpful AI assistant.', modelContextSize: 128000 },
});
```

Rules are matched last-to-first (later rules take higher priority). Patterns support glob (`*`, `**`) and regex (`/pattern/`).
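The last-to-first matching can be sketched as a reverse scan over the rule list. The simplified glob-to-regex conversion below (no regex-pattern support) is an assumption for illustration:

```typescript
// Resolve a permission by scanning rules from the end: later rules win.
type Permission = "ALLOW" | "DENY" | "ASK";

interface Rule {
  action: string;
  pattern?: string;
  permission: Permission;
}

function globToRegex(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder for ** (crosses "/")
    .replace(/\*/g, "[^/]*")              // * stays within one path segment
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${escaped}$`);
}

function resolve(rules: Rule[], action: string, target: string, fallback: Permission = "ASK"): Permission {
  for (let i = rules.length - 1; i >= 0; i--) {
    const r = rules[i];
    if (r.action !== action) continue;
    if (!r.pattern || globToRegex(r.pattern).test(target)) return r.permission;
  }
  return fallback; // defaultPermission when no rule matches
}
```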
```typescript
class AgentBrain {
  constructor(options: AgentBrainOptions);
  run(userInput: string): Promise<TaskResult>;
}
```

```typescript
interface TaskResult {
  taskId: string;
  status: TaskStatus;
  finalAnswer?: string;
  terminationReason: TerminationReason;
  steps: StepLog[];
  durationMs: number;
  tokenUsage: TokenUsage;
  cognition: {
    perception: Perception;
    assessment: Assessment;
    plan: Plan;
    reflection?: Reflection;
  };
}
```

| Option | Default | Description |
|---|---|---|
| `systemPrompt` | — | System prompt for role definition |
| `modelContextSize` | — | Model context window size (tokens) |
| `maxSteps` | `15` | Max steps per ReAct loop |
| `heartbeatTimeoutMs` | `60000` | Heartbeat timeout threshold (ms) |
| `maxConsecutiveFailures` | `3` | Max consecutive failures before termination |
| `maxReplans` | `2` | Max replanning attempts in the REFLECT phase |
| `sandbox.workingDirectory` | `os.tmpdir()/.bios-agent` | Default working directory for all tools |
| `sandbox.defaultPermission` | `ASK` | Default permission when no rule matches (`ALLOW`, `DENY`, `ASK`) |
| `sandbox.rules` | `[]` | Initial permission rules (`{ action, pattern?, permission }`) |
- Node.js >= 18.0.0
- An LLM client implementing the `IModelClient` interface
MIT
