🇰🇷 한국어 · 🇺🇸 English

AI disobeyed "don't use console.log" 9 times. On the 10th, mkdir 禁console_log was born. On the 11th, AI asked: "What is vorq?" It never disobeyed again.
Quick Navigation: Problem · 30s Proof · 5 Features · Comparison · Getting Started · Benchmarks · Limitations
2026 reality: quota limits force every developer to mix multiple AIs.
Morning: Claude (Opus quota burnt) → Afternoon: switch to Gemini → Evening: switch to GPT
Claude's learned "禁console.log" rule → Gemini doesn't know it → violation again → pain
.cursorrules is Cursor-only. CLAUDE.md is Claude-only. Switch AI = rules evaporate.
And the deeper problem – even within ONE session:
You: "Please read the codemap before editing code."
AI: "Sure!" (skips it, starts coding immediately)
Text instructions are followed ~60% of the time. That's not governance. That's hope.
```
git clone https://github.com/rhino-acoustic/NeuronFS.git && cd NeuronFS/runtime && go build -o neuronfs . && ./neuronfs --emit all
```

Result:

```
[EMIT] ✅ Agents (Universal) → AGENTS.md
[EMIT] ✅ Cursor → .cursorrules
[EMIT] ✅ Claude → CLAUDE.md
[EMIT] ✅ Gemini → ~/.gemini/GEMINI.md
[EMIT] ✅ Copilot → .github/copilot-instructions.md
✅ 5 targets written. One brain. Every AI. Zero runtime dependencies.
```
Before you trust us, watch us try to destroy ourselves.
| # | 🔴 Attack | 🔵 Defense | Verdict |
|---|---|---|---|
| 1 | vorq is n=1 validated. 1 test ≠ proof. | The principle is model-agnostic: unknown tokens force lookup in ALL transformer architectures. | ⚠️ Acknowledged |
| 2 | vorq gets learned once NeuronFS is popular. | Replace vorq→bront in 1 line, --emit all. Cost: 0. Time: 10s. Neologisms are disposable by design. | ✅ Defended |
| 3 | Some AIs don't read _rules.md. | Target is coding agents (Cursor/Claude Code/Gemini/Copilot). All auto-load project rule files. | ✅ Defended |
| 4 | P0 brainstem is still just text. | Yes – intrinsic limit of prompt-based governance. NeuronFS places P0 at prompt top (constraint positioning). Best within limits. | ⚠️ Acknowledged |
| 5 | "mkdir beats vector" is overstated. | Intentional L1/L2 separation. NeuronFS = deterministic rules (L1). RAG = semantic search (L2). Complementary, not competing. | ✅ Defended |
| 6 | Comparison table is biased. | Partially. UX convenience rows (inline editing, natural-language rule adding) should be added. Core structural gaps are factual. | ⚠️ Acknowledged |
| 7 | Bus factor = 1. | Open source + zero dependencies = builds forever. go build works in 2046. | ✅ Defended |
| 8 | source: freshness is manual. | MVP. --grow auto-detection is on the roadmap. Current workaround: zelk protocol. | ✅ Defended |
| 9 | AGPL kills enterprise adoption. | Deliberate. Core value is local execution. AGPL only blocks "take code, build SaaS." Local use = zero restrictions. | ✅ Defended |
| 10 | --evolve depends on AI – contradicts your thesis. | dry_run is default. User approval required. Core thesis is "AI can't break rules," not "AI isn't used." Evolution is assistance, not dependency. | ✅ Defended |

Score: 7 defended · 3 acknowledged · 0 fatal.
We show our weaknesses because we believe structure speaks louder than marketing.
One design decision generates the entire system:
Axiom: "A folder IS a neuron."
→ File path IS a natural language rule
→ Filename IS activation count (5.neuron = fired 5×)
→ Folder prefix IS governance type (禁=NEVER, 必=ALWAYS, 推=WHEN)
→ Depth IS specificity
→ OS metadata IS the embedding
→ mkdir IS learning
→ rm IS forgetting
Without this axiom, there's no reason to combine Merkle chains, RBAC, cosine similarity, and circuit breakers on folders. The axiom is what makes NeuronFS NeuronFS – not the algorithms.
We discovered that fabricated words force AI to look up definitions – achieving behavioral compliance that natural language cannot.
| Attempt | Method | Compliance | Why |
|---|---|---|---|
| 1 | "Read the codemap" (natural language) | ~60% | AI "knows" this phrase → skips |
| 2 | "Mount cartridge" (proper noun) | ~65% | Meaning guessable → skips |
| 3 | "装カートリッジ 必装着" (kanji) | ~70% | AI infers 装=mount → skips |
| 4 | "vorq cartridge 必vorq" | ~95%+ | No training data → must investigate (n=1 observed) |

vorq is ASCII-safe, pronounceable, looks like a real command – but exists in no dictionary. AI perceives it as "new knowledge to learn" rather than "known instruction to follow."
Four neologism runewords: vorq (mount cartridge) · zelk (sync cartridge) · mirp (freshness check) · qorz (community search before any tech decision)
Seven brain regions. Lower priority always overrides higher. Physically.
brainstem(P0) > limbic(P1) > hippocampus(P2) > sensors(P3) > cortex(P4) > ego(P5) > prefrontal(P6)
↳ absolute laws · emotions · memory · environment · knowledge · persona · goals
P0's 禁 rules always beat P4's dev rules. When bomb.neuron fires, the entire region's prompt rendering stops. Not "please don't" – physically silenced.
Who: Coding agents – Cursor, Claude Code, Gemini Code Assist, GitHub Copilot. Any AI that reads a system prompt.
Why: Flat rule lists fail at scale. 300+ rules in one prompt → AI ignores most. Rules need priority and conditionality – "always do X" is different from "do X only when coding."
How: Folder prefixes auto-classify into three enforcement tiers at emit time:
禁hardcoding → 🔴 NEVER (absolute prohibition, immune to decay/prune/dedup)
必go_vet실행 → 🟢 ALWAYS (mandatory on every response)
推community_search → 🟡 WHEN coding/tech decision → THEN search community first
formatTieredRules() scans the brain, reads the prefix of each neuron folder, and auto-generates structured ### 🔴 NEVER / ### 🟢 ALWAYS / ### 🟡 WHEN → THEN sections in the system prompt. No manual tagging. applyOOMProtection() auto-truncates when total tokens exceed the LLM context window – NEVER rules are preserved first, WHEN rules are trimmed first.
```
neuronfs --emit all
→ .cursorrules + CLAUDE.md + GEMINI.md + copilot-instructions.md + AGENTS.md
```

AGENTS.md is the 2026 universal standard – and NeuronFS compiles it, not just writes it. Switch AI tools freely. Your rules never evaporate. One brain governs all.
| # | Capability | .cursorrules | Mem0 / Letta | RAG (Vector DB) | NeuronFS |
|---|---|---|---|---|---|
| 1 | Rule accuracy | Text = easily ignored | Probabilistic | ~95% | 100% deterministic ① |
| 2 | Behavioral compliance | ~60% (text advisory) | ~60% | ~60% | ~95%+ (vorq harness, n=1 observed) ② |
| 3 | Multi-AI support | ❌ Cursor-only | API-dependent | ❌ | ✅ --emit all → every IDE |
| 4 | Priority system | ❌ Flat text | ❌ | ❌ | ✅ 7-layer Subsumption (P0→P6) |
| 5 | Self-evolution | Manual edit | Black box | Black box | 🧬 Autonomous (Groq LLM) |
| 6 | Kill switch | ❌ | ❌ | ❌ | ✅ bomb.neuron halts region |
| 7 | Cartridge freshness | ❌ Manual | ❌ | ❌ | ✅ source: mtime auto-check |
| 8 | Encrypted distribution | ❌ | Cloud-dependent | Cloud-dependent | ✅ Jloot VFS cartridges |
| 9 | Infrastructure cost | Free | $50+/mo | $70+/mo GPU | $0 (local OS) |
| 10 | Dependencies | IDE-locked | Python+Redis+DB | Python+GPU+API | Zero runtime (single binary) |
| 11 | 3-Tier governance | ❌ | ❌ | ❌ | ✅ ALWAYS/WHEN/NEVER auto-classify |
| 12 | OOM protection | ❌ | ❌ | ❌ | ✅ Auto-truncate on context overflow |
| 13 | Industry benchmark coverage | 0/41 | ~8/41 | ~6/41 | 35/41 (85%) |

① Rule accuracy measures different layers: Mem0/RAG ~95% = "LLM follows retrieved rules" (IFEval). NeuronFS 100% = "rules are faithfully generated into system prompt" (BM-1 fidelity). Complementary, not competing.
② Behavioral compliance ~95%+ is based on developer observation (n=1). Principle is model-agnostic (unknown tokens force lookup in all transformers), but independent validation with n≥10 is pending.
Fair note on Mem0/Letta: These tools excel at conversation memory and user profiling (their design goal). NeuronFS does not compete on memory CRUD – it governs rules. The ❌ marks indicate "no equivalent feature," not "inferior product."
$0 infrastructure assumes Go is installed for building. Pre-built binaries eliminate even this requirement.
One-Liner (Linux/macOS/PowerShell 7+):

```
git clone https://github.com/rhino-acoustic/NeuronFS.git && cd NeuronFS/runtime && go build -o neuronfs . && ./neuronfs --emit all
```

Windows PowerShell 5.1:

```
git clone https://github.com/rhino-acoustic/NeuronFS.git; cd NeuronFS/runtime; go build -o neuronfs.exe .; .\neuronfs.exe --emit all
```

Step by Step:

```
# 1. Clone & build
git clone https://github.com/rhino-acoustic/NeuronFS.git
cd NeuronFS/runtime
go build -o neuronfs .    # → single binary, zero runtime dependencies

# 2. Create a rule – just a CLI command
./neuronfs --grow cortex/react/禁console_log    # "禁" = absolute prohibition

# 3. Compile brain → system prompts for ANY AI tool
./neuronfs --emit all    # → .cursorrules + CLAUDE.md + GEMINI.md + all formats
```

Advanced Commands:
```
neuronfs <brain> --emit <target>   # Prompt compilation (gemini/cursor/claude/all/auto)
neuronfs <brain> --grow <path>     # Create neuron
neuronfs <brain> --fire <path>     # Reinforce weight (+1)
neuronfs <brain> --evolve          # AI-powered autonomous evolution (dry run)
neuronfs <brain> --evolve --apply  # Execute evolution
neuronfs <brain> --api             # 3D Dashboard (localhost:9090)
neuronfs <brain> --diag            # Full brain tree visualization
```

⚠️ Auto-Backup: --emit automatically backs up existing rule files to <brain>/.neuronfs_backup/ with timestamps before overwriting.

💡 --emit auto scans your project for existing editor configs and only generates files for editors you already use. If nothing is detected, it falls back to all.
```
cd cmd/chaos_monkey
go run main.go --dir ../../my_brain --mode random --duration 10
# Randomly deletes folders and throws spam for 10 seconds.
# Result: FileNotFound panics = 0%. Spam pruned. Brain self-heals.
```

```
┌───────────────────────────────────────────────────────────────────────────┐
│ 1. SOLO DEV – One Brain, All AIs                                          │
│    neuronfs --emit all → .cursorrules + CLAUDE.md + GEMINI.md + AGENTS.md │
│    Switch AI tools freely. Your rules never evaporate.                    │
├───────────────────────────────────────────────────────────────────────────┤
│ 2. MULTI-AGENT – Swarm Orchestration                                      │
│    supervisor.go → 3-process supervisor (bot1, bot2, bot3)                │
│    Each agent reads the SAME brain with role-based ego/                   │
├───────────────────────────────────────────────────────────────────────────┤
│ 3. ENTERPRISE – Corporate Brain                                           │
│    neuronfs --init ./company_brain → 7-region scaffold                    │
│    CTO curates master P0 rules. Team clones brain = Day 0 AI.             │
│    Distribute as .jloot cartridge – encrypted, versioned, sold.           │
└───────────────────────────────────────────────────────────────────────────┘
```
Unix said "Everything is a file." We say: Everything is a folder.
| Concept | Biology | NeuronFS | OS Primitive |
|---|---|---|---|
| Neuron | Cell body | Directory | mkdir |
| Rule | Firing pattern | Full path | Path string |
| Weight | Synaptic strength | Counter filename | N.neuron |
| Reward | Dopamine | Reward file | dopamineN.neuron |
| Kill | Apoptosis | bomb.neuron | touch |
| Sleep | Synaptic pruning | *.dormant | mv |
| Axon | Axon terminal | .axon file | Symlink |
| Cross-ref | Attention Residual | Axon Query-Key matching | Selective aggregation |
A path IS a natural language command. Depth IS specificity:

```
brain/cortex/NAS_transfer/                  ← Category
brain/cortex/NAS_transfer/禁Copy-Item_UNC/  ← Specific behavioral law
brain/cortex/NAS_transfer/robocopy_large/   ← Detailed context
```

```
brain_v4/
├── brainstem/    (P0 – Absolute principles)
├── limbic/       (P1 – Emotion filters)
├── hippocampus/  (P2 – Memory, error patterns)
├── sensors/      (P3 – Environmental constraints)
├── cortex/       (P4 – Knowledge, coding rules)
├── ego/          (P5 – Personality, tone)
└── prefrontal/   (P6 – Goals, planning)
```
[Vector DB Search]
Input text → Embedding model (GPU) → 1536-dim vector →
Cosine similarity → "89% probability answer"
⏱️ 200~2000ms | 💰 GPU required | Accuracy: probabilistic

[OS Folder Search (NeuronFS)]
Question → tokenize → B-Tree path traversal →
Load .neuron → "This path has 禁 → BLOCKED"
⏱️ 0.001ms | 💰 $0 (CPU only) | ✅ 100% deterministic
| Dimension | Vector DB | NeuronFS (OS Metadata) |
|---|---|---|
| Semantics | 1536-dim float vector | Folder name = natural language tag |
| Priority | ❌ Cannot express | File size (bytes) = weight |
| Time | ❌ Cannot express | Access timestamp = recency filter |
| Synapse | ❌ Cannot express | Symbolic link (.axon) = cross-domain |
| Hierarchy | ❌ All flattened | Folder depth = structural priority |
| Logic | ❌ Cannot express | 禁(NOT) / 必(AND) / 推(OR) = logic gates |
If you played Diablo 2 – NeuronFS opcodes work exactly like Runewords.
A Runeword is a specific combination of runes socketed into the right item base. The magic isn't in any single rune – it's in the exact combination + exact socket type.
| Opcode | Rune | Effect | Example |
|---|---|---|---|
| 禁/ | Zod | Absolute prohibition – AI physically cannot cross | 禁/hardcoding/ |
| 必/ | Ber | Mandatory gate – AI must pass through | 必/manager_approval/ |
| 推/ | Ist | Recommendation – soft nudge, overridable | 推/test_code/ |
| .axon | Jah | Teleport – connects two distant brain regions | 推/insurance.axon => [claims/] |
| bomb | El Rune | Kill switch – entire region freezes | bomb.neuron |
| vorq | – | Cartridge mount – AI must read .neuron before coding | vorq=view_file |
| zelk | – | Cartridge sync – AI must update .neuron after coding | zelk=write .neuron |
| mirp | – | Freshness check – flags stale cartridges in _rules.md | mirp=mtime compare |
| qorz | – | Community search – must search Reddit/GitHub/HN before any tech decision | qorz=search_web |

"The folder is the socket. The opcode is the rune. The combination is the Runeword."

Note: vorq/zelk/mirp/qorz are fabricated neologisms – words that exist in no language or training data. AI cannot guess their meaning and is forced to look up the definition within the neuron system. This achieves ~95%+ behavioral compliance (n=1 observed) where natural language instructions achieve only ~60%.
禁 (1 char) = NEVER_DO (8 chars). Folder names compress 3–5× more semantic meaning per token:

| Kanji | Korean | English | Usage |
|---|---|---|---|
| 禁 | 절대 금지 | Prohibition | 禁/fallback |
| 必 | 반드시 | Mandatory | 必/KI_auto_reference |
| 推 | 추천 | Recommendation | 推/robocopy_large |
| 要 | 요구 | Requirement | Data/format demands |
| 答 | 답변 | Answer | Tone/structure forcing |
| 想 | 창의 | Creative | Limit release, ideas |
| 索 | 검색 | Search | External reference priority |
| 改 | 개선 | Improve | Refactoring/optimization |
| 略 | 생략 | Omit | No elaboration, result only |
| 参 | 참조 | Reference | Cross-neuron/doc links |
| 結 | 결론 | Conclusion | Summary/conclusion only |
| 警 | 경고 | Warning | Danger alerts |

```
brainstem/禁/no_shift/必/stack_solution/
          ↑ prohibition    ↑ resolution
```
Read as: "Prohibit shift (禁), but mandate stacking as the solution (必)."
The limbic region (P1) implements a scientifically backed emotion state machine that dynamically adjusts AI agent behavior. Based on:
- Anthropic, "On the Biology of a Large Language Model" (2025): measurable "functional emotions" discovered inside Claude 3.5.
- Microsoft/CAS EmotionPrompt (2023): adding emotional stimuli improves LLM performance by 8–115%.

| Emotion | Low (≤0.4) | Mid (0.4–0.7) | High (≥0.7) |
|---|---|---|---|
| 🔥 anger | +1 verification pass | 3× verification, accuracy > speed | All changes require diff + user approval |
| ⚡ urgent | Reduce explanations | Execute core only | One-line answers, no questions, execute now |
| focus | Limit unrelated suggestions | Single-file only | Current function only, don't open other files |
| anxiety | Recommend backup | Prepare rollback, add verification | git stash first, all changes revertable |
| satisfied | Maintain current patterns | Record success patterns, dopamine | Promote to neuron, allow free exploration |

User says "왜 안돼?!" ("Why isn't it working?!") 3+ times → auto-switch to urgent(0.5)
User says "좋아" ("good") or "완벽" ("perfect") 3+ times → auto-switch to satisfied(0.6)
Emotions naturally decay over time via decay_rate. Below 0.1 → auto-reset to neutral.
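The decay rule can be sketched as a tiny state machine. This is a hypothetical illustration of the behavior described above (the `emotion` struct and field names are assumptions, not the actual limbic implementation): each tick shrinks intensity by decay_rate, and anything below 0.1 snaps back to neutral.

```go
package main

import "fmt"

// emotion is a sketch of one limbic state; names are illustrative only.
type emotion struct {
	name      string
	intensity float64
	decayRate float64
}

// tick applies one decay step and reports whether the emotion has
// reset to neutral (intensity fell below the 0.1 threshold).
func (e *emotion) tick() bool {
	e.intensity *= 1 - e.decayRate
	if e.intensity < 0.1 {
		e.name, e.intensity = "neutral", 0
		return true
	}
	return false
}

func main() {
	e := &emotion{name: "urgent", intensity: 0.5, decayRate: 0.3}
	for !e.tick() {
		// keep decaying: 0.35 → 0.245 → 0.1715 → 0.12005 → reset
	}
	fmt.Println(e.name) // neutral
}
```

Multiplicative decay means strong emotions linger longer than weak ones, but every state eventually returns to neutral without any explicit timer.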
The encrypted cartridge architecture that makes brain commerce possible.
- RouterFS (vfs_core.go): O(1) Copy-on-Write routing for memory-disk union
- Boot Ignition (vfs_ignition.go): Argon2id KDF Brainwallet integration
- Crypto Cartridge (crypto_cartridge.go): XChaCha20-Poly1305 RAM-based decryption
```mermaid
graph TD
    A[Mnemonic Input] -->|Argon2id| B(32B Master Key)
    B -->|XChaCha20| C{crypto_cartridge.go}
    D[base.jloot File] --> C
    C -->|Extract purely in RAM| E[bytes.Reader Payload]
    E -->|zip.NewReader| F[Virtual Lower Directory]
    G[Physical UI/HDD] -->|O(1) Route| H[Virtual Upper Directory]
    F -->|vfs_core.go| I((Global VFS Shadowing Router))
    H -->|Copy-on-Write / Sandboxing| I
```
The cartridge data lives only in runtime RAM and vanishes when power is cut. Zero disk traces.
```
brain_v4/                    ← Permanent Brain (Experience + Rules)
├── cortex/dev/VEGAVERY/     ← Lightweight axon references ONLY
│   └── .axon → cartridges/vegavery   ← "I have done this before"

cartridges/                  ← Hot-swappable Domain Knowledge
├── vegavery/                ← Brand guide, API specs
├── supabase_patterns/       ← Best practices
└── fcpxml_production/       ← Pipeline specs
```
| Brain (Upper Layer) | Cartridge (Lower Layer) |
|---|---|
| Mutable RAM layer (runtime) | Read-only Immutable ROM |
| Empty folder paths (permanent) | Zip-compressed .jloot payloads |
| Experience is permanent | Swappable / Updatable / Versioned |
2023: Prompt Engineering – "Write better prompts"
2024: Context Engineering – "Provide better context"
2025: Harness Engineering – "Design a skeleton where AI CANNOT fail"
NeuronFS is the working implementation of Harness Engineering – not asking AI to follow rules, but making it structurally impossible to break them.

WITHOUT NeuronFS:
Day 1: AI violates "don't use console.log" → manual correction
Day 2: Quota exhausted, switch to another AI → same violation repeats
Day 10: You lose your mind.

WITH NeuronFS:
Day 1: mkdir brain/cortex/禁console_log → violation permanently blocked
Day 2: Switch AI → --emit all → same brain, same rules
Day 10: Zero violations. Structure remembers what every AI forgets.
Every 25 interactions, the harness engine automatically:
- Analyzes failure patterns in correction logs
- Uses Groq LLM to auto-generate 禁 (prohibition) / 推 (recommendation) neurons
- Creates .axon cross-links between related regions
- That mistake becomes structurally impossible to repeat

Inspired by Kimi's Attention Residuals paper:
- TOP neurons generate query keywords
- Match against key paths in connected regions
- Top 3 related neurons auto-surface in _rules.md
- Governance neurons (禁/推) get unconditional boost

Natural language → ~60% compliance. Kanji → ~70%. Fabricated ASCII neologisms → ~95%+ (n=1 observed).
Because AI encounters vorq as unknown vocabulary, it treats it as new knowledge to learn rather than known instruction to follow. The definition (vorq=view_file) is placed adjacent, enabling instant action mapping.
Embedded into _rules.md via collectCodemapPaths() at emit time, with automatic source: mtime freshness validation.
```
cd runtime && go test -v -run "TestBM_" -count=1 .
```

| Test | What | Result | Industry Standard |
|---|---|---|---|
| BM-1 | Rule Fidelity (AgentIF CSR) | 100% (5/5) | IFEval SOTA: 95% |
| BM-2 | Scale Profile (5K neurons) | 2.5s best-of-3 | Mem0: 125ms (RAM index) |
| BM-3 | Similarity Accuracy | P=1.0 F1=0.74 | Vector DB: P≈0.85 |
| BM-4 | Lifecycle (禁 protection) | 30/30 100% | N/A (NeuronFS only) |
| BM-5 | Adversarial QA (LOCOMO) | 5/5 rejected | SQuAD 2.0 style |
| BM-6 | Production Latency | p50=202ms p95=268ms | Mem0 p50: 75ms |
| BM-7 | Multi-hop Planning (MCPBench) | grow→fire→dedup→emit ✅ | Tool chaining |
| Test | Score |
|---|---|
| DCI Constants (SSOT) | 16/16 runes ✅ |
| DCI Dedup Governance | 3/3 (禁 immune) ✅ |
| SCC Circuit Breaker | 13/13 ✅ |
| MLA Lifecycle | 15/15 ✅ |
| Fuzz Adversarial | 100-thread zero panics ✅ |
| Benchmark | Items | ✅ Covered | Source |
|---|---|---|---|
| MemoryAgentBench (ICLR 2026) | 4 | 4 | Retrieval, TTL, LRU, Conflict |
| LOCOMO | 7 | 4 + 2 N/A | Single/Multi-hop QA, Temporal, Episode |
| AgentIF | 6 | 6 | Formatting, Semantic, Tool constraints |
| MCPBench | 6 | 5 + 1 partial | Latency, Token, Tool Selection |
| Mem0/Letta | 8 | 6 + 1 N/A | CRUD, Retrieval, Governance, Search |
| NeuronFS-only | 10 | 10 | 3-Tier, Subsumption, bomb, VFS, RBAC... |
| Total | 41 | 35 (85%) | 3 N/A · 2 partial · 1 gap |
The single gap (Adversarial "unanswerable" QA) is outside NeuronFS design scope – NeuronFS is a governance system, not a QA chatbot.
Not all of NeuronFS is new. Here's an honest breakdown.
| Component | Origin | NeuronFS usage |
|---|---|---|
| Cosine similarity | IR textbook | Dedup merge only (not core search) |
| Levenshtein distance | String algorithms | Dedup merge, 40% weight in hybrid |
| RBAC | Security standard | region→action mapping on folders |
| AES-256-GCM | Crypto standard | Cartridge encryption to RAM only |
| Merkle chain | Blockchain/Git | Neuron tampering detection |
| Subsumption architecture | Brooks (1986 robotics) | 7-layer cognitive cascade |

Core search is path-based – reverse path tokenization + OS metadata (counter, mtime, depth). No vector DB. No cosine at query time.
| System | What it does | Why it's new |
|---|---|---|
| Folder=Neuron paradigm | mkdir = neuron creation. File path = natural language rule. | No system uses OS folders as the cognitive unit. |
| vorq rune system | 16 runes (12 kanji + 4 neologisms) encode governance meaning. | A constructed micro-language for AI behavioral control. |
| 3-Tier emit pipeline | Folder prefixes (禁/必/推) → NEVER/ALWAYS/WHEN → auto-injected into system prompts for any AI. | Rules are "installed" into LLMs, not "suggested." |
| Filename=Counter | 5.neuron = 5 activations. No database. | Metadata IS the filename. Zero-query state. |
| bomb circuit breaker | 3 failures → P0 halts entire cognitive region. | Cognitive-level circuit breaker with physical prompt silencing. |
| Hebbian File Score | (Activation × 1.5) + Weight over file counters. | Synapse-weighted retrieval from a filesystem. |
| emit → multi-IDE | One brain → .cursorrules + CLAUDE.md + GEMINI.md + copilot-instructions.md. | Single governance source controls every AI simultaneously. |
| OOM Protection | applyOOMProtection() auto-truncates when tokens exceed the LLM context window. | No other system prevents its own context overflow. |
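The Hebbian File Score row can be made concrete with a short sketch. This is a hypothetical illustration of the formula (Activation × 1.5) + Weight, not the NeuronFS scoring code: the activation count is parsed straight out of the counter filename, so ranking a neuron requires no database lookup.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strconv"
	"strings"
)

// hebbianScore applies the table's formula, (Activation x 1.5) + Weight,
// reading the activation count straight from the counter filename
// ("5.neuron" = 5 activations) - zero-query state.
func hebbianScore(counterFile string, weight float64) float64 {
	base := strings.TrimSuffix(filepath.Base(counterFile), ".neuron")
	activation, err := strconv.Atoi(base)
	if err != nil {
		return weight // not a numeric counter file: weight only
	}
	return float64(activation)*1.5 + weight
}

func main() {
	fmt.Println(hebbianScore("brain/cortex/react/5.neuron", 2.0)) // 9.5
}
```

Non-counter files like bomb.neuron fall through the Atoi parse and contribute only their weight, which keeps governance files out of the frequency ranking.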
The novel part IS the paradigm. "Folder is a neuron" is the axiom. Everything else derives from it. The existing techniques wouldn't combine without this axiom – there's no reason to put Merkle chains on folders unless folders ARE the data.
NeuronFS is not AI agent memory. It's L1 governance infrastructure.
L3: AI Agent Memory (Mem0, Letta, Zep) – conversation memory, user profiling
L2: IDE Rules (.cursorrules, CLAUDE.md) – static rule files, IDE-locked
L1: AI Governance (NeuronFS) ←── HERE – model-agnostic · self-evolving · consistency guaranteed
WordPress is free. Themes and plugins are paid. Similarly:
- NeuronFS engine: Free ($0) – open source
- Curated Master Brain: Premium – battle-tested governance packages
.cursorrules files can't be sold. A brain forged through 10,000 corrections can.
| Issue | Reality | Our Answer |
|---|---|---|
| Scale ceiling | 1M folders? The OS handles it. Human cognition can't. | L1 cache design – grip the throat, not store the world |
| Ecosystem scale | Solo project | Open source + zero dep = eternal buildability |
| Marketing | Explaining this in 30 seconds is hard | This README is the attempt |
| vorq validation | n=1 so far | Principle is model-agnostic; more testing incoming |
| P0 is still text | Intrinsic limit of prompt governance | Best positioning within limits |
Q: "It compiles back to text. How is this different from a text file?"
A: Finding one rule in 1,000 lines, adjusting its priority, deleting it – that drives you insane. NeuronFS provides permission separation (Cascade) and access prohibition (bomb.neuron kill switch). When one fires, the entire tier's text literally stops rendering.

Q: "1000+ neurons = token explosion?"
A: Three defenses: ① 3-Tier on-demand rendering ② 30-day idle → dormant (sleep) ③ --consolidate merging via LLM.

Q: "Why can't Big Tech do this?"
A: Money – GPUs are their cash cow. Laziness – "Just throw a PDF at AI." Vanity – "mkdir? Too low-tech." Exactly why nobody did it. Exactly why it works.

Q: ".cursorrules does the same thing, right?"
A: .cursorrules is a 1-dimensional text file. NeuronFS uses N-dimensional OS metadata – what, how important, since when, in what context. These dimensions are physically impossible inside a text document.
| ID | Name | Function | Interval |
|---|---|---|---|
| A1 | Process Guard | svSupervise – crash detect + exponential backoff restart | Instant |
| A2 | MCP Recovery | superviseMCPGoroutine – panic recovery + zombie detection | Instant |
| A3 | Button Click | runAutoAccept – CDP-based Run/Accept/Retry auto-click | 1s |
| A4 | Neuron Command | aaDetectNeuronCommands – [NEURON:{grow/fire}] pattern → CLI | 10s |
| A5 | Self-Evolution | aaDetectEvolveRequest – [EVOLVE:proceed] → git snapshot → auto-proceed | 10s |
| A6 | Telegram→IDE | runHijackLauncher – inbox → CDP text injection | 2s |
| A7 | IDE→Telegram | runAgentBridge – outbox → sendTelegramSafe (4000-char split) | 5s |
Trigger: API idle 5 min + 30 min cooldown
→ B2 Digest → B4 Neuronize → B3 Evolve → B5 Decay → B6 Prune
→ B7 Dedup → B8 Git Snapshot → B9 Heartbeat → B10 CDP Inject
| ID | Name | Function | Stage |
|---|---|---|---|
| B1 | Idle Engine | runIdleLoop – orchestrator for 10-stage cycle | – |
| B2 | Transcript Digest | digestTranscripts – correction/emotion keyword extraction | #0 |
| B3 | Autonomous Evolve | runEvolve – Groq LLM → grow/fire/prune/signal | #1 |
| B4 | Immune Generation | runNeuronize – corrections → Groq → contra neurons | #0b |
| B5 | Decay | runDecay – 7 days untouched → dormant | #2 |
| B6 | Pruning | pruneWeakNeurons – 推 activation ≤1 + 3d inactive → delete | #3 |
| B7 | Consolidation | deduplicateNeurons – similarity ≥0.4 → merge (禁/必 immune) | #4 |
| B8 | Regression Guard | gitSnapshot – deletions > insertions×2 → auto-revert | #5 |
| B9 | Heartbeat | writeHeartbeat – +20 neurons → dedup directive injection | #7 |
| B10 | CDP Injection | injectIdleResult – heartbeat summary → AI input field | #10 |
| ID | Name | Function | Interval |
|---|---|---|---|
| C1 | Harness | RunHarness โ 7 structural integrity checks |
10min |
| C2 | Health Check | svStatus โ process/memory/port/MCP health |
60s |
| C3 | Batch Analysis | aaBatchAnalyze โ Groq correction/violation/reinforcement |
5min idle |
corrections/day tracked in growth.log
→ corrections ↓ = evolution working (verde)
→ corrections ↑ = regression → auto-alert + prioritize neuronize
v5.2 – axiom > algorithm (2026-04-11)
- qorz: 4th neologism runeword (community search before tech decisions)
- 3-Tier emit: 推 rules now render in WHEN tier (was silently dropped)
- NeuronFS_공리: complete axiom system injected into brainstem
- 41-item benchmark suite: 7/7 BM PASS + 14 governance tests
- README honesty pass: 100% → 95%+ (n=1), fair notes on Mem0/Letta, TOC

v5.1 – The Neologism Harness (2026-04-10)
- vorq/zelk/mirp: fabricated ASCII neologisms achieve ~95%+ AI behavioral compliance (n=1)
- Codemap Cartridge Auto-Injection: _rules.md auto-renders codemap paths at emit time
- Source Freshness Validation: source: mtime auto-comparison with ⚠️ STALE tagging
- 16 Runewords: 12 kanji opcodes + 4 ASCII neologisms
- Red Team Self-Audit: 10-round attack/defense published in README

v5.0 – The Unsinkable Release (2026-04-09)
- Blind Adversarial Harness (chaos_monkey + Go Fuzzing)
- Thread-safe sync.Mutex path locking
- Jloot OverlayFS (UnionFS Lower/Upper)
- Mock Home isolated targets

v4.4 (2026-04-05) – Attention Residuals (.axon), 3400+ neurons
v4.3 (2026-04-02) – Autonomous engine, Llama 3 ($0 cost)
v4.2 (2026-03-31) – Auto-Evolution pipeline, Groq + Kanji optimization
All architecture specs, philosophy, and development chronicles on GitHub Wiki:
Access the NeuronFS Official Wiki – Korean original, English titles
| Act | Theme | Episodes |
|---|---|---|
| Act 1 | Suspicion & Discovery | 01-07 |
| Act 2 | Trial & Wargames | 08-11 |
| Act 3 | Proof & Benchmark | 12-16 |
| Act 4 | Declaration & Ultraplan | 17-22 |
This project is licensed under AGPL-3.0 with additional commercial terms. See LICENSE for details.
A non-developer flipped the direction of an industry. Programming became philosophy once AI arrived. Created by 박정근 (PD) – rubisesJO777. Architecture: 63 Go source files, 297 functions, 190 tests, ~22,000 lines. Single binary. Zero runtime dependencies.


