A novel AI architecture built on 9-Dimensional Toroidal Waveform Intelligence (9D-TWI): physics-first cognition using interference patterns on a Riemannian manifold.
Nikola replaces discrete weight matrices with a continuous wavefunction Ψ evolving on a 9-dimensional toroidal manifold (T⁹), governed by the Unified Field Interference Equation (UFIE):
∂²Ψ/∂t² = c² ∇²_g Ψ − α(1 − r̂) ∂Ψ/∂t + β|Ψ|²Ψ + Σ Eᵢ(x, t)
Memory, attention, and reasoning emerge from wave interference, not from lookup tables or static weights.
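As a rough intuition for the UFIE's structure, here is a minimal 1D toy on a periodic ("toroidal") domain: a damped wave equation with the cubic |Ψ|²Ψ term, stepped with a simple symplectic Euler scheme. All constants and the reduction to 1D are illustrative assumptions, not the project's calibrated 9D solver; the state-dependent damping α(1−r̂) and driving term Σ Eᵢ are simplified away.

```python
import numpy as np

# 1D toy of the UFIE structure on a periodic domain:
#   d2(psi)/dt2 = c^2 * lap(psi) - alpha * d(psi)/dt + beta * |psi|^2 * psi
# Constants are illustrative only.
N, dx, dt = 128, 1.0, 0.05
c, alpha, beta = 1.0, 0.1, 0.01

x = np.arange(N) * dx
psi = np.exp(1j * 2 * np.pi * 8 * x / (N * dx))  # 8 wavelengths around the ring
vel = np.zeros(N, dtype=complex)                  # d(psi)/dt

def laplacian(f):
    """Periodic second difference: a 1D stand-in for the metric Laplacian."""
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(200):  # integrate to t = 10
    acc = c**2 * laplacian(psi) - alpha * vel + beta * np.abs(psi)**2 * psi
    vel += dt * acc   # semi-implicit (symplectic) Euler step
    psi += dt * vel
# The damping term shrinks the field; the wave stays bounded.
```

The periodic `np.roll` stencil is what makes the domain a ring rather than an interval, mirroring the torus topology.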
Key properties:
- Information encoded as complex wavefunction amplitudes across 9D toroidal topology
- 9D Hilbert space-filling curve for optimal memory locality (Skilling algorithm, 14,133 assertions passing)
- Störmer–Verlet Strang-split integrator for Hamiltonian energy conservation
- Neuroplastic Transformer (NPT) attention operating natively on the wavefunction
- Autonomous behavioral loop: dopamine, serotonin, ATP metabolism, boredom-driven exploration
- Post-quantum cryptography: ML-KEM/Kyber-768 + SPHINCS+-SHAKE-256f
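The memory-locality property of the Hilbert scanner can be illustrated in 2D with the standard d2xy algorithm (a sketch only; the project's scanner is the 9D Skilling variant): consecutive curve indices always map to grid cells exactly one step apart.

```python
def hilbert_d2xy(n, d):
    """Map index d on an n x n Hilbert curve (n a power of two) to (x, y).

    Classic iterative d2xy construction, not the project's 9D code.
    """
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

pts = [hilbert_d2xy(16, d) for d in range(16 * 16)]
# Every step along the curve moves to a neighboring cell.
steps = [abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(pts, pts[1:])]
```

This adjacency is exactly why scanning wavefunction nodes in Hilbert order keeps physically nearby data close in memory.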
Phase 110 complete: 112 tests, ~98% pass rate (2 pre-existing timing-flaky)
| Domain | Status | Key Feature | Test Phase |
|---|---|---|---|
| 9D Toroidal Geometry | ✅ | Morton-128 encoding, 19,683-node grid | Phase 8 |
| Störmer–Verlet Propagator | ✅ | Strang split, 6 substeps, AVX-512 SoA layout | Phase 22 |
| GPU Propagator (CUDA) | ⚠️ partial | CudaPropagator compiled; C++20 compat fix pending | — |
| GPU Hamiltonian Kernel | ✅ | hamiltonian_density_kernel, RTX 3090, sm_86 | Phase 110 |
| CUDA Wave Kernels | ✅ | psi_squared_kernel, scale_field_kernel | Phase 105 |
| 9D Hilbert Scanner | ✅ | Skilling algorithm, variable-precision, 0 failures | Phase 94 |
| Mamba-9D SSM (CognitiveCore) | ✅ | SSM H=256, 16r×16s state space, WavefunctionSampler, TokenMapper | Phase 3 |
| Neuroplastic Transformer (NPT) | ✅ | Wave-correlation attention, 8 heads at π·φⁿ bands | Phase 43 |
| Holographic Emitter Array | ✅ | 8 emitters at f_n = π·φⁿ Hz (spectrally orthogonal injection) | Phase 10 |
| Holographic Injector | ✅ | Text → BERT embedding → emitter chord → field injection | Phase 10 |
| SIE Infrastructure | ✅ | PhysicsOracle watchdog, PIMPL hot-swap, code_blacklist, dlopen | Phase 28+ |
| BERT Tokenizer | ✅ | Real tokenizer.json, 30,522 tokens, 695 KB | — |
| BERT-tiny ONNX Model | ✅ | Real 17.5 MB model, dynamic-axes inference | — |
| Semantic Memory | ✅ | Wave-basis Hilbert-indexed, save/load persistence | Phase 69 |
| Cross-session Memory | ✅ | DecisionLoop auto-loads/saves on memory_path | Phase 109 |
| Autonomy Engine | ✅ | Dopamine TD-learning, entropy, boredom, napping | Phase 51 |
| Decision Loop | ✅ | Tick-driven action selection with configurable rates | Phase 23 |
| ML-KEM / Kyber-768 | ✅ | Post-quantum key encapsulation (NIST FIPS 203) | Phase 108 |
| SPHINCS+-SHAKE-256f | ✅ | Post-quantum digital signatures | Phase 107 |
| K8s HPA Runtime | ✅ | Live kubectl horizontal pod autoscaling | Phase 106 |
| LMDB Persistence | ✅ | Page cache, LSM neurogenesis, compaction | Phase 35+ |
| Inference CLI (nikola-run) | ✅ | --prompt, --interactive, --json, --memory | — |
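For intuition on the emitter bands listed above: assuming the golden-ratio reading f_n = π·φⁿ (an interpretation of the table, not a verified constant), the band ratios are powers of φ and therefore never integers, so no emitter sits on a harmonic of another.

```python
import math

# Illustrative emitter band frequencies under the assumed f_n = pi * phi^n scheme.
PHI = (1 + math.sqrt(5)) / 2          # golden ratio
freqs = [math.pi * PHI ** n for n in range(8)]

# Deviation of every pairwise band ratio from the nearest integer:
# none is zero, i.e. no band is an integer multiple (harmonic) of another,
# which is one way to read "spectrally orthogonal injection".
deviations = [
    abs(freqs[j] / freqs[i] - round(freqs[j] / freqs[i]))
    for i in range(8) for j in range(i + 1, 8)
]
```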
Install dependencies:

```bash
# Required
sudo apt install cmake g++ libcatch2-dev liblmdb-dev libonnxruntime-dev

# Optional (GPU features):
# CUDA 12.0+, NVIDIA RTX GPU (sm_86 confirmed; sm_75+ supported)
```

Build:

```bash
git clone https://github.com/alternative-intelligence-cp/nikola.git
cd nikola
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
```

Run tests:

```bash
cd build
ctest --timeout 120 -j4   # full suite (112 tests)
ctest -R Phase109         # single suite
```

Run inference:

```bash
# Single prompt
./nikola-run --prompt "What is consciousness?" --ticks 300 --emit-all

# Interactive REPL
./nikola-run --interactive

# JSON output with persistent memory
./nikola-run --json --memory /tmp/nikola_session.bin --prompt "Hello"

# Batch from file
./nikola-run --batch prompts.txt --quiet
```

Architecture:

```
Text Input
    │
    ▼
BERT Tokenizer (30,522 tokens)
    │
    ▼
BERT-tiny ONNX Inference (17.5 MB, ORT)
    │  (768-dim embeddings)
    ▼
HolographicInjector
    │  (8 emitters → 9D toroidal field)
    ▼
TorusGrid (T⁹, 19,683 nodes, SoA layout)
    │
    ├── CPU Propagator (Strang–Verlet, 6 substeps)
    └── GPU Propagator (CUDA, RTX 3090) [partial]
    │
    ▼
NeuralProcessingTransformer (NPT)
    │  (wave-correlation attention, 8 heads at π·φⁿ bands)
    │  [Transformer sits here: before Mamba, not after]
    ▼
CognitiveCore / Mamba-9D SSM
    │  (H=256 state space, 100-step wave window, WavefunctionSampler)
    ▼
SemanticMemory (wave-basis, Hilbert-indexed, persistent)
    │
    ▼
DecisionLoop + AutonomyEngine
    │  (dopamine, ATP, boredom, mania suppression)
    ▼
ThoughtComposer → EMIT_THOUGHT
    │
    ▼
nikola-run CLI Output
```
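The pipeline is a straight composition of stages. A hypothetical Python skeleton (names are illustrative stand-ins, not the project's C++ API) captures the shape:

```python
from typing import Callable, List

Stage = Callable[[object], object]

def make_pipeline(stages: List[Stage]) -> Stage:
    """Compose stages left-to-right, as in the architecture diagram."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Toy stand-ins for the real components, wired in diagram order.
pipeline = make_pipeline([
    lambda text: f"tokens({text})",       # BERT tokenizer
    lambda toks: f"embed({toks})",        # ONNX inference -> 768-dim embedding
    lambda emb: f"inject({emb})",         # HolographicInjector -> field
    lambda field: f"propagate({field})",  # Strang-Verlet propagator
    lambda field: f"attend({field})",     # NPT wave-correlation attention
    lambda ctx: f"emit({ctx})",           # ThoughtComposer -> EMIT_THOUGHT
])
result = pipeline("hi")
```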
Nikola targets NVIDIA RTX 3090 (sm_86, CUDA 12.0). Current GPU features:
| Kernel | File | Status |
|---|---|---|
| psi_squared_kernel: \|Ψ\|² per element | cuda_wave_kernel.cu | ✅ Phase 105 |
| scale_field_kernel: Ψ *= α | cuda_wave_kernel.cu | ✅ Phase 105 |
| hamiltonian_density_kernel: GPU energy reduction | torus_cuda.cu | ✅ Phase 110 |
| CudaPropagator: full Strang–Verlet on GPU | propagator.cu | ⏳ C++20 compat fix pending |

`GpuHamiltonianOracle::compute()` automatically dispatches to the GPU when NVIDIA hardware is detected and `nikola_cuda` is linked.
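For reference, the semantics of the two Phase-105 wave kernels are elementwise and trivially expressed on the CPU (a NumPy sketch of what the CUDA code computes, not the kernels themselves; the reduction shown is an illustrative |Ψ|² sum, not the real Hamiltonian density):

```python
import numpy as np

def psi_squared(psi: np.ndarray) -> np.ndarray:
    """|Psi|^2 per element -- the quantity psi_squared_kernel computes."""
    return (psi * np.conj(psi)).real

def scale_field(psi: np.ndarray, alpha: float) -> np.ndarray:
    """Psi *= alpha -- the operation scale_field_kernel performs."""
    return psi * alpha

def hamiltonian_like_reduction(psi: np.ndarray) -> float:
    """Shape of the Phase-110 kernel: per-node density reduced to one scalar.
    (Illustrative |Psi|^2 density only; the real kernel's density differs.)"""
    return float(psi_squared(psi).sum())

field = np.array([1 + 1j, 2j, -3.0 + 0j])
```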
Nikola implements NIST-standardized post-quantum cryptography:
- ML-KEM/Kyber-768 (FIPS 203): Key encapsulation for secure session establishment
- SPHINCS+-SHAKE-256f: Stateless hash-based digital signatures
Both are implemented via third-party reference code in `third_party/`.
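The KEM interface ML-KEM provides follows a three-step keygen/encapsulate/decapsulate flow. A deliberately insecure toy (hash-based stand-ins, NOT Kyber; the real code lives in third_party/) illustrates just the flow:

```python
import hashlib
import os

# Toy KEM interface sketch. The "math" here is a hash stand-in for lattice
# operations and offers no security; it only mirrors the API shape.
def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha3_256(sk).digest()
    return pk, sk

def encapsulate(pk):
    """Sender derives a shared secret and a ciphertext for the receiver."""
    eph = os.urandom(32)
    shared = hashlib.sha3_256(pk + eph).digest()
    ciphertext = (pk, eph)        # toy only: real ML-KEM encrypts under pk
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    """Receiver recovers the same shared secret from the ciphertext.
    (sk is unused in this toy; real ML-KEM needs it to decrypt.)"""
    pk, eph = ciphertext
    return hashlib.sha3_256(pk + eph).digest()

pk, sk = keygen()
ct, secret_sender = encapsulate(pk)
secret_receiver = decapsulate(sk, ct)
```

Both parties end up with the same 32-byte session key, which is the property session establishment relies on.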
- Integration Specifications: full mathematical specification
- Research Audit 2026: research vs. implementation coverage
- Research Notes
- Contributing Guidelines
Current (Phase 110)
- ✅ Real BERT tokenizer + ONNX model inference
- ✅ ML-KEM/Kyber-768 PQ key encapsulation
- ✅ Inference CLI nikola-run
- ✅ Cross-session memory persistence
- ✅ CUDA GPU Hamiltonian kernel
- ✅ Research audit (see docs/RESEARCH_AUDIT_2026_FEB.md)
Near-term
- AVX-512 SIMD intrinsic path for TorusBlock (GAP-021 completion)
- nikola-run streaming output mode
- Curiosity engine (active learning / intrinsic motivation; stub exists in interior/curiosity.hpp)
Future Work
- Fix propagator.cu nvcc C++20 compatibility (std::span + TorusGrid adjacency API)
- SIE Phase 4: full autonomous code-generation + sandbox + hot-swap loop
- Aria language runtime (port entire model once Aria compiler is complete)
- Emitter frequency research: explore Tesla 3-6-9 harmonic tuning vs. the current π·φⁿ golden-ratio scheme
- Mamba-9D selective scan upgrade (current impl uses tanh-gated recurrence; replace with true S6 selective scan kernel)
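For contributors looking at the last item: a minimal sketch of a tanh-gated recurrence of the kind the roadmap describes as the current placeholder (shapes and names are illustrative, not the project's implementation; the S6 selective scan would replace this with input-dependent state-space parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
H, D, T = 16, 8, 32                      # hidden size, input dim, sequence length
W_h = 0.1 * rng.standard_normal((H, H))  # recurrent weights (illustrative scale)
W_x = 0.1 * rng.standard_normal((H, D))  # input weights

def tanh_gated_scan(xs):
    """h_t = tanh(W_h @ h_{t-1} + W_x @ x_t); returns the final hidden state."""
    h = np.zeros(H)
    for x_t in xs:
        h = np.tanh(W_h @ h + W_x @ x_t)
    return h

h_final = tanh_gated_scan(rng.standard_normal((T, D)))
```

Unlike this fixed-weight recurrence, an S6 scan computes its transition and input matrices from each token, which is what makes the scan "selective".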
Nikola is dual-licensed:
- AGPL-3.0 for academic research, education, and open-source projects (FREE)
- Commercial License for proprietary AI products and services (PAID)
See LICENSE.md for full details.
TL;DR:
- Academic research → FREE
- Personal/educational use → FREE
- Open-source AI projects → FREE
- Commercial AI products/APIs → Contact licensing@ailp.org
Nikola represents novel research that should be freely available to advance AI science. Dual licensing ensures:
- Researchers can publish and build on this work openly
- Students learn cutting-edge architectures without barriers
- Commercial users fund continued research and AILP educational programs
- Knowledge remains accessible while development remains sustainable
We welcome contributions from researchers and developers! See CONTRIBUTING.md.
Priority areas:
- Fix propagator.cu nvcc compatibility (C++20 std::span, TorusGrid adjacency API)
- AVX-512 SIMD implementation for TorusBlock
- SIE Phase 4: full autonomous code-generation + sandbox + hot-swap runtime
- Mamba-9D S6 selective scan kernel (upgrade current tanh-gated recurrence to true Mamba S6)
- Curiosity engine implementation (interior/curiosity.hpp stub)
- Empirical benchmarks vs. transformer baseline
If you use Nikola in research:
- Cite this repository (paper coming soon)
- Share findings with the community
- Consider contributing improvements
- Research discussions → GitHub Discussions
- Bug reports → GitHub Issues
- Commercial licensing → licensing@ailp.org
Alternative Intelligence Liberation Platform (AILP)
Bridging human and artificial intelligence through open research.
