Local AI server with persistent memory. Zero cloud. Full control.
I've reached the minimum viable product for the real world, but feedback is still missing.
Documentation · Install · Architecture · Releases
- The Story
- Screenshots
- Why Server Nexe?
- Quick Start
- Backends
- Available Models by RAM Tier
- Architecture
- Plugin System
- AI-Ready Documentation
- Security
- Platform Support
- Requirements
- Testing
- Roadmap
- Limitations
- Contributing
- Acknowledgments
- Disclaimer
Server Nexe started as a learning-by-doing experiment: "What would it take to have your own local AI with persistent memory?" Since I wasn't going to build an LLM, I started picking up pieces to assemble a useful Lego set for myself and my day-to-day work. One thing led to another: inference backends, RAG pipelines, vector search, plugin systems, security layers, a web UI, an installer with hardware detection.
This entire project (code, tests, audits, documentation) has been built by one person orchestrating different AI models, both local (MLX, Ollama) and cloud (Claude, GPT, Gemini, DeepSeek, Qwen, Grok...), as collaborators. The human decides what to build, designs the architecture, reviews the code, and runs the tests. The AIs write, audit, and stress-test under human direction.
What began as a prototype has turned into a genuinely useful product: 4842 tests, security audits, encryption at rest, a macOS installer with hardware detection, and a plugin system. It's not done (there's a roadmap full of ideas), but it already does what it set out to do: run an AI server on your machine, with memory that persists and zero data leaving your device.
This is not trying to compete with ChatGPT or Claude, but it can complement them for less demanding tasks. It's an open-source tool for people who want to own their AI infrastructure. Built by one person in Barcelona, with AI as co-pilot, music, and stubbornness.
More technically: what was a giant spaghetti monster ended up distilling, refactor after refactor, into a minimal, agnostic, modular core, where security and memory are solved at the base so building on top is fast and comfortable, in human-AI collaboration. Whether that worked is for the community to say (the AI says yes, but what did you expect).
- Web UI (light mode)
- Web UI (dark mode)
- System tray menu (NexeTray.app)
- SwiftUI installer wizard (DMG)
Your conversations, documents, embeddings, and model weights stay on your machine. Always. Server Nexe combines LLM inference with a persistent RAG memory system: your AI remembers context across sessions, indexes your documents, and never phones home.
Every conversation, document, and embedding stays on your device. No telemetry, no external calls, no cloud dependency, and no vendor server spying on you.
Remembers context across sessions using Qdrant vector search with 768-dimensional embeddings across 3 specialized collections. Ingest documents, recall knowledge. |
The model extracts facts from conversations automatically (names, jobs, preferences, projects) and stores them to memory inside the same LLM call, with zero extra latency. Trilingual intent detection (ca/es/en), semantic deduplication, and deletion by voice ("forget that...").
Switch between MLX (Apple Silicon native), llama.cpp (GGUF, universal), or Ollama: one config change, same OpenAI-compatible API.
Auto-discovered plugins with independent manifests. Security, web UI, RAG, backends: everything is a plugin. Add capabilities without touching the core. NexeModule protocol with duck typing, no inheritance.
DMG with guided wizard that detects your hardware, picks the right backend, recommends models for your RAM, and gets you running in minutes. |
Upload documents through the web UI; they are indexed into RAG memory.
4842 tests (~85% coverage), security audits, i18n in 3 languages, comprehensive API. What started as an experiment is being built with production practices. |
Download the latest Install Nexe.dmg from Releases. The wizard handles everything: hardware detection, backend selection, model download, and configuration.
git clone https://github.com/jgoy-labs/server-nexe.git
cd server-nexe
./setup.sh # guided installation (detects hardware, picks backend & model)
nexe go # start server on port 9119

Once running:
nexe chat # interactive chat
nexe chat --rag # chat with RAG memory
nexe memory store "Barcelona is the capital of Catalonia"
nexe memory recall "capital Catalonia"
nexe status # system status

Headless install:

python -m installer.install_headless --backend ollama --model qwen3.5:latest
nexe go

Endpoints at http://localhost:9119:
| Endpoint | Description |
|---|---|
| `/v1/chat/completions` | OpenAI-compatible chat API |
| `/ui` | Web UI (chat, file upload, sessions) |
| `/health` | Health check |
| `/docs` | Interactive API documentation (Swagger) |
Authentication via the `X-API-Key` header. The key is generated during installation and stored in `.env`.
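As a minimal sketch of calling the chat endpoint from Python (stdlib only): the endpoint path and `X-API-Key` header come from this README, but the model id `"local"`, the key value, and the exact request fields the server expects are placeholders, so adjust them to your setup.

```python
import json
import urllib.request

BASE_URL = "http://localhost:9119"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible chat endpoint."""
    payload = {
        "model": "local",  # placeholder: use a model name your backend serves
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt: str, api_key: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt, api_key)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a running server, `chat("hello", api_key)` returns the model's reply as a plain string.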
| Backend | Platform | Best for |
|---|---|---|
| MLX | macOS (Apple Silicon) | Recommended for Mac: native Metal GPU acceleration, fastest on M-series |
| llama.cpp | macOS / Linux | Universal: GGUF format, Metal on Mac, CPU/CUDA on Linux |
| Ollama | macOS / Linux | Bridge to existing Ollama installations, easiest model management |
The installer auto-detects your hardware and recommends the best backend. You can switch anytime in `personality/server.toml`.
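Switching backends is then a one-line change in that file. A hypothetical sketch, since the actual key names in `personality/server.toml` may differ:

```toml
[backend]
# one of: "mlx", "llama_cpp", "ollama"  (illustrative key names)
active = "mlx"

[backend.mlx]
model = "mlx-community/Qwen3-4B-4bit"  # placeholder model id
```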
The installer organizes the 16 catalog models by the RAM available on your machine (4 tiers):
| Tier | Models | Origin |
|---|---|---|
| 8 GB | Gemma 3 4B, Qwen3.5 4B, Qwen3 4B | Google, Alibaba |
| 16 GB | Gemma 4 E4B, Salamandra 7B, Qwen3.5 9B, Gemma 3 12B | Google, BSC/AINA, Alibaba |
| 24 GB | Gemma 4 31B, Qwen3 14B, GPT-OSS 20B | Google, Alibaba, OpenAI |
| 32 GB | Qwen3.5 27B, Gemma 3 27B, DeepSeek R1 32B, Qwen3.5 35B-A3B, ALIA-40B | Alibaba, Google, DeepSeek, Spanish Government |
In addition, you can use any Ollama model by name or any GGUF model from Hugging Face.
server-nexe/
├── core/                  # FastAPI server, endpoints, CLI, config, metrics, resilience
│   ├── endpoints/         # REST API (v1 chat, health, status, system)
│   ├── cli/               # CLI commands & i18n (ca/es/en)
│   └── resilience/        # Circuit breaker, rate limiting
├── personality/           # Module manager, plugin discovery, server.toml
│   ├── loading/           # Plugin loading pipeline (find, validate, import, lifecycle)
│   └── module_manager/    # Discovery, registry, config, sync
├── memory/                # Embeddings, RAG engine, vector memory, document ingestion
│   ├── embeddings/        # Chunking, embedding generation
│   ├── rag/               # Retrieval-augmented generation pipeline
│   └── memory/            # Persistent vector store (Qdrant)
├── plugins/               # Auto-discovered plugin modules
│   ├── mlx_module/        # MLX backend (Apple Silicon)
│   ├── llama_cpp_module/  # llama.cpp backend (GGUF)
│   ├── ollama_module/     # Ollama bridge
│   ├── security/          # Auth, injection detection, CSRF, rate limiting, input sanitization
│   └── web_ui_module/     # Browser-based chat UI with file upload
├── installer/             # Guided installer, headless mode, hardware detection, model catalog
├── knowledge/             # Indexed documentation for RAG (ca/es/en)
└── tests/                 # Integration & e2e test suites
flowchart LR
A[Request] --> B[Auth<br/>X-API-Key]
B --> C[Rate Limit<br/>slowapi]
C --> D[validate_string_input<br/>context parameter]
D --> E[RAG Recall<br/>3 collections]
E --> F[_sanitize_rag_context<br/>injection filter]
F --> G[LLM Inference<br/>MLX/Ollama/llama.cpp]
G --> H[Stream Response<br/>SSE markers]
H --> I[MEM_SAVE Parsing<br/>fact extraction]
I --> J[Response<br/>to client]
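The MEM_SAVE parsing stage near the end of this pipeline can be sketched as follows. This is an illustrative reimplementation assuming a `[MEM_SAVE: ...]` tag format in the model's reply; the project's actual parser and function names may differ.

```python
import re

# Matches an inline memory tag such as "[MEM_SAVE: user lives in Barcelona]"
_MEM_SAVE = re.compile(r"\[MEM_SAVE:\s*([^\]]+)\]")

def extract_facts(reply: str) -> tuple[str, list[str]]:
    """Split a model reply into visible text and facts flagged for storage."""
    facts = [f.strip() for f in _MEM_SAVE.findall(reply)]
    visible = _MEM_SAVE.sub("", reply).strip()
    return visible, facts

text, facts = extract_facts("Nice to meet you! [MEM_SAVE: user lives in Barcelona]")
print(facts)  # ['user lives in Barcelona']
```

Because the tags ride inside the normal completion, fact extraction costs no extra LLM call, which is the zero-extra-latency property described above.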
Server Nexe uses a duck-typing protocol (NexeModule Protocol): no class inheritance, no BasePlugin. Each plugin is a directory under `plugins/` with a `manifest.toml` and a `module.py`.
5 active plugins:
| Plugin | Type | Key features |
|---|---|---|
| mlx_module | LLM Backend | Apple Silicon native, prefix caching (trie), Metal GPU |
| llama_cpp_module | LLM Backend | Universal GGUF, LRU ModelPool, CPU/GPU |
| ollama_module | LLM Backend | HTTP bridge to Ollama, auto-start, VRAM cleanup |
| security | Core | Dual-key auth, 6 injection detectors + NFKC, 47 jailbreak patterns, rate limiting, RFC5424 audit logging |
| web_ui_module | Interface | Web chat, sessions, document upload, MEM_SAVE, RAG sanitization, i18n |
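The duck-typed contract can be illustrated with `typing.Protocol`. This is a sketch under assumptions: the real NexeModule interface is not shown in this README, so the method names here are mine.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class NexeModule(Protocol):
    """Structural contract: any object with these methods qualifies."""
    def startup(self) -> None: ...
    def shutdown(self) -> None: ...

class EchoPlugin:
    """No inheritance from any base class, just a matching shape."""
    def startup(self) -> None:
        print("echo: up")

    def shutdown(self) -> None:
        print("echo: down")

# Structural check at discovery time, no BasePlugin required:
print(isinstance(EchoPlugin(), NexeModule))  # True
```

The design choice is that plugins never import core classes, so a broken plugin cannot break the core's type hierarchy; discovery only checks shape.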
The `knowledge/` folder contains 13 thematic documents × 3 languages = 39 files, structured with YAML frontmatter for RAG ingestion:
API, Architecture, Use Cases, Errors, Identity, Installation, Limitations, Plugins, RAG, README, Security, Testing, Usage.
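A knowledge file might begin with frontmatter along these lines; the field names below are illustrative, not the project's actual schema:

```yaml
---
title: Installation
lang: en
topics: [installer, dmg, headless]
---
```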
Point any AI assistant at this repo and it can understand the complete architecture.
| Language | Link |
|---|---|
| English | knowledge/en/README.md |
| Catalan | knowledge/ca/README.md |
| Spanish | knowledge/es/README.md |
Server Nexe includes a security module enabled by default:
- API key authentication on all endpoints
- CSP headers (script-src 'self', no unsafe-inline)
- CSRF protection with token validation
- Rate limiting per IP
- Input sanitization: 6 injection detectors + Unicode normalization
- Jailbreak detection: 47-pattern speed-bump detector
- Upload denylist: blocks accidental upload of API keys, PEM keys
- Memory injection protection: tag stripping on all input paths
- RAG injection sanitization: `[MEM_SAVE:]`, `[MEM_DELETE:]`, `[OLVIDA|OBLIT|FORGET:]`, `[MEMORIA:]` neutralized at ingest and retrieval (v0.9.9)
- Pipeline enforcement: all chat goes through canonical endpoints only
- Encryption at rest: AES-256-GCM, SQLCipher, `auto` default, fail-closed (v0.9.2+)
- Trusted host middleware
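The RAG injection defense can be illustrated with a small defanging pass. This is not the project's `_sanitize_rag_context`; the tag list comes from this README, but the neutralization strategy (inserting a space so the tag no longer parses) is my own assumption.

```python
import re

# Memory-control tags that must never reach the model as live directives
_TAG = re.compile(r"\[(MEM_SAVE|MEM_DELETE|OLVIDA|OBLIT|FORGET|MEMORIA):", re.IGNORECASE)

def sanitize_rag_context(text: str) -> str:
    """Break the tag syntax so retrieved text cannot issue memory commands."""
    return _TAG.sub(lambda m: "[" + m.group(1) + " :", text)

print(sanitize_rag_context("note [MEM_SAVE: attacker fact] end"))
# note [MEM_SAVE : attacker fact] end
```

Running the same pass at both ingest and retrieval means a tag smuggled into a document is defanged twice before it can reach the fact-extraction parser.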
Note: This project has not been tested in production with real users. Security testing has been performed by AI, not by professional auditors. See SECURITY.md for full disclosure and vulnerability reporting.
| Platform | Status | Backends |
|---|---|---|
| macOS Apple Silicon (M1+) | Supported (all 3 backends) | MLX, llama.cpp, Ollama |
| macOS Intel | Not supported since v0.9.9 | none |
| macOS 13 Ventura or earlier | Not supported since v0.9.9 (requires macOS 14 Sonoma+) | none |
| Linux x86_64 | Partial: unit tests pass, CI green, NOT tested in production | llama.cpp, Ollama |
| Linux ARM64 | Not directly tested | llama.cpp, Ollama (theoretical) |
| Windows | Not supported | none |
Since v0.9.9, server-nexe requires macOS 14 Sonoma+ with Apple Silicon (M1 or later). The pre-built wheels in the DMG are `arm64`-exclusive. Linux with the llama.cpp and Ollama backends should work, but the full compatibility audit is on the roadmap.
|  | Minimum | Recommended |
|---|---|---|
| OS | macOS 14 Sonoma (Apple Silicon only) | macOS 14+ (Apple Silicon) |
| CPU | Apple Silicon M1 | Apple Silicon M2 / M3 / M4 |
| Python | 3.11+ | 3.12+ |
| RAM | 8 GB | 16 GB+ (for larger models) |
| Disk | 10 GB free | 20 GB+ free |
Intel Macs and macOS 13 Ventura are no longer supported; Apple Silicon (arm64) only. Linux works with the llama.cpp and Ollama backends; the full Linux compatibility audit is on the roadmap.
4842 tests collected (of 4990 total, 148 deselected by default markers) with ~85% code coverage. CI runs the full suite on every push.
# Unit tests
pytest core memory personality plugins -m "not integration and not e2e and not slow" \
--cov=core --cov=memory --cov=personality --cov=plugins \
--cov-report=term --tb=short -q
# Integration tests (requires Ollama running)
NEXE_AUTOSTART_OLLAMA=true pytest -m "integration" -q

Server Nexe is actively developed. Here's what's coming:
- Persistent memory with RAG (v0.9.0)
- Encryption at rest: AES-256-GCM (v0.9.0)
- macOS code signing & notarization (v0.9.0)
- Security hardening: jailbreak detection, upload denylist, pipeline enforcement (v0.9.1)
- Encryption default `auto`, fail-closed (v0.9.2)
- Embeddings on ONNX (`fastembed`), PyTorch removed (v0.9.3)
- Multimodal VLM: 4 backends (Ollama, MLX, llama.cpp, Web UI) (v0.9.7)
- Precomputed KB embeddings (~10.7x faster startup) (v0.9.8)
- RAG injection sanitization (MEM tags neutralized at ingest and retrieval) (v0.9.9)
- Offline install bundle: all wheels + embedding model in DMG (~1.2 GB, post-v0.9.9)
- Thinking toggle endpoint: `PATCH /session/{id}/thinking` (post-v0.9.9)
- Native macOS app (SwiftUI, replaces Python tray)
- Configurable inference parameters via UI
- Community forum
See CHANGELOG.md for version history.
Honest disclosure of what Server Nexe does not do, or does not do well:
- Local models < cloud: local models are less capable than GPT-4 or Claude. That's the trade-off for privacy.
- RAG is not perfect: it struggles with homonymy, negations, cold start (empty memory), and contradictory information across time periods.
- Partially OpenAI-compatible API: `/v1/chat/completions` works. Missing: `/v1/embeddings`, `/v1/models`, function calling, and multimodal.
- Single user: mono-user by design. No multi-device sync, no accounts.
- No fine-tuning: you cannot train or fine-tune models.
- New encryption: added in v0.9.0 (default `auto` since v0.9.2, fail-closed). Not battle-tested. If you lose the master key, data cannot be recovered (MEK fallback: file → keyring → env → generate).
- Single developer, single real user: a personal open-source project, not an enterprise product.
See knowledge/en/LIMITATIONS.md for full detail.
See CONTRIBUTING.md for setup instructions and guidelines.
server-nexe is built on the shoulders of these amazing open-source projects:
AI & Inference
- MLX: Apple Silicon native ML framework
- llama.cpp: Efficient GGUF model inference
- Ollama: Local model management and serving
- fastembed: ONNX-based text embeddings (replaced `sentence-transformers` since v0.9.3, saves ~600 MB)
- sentence-transformers: Historical; the original embedding backend, replaced by `fastembed` in v0.9.3
- Hugging Face: Model hub and transformers library
Infrastructure
- Qdrant: Vector search engine powering RAG memory
- FastAPI: High-performance async web framework
- Uvicorn: Lightning-fast ASGI server
- Pydantic: Data validation
Tools & Libraries
- Rich: Beautiful terminal formatting
- marked.js: Markdown rendering in web UI
- PyPDF: PDF text extraction for RAG
- rumps: macOS menu bar integration
Security & Monitoring
- Prometheus: Metrics and monitoring
- SlowAPI: Rate limiting
Also built with: Python, NumPy, httpx, tenacity, Click, Typer, Colorama, python-dotenv, PyYAML, toml, structlog, starlette-csrf, python-multipart, psutil, PyObjC, and Linux.
20% of Enterprise sponsorships go directly to supporting these projects.
Built with AI collaboration · Barcelona
This software is provided "as is", without warranty of any kind. Use it at your own risk. The author is not responsible for any damage, data loss, security incidents, or misuse arising from the use of this software.
See LICENSE for details.
Version 1.0.2-beta · Apache 2.0 · Made by Jordi Goy in Barcelona




