Search results for "notes"
Every meeting, every idea, every voice note — searchable by your AI. Open-source, privacy-first conversation memory layer.
Local knowledge graph for AI agents. Hybrid search + MCP server for Obsidian vaults.
EdgeCrab 🦀 A Super Powerful Personal Assistant inspired by NousHermes and OpenClaw — Rust-native, blazing-fast terminal UI, ReAct tool loop, multi-provider LLM support, ACP protocol, gateway adapters
Token-efficient navigation substrate for AI coding agents. Index code once and retrieve only the symbols you need.
Data transformation framework for AI. Ultra-performant, with incremental processing. 🌟 Star if you like it!
LeanKG: Stop Burning Tokens. Start Coding Lean.
Markdown and OFM SDK w/ MCP server that transforms your Obsidian vault into an intelligent knowledge system
Distributed AI/LLM for the people. Share compute privately or publicly to power your agents and chat.
A polyglot document intelligence framework with a Rust core. Extract text, metadata, images, and structured information from PDFs, Office documents, images, and 91+ formats. Available for Rust, Python
Fast, small, and fully autonomous AI personal assistant infrastructure, ANY OS, ANY PLATFORM — deploy anywhere, swap anything 🦀
BioMCP: Biomedical Model Context Protocol
An AI agent for teams, communities, and multi-user environments.
Your machine's AI brain. One 20MB binary gives every tool, script, and cron job shared AI memory + 114 API routes. Desktop app, CLI, Telegram — all connected. Rust-powered.
A high-performance, in-memory vector database written in Rust, designed for semantic search and top-k nearest neighbor queries in AI-driven applications, with binary file persistence for durability.
Frontier self-improving AI intern / coworker
Next-Gen AI-Aware Git
The graph-native hybrid retrieval engine for AI and GraphRAG. Graph + Vector + Full-Text in a single transactional engine.
Local AI anywhere, for everyone — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.
A self-hosted software factory — define goals, agents decompose and execute them through enforced pipelines, with humans in the loop where it matters.
⚡💾 Vectro — Compress LLM embeddings 🧠🚀 Save memory, speed up retrieval, and keep semantic accuracy 🎯✨ Lightning-fast quantization for Python + Mojo, vector DB friendly 🗄️, and perfect for RAG pip
High-performance crystal structure modeling and DFT/MD file preparation. Native desktop app fusing a Rust/C++ physics kernel, a GPU-accelerated Metal/Vulkan renderer, and an AI-driven command bus for
Enable AI agents with fast, local semantic memory to search and recall knowledge from text files without servers or complex setup.
