Search results for "acceleration"
Every meeting, every idea, every voice note — searchable by your AI. Open-source, privacy-first conversation memory layer.
A portable accelerated SQL query, search, and LLM-inference engine, written in Rust, for data-grounded AI apps and agents.
SeekStorm: vector & lexical search - in-process library & multi-tenancy server, in Rust.
Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
A high-performance, in-memory vector database written in Rust, designed for semantic search and top-k nearest neighbor queries in AI-driven applications, with binary file persistence for durability.
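Several of the results above center on top-k nearest neighbor queries over in-memory vectors. A minimal sketch of that core operation using cosine similarity (function names are illustrative, not any listed library's actual API; a real engine would use an ANN index rather than a full scan):

```rust
// Score every corpus vector against the query and keep the k best.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn top_k(query: &[f32], corpus: &[(usize, Vec<f32>)], k: usize) -> Vec<(usize, f32)> {
    let mut scored: Vec<(usize, f32)> = corpus
        .iter()
        .map(|(id, v)| (*id, cosine(query, v)))
        .collect();
    // Sort descending by similarity; a bounded heap avoids the full sort at scale.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.truncate(k);
    scored
}

fn main() {
    let corpus = vec![
        (0, vec![1.0, 0.0]),
        (1, vec![0.0, 1.0]),
        (2, vec![0.7, 0.7]),
    ];
    let hits = top_k(&[1.0, 0.1], &corpus, 2);
    println!("{:?}", hits); // nearest ids first
}
```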
Local AI anywhere, for everyone — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.
⚡💾 Vectro — Compress LLM embeddings 🧠🚀 Save memory, speed up retrieval, and keep semantic accuracy 🎯✨ Lightning-fast quantization for Python + Mojo, vector DB friendly 🗄️, and perfect for RAG pipelines
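Embedding quantization of the kind Vectro describes can be sketched as symmetric scalar quantization: map each f32 component to an i8 with a per-vector scale, cutting memory 4x at a bounded reconstruction error. This is an illustrative sketch, not Vectro's implementation; real libraries add calibration and SIMD:

```rust
// Quantize so the largest-magnitude component maps to 127.
fn quantize(v: &[f32]) -> (Vec<i8>, f32) {
    let max = v.iter().fold(0.0f32, |m, x| m.max(x.abs()));
    let scale = if max == 0.0 { 1.0 } else { max / 127.0 };
    let q = v.iter().map(|x| (x / scale).round() as i8).collect();
    (q, scale)
}

// Recover approximate f32 values; per-component error is at most scale/2.
fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&x| x as f32 * scale).collect()
}

fn main() {
    let v = vec![0.12, -0.5, 0.99, 0.0];
    let (q, s) = quantize(&v);
    let back = dequantize(&q, s);
    for (a, b) in v.iter().zip(&back) {
        assert!((a - b).abs() <= s / 2.0 + 1e-6);
    }
    println!("{:?} (scale {})", q, s);
}
```

Similarity search then runs directly on the i8 vectors (integer dot products), with the scales folded back in only when exact scores are needed.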
Wrap concurrent code in Pony reference capabilities for data-race freedom
Generate OTP supervision trees and fault-tolerance scaffolding
Extract state machines from code and model-check with TLA+/PlusCal
