freshcrate

Search results for "mac"

14 results found (Rust)
agent-desktop · 0.1.13 · 🌿 Growing · 69

AI agent tool for observing and controlling desktop applications via native OS accessibility trees

spiceai · v2.0.0-rc.3 · 🌳 Mature · 2,880

A portable accelerated SQL query, search, and LLM-inference engine, written in Rust, for data-grounded AI apps and agents.

oramacore · v1.2.38 · 🌿 Growing · 249

OramaCore is the complete runtime you need for your projects, answer engines, copilots, and search. It includes a fully-fledged full-text search engine, vector database, LLM interface, and many more u…

qdrant · v1.17.1 · 🏛️ Flagship · 30,532

Qdrant: a high-performance, massive-scale vector database and vector search engine for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/

minutes · v0.13.3 · 🌳 Mature · 1,116

Every meeting, every idea, every voice note — searchable by your AI. Open-source, privacy-first conversation memory layer.

moltis · 20260421.05 · 🌳 Mature · 2,584

A secure, persistent personal agent server in Rust. One binary, sandboxed execution, multi-provider LLMs, voice, memory, Telegram, WhatsApp, Discord, Teams, and MCP tools. Secure by design, runs on you…

zeroclaw · v0.7.3 · 🏛️ Flagship · 30,422

Fast, small, and fully autonomous AI personal assistant infrastructure for any OS and any platform: deploy anywhere, swap anything 🦀

CortexaDB · v1.0.1 · 🌱 Seedling · 40

A simple, fast, and highly durable embedded database designed specifically for AI agent memory. It provides a single-file experience (no server required) but with native support for vectors, …

carapace · v0.7.0 · 🌱 Seedling · 43

A secure, stable Rust alternative to openclaw, moltbot, and clawdbot.

DreamServer · v2.0.0 · 🌿 Growing · 443

Local AI anywhere, for everyone — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.

RustClaw · v0.5.0 · 🌱 Seedling · 2

A lean Rust AI agent: 6 MB binary, 7.9 MB RAM. An OpenClaw replacement with Telegram, Discord, and GitHub auto-PR integrations, plus Ollama/Anthropic support.

llm-ls · 0.5.3 · ⚰️ Archived · 865

An LSP server leveraging LLMs for code completion (and more).