Search results for "qdrant"
RAPTOR (Robust AI-Powered Toolkit for Operational Robots) is an AI-native Content Insight Engine that transforms passive media storage into an intelligent knowledge platform through automated analysis
Nextcloud MCP Server
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and now includes seamless MCP integration to connect…
Custom plugins for hermes-agent: goal management, inter-agent bridge, model selection, cost control
Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Management, Vault, Playground. Integrates with 50+ LLM Providers…
RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry
Production-ready RAG Framework (Python/FastAPI). 1-line config swaps: 6 Vector DBs (Weaviate, Pinecone, Qdrant, ChromaDB, pgvector, MongoDB), 5 LLMs (Gemini, OpenAI, Claude, Ollama, OpenRouter). OpenA…
Dragon Brain: persistent long-term memory for AI agents via MCP (Model Context Protocol). Knowledge graph (FalkorDB) + vector search (Qdrant) + CUDA GPU embeddings. Works with Claude, Gemini CLI, Cur…
LightAgent: lightweight AI agent framework with memory, tools & tree-of-thought. Supports multi-agent collaboration, self-learning, and major LLMs (OpenAI/DeepSeek/Qwen). Open-source with MCP/SSE protocols…
Enterprise-ready vector database toolkit for building searchable knowledge bases from multiple data sources. Supports multi-project management, automatic ingestion from Confluence/JIRA/Git, intelligent…
AgenticX is a unified, production-ready multi-agent platform: Python SDK + CLI (agx) + Studio server + Machi desktop app. Features Meta-Agent orchestration, 15+ LLM providers, MCP Hub, hierarchical m…
Brain-inspired knowledge graph: spreading activation, Hebbian learning, memory consolidation.
Benchmark for vector databases.
Search your files by talking to them - 100% offline
One API for 20+ LLM providers, your databases, and your files: self-hosted, open-source AI gateway with RAG, voice, and guardrails.
Framework for benchmarking vector search engines
Self-hosted personal AI agent that lives in your DMs. Describe any workflow: triage Gmail, pull a Giphy feed, build a Slack bot, monitor markets. It writes the code, runs it, schedules it, and saves it…
Unified framework for building enterprise RAG pipelines with small, specialized models
RAG (Retrieval-augmented generation) ChatBot that provides answers based on contextual information extracted from a collection of Markdown files.
Lightweight semantic code search engine β 2-stage vector + FTS + RRF fusion + MCP server for Claude Code
Advanced AI assistant (RAG, tools, Légifrance, OCR, skills, file export, history) designed primarily for use with AlbertAPI (DiNum)
⚡ Lightweight offline AI agent for local models. No cloud, no API keys; just your GPU.
Local AI server with persistent memory, RAG, and multi-backend inference (MLX / llama.cpp / Ollama). Runs entirely on your machine; zero data sent to external services.
The highest-scoring AI memory system ever benchmarked that isn't reliant on LLM reranking. And it's free and burns fewer tokens.
A thing that uses AI to write perfect applications. For those who want to know how: a governance runtime enforcing immutable constitutional rules on AI coding agents.
Deploy a local, multi-user RAG system to query PDF and DOCX documents using a local LLM without cloud or API dependencies.
Learn from diverse sources with OmniLearnAI, an intelligent platform that combines documents, videos, and more, all with reliable citations.
OpenTelemetry Qdrant instrumentation
Client library for the Qdrant vector search engine
