Search results for "bm25"
The memory system your AI agent deserves. 4-stage hybrid retrieval (Vector + BM25 + Knowledge Graph + Neural Reranker) in <150 ms. Self-hosted, $0/query, built for agents that need to actually remember.
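Many of the results below pair BM25 lexical scoring with dense retrieval. For background, a minimal sketch of the standard Okapi BM25 scoring function — the corpus, tokenization, and the k1/b defaults here are illustrative, not taken from any of these repos:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document (a list of tokens) against the query terms with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    # document frequency: number of docs containing each term
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # term-frequency saturation with length normalization
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy usage: only the second document contains "dog", so only it scores > 0.
scores = bm25_scores(["dog"], [["the", "cat", "sat"], ["the", "dog", "barked", "dog"]])
```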
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
The leading, most token-efficient MCP server for GitHub source code exploration via tree-sitter AST parsing
RAPTOR (Robust AI-Powered Toolkit for Operational Robots) is an AI-native Content Insight Engine that transforms passive media storage into an intelligent knowledge platform through automated analysis
The implementation for SIGIR 2026: Learning to Retrieve from Agent Trajectories.
Cognithor - Agent OS: Local-first autonomous agent operating system. 16 LLM providers, 17 channels, 112+ MCP tools, 5-tier memory, A2A protocol, knowledge vault, voice, browser automation, Computer-use.
A modular RAG (Retrieval-Augmented Generation) system with MCP Server architecture. Uses Skills to make the AI follow each step of the spec and complete the code 100% by AI.
OpenClaw reimagined in pure Python: autonomous AI agent with memory, RAG, skills, web dashboard, voice input, daemon, and multi-channel support.
AgenticX is a unified, production-ready multi-agent platform: Python SDK + CLI (agx) + Studio server + Machi desktop app. Features Meta-Agent orchestration, 15+ LLM providers, MCP Hub, hierarchical memory.
Open-Source Intelligent Command Layer
Crawl4AI MCP Server: Extract content from web pages, PDFs, Office docs, YouTube videos with AI-powered summarization. 17 tools, token reduction, production-ready.
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and now includes seamless MCP integration.
The highest-scoring AI memory system ever benchmarked that isn't reliant on LLM reranking. And it's free & burns fewer tokens.
Production-ready RAG Framework (Python/FastAPI). 1-line config swaps: 6 Vector DBs (Weaviate, Pinecone, Qdrant, ChromaDB, pgvector, MongoDB), 5 LLMs (Gemini, OpenAI, Claude, Ollama, OpenRouter).
An easy-to-use framework for modular RAG
Local-first Agentic Memory Layer for MCP Agents • 25 tools • Hybrid search (FTS5 + vector + MMR) • GDPR • 100% local
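MMR, as in the entry above, reranks retrieved results for diversity as well as relevance. A minimal Maximal Marginal Relevance sketch — the similarity inputs and the lambda weight are illustrative placeholders, not this repo's API:

```python
def mmr(query_sim, doc_sims, lam=0.7, top_k=3):
    """Greedy MMR selection.
    query_sim: dict doc_id -> similarity to the query.
    doc_sims:  dict (id_a, id_b) -> pairwise doc similarity (symmetric).
    lam trades off relevance (high lam) vs. diversity (low lam)."""
    selected = []
    candidates = set(query_sim)
    while candidates and len(selected) < top_k:
        def mmr_score(d):
            # redundancy = similarity to the closest already-selected doc
            redundancy = max((doc_sims.get((d, s), doc_sims.get((s, d), 0.0))
                              for s in selected), default=0.0)
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: "a" and "b" are near-duplicates, so with lam=0.5 the less
# relevant but novel "c" is picked before "b".
picked = mmr({"a": 0.9, "b": 0.85, "c": 0.5}, {("a", "b"): 0.99}, lam=0.5, top_k=2)
```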
Implement hybrid search using Vespa and FastAPI, blending BM25 and dense semantic retrieval for efficient, accurate information retrieval.
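One common way to blend a BM25 ranking with a dense-retrieval ranking (not necessarily how this particular repo does it) is Reciprocal Rank Fusion, which needs only the two ranked lists, not comparable scores:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).
    rankings: list of ranked doc-id lists, best first. k=60 is the usual default."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage: "a" is ranked first by both the BM25 list and the dense list,
# so it wins the fused ranking.
fused = rrf_fuse([["a", "b"], ["a", "c"]])
```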
Enable smart document and data search with AI-powered chat, vector search, and SQL querying across multiple file formats.
A production-ready research outreach AI agent that plans, discovers, reasons, uses tools, auto-builds cited briefings, and drafts tailored emails with tool-chaining, memory, tests, and turnkey Docker deployment.
