Search results for "distributed"
Enterprise-grade distributed AI agent framework | Develop → Deploy → Observe | K8s-native | Dynamic DI | Auto-failover | Multi-LLM | Python + Java + TypeScript
Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Management, Vault, Playground. Integrates with 50+ LLM providers.
A framework for building, orchestrating and deploying AI agents and multi-agent workflows with support for Python and .NET.
Secure, Fast, and Extensible Sandbox runtime for AI agents.
RAPTOR (Robust AI-Powered Toolkit for Operational Robots) is an AI-native Content Insight Engine that transforms passive media storage into an intelligent knowledge platform through automated analysis.
Multi-agent memory consistency platform. We're hiring contributors; check HIRING.md
Cognithor - Agent OS: Local-first autonomous agent operating system. 16 LLM providers, 17 channels, 112+ MCP tools, 5-tier memory, A2A protocol, knowledge vault, voice, browser automation, computer use.
Knowledge Engine for AI Agent Memory in 6 lines of code
SDK libraries for Modal
OllamaFreeAPI: Free Distributed API for Ollama LLMs. Public gateway to our managed Ollama servers with zero-configuration access to 50+ models, auto load-balancing across global nodes, and a free tier.
Agentic RAG R1 Framework via Reinforcement Learning
SRE Agent - CNCF Sandbox Project
AgenticX is a unified, production-ready multi-agent platform: Python SDK + CLI (agx) + Studio server + Machi desktop app. Features Meta-Agent orchestration, 15+ LLM providers, MCP Hub, and hierarchical memory.
Official MCP Servers for AWS
A sovereign cognitive architecture with IIT 4.0 integrated information, residual-stream affective steering (CAA), Global Workspace Theory, active inference, and 72 consciousness modules, running locally.
Automatically updates LLM-Agent papers daily using GitHub Actions (updated every 12 hours)
Dragon Brain: persistent long-term memory for AI agents via MCP (Model Context Protocol). Knowledge graph (FalkorDB) + vector search (Qdrant) + CUDA GPU embeddings. Works with Claude, Gemini CLI, Cursor.
A high-throughput and memory-efficient inference and serving engine for LLMs
Control Gmail, Google Calendar, Docs, Sheets, Slides, Chat, Forms, Tasks, Search & Drive with AI - Comprehensive Google Workspace / G Suite MCP Server & CLI Tool
A Claude Code skill that turns your Obsidian vault into a living second brain: autonomous writes, thinking tools, knowledge ingestion, scheduled agents, and _CLAUDE.md for cross-surface context.
Enterprise-ready MCP Gateway & Registry that centralizes AI development tools with secure OAuth authentication, dynamic tool discovery, and unified access for both autonomous AI agents and AI coding assistants.
Ship customer-facing AI with isolation, spend controls, and provenance.
Open-Sable is a local-first autonomous agent framework with AGI-inspired cognitive subsystems (goals, memory, metacognition, tool use). It can run continuously on your machine and integrate with chat interfaces.
The Pinecone Python client
OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI; use as a library or standalone service.
JSON Agents - A universal JSON-native standard for describing AI agents, their capabilities, tools, runtimes, and governance in a portable, framework-agnostic format. Based on RFC 8259 and JSON Schema 2020-12.
Python ETL framework for stream processing, real-time analytics, LLM pipelines, and RAG.
Local-first AI agent framework with GUI, memory, web search, personality constructs, speech I/O, tools, skills, CLI & Telegram features, fully self-hosted via Ollama.
DSL and compiler framework for automated finite-differences and stencil computation
⚡ Optimize vector searches with a hyper-efficient cache that uses machine learning for faster, smarter data access and reduced costs.
Search and analyze medical literature across PubMed, ClinicalTrials.gov, and Europe PMC using AI to support clinical and research decisions.
