Search results for "queue"
Framework for AI Backend. Build and run AI agents like microservices - scalable, observable, and identity-aware from day one.
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancing, cluster mode, guardrails, support for 1000+ models & <100 µs overhead at 5k RPS.
♾️ Private Agent Fleet with Spec Coding. Each agent gets its own GPU-accelerated desktop. Run Claude, Codex, Gemini, and open models on a full private AI stack ♾️
Lightweight CLI to build self-managing AI agent teams. Define roles, skills & projects in Markdown+YAML – agents run autonomously on a heartbeat schedule, talk to each other via inbox, and delegate tasks
#1 Terminal Benchmark 2.0 – AI that ships your tickets.
LLM-powered framework for deep document understanding, semantic retrieval, and context-aware answers using RAG paradigm.
Zero trust LLM gateway. OpenAI-compatible proxy with semantic routing and load balancing across OpenAI, Anthropic, Ollama, vLLM, and any compatible backend. Identity-based access, virtual A
The Maestro App Factory: a highly opinionated multi-agent orchestration tool for app development that emulates the workflow of high-functioning human development teams using AI agents.
Zero-dependency Web Application Firewall in Go. Single binary. Three deployment modes. Tokenizer-based detection.
Apache Arrow Flight clustered vector cache for high throughput Agent memory sharing
Multi-LLM agent orchestration TUI – parallel Claude/Gemini/Codex sessions, 126 MCP tools
