freshcrate

Search results for "llm-eval"

10 results found (Python)
mlflow-skinny · 📁 3.11.1 · 🏛️ Flagship · 25,478

MLflow is an open source platform for the complete machine learning lifecycle

opik · 📁 2.0.9 · 🏛️ Flagship · 18,965

Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.

AI-Infra-Guard · 📁 v4.1.4 · 🌳 Mature · 3,521

A full-stack AI Red Teaming platform securing AI ecosystems via OpenClaw Security Scan, Agent Scan, Skills Scan, MCP Scan, AI Infra Scan, and LLM jailbreak evaluation.

mlflow · 📁 ts/v0.2.0-rc.1 · 🏛️ Flagship · 25,479

The open source AI engineering platform for agents, LLMs, and ML models. MLflow enables teams of all sizes to debug, evaluate, monitor, and optimize production-quality AI applications while controlling…

giskard-oss · 📁 giskard-checks/v1.0.2b1 · 🏛️ Flagship · 5,289

🐢 Open-Source Evaluation & Testing library for LLM Agents

trulens · 📁 trulens-2.7.2 · 🌳 Mature · 3,261

Evaluation and Tracking for LLM Experiments and AI Agents

AutoRAG · 📁 v0.3.22 · 🌳 Mature · 4,713

AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation

GTA · 📁 v0.2.0 · 🌿 Growing · 143

[NeurIPS 2024 D&B] GTA: A Benchmark for General Tool Agents & [arXiv 2026] GTA-2

arag · 📁 v0.1.0 · 🌿 Growing · 252

A-RAG: Agentic Retrieval-Augmented Generation via Hierarchical Retrieval Interfaces. State-of-the-art RAG framework with keyword, semantic, and chunk read tools for multi-hop QA.