Search results for "ai-safety"
Internal Safety Collapse: turning an LLM or AI agent into a sensitive-data generator.
ArifOS — Constitutional MCP kernel for governed AI execution. AAA architecture: Architect · Auditor · Agent. Built for the open-source agentic era.
One API for 20+ LLM providers, your databases, and your files — self-hosted, open-source AI gateway with RAG, voice, and guardrails.
A tool that uses AI to write applications. Under the hood: a governance runtime enforcing immutable constitutional rules on AI coding agents.
MoralStack is a governance and safety layer for LLM applications. It analyzes user requests before generation, evaluates risk and intent, and decides whether the AI should answer normally, answer safe
A self-improving AI agent that learns from experience. Runs entirely on a local 9B model. Security by absence — dangerous capabilities were never built.
A curated, daily-updated list of awesome resources, tools, SDKs, papers, and projects for Anthropic & Claude AI
Block AI agent access to sensitive macOS paths and log all actions to protect private data during command execution.
Enforce zero-trust rules for AI agents to prevent hallucinations, unsafe actions, and policy bypasses
Protect AI agents by detecting and blocking prompt injection, command injection, Unicode bypass, and social engineering attacks with customizable security controls.
A structured reasoning and decision architecture for stable, interpretable, and hallucination-resistant AI systems. An open standard for human–AI collaboration and autonomous systems.
