Search results for "ai-safety"
AI agent security plugin for OpenClaw: prompt injection detection, PII sanitization, and monitoring dashboard
The Execution Security Layer for the Agentic Era. Providing deterministic "Sudo" governance and audit logs for autonomous AI agents.
Internal Safety Collapse: turning an LLM or AI agent into a sensitive-data generator.
The open agent control plane. Govern autonomous AI agents with pre-execution policy enforcement, approval gates, and audit trails. Works with LangChain, CrewAI, MCP, and any framework.
ArifOS: Constitutional MCP kernel for governed AI execution. AAA architecture: Architect · Auditor · Agent. Built for the open-source agentic era.
Self-improving agent governance: Pre-Action Gates that block repeat AI mistakes. Stop paying for the same mistake twice.
AI Constraint Engine by Sandeep Roy: stops AI from breaking what you locked. 100/100 on Claude's adversarial test suite. 42 MCP tools. Works with Bolt.new, Lovable, Claude Code, Cursor. Free & open source.
Persistent Claude Code agents with scheduling, sessions, memory, and Telegram.
One API for 20+ LLM providers, your databases, and your files: self-hosted, open-source AI gateway with RAG, voice, and guardrails.
mkdir beats a vector DB. B-tree NeuronFS: 0-byte folders govern AI with zero infrastructure and ~200x token efficiency. OS-native constraint engine for LLM agents.
MCP plugin that intercepts AI agent edits in RAM, validates them (TypeScript compiler + gopls + pyright), auto-heals missing imports, and commits atomically. If anything breaks, the disk stays untouched.
A thing that uses AI to write perfect applications. For those who want to know how: a governance runtime enforcing immutable constitutional rules on AI coding agents.
MoralStack is a governance and safety layer for LLM applications. It analyzes user requests before generation, evaluates risk and intent, and decides whether the AI should answer normally or answer safely.
A self-improving AI agent that learns from experience. Runs entirely on a local 9B model. Security by absence: dangerous capabilities were never built.
Modular capability framework for AI assistants: memory, defense, diagnostics, and quality stability | Claude Code / Cursor / Any LLM
Operating framework for AI-assisted work with decision, governance, validation, and learnings before execution.
Simplify your research workflow with Claude Scholar, the complete configuration for Claude Code in data science, AI, and academic writing.
A curated, daily-updated list of awesome resources, tools, SDKs, papers, and projects for Anthropic & Claude AI
🦀 Prevents outdated Rust code suggestions from AI assistants. This MCP server fetches current crate docs, uses embeddings/LLMs, and provides accurate context via a tool call.
Block AI agent access to sensitive macOS paths and log all actions to protect private data during command execution.
Enforce zero-trust rules for AI agents to prevent hallucinations, unsafe actions, and policy bypasses
Define your architecture with System Constitution to keep your AI coding agents in check, ensuring stability and compliance as your project evolves.
The deterministic UI contract and relational interface substrate for the Riverbraid cluster.
The central directory and Merkle Root mapping for the 17-petal Riverbraid v1.5.0 substrate.
The identity anchor and sovereign GPG verification petal for the Riverbraid organization.
Scan AI artifacts like agent skills and config files for security risks, privacy issues, and instruction-level attacks with a Python CLI tool.
Add provably safe ethical constraints to AI agents via Phronesis
Governed vision input and perception contract surface for Riverbraid.
Governed audio input and output contract surface for Riverbraid.
Governed action execution surface for Riverbraid.
Temporal contracts and governed time based state logic for Riverbraid.
Cognitive architecture and meaning processing layer adjacent to the Riverbraid core.
Deterministic governance engine for AI agents. Enforce rules defined in .md governance files across AI systems.
Meaning scoped persistence and state retention rules for Riverbraid.
Cluster manifest, orchestration, and stationary state verification for Riverbraid.
Cryptographic integrity layer for Riverbraid seals, hashes, and signatures.
Riverbraid v1.5.0 | Resonant Intelligence Architecture
Deterministic refusal and boundary enforcement layer for Riverbraid.
Foundational invariants and verification surfaces for Riverbraid.
Protect AI agents by detecting and blocking prompt injection, command injection, Unicode bypass, and social engineering attacks with customizable security controls.
Organization profile and public entry surface for Riverbraid.
A structured reasoning and decision architecture for stable, interpretable, and hallucination-resistant AI systems. An open standard for human-AI collaboration and autonomous systems.
ASAN: A conceptual architecture for a self-creating (autopoietic), energy-efficient, and governable multi-agent AI system.
