ProvnAI
The Trust Infrastructure for Autonomous AI.
Open-source infrastructure to secure, verify, govern, and audit
the next generation of autonomous AI agents.
Development Maturity
ProvnAI is being developed as an open infrastructure initiative. We distinguish clearly between production-ready infrastructure, functional previews, and our long-term research roadmap.
Live Now
- McpVanguard Security Proxy
- Official Documentation Portal
- Railway Deployment Flow
- Core Rust VEX-Kernel v0.1.6
Technical Preview
- VEX Evidence Capsule Verifier
- Forensic Logic Explorer
- Execution boundary hardening
- Cross-system integration preview
Research Direction
- ZK-VEX (STARK-based redaction)
- Multi-agent Cognitive Routing
- VEP Standardization (RFC Phase)
- Unified Cognitive Identity
Solving the Black Box Problem
Autonomous agents are inherently opaque. ProvnAI replaces blind trust with cryptographically verifiable traces of every decision, action, and state transition.
Proof of Execution.
ProvnAI transforms ephemeral agent logs into permanent cryptographic evidence. Every decision becomes a verifiable artifact.
Traditional Log
- Plaintext trace
- Easily modified
- No cryptographic link
Verifiable Receipt
- Context-bound proof
- Hardware-rooted attestation
- Standards-compliant
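The difference can be made concrete with a minimal hash-chained receipt, sketched below. This is a hypothetical illustration, not the VEP capsule format: each receipt commits to the digest of its predecessor, so editing any earlier entry invalidates everything after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # digest placeholder for the first receipt

def make_receipt(prev_digest: str, action: dict) -> dict:
    """Bind an action record to the digest of the previous receipt."""
    body = {"prev": prev_digest, "action": action}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "digest": digest}

def verify_chain(receipts: list[dict]) -> bool:
    """Recompute every digest and check each link back to its predecessor."""
    prev = GENESIS
    for r in receipts:
        body = {"prev": r["prev"], "action": r["action"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["digest"] != expected:
            return False
        prev = r["digest"]
    return True

chain, prev = [], GENESIS
for act in [{"tool": "read_file", "path": "/tmp/a"},
            {"tool": "http_get", "url": "https://example.com"}]:
    receipt = make_receipt(prev, act)
    chain.append(receipt)
    prev = receipt["digest"]

print(verify_chain(chain))                  # True: the untouched chain verifies
chain[0]["action"]["path"] = "/etc/passwd"  # "edit" the first log entry
print(verify_chain(chain))                  # False: tampering breaks the chain
```

A plaintext log permits the edit above silently; here, any verifier recomputing the digests detects it.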
Evidence Portability
Portable receipts let agents carry their own proof, removing the need for blind trust between otherwise untrusting parties.
The Safety Stack
From hardware-level attestation to active tool-layer proxies. We consolidate fragmented agent security into a unified defense architecture.
McpVanguard
A security proxy for AI agents that use MCP (Model Context Protocol). It interposes between the agent and the host system, inspects every tool call, and blocks attacks before they reach your underlying servers.
Rules Engine
50+ YAML signatures that instantly block path traversal, reverse shells, prompt injection, and SSRF attacks.
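A signature in this spirit might look like the sketch below. The schema and field names here are hypothetical illustrations, not McpVanguard's actual rule format:

```yaml
# Hypothetical rule sketch: reject tool-call arguments containing path traversal.
id: path-traversal-001
severity: high
description: Block arguments that try to escape the sandbox root.
match:
  tool: "*"                           # apply to every MCP tool call
  argument_regex: '\.\./|%2e%2e%2f'   # plain and URL-encoded "../"
action: block
```

Because rules are declarative data, new signatures can ship without touching proxy code.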
Semantic Scorer
LLM-based intent scoring via OpenAI, DeepSeek, Groq, or Ollama to catch attempts that slip past heuristic rules.
Behavioral Analysis
Shannon entropy and sliding-window anomaly detection. Stateful monitoring of conversational context.
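The entropy signal can be illustrated with a simplified sketch; this is an assumption-laden toy, not the shipped detector. It computes the Shannon entropy of each payload and flags payloads whose entropy jumps well above the mean of a sliding window, as encoded or obfuscated content tends to.

```python
import math
from collections import Counter, deque

def shannon_entropy(text: str) -> float:
    """Bits per character of the payload's empirical character distribution."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

class SlidingEntropyMonitor:
    """Flag payloads whose entropy deviates sharply from the recent window."""

    def __init__(self, window: int = 20, threshold: float = 0.8):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, payload: str) -> bool:
        h = shannon_entropy(payload)
        mean = sum(self.history) / len(self.history) if self.history else h
        anomalous = abs(h - mean) > self.threshold
        self.history.append(h)
        return anomalous

monitor = SlidingEntropyMonitor()
for call in ["ls -la /home", "cat notes.txt", "grep todo src/"]:
    monitor.observe(call)  # ordinary shell-like payloads build the baseline
print(monitor.observe("QmFzZTY0IHBheWxvYWQgaGlkZGVuIGhlcmU="))  # True: base64 blob
```

A production detector would add stateful context tracking across the conversation; the window-plus-threshold core is the same idea.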
Evidence Capsule [VEP]
Execution evidence is packaged into portable verification artifacts tied to the runtime context and trust boundary.
Adversarial Logic [RBD]
Cognitive verification protocol. Red/Blue systems evaluate the logic of a proposed action to ensure alignment with governing policies.
Cognitive Routing [A2A]
Secure transport layer for multi-agent negotiation. Preserves intent integrity and prevents context manipulation in autonomous swarms.
VEX Protocol
Verification infrastructure for autonomous AI. VEX is a logic-enforcement kernel that ensures a zero-trust security posture and mandatory auditability for agentic systems.
Governed Execution
ProvnAI enforces a verifiable separation between proposal and execution: an agent cannot self-authorize the highly sensitive actions it proposes.
Externalized Authorization
Highly sensitive actions are evaluated against external authorization and execution-context controls before they can proceed.
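A toy sketch of that separation (names like `Proposal` and `authorize` are illustrative, not the ProvnAI API): the proposing agent can only emit an inert proposal object, and a separate authorization step, which the proposer cannot satisfy on its own, gates execution.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """Immutable record of what an agent wants to do; carries no authority."""
    agent_id: str
    action: str
    target: str

# Hypothetical policy: these action types need approval from outside the agent.
SENSITIVE = {"delete", "transfer", "deploy"}

def authorize(proposal: Proposal, approvals: set[str]) -> bool:
    """Decide outside the proposing agent; self-approval never counts."""
    if proposal.action not in SENSITIVE:
        return True
    return any(approver != proposal.agent_id for approver in approvals)

def execute(proposal: Proposal, approvals: set[str]) -> str:
    if not authorize(proposal, approvals):
        return "denied"
    return f"executed {proposal.action} on {proposal.target}"

p = Proposal(agent_id="agent-7", action="delete", target="prod-db")
print(execute(p, approvals={"agent-7"}))        # denied: self-approval only
print(execute(p, approvals={"policy-engine"}))  # executed delete on prod-db
```

Making `Proposal` frozen mirrors the design intent: the proposer hands over a fixed statement of intent and cannot mutate it after review.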
Evidence Capsules
Execution evidence is packaged into portable verification artifacts tied to the runtime context and trust boundary.
Inference Proposes.
Governance Decides.
ProvnAI is co-authoring the .capsule Verifiable Agent Receipt specification alongside CHORA. We are defining the shared protocol for how autonomous agents prove their intent, authority, and identity across distributed ecosystems.
See It Run
We ran a 10x-scale test pipeline using DeepSeek v3. The results show that VEX's concurrency model handles high-throughput agent swarms with minimal latency overhead.
Latency Comparison (Lower is Better)
VEX Explorer.
Verify the cryptographic integrity of VEX Evidence Capsules locally. Client-side execution evaluation. Independent cryptographic verification.
Live Logic Trail
Watch intent mapping as it happens.
Infrastructure Map
Tracking the maturity levels of independent ProvnAI components.
Operational / Usable Now
Active Build / Technical Preview
Research / Experimental
Where It Started
Before VEP. Before CHORA. Before Evidence Capsules.
VEXEvolve ran 29 autonomous agents for a full month — 480 articles researched, 158 published, 150 anchored to Solana. No human intervention.
That was VEX v0.1.4. A proof that verifiable autonomous agents work in the real world.
About the initiative
ProvnAI is being developed as an open infrastructure initiative. Public components are released under MIT or Apache 2.0 licenses, while core repositories are temporarily private during IP and filing work.