
ProvnAI

The Trust Infrastructure for Autonomous AI.

Open-source infrastructure to secure, verify, govern, and audit the next generation of autonomous AI agents.

System Status

Development Maturity

ProvnAI is being developed as an open infrastructure initiative. We distinguish clearly between production-ready infrastructure, functional previews, and our long-term research roadmap.

Production Ready

Live Now

  • McpVanguard Security Proxy
  • Official Documentation Portal
  • Railway Deployment Flow
  • Core Rust VEX-Kernel v0.1.6

Functional Baseline

Technical Preview

  • VEX Evidence Capsule Verifier
  • Forensic Logic Explorer
  • Execution boundary hardening
  • Cross-system integration preview

Experimental

Research Direction

  • ZK-VEX (STARK-based redaction)
  • Multi-agent Cognitive Routing
  • VEP Standardization (RFC Phase)
  • Unified Cognitive Identity
Systemic Vulnerability

Solving the
Black Box Problem

Autonomous agents are inherently opaque. ProvnAI replaces blind trust with cryptographically verifiable traces of every decision, action, and state transition.

  • Opaque Logic: INSPECTED
  • Mutable Logs: IMMUTABLE
  • Policy Drift: GOVERNED
  • Identity Gap: ATTESTED

Proof of Execution.

ProvnAI transforms ephemeral agent logs into permanent cryptographic evidence. Every decision becomes a verifiable artifact.

Traditional Log (Artifact_Audit_V1)

Status: Mutable
Format: .json / .log
ID: 0x9d25bd5166085c76690098134dbc8a18f41f3d4f
  • Plaintext trace
  • Easily modified
  • No cryptographic link

Verifiable Receipt (Artifact_Audit_V1)

Status: Immutable
Format: .attest / .capsule
ID: 0x9d25bd5166085c76690098134dbc8a18f41f3d4f
Signers: 2/2
  • Context-bound proof
  • Hardware-rooted attestation
  • Standards compliant
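The contrast between the two artifact types can be sketched with a keyed digest. This is a minimal illustration, not the ProvnAI receipt format: the key, field names, and choice of HMAC-SHA256 are assumptions. The point it demonstrates is why a plaintext log is silently editable while a receipt bound to a digest is tamper-evident.

```python
import hashlib
import hmac
import json

def make_receipt(payload: dict, key: bytes) -> dict:
    """Bind a payload to a keyed digest so any later edit is detectable."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify_receipt(receipt: dict, key: bytes) -> bool:
    """Recompute the digest over the payload and compare in constant time."""
    body = json.dumps(receipt["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)

key = b"demo-signing-key"  # illustrative only; real systems use managed keys
receipt = make_receipt({"action": "fs.read", "path": "/tmp/a"}, key)
assert verify_receipt(receipt, key)

receipt["payload"]["path"] = "/etc/passwd"  # tamper with the record
assert not verify_receipt(receipt, key)     # the mutation is detected
```

A plain `.json` log offers no equivalent check: editing a field leaves no trace.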

Evidence Portability

Portable receipts allow agents to carry their own proof, eliminating the need for blind trust between untrusted parties.

Ecosystem Safety & Safeguards

The Safety Stack

From hardware-level attestation to active tool-layer proxies, we consolidate fragmented agent security into a unified defense architecture.

Sim_Environment :: attestation (Live Preview)

TPM 2.0 Interactor: fetching PCR state...
QUOTE_SIG: 0x0000...000000000000000000000000000000
Metric: TPM 2.0 / vTPM
Status: Optimal

PYTHON PROXY v1.6.0

McpVanguard

A security proxy for AI agents that use MCP (Model Context Protocol). It interposes between the agent and the host system, inspects every tool call, and blocks attacks before they reach your underlying servers.

L1

Rules Engine

< 2ms

50+ YAML signatures — block path traversal, reverse shells, prompt injection, and SSRF attacks instantly.
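A rules layer like this can be sketched as compiled patterns scanned against the raw tool-call arguments. The rule IDs and regexes below are illustrative assumptions, not McpVanguard's shipped signature set:

```python
import re

# Hypothetical signatures; the real proxy ships 50+ YAML rules.
SIGNATURES = [
    {"id": "path-traversal", "pattern": r"\.\./"},
    {"id": "reverse-shell",  "pattern": r"bash\s+-i\s+>&\s*/dev/tcp/"},
    {"id": "ssrf-metadata",  "pattern": r"169\.254\.169\.254"},
]
COMPILED = [(s["id"], re.compile(s["pattern"])) for s in SIGNATURES]

def scan_tool_call(arguments: str) -> list[str]:
    """Return the IDs of every signature the tool-call arguments match."""
    return [sid for sid, rx in COMPILED if rx.search(arguments)]

assert scan_tool_call("read_file ../../etc/shadow") == ["path-traversal"]
assert scan_tool_call("fetch http://169.254.169.254/latest/") == ["ssrf-metadata"]
assert scan_tool_call("list_files /home/user") == []
```

Precompiling the patterns is what keeps this layer in the low-millisecond range: each call is a handful of regex scans, no model round-trip.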

L2

Semantic Scorer

async

LLM-based intent scoring via OpenAI, DeepSeek, Groq, or Ollama to detect heuristic evasion attempts.
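One way such a layer stays provider-agnostic across OpenAI, DeepSeek, Groq, and Ollama is to inject the scoring backend as a callable. The function names and threshold below are illustrative assumptions, not McpVanguard's API:

```python
from typing import Callable

BLOCK_THRESHOLD = 0.8  # hypothetical cutoff, not the shipped config value

def judge_intent(tool_call: str, score_fn: Callable[[str], float]) -> str:
    """Route a tool call through an injected LLM risk scorer.

    score_fn is expected to return 0.0 (benign) .. 1.0 (malicious);
    any provider can sit behind it.
    """
    risk = score_fn(tool_call)
    return "block" if risk >= BLOCK_THRESHOLD else "allow"

# Stub backend for illustration; a real score_fn would prompt an LLM.
stub = lambda call: 0.95 if "shadow" in call else 0.1
assert judge_intent("cat /etc/shadow", stub) == "block"
assert judge_intent("ls /tmp", stub) == "allow"
```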

L3

Behavioral

stateful

Shannon entropy and sliding-window anomaly detection. Stateful monitoring of conversational context.
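The entropy side of such a monitor can be sketched directly; the window size and threshold below are illustrative assumptions, not the detector's real parameters. High per-character entropy is a classic flag for base64-encoded or otherwise obfuscated payloads:

```python
import math
from collections import Counter, deque

def shannon_entropy(s: str) -> float:
    """Bits per character; high values flag encoded/obfuscated payloads."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

class SlidingAnomalyMonitor:
    """Keep the last `window` entropy readings and flag outliers."""
    def __init__(self, window: int = 16, threshold: float = 4.5):
        self.readings = deque(maxlen=window)  # stateful context, oldest evicted
        self.threshold = threshold

    def observe(self, payload: str) -> bool:
        """Record one payload; return True if it looks anomalous."""
        h = shannon_entropy(payload)
        self.readings.append(h)
        return h > self.threshold

assert shannon_entropy("aaaa") == 0.0          # one symbol, zero bits
assert abs(shannon_entropy("abcd") - 2.0) < 1e-9  # four symbols, two bits
```

Ordinary English text sits around 4 bits per character, while random base64 approaches 6, which is what makes a fixed threshold a workable first filter.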

v0.3 SPEC

Evidence Capsule [VEP]

Execution evidence is packaged into portable verification artifacts tied to the runtime context and trust boundary.

ZK-READY

Adversarial Logic [RBD]

Cognitive verification protocol. Red/Blue systems evaluate the logic of a proposed action to ensure alignment with governing policies.

TEMPORAL

Cognitive Routing [A2A]

Secure transport layer for multi-agent negotiation. Preserves intent integrity and prevents context manipulation in autonomous swarms.

TECHNICAL PREVIEW v0.1.4

VEX Protocol

Verification infrastructure for autonomous AI. VEX is a logic-enforcement kernel that ensures a zero-trust security posture and mandatory auditability for agentic systems.

Enforcement Logic Flow
1. Proposal: Agent logic proposes a target action.
2. Boundary Validation: The runtime environment evaluates the technical context.
3. External Authorization: Independent governance controls are consulted.
4. Execution or Halt: The action proceeds under evidence constraints or is terminated.
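The four-step flow above can be sketched as a small decision function. The agent IDs, action names, and allowlist are hypothetical; the property the sketch preserves is that authorization lives outside the proposing agent:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent_id: str
    action: str
    sensitive: bool

# Hypothetical external governance record; in a real system this would
# be an independent service, not data the proposing agent controls.
APPROVED = {("agent-7", "payments.transfer")}

def boundary_ok(p: Proposal) -> bool:
    """Step 2: stand-in runtime check on the technical context."""
    return not p.action.startswith("sys.")

def external_authorize(p: Proposal) -> bool:
    """Step 3: consult governance controls separate from the agent."""
    return (p.agent_id, p.action) in APPROVED

def enforce(p: Proposal) -> str:
    """Steps 1-4: proposal -> boundary -> authorization -> execute/halt."""
    if not boundary_ok(p):
        return "halt:boundary"
    if p.sensitive and not external_authorize(p):
        return "halt:unauthorized"
    return "execute"

assert enforce(Proposal("agent-7", "payments.transfer", True)) == "execute"
assert enforce(Proposal("agent-9", "payments.transfer", True)) == "halt:unauthorized"
assert enforce(Proposal("agent-7", "sys.shutdown", True)) == "halt:boundary"
```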
Core Principle

Governed Execution

ProvnAI enforces a verifiable separation between proposal and execution. Highly sensitive actions are designed so they are not self-authorized by the same agent that proposes them.

Externalized Authorization

Highly sensitive actions are evaluated against external authorization and execution-context controls before they can proceed.

Evidence Capsules

Execution evidence is packaged into portable verification artifacts tied to the runtime context and trust boundary.
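A minimal sketch of context binding, assuming SHA-256 over a canonical JSON body (the field names are illustrative, not the VEP `.capsule` format): the digest covers both the evidence and the runtime context, so the same capsule fails verification if replayed outside the context it was sealed in.

```python
import hashlib
import json

def seal_capsule(evidence: dict, runtime_context: dict) -> dict:
    """Bind execution evidence to its runtime context under one digest."""
    body = json.dumps({"evidence": evidence, "context": runtime_context},
                      sort_keys=True).encode()
    return {"evidence": evidence,
            "context": runtime_context,
            "digest": hashlib.sha256(body).hexdigest()}

def verify_capsule(capsule: dict, runtime_context: dict) -> bool:
    """Recompute the digest against the context presented at verify time."""
    body = json.dumps({"evidence": capsule["evidence"],
                       "context": runtime_context},
                      sort_keys=True).encode()
    return capsule["digest"] == hashlib.sha256(body).hexdigest()

cap = seal_capsule({"action": "publish", "result": "ok"},
                   {"host": "runner-a", "boundary": "prod"})
assert verify_capsule(cap, {"host": "runner-a", "boundary": "prod"})
assert not verify_capsule(cap, {"host": "runner-b", "boundary": "prod"})
```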

Toward a Shared Verification Standard

Inference Proposes.
Governance Decides.

ProvnAI is co-authoring the .capsule Verifiable Agent Receipt specification alongside CHORA. We are defining the shared protocol for how autonomous agents prove their intent, authority, and identity across distributed ecosystems.

  • Intent: Logic path audit
  • Governance: Policy alignment
  • Provenance: Context bonding
VERIFIABLE_LOGIC_ARTIFACT

PROOF OBJECTIVE / VERIFICATION PATH
  • Context Bonding: Verified
  • Authorization: External
  • Environment: Hardware Anchored
Verified Finality
Performance Verified

See It Run

We ran a 10x-scale test pipeline using DeepSeek v3. The results show that VEX's concurrency model handles high-throughput agent swarms with minimal latency overhead.

  • Single Agent Baseline: 1.6 s
  • Concurrent (5x): 3.0 s
  • Sequential (5x): 7.7 s

Latency Comparison (Lower is Better)

  • Single Agent: 1,616 ms
  • VEX Concurrent (5x): 3,042 ms (🚀 2.5x faster than sequential)
  • Python Sequential (5x): 7,768 ms
DATA SOURCE: scale_test_results.json VERIFIED
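The shape of this speedup can be reproduced with a toy asyncio model in which each agent call is dominated by model/network wait. The 50 ms latency is an arbitrary stand-in, not DeepSeek v3 timing, and the functions are illustrative, not the VEX test harness:

```python
import asyncio
import time

async def agent_task(i: int, latency: float = 0.05) -> int:
    """Stand-in for one agent round-trip; the sleep models I/O wait."""
    await asyncio.sleep(latency)
    return i

async def run_concurrent(n: int) -> float:
    """Overlap all n waits, as the VEX concurrency model does."""
    t0 = time.perf_counter()
    await asyncio.gather(*(agent_task(i) for i in range(n)))
    return time.perf_counter() - t0

async def run_sequential(n: int) -> float:
    """Pay each wait in turn, like the sequential baseline."""
    t0 = time.perf_counter()
    for i in range(n):
        await agent_task(i)
    return time.perf_counter() - t0

conc = asyncio.run(run_concurrent(5))
seq = asyncio.run(run_sequential(5))
assert conc < seq  # overlapping I/O waits is where the speedup comes from
```

With waits overlapped, five concurrent calls cost roughly one latency instead of five, which is the mechanism behind the 3.0 s vs 7.7 s gap above.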
Local Verification

VEX
Explorer.

Verify the cryptographic integrity of VEX Evidence Capsules locally. Client-side execution evaluation. Independent cryptographic verification.

Live Logic Trail

Watch intent mapping as it happens.

EXPLORER.PROVNAI.COM
TECHNOLOGY PREVIEW

Where It Started

Before VEP. Before CHORA. Before Evidence Capsules.

VEXEvolve ran 29 autonomous agents for a full month — 480 articles researched, 158 published, 150 anchored to Solana. No human intervention.

That was VEX v0.1.4. A proof that verifiable autonomous agents work in the real world.

About the initiative

ProvnAI is being developed as an open infrastructure initiative. Public components are released under MIT or Apache 2.0 licenses, while core repositories are temporarily private during IP and filing work.

Initial Research Commit: 13 December 2025