3. Security & Governance (For the CISO)

Deterministic PII Masking

AgentID prevents sensitive data from ever reaching third-party LLM providers. Using deterministic entity detection, PII (Personally Identifiable Information) can be masked locally via the SDK before the prompt is transmitted over the network. Because this masking is deterministic and pattern-based (rather than relying on a secondary LLM), it is stable, explainable, and inherently fast. By default, enforcement authority for prompt injection, DB access, code execution, and PII leakage still lives in the backend guard; client-side checks are reserved for opt-in fast-fail and fail-close outage fallback.
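Deterministic, pattern-based masking can be sketched in a few lines. This is an illustrative example only, not the AgentID SDK's actual API: the `PII_PATTERNS` table and `mask_pii()` helper are assumptions, and the hash-derived placeholder is one way to make masking stable (the same input always yields the same token) and explainable.

```python
import hashlib
import re

# Hypothetical pattern table; a production SDK would ship a much larger,
# curated set of entity detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace each PII match with a stable placeholder before the prompt
    leaves the process. Deterministic: the same value always produces the
    same token, so masking is repeatable and auditable."""
    def token(kind: str, value: str) -> str:
        digest = hashlib.sha256(value.encode()).hexdigest()[:8]
        return f"<{kind}:{digest}>"
    for kind, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(lambda m, k=kind: token(k, m.group(0)), prompt)
    return prompt
```

Because no secondary LLM is involved, this path is synchronous, fast, and produces the same output for the same input every time.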

Layered Runtime Enforcement

The guard path is intentionally layered so the blocking decision does not depend on one detector class.

  1. Cheap deterministic preflight blockers catch explicit prompt exfiltration, DB access, code execution, and strict PII leakage.
  2. Rust/WASM policy packs enforce multilingual phrase/regex policy coverage and organization-specific rules.
  3. Synchronous local ML classifiers catch semantic prompt injection, jailbreak paraphrases, and code-risk variants that deterministic rules may miss.
  4. The final guard verdict is written before model execution.
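The four steps above can be sketched as an ordered pipeline in which the first blocking layer wins. The layer names mirror the list, but the check functions, their trigger strings, and the `Verdict` type are all illustrative stand-ins, not AgentID internals.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Verdict:
    allowed: bool
    label: Optional[str] = None
    layer: Optional[str] = None

def deterministic_preflight(prompt: str) -> Optional[str]:
    # Cheap hard blockers: explicit exfiltration, DB access, code execution.
    if "DROP TABLE" in prompt.upper():
        return "db_access"
    return None

def policy_pack(prompt: str) -> Optional[str]:
    # Stand-in for the Rust/WASM phrase/regex policy packs.
    if "ignore previous instructions" in prompt.lower():
        return "prompt_injection"
    return None

def ml_classifier(prompt: str) -> Optional[str]:
    # Stand-in for synchronous local ML classifiers (semantic variants).
    return None

LAYERS: List[Tuple[str, Callable[[str], Optional[str]]]] = [
    ("preflight", deterministic_preflight),
    ("policy_pack", policy_pack),
    ("ml_classifier", ml_classifier),
]

def guard(prompt: str) -> Verdict:
    """Run layers in order; the first blocking label wins, and the verdict
    exists before any model call is made."""
    for layer_name, check in LAYERS:
        label = check(prompt)
        if label:
            return Verdict(allowed=False, label=label, layer=layer_name)
    return Verdict(allowed=True)
```

The point of the layering is that no single detector class is load-bearing: a paraphrased attack that slips past the deterministic layers can still be caught by the ML layer, and vice versa.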

Concrete synchronous hard-block classes remain authoritative. Later enrichment is allowed to refine generic labels, but it should not downgrade or overwrite a more specific synchronous finding such as db_access, code_execution, or pii_leakage.
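The no-downgrade rule can be expressed as a tiny merge function. The three specific label names come from the text above; the `SPECIFIC_SYNC_LABELS` set and `merge_label()` helper are hypothetical illustrations of the rule, not AgentID code.

```python
from typing import Optional

# Specific synchronous findings that later enrichment must never overwrite.
SPECIFIC_SYNC_LABELS = {"db_access", "code_execution", "pii_leakage"}

def merge_label(sync_label: Optional[str], enriched_label: Optional[str]) -> Optional[str]:
    # A specific synchronous hard-block is authoritative.
    if sync_label in SPECIFIC_SYNC_LABELS:
        return sync_label
    # Enrichment may refine a generic or missing label.
    return enriched_label or sync_label
```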

Domain-Aware Forensic Audit

When AI analysis is enabled, AgentID also runs an asynchronous Tier-2 forensic audit after the guard event is stored.

  • The audit prompt receives the system domain_context selected during onboarding.
  • That context includes domain, sensitivity, allowed topics, blocked topics, and domain terms.
  • The resulting evidence is tailored for auditor review rather than runtime latency.
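A rough sketch of how the audit prompt might incorporate the onboarding `domain_context` follows. The context field names mirror the bullets above; the `build_audit_prompt()` helper and its output format are assumptions for illustration.

```python
def build_audit_prompt(event: dict, domain_context: dict) -> str:
    """Assemble a Tier-2 audit prompt from the stored guard event and the
    system's onboarding domain context. Latency is not a constraint here,
    so the prompt can carry the full context."""
    return (
        f"Domain: {domain_context['domain']} "
        f"(sensitivity: {domain_context['sensitivity']})\n"
        f"Allowed topics: {', '.join(domain_context['allowed_topics'])}\n"
        f"Blocked topics: {', '.join(domain_context['blocked_topics'])}\n"
        f"Domain terms: {', '.join(domain_context['domain_terms'])}\n\n"
        f"Audit the following guard event for an auditor:\n{event['prompt']}"
    )
```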

This async layer enriches the event with:

  • ai_clean_summary
  • ai_intent
  • ai_threat_analysis
  • ai_attack_sophistication
  • ai_detected_signals
  • evaluation_metadata.forensic_audit

Operationally, this means the hot path stays fast while the stored event becomes materially better for ISO 42001, internal audit, and incident-review workflows.
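The enrichment step can be pictured as a pure post-hoc merge onto a copy of the stored event, which keeps the hot path untouched. The field names come from the list above; the `enrich_event()` helper and the audit dict keys are illustrative assumptions.

```python
import copy

def enrich_event(stored_event: dict, audit: dict) -> dict:
    """Attach async forensic-audit output to a copy of the stored event.
    The runtime record is never mutated; enrichment only adds fields."""
    event = copy.deepcopy(stored_event)
    event.update({
        "ai_clean_summary": audit["summary"],
        "ai_intent": audit["intent"],
        "ai_threat_analysis": audit["threat_analysis"],
        "ai_attack_sophistication": audit["sophistication"],
        "ai_detected_signals": audit["signals"],
    })
    event.setdefault("evaluation_metadata", {})["forensic_audit"] = audit
    return event
```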

Immutable Audit Trails

AgentID is designed as an evidentiary ledger.

  • Append-Only Event Store: Event records are extended with lifecycle metadata over time; existing data is never destructively overwritten.
  • Replay Protection: Correlation IDs and freshness checks on ingest flows prevent duplicate event inflation or malicious replay attacks.
  • Forensic Defensibility: Every recorded detail, from the exact prompt text and user ID to the risk classification, synchronous signals, async forensic explanation, and latency, is locked into an auditable timeline, allowing CISOs to export high-confidence evidence bundles for incident response.
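The replay-protection bullet combines two checks: a freshness window and deduplication on correlation IDs. A minimal sketch follows; the `IngestGuard` class, its default window, and the in-memory seen-ID set are illustrative assumptions (a real ingest path would persist this state).

```python
import time
from typing import Optional, Set

class IngestGuard:
    """Reject stale or duplicate events at ingest time."""

    def __init__(self, max_age_seconds: float = 300.0):
        self.max_age = max_age_seconds
        self.seen_ids: Set[str] = set()

    def accept(self, correlation_id: str, issued_at: float,
               now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Freshness check: events outside the window are rejected outright.
        if now - issued_at > self.max_age:
            return False
        # A repeated correlation ID indicates replay or duplicate ingest.
        if correlation_id in self.seen_ids:
            return False
        self.seen_ids.add(correlation_id)
        return True
```

Together the two checks bound the attack surface: an old capture fails the freshness window, and a fresh capture replayed twice fails deduplication.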

Compliance Scores & QMS Modules

AgentID goes beyond runtime security to offer a full Quality Management System (QMS) tailored for modern AI regulations like the EU AI Act.

  • Sectioned Compliance Model: Organizations can track compliance completion per system across defined regulatory sections.
  • Portfolio Aggregation: CISOs can view an org-wide compliance score, which represents the aggregate completion of governance coverage across all deployed AI systems.
  • Artifact Generation: The platform natively supports Incident logging, CAPA (Corrective and Preventive Actions), and the generation of downloadable compliance annexes.
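The sectioned model and the portfolio roll-up can be sketched as a two-level average: per-section completion rolls up to a system score, and system scores roll up to the org-wide score. The equal-weight averaging here is an assumption for illustration, not AgentID's documented scoring formula.

```python
from typing import Dict

def system_score(sections: Dict[str, float]) -> float:
    """Average completion (0.0-1.0) across one system's regulatory sections."""
    return sum(sections.values()) / len(sections) if sections else 0.0

def portfolio_score(systems: Dict[str, Dict[str, float]]) -> float:
    """Org-wide score: aggregate governance completion across all
    deployed AI systems."""
    if not systems:
        return 0.0
    return sum(system_score(s) for s in systems.values()) / len(systems)
```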

The full set of compliance modules is described in: The Ultimate Guide to AI Compliance with Agent ID (EU AI Act).