OWASP Artificial Intelligence Security Verification Standard (AISVS) Documentation

The Artificial Intelligence Security Verification Standard (AISVS) provides developers, architects, testers, and security professionals with a structured checklist for reviewing the security and safety posture of AI-enabled systems. Modeled on existing OWASP verification standards such as the Application Security Verification Standard (ASVS), AISVS is being developed as a practical set of requirements covering fourteen categories:

  1. Training Data Governance & Bias Management
  2. User Input Validation
  3. Model Lifecycle Management & Change Control
  4. Infrastructure, Configuration & Deployment Security
  5. Access Control & Identity
  6. Supply Chain Security for Models, Frameworks & Data
  7. Model Behavior, Output Control & Safety Assurance
  8. Memory, Embeddings & Vector Database Security
  9. Autonomous Orchestration & Agentic Action Security
  10. Model Context Protocol (MCP) Security
  11. Adversarial Robustness & Attack Resistance
  12. Privacy Protection & Personal Data Management
  13. Monitoring, Logging & Anomaly Detection
  14. Human Oversight and Trust

Road Map

This site is the public documentation wrapper for the main OWASP/AISVS content repository.

| Phase | Status | Focus |
| --- | --- | --- |
| Phase 1: Research and Category List Creation | Done | Establish the research base and define the AISVS category structure. |
| Phase 2: Requirement Creation | Current Phase | Create requirements for each category and refine them with community, partner, and subject matter expert input. |
| Phase 3: Beta Release and Pilot Testing | Planned | Release a beta version of AISVS and gather feedback from early adopters using it on real-world AI applications. |
| Phase 4: Final 1.0 Release | Planned | Incorporate pilot feedback and publish Version 1.0 with full documentation and a lightweight checklist. |
| Phase 5: Continuous Improvement | Ongoing | Maintain AISVS as an open source project and update it to address emerging threats, new AI approaches, and regulatory change. |
