The Artificial Intelligence Security Verification Standard (AISVS) is an open catalogue of testable security requirements for AI-enabled systems. It helps developers, architects, security engineers, and auditors design, build, test, and verify AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.
Every requirement is written to be testable and implementable in practice.
This site is the public documentation wrapper for the main OWASP/AISVS content repository.
## How to use AISVS
- Design. Use it as a security checklist when architecting AI systems.
- Development. Integrate it into CI/CD pipelines, code reviews, and tests.
- Assessment. Apply it as a verification framework for pen testing and audits.
- Procurement. Reference specific requirements when evaluating AI vendors and third-party models.
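To make the "Development" use case concrete, one pattern is to tag automated tests with the AISVS requirement they verify, so a CI job can report which requirements have test coverage. The sketch below is a minimal illustration of that idea; the requirement ID `"2.1.1"` and the test are hypothetical examples, not taken from the standard.

```python
# Sketch: associate automated tests with AISVS requirement IDs so CI
# can report verification coverage. IDs and tests here are illustrative.

AISVS_TESTS = {}  # requirement ID -> list of test names covering it


def aisvs(requirement_id):
    """Decorator registering a test against an AISVS requirement ID."""
    def wrap(fn):
        AISVS_TESTS.setdefault(requirement_id, []).append(fn.__name__)
        return fn
    return wrap


@aisvs("2.1.1")  # hypothetical ID for a user-input-validation requirement
def test_prompt_length_limit():
    # Example check: prompts are capped before reaching the model.
    assert len("x" * 4096) <= 8192


def coverage_report():
    """Return a mapping of requirement IDs to the tests that cover them."""
    return dict(sorted(AISVS_TESTS.items()))
```

A CI step could then fail the build when a requirement targeted for the system's verification level has no entry in the report.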
## Verification Levels

Each requirement is assigned a level (1, 2, or 3) indicating the depth of assurance it provides.
| Level | Description | When to use |
|---|---|---|
| 1 | Essential baseline controls every AI system should implement. | All AI applications, including internal tools and low-risk systems. |
| 2 | Standard controls for systems handling sensitive data or making consequential decisions. | Production systems, customer-facing AI, systems processing personal data. |
| 3 | Advanced controls for high-assurance environments facing sophisticated threats. | Critical infrastructure, safety-critical AI, regulated industries. |
Most production systems should aim for at least Level 2.
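The level table above amounts to a simple decision rule. The sketch below encodes one plausible reading of it; the attribute names and thresholds are assumptions for illustration, not part of the standard, and real scoping should follow a risk assessment.

```python
# Sketch: pick a target AISVS verification level from coarse system
# attributes, following the table above. The mapping is an assumption,
# not a rule defined by the standard.

def target_level(handles_personal_data: bool,
                 customer_facing: bool,
                 safety_critical: bool) -> int:
    if safety_critical:
        return 3  # high-assurance: critical or regulated systems
    if handles_personal_data or customer_facing:
        return 2  # standard: sensitive data or consequential decisions
    return 1      # essential baseline for internal, low-risk tools
```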
## Requirement Chapters
- Training Data Governance & Bias Management
- User Input Validation
- Model Lifecycle Management & Change Control
- Infrastructure, Configuration & Deployment Security
- Access Control & Identity
- Supply Chain Security for Models, Frameworks & Data
- Model Behavior, Output Control & Safety Assurance
- Memory, Embeddings & Vector Database Security
- Autonomous Orchestration & Agentic Action Security
- MCP Security
- Adversarial Robustness & Attack Resistance
- Privacy Protection & Personal Data Management
- Monitoring, Logging & Anomaly Detection
- Human Oversight and Trust
## Appendices
- Appendix A: Glossary
- Appendix B: References
- Appendix C: AI-Assisted Secure Coding
- Appendix D: AI Security Controls Inventory
## Road Map
| Phase | Status | Focus |
|---|---|---|
| Phase 1: Research and Category List Creation | Done | Establish the research base and define the AISVS category structure. |
| Phase 2: Requirement Creation | In progress | Create requirements for each category and refine them with community, partner, and subject matter expert input. |
| Phase 3: Beta Release and Pilot Testing | Planned | Release a beta version of AISVS and gather feedback from early adopters using it on real-world AI applications. |
| Phase 4: Final 1.0 Release | Planned | Incorporate pilot feedback and publish Version 1.0 with full documentation and a lightweight checklist. |
| Phase 5: Continuous Improvement | Ongoing | Maintain AISVS as an open source project and update it to address emerging threats, new AI approaches, and regulatory change. |