OWASP Top 10 for Large Language Model Applications
The OWASP Top 10 for Large Language Model Applications Project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs) and Generative AI applications. The project provides a range of resources. Most notably the OWASP Top 10 list for LLM applications listing the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.
Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution, among others. The goal is to raise awareness of these vulnerabilities, suggest remediation strategies, and ultimately improve the security posture of LLM applications.
📢 The 2025 List is Available:
Download OWASP Top 10 for LLMs List for 2025 Full Version.
Download Additional Resources from our Website including:
- Security & Governance Checklist v1.0 Essential guidance for CISOs managing the rollout of Gen AI technology.
- Guide for Preparing and Responding to DeepFakes
- 2025 AI Security Solutions Directory and Guide
Localized versions are also available.
- Security & Governance Checklist v1.0 - also now available in French and Japanese
Want to Contribute your Expertise? Join us.
- We have a working group channel on the OWASP Slack, so please sign up and then join us on the #project-top10-for-llm channel.
- The working group is collaborating on our wiki
- Want to stay updated on periodic progress? Subscribe to our newsletter or Follow our project LinkedIn page
Just Want to Learn About LLM Security
New to LLM Application security? Check out our resources page to learn more.
Become a Project Suppoter or Sponsor Sponsorship
We are a not for profit open source community driven project. If you are interested in supporting the project with reasources or become a sponsor to help us ensure we can continue to sustain the community efforts, offsetting operational, and outreach costs. Visit the Sponsor Section on our website.
Thank you to our Current Sponsors and Supporters
OWASP Top 10 for Large Language Model Applications version 1.1
LLM01: Prompt Injection
Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.
LLM02: Insecure Output Handling
Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.
LLM03: Training Data Poisoning
Tampered training data can impair LLM models leading to responses that may compromise security, accuracy, or ethical behavior.
LLM04: Model Denial of Service
Overloading LLMs with resource-heavy operations can cause service disruptions and increased costs.
LLM05: Supply Chain Vulnerabilities
Depending upon compromised components, services or datasets undermine system integrity, causing data breaches and system failures.
LLM06: Sensitive Information Disclosure
Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.
LLM07: Insecure Plugin Design
LLM plugins processing untrusted inputs and having insufficient access control risk severe exploits like remote code execution.
LLM08: Excessive Agency
Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.
LLM09: Overreliance
Failing to critically assess LLM outputs can lead to compromised decision making, security vulnerabilities, and legal liabilities.
LLM10: Model Theft
Unauthorized access to proprietary large language models risks theft, competitive advantage, and dissemination of sensitive information.