OWASP Top 10 for Large Language Model Applications
About This Repository
This is the repository for the OWASP Top 10 for Large Language Model Applications. However, this project has now grown into the comprehensive OWASP GenAI Security Project - a global initiative that encompasses multiple security initiatives beyond just the Top 10 list.
OWASP GenAI Security Project
The OWASP GenAI Security Project is a global, open-source initiative dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including large language models (LLMs), agentic AI systems, and AI-driven applications. Our mission is to empower organizations, security professionals, AI practitioners, and policymakers with comprehensive, actionable guidance and tools to ensure the secure development, deployment, and governance of generative AI systems.
Learn more about our mission and charter: Project Mission and Charter
Visit our main project site: genai.owasp.org
Latest Top 10 for LLM Applications
The OWASP Top 10 for Large Language Model Applications continues to be a core component of our work, identifying the most critical security vulnerabilities in LLM applications.
Access the latest Top 10 for LLM: https://genai.owasp.org/llm-top-10/
Project Background and Growth
The project has evolved significantly since its inception. From a small group of security professionals addressing an urgent security gap in 2023, it has grown into a global community with over 600 contributing experts from more than 18 countries and nearly 8,000 active community members.
Read our full project background: Introduction and Background
Get Involved
Contribute to the Project
We welcome all expert ideas, contributions, suggestions, and remarks from security professionals, researchers, developers, and anyone passionate about AI security.
Learn how to contribute: https://genai.owasp.org/contribute/
Join Our Meetings
Participate in our bi-weekly sync meetings and stay connected with the community.
Meeting information: https://genai.owasp.org/meetings/
Connect with the Community
- Join our working group channel on the OWASP Slack - sign up and join us on the
#project-top10-for-llm
channel - Follow our project LinkedIn page
- Subscribe to our newsletter for periodic updates
Project Support
We are a not-for-profit, open-source, community-driven project. If you are interested in supporting the project with resources or becoming a sponsor to help us sustain community efforts and offset operational and outreach costs, visit the Sponsor Section on our website.
Thank you to our current Sponsors and Supporters
Educational Resources
New to LLM Application security? Check out our resources page to learn more.
OWASP Top 10 for Large Language Model Applications version 1.1
LLM01: Prompt Injection
Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.
LLM02: Insecure Output Handling
Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.
LLM03: Training Data Poisoning
Tampered training data can impair LLM models leading to responses that may compromise security, accuracy, or ethical behavior.
LLM04: Model Denial of Service
Overloading LLMs with resource-heavy operations can cause service disruptions and increased costs.
LLM05: Supply Chain Vulnerabilities
Depending upon compromised components, services or datasets undermine system integrity, causing data breaches and system failures.
LLM06: Sensitive Information Disclosure
Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.
LLM07: Insecure Plugin Design
LLM plugins processing untrusted inputs and having insufficient access control risk severe exploits like remote code execution.
LLM08: Excessive Agency
Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.
LLM09: Overreliance
Failing to critically assess LLM outputs can lead to compromised decision making, security vulnerabilities, and legal liabilities.
LLM10: Model Theft
Unauthorized access to proprietary large language models risks theft, competitive advantage, and dissemination of sensitive information.