Prompt Injection
Crafted inputs can manipulate a Large Language Model, causing unintended actions. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.
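As a minimal illustration, a pre-model filter can screen user input for common direct-injection phrases. This is only a heuristic sketch (the pattern list and function name are hypothetical); real defenses also require privilege separation and scrutiny of model output, since pattern matching alone is easily bypassed.

```python
import re

# Hypothetical heuristic filter: flag inputs containing phrases commonly
# used in direct prompt-injection attempts, before they reach the model.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    # Case-insensitive scan over the known-bad phrase patterns.
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A flagged input could then be rejected or routed for review rather than sent to the model verbatim.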
Insecure Output Handling
This vulnerability occurs when LLM output is accepted without scrutiny, exposing backend systems. Misuse may lead to severe consequences like XSS, CSRF, SSRF, privilege escalation, or remote code execution.
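One concrete mitigation for the XSS case is to treat model output like any other untrusted input and escape it before embedding it in a page. A minimal sketch using Python's standard library (the wrapper name is illustrative):

```python
import html

def render_llm_output(raw: str) -> str:
    # Treat model output as untrusted: escape HTML metacharacters so an
    # injected <script> tag renders as inert text instead of executing.
    return html.escape(raw)

# Example: a malicious payload in the model's response is neutralized.
print(render_llm_output('<script>alert("xss")</script>'))
```

The same principle applies to other sinks: parameterize SQL, shell-quote command arguments, and validate URLs before the output reaches a backend.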
Training Data Poisoning
This occurs when LLM training data is tampered with, introducing vulnerabilities or biases that compromise security, effectiveness, or ethical behavior. Sources include Common Crawl, WebText, OpenWebText, and books.
Model Denial of Service
Attackers cause resource-heavy operations on Large Language Models leading to service degradation or high costs. The vulnerability is magnified due to the resource-intensive nature of LLMs and unpredictability of user inputs.
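Two basic controls against this are capping prompt size and rate-limiting requests per user. The sketch below is an assumption-laden illustration (the limits, function name, and in-memory log are all hypothetical; production systems would use a shared store):

```python
import time
from collections import defaultdict, deque
from typing import Optional

MAX_PROMPT_CHARS = 4000       # assumed budget; tune per model and use case
MAX_REQUESTS_PER_MINUTE = 10  # assumed per-user quota

_request_log: dict[str, deque] = defaultdict(deque)

def admit_request(user_id: str, prompt: str, now: Optional[float] = None) -> bool:
    """Reject oversized prompts and users who exceed the per-minute quota."""
    now = time.monotonic() if now is None else now
    if len(prompt) > MAX_PROMPT_CHARS:
        return False
    window = _request_log[user_id]
    # Drop timestamps older than the 60-second sliding window.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True
```

Queueing, timeouts on generation length, and cost monitoring complement these admission checks.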
Supply Chain Vulnerabilities
The LLM application lifecycle can be compromised by vulnerable components or services, leading to security attacks. Using third-party datasets, pre-trained models, and plugins can add vulnerabilities.
Sensitive Information Disclosure
LLMs may reveal confidential data in their responses, leading to unauthorized data access, privacy violations, and security breaches. It’s crucial to implement data sanitization and strict user policies to mitigate this.
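A simple form of output-side sanitization is masking patterns that look like secrets before a response is returned. The patterns and placeholder names below are hypothetical examples, not a complete PII taxonomy:

```python
import re

# Hypothetical redaction pass over model responses: mask substrings that
# resemble email addresses or API keys before returning text to the user.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[API_KEY]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

Pattern-based redaction is a backstop, not a substitute for keeping sensitive data out of training sets and prompts in the first place.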
Insecure Plugin Design
LLM plugins can have insecure inputs and insufficient access control. This lack of application control makes them easier to exploit and can result in consequences like remote code execution.
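In practice this means validating model-supplied plugin arguments against a strict schema instead of passing free-form strings to a backend. A minimal sketch for a hypothetical file-access plugin (the allowlist and checks are illustrative):

```python
# Hypothetical plugin-side validation: only allowlisted actions are
# accepted, and paths are checked to block traversal outside a sandbox.
ALLOWED_ACTIONS = {"read", "list"}

def validate_plugin_call(action: str, path: str) -> bool:
    """Return True only for calls that pass the allowlist and path checks."""
    if action not in ALLOWED_ACTIONS:
        return False
    if ".." in path or path.startswith(("/", "\\")):
        return False
    return True
```

Pairing input validation like this with least-privilege credentials for the plugin itself limits the blast radius if a malicious call slips through.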
Excessive Agency
LLM-based systems may undertake actions leading to unintended consequences. The issue arises from excessive functionality, permissions, or autonomy granted to the LLM-based systems.
Overreliance
Systems or people overly depending on LLMs without oversight may face misinformation, miscommunication, legal issues, and security vulnerabilities due to incorrect or inappropriate content generated by LLMs.
Frequently Asked Questions
What is the OWASP Top 10 for Large Language Models (LLMs)?
The OWASP Top 10 for LLMs is a list of the most critical vulnerabilities found in applications utilizing LLMs. It was created to provide developers, data scientists, and security experts with practical, actionable, and concise security guidance to navigate the complex and evolving terrain of LLM security.
Who is the primary audience for the OWASP Top 10 for LLMs?
The primary audience is developers, data scientists, and security experts tasked with designing and building applications and plug-ins leveraging LLM technologies.
How does the OWASP Top 10 for LLMs relate to other OWASP Top 10 lists?
While the list shares DNA with vulnerability types found in other OWASP Top 10 lists, it does not simply reiterate these vulnerabilities. Instead, it delves into the unique implications these vulnerabilities have when encountered in applications utilizing LLMs. The goal is to bridge the divide between general application security principles and the specific challenges posed by LLMs.
How was the OWASP Top 10 for LLMs created?
The creation of the OWASP Top 10 for LLMs list was a major undertaking, built on the collective expertise of an international team of nearly 500 experts, with over 125 active contributors. The team brainstormed and proposed potential vulnerabilities, refined these proposals down to a concise list of the ten most critical vulnerabilities, and each vulnerability was then further scrutinized and refined by dedicated sub-teams and subjected to public review.
Will the OWASP Top 10 for LLMs be updated in the future?
Yes, the first version of the list will not be the last. The team expects to update it on a periodic basis to keep pace with the state of the industry. They will work with the broader community to push the state of the art and create more educational materials for a range of uses.
News & Announcements
LinkedIn Post Announcement
New Release of OWASP Top 10 for LLM Apps
Oct 16th, 2023
What the OWASP Top 10 for LLMs Means for the Future of AI Security
Aug 8th, 2023