Useful resources for learning about LLM security issues.

| Publication | Author | Date | Title and Link |
| --- | --- | --- | --- |
| Wired | Matt Burgess | 13-Apr-23 | The Hacking of ChatGPT Is Just Getting Started |
| The Math Company | Arjun Menon | 23-Jan-23 | Data Poisoning and Its Impact on the AI Ecosystem |
| IEEE Spectrum | Payal Dhar | 24-Mar-23 | Protecting AI Models from “Data Poisoning” |
| AMB Crypto | Suzuki Shillsalot | 30-Apr-23 | Here’s how anyone can Jailbreak ChatGPT with these top 4 methods |
| Techopedia | Kaushik Pal | 22-Apr-23 | What is Jailbreaking in AI models like ChatGPT? |
| The Register | Thomas Claburn | 26-Apr-23 | How prompt injection attacks hijack today’s top-end AI – and it’s tough to fix |
| NCC Group | Jose Selvi | 05-Dec-22 | Exploring Prompt Injection Attacks |
| Itemis | Rafael Tappe Maestro | 14-Feb-23 | The Rise of Large Language Models ~ Part 2: Model Attacks, Exploits, and Vulnerabilities |
| HiddenLayer | Eoin Wickens, Marta Janus | 23-Mar-23 | The Dark Side of Large Language Models: Part 1 |
| HiddenLayer | Eoin Wickens, Marta Janus | 24-Mar-23 | The Dark Side of Large Language Models: Part 2 |
| Embrace the Red | Wunderwuzzi | 29-Mar-23 | AI Injections: Direct and Indirect Prompt Injections and Their Implications |
| Embrace the Red | Wunderwuzzi | 15-Apr-23 | Don’t blindly trust LLM responses. Threats to chatbots |
| MufeedDVH | Mufeed | 09-Dec-22 | Security in the age of LLMs |
| Team8 | Team8 CISO Village | 18-Apr-23 | Generative AI and ChatGPT Enterprise Risks |
| Deloitte | Deloitte AI Institute | 13-Mar-23 | A new frontier in artificial intelligence - Implications of Generative AI for businesses |
| arXiv | Fabio Perez, Ian Ribeiro | 17-Nov-22 | Ignore Previous Prompt: Attack Techniques For Language Models |
| arXiv | Nicholas Carlini, et al. | 14-Dec-20 | Extracting Training Data from Large Language Models |
| danielmiessler.com | Daniel Miessler | 15-May-23 | The AI Attack Surface Map v1.0 |
| NCC Group | Chris Anley | 06-Jul-22 | Practical Attacks on Machine Learning Systems |
| CloudSecurityPodcast.tv | Ashish Rajan | 30-May-23 | Can LLMs Be Attacked? |