Project Spotlight - AI Security and Privacy Guide
AI applications are on the rise, and so are the concerns regarding AI security and privacy. How can AI systems be attacked? How can they be protected? To answer these questions, OWASP now offers the AI Security & Privacy Guide: clear and actionable insights on designing, creating, testing, and procuring secure and privacy-preserving AI systems. By open-sourcing our understanding of the state of the art, we can build consensus and collect ideas from a variety of perspectives.
What’s different for AI
An AI system is mostly just like any other software system, but with some extra properties that require special attention for security and privacy. For example, it typically has a larger attack surface because of the added environment for collecting, annotating, and transforming data, and for training machine learning models. There are also special techniques that can be used to sabotage AI models, to copy them, or even to reconstruct the sensitive training data that was used. From a privacy perspective, there are, for example, limitations on what data you can collect and for what purposes. Transparency is also an important privacy aspect for AI.
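To make one of these attack classes more concrete, the sketch below illustrates a simple confidence-based membership inference test, one way an attacker might probe whether a specific record was part of a model's training data. It is a minimal illustration, not part of the guide itself: the dataset, model, and threshold are assumptions chosen purely for demonstration.

```python
# Minimal sketch of a confidence-based membership inference test.
# Assumptions (illustrative only): a scikit-learn classifier trained on a
# synthetic dataset, and a fixed confidence threshold as the attack signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# A model that overfits its training data tends to leak more about it.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def max_confidence(samples):
    """Return the highest predicted class probability for each sample."""
    return model.predict_proba(samples).max(axis=1)

# Attack heuristic: records the model is very confident about are more likely
# to have been members of the training set.
threshold = 0.9
in_rate = (max_confidence(X_train) >= threshold).mean()
out_rate = (max_confidence(X_test) >= threshold).mean()
print(f"flagged as 'member': training {in_rate:.2f} vs. unseen {out_rate:.2f}")
```

The gap between the two rates is the attack signal; defenses such as stronger regularization or differentially private training aim to shrink that gap.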
Scope boundaries
The guide also explores boundaries. There are many interesting aspects of AI, such as algorithmic bias and safety, that are not per se part of security and privacy. The general recommendation is to treat AI pragmatically. There is no need to be philosophical or overwhelmed: AI is software with a few extra aspects that we are becoming increasingly familiar with, through initiatives like this guide.