The General Data Protection Regulation (GDPR) has been in effect since 2018. AI systems existed before it, of course, but the explosive growth of generative AI has added a new layer of complexity. What does the right to explanation mean when an AI model makes a decision? How do you apply data minimisation to a system that benefits from more data? And when is a Data Protection Impact Assessment (DPIA) mandatory?
This article explains the overlap between GDPR obligations and AI applications, specifically aimed at SMBs that want to deploy AI without taking on legal risks.
Why the GDPR Matters for AI
The GDPR regulates the processing of personal data. The moment your AI system processes information traceable to an individual — names, email addresses, customer behaviour, conversations, facial recognition — you fall under the GDPR.
That applies to almost every business AI application:
- An AI chatbot answering customer questions processes names, emails, and conversation content
- A lead scoring system profiles individuals based on their behaviour
- An AI tool that screens CVs for HR may process special category data
- A customer analytics dashboard predicting purchase behaviour creates profiles
The only AI applications outside the GDPR scope are systems that work exclusively with anonymised or synthetic data — and in practice, that is rare. For a broader overview of the regulatory landscape, read our pillar article on the EU AI Act and AI legislation in the Netherlands.
The Five GDPR Principles That Clash with AI
1. Purpose Limitation (Article 5(1)(b))
What the GDPR says: Personal data may only be collected for a specific, explicitly defined purpose. Further processing for other purposes is not allowed without an additional legal basis.
Where it clashes with AI: AI models are sometimes trained on data originally collected for a different purpose. Customer data collected for order processing cannot simply be used to train an AI model that predicts purchasing behaviour.
Practical solution:
- Define in advance what you will use the data for
- Obtain explicit consent if the purpose differs from the original collection
- Document your legal basis per processing purpose (a register sketch follows this list)
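To make that documentation workable day to day, some teams keep a small machine-readable register of purposes and legal bases alongside the code. A minimal sketch in Python; the structure, field names, and values are illustrative assumptions, not a prescribed GDPR format:

```python
# Illustrative processing register: one entry per purpose, each with its own legal basis.
# Field names and values are assumptions for the example, not a prescribed GDPR format.
PROCESSING_REGISTER = [
    {
        "purpose": "order_processing",
        "legal_basis": "contract",      # Article 6(1)(b)
        "data_categories": ["name", "email", "order_history"],
    },
    {
        "purpose": "purchase_prediction_model",
        "legal_basis": "consent",       # Article 6(1)(a): purpose differs from original collection
        "data_categories": ["order_history", "browsing_behaviour"],
    },
]

def basis_for(purpose: str) -> str:
    """Look up the documented legal basis before any new processing starts."""
    entry = next((e for e in PROCESSING_REGISTER if e["purpose"] == purpose), None)
    if entry is None:
        raise ValueError(f"No documented legal basis for purpose '{purpose}' - stop and document first")
    return entry["legal_basis"]

print(basis_for("purchase_prediction_model"))  # -> consent
```

The format matters less than the habit: every purpose gets its own entry, and any processing without an entry stops until it is documented.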
2. Data Minimisation (Article 5(1)(c))
What the GDPR says: Collect only the data strictly necessary for the purpose.
Where it clashes with AI: Machine learning typically performs better with more data. More training data equals better predictions. That is in tension with the "as little as possible" principle.
Practical solution:
- Use pseudonymisation: replace identifiable data with codes during the training phase (see the sketch after this list)
- Anonymise where possible — anonymised data falls outside the GDPR
- Train on aggregated patterns instead of individual profiles
- Document why each data variable is necessary for the model
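As an illustration of the pseudonymisation step, a keyed hash can replace direct identifiers before the data reaches the training pipeline. A minimal sketch, assuming the key lives in a secrets manager separate from the training data (names and values are made up for the example):

```python
import hashlib
import hmac

# Illustrative sketch: pseudonymise identifiers before they reach the training pipeline.
# SECRET_KEY is an assumption for the example; store it separately from the training data
# (e.g. in a secrets manager) so the pseudonyms cannot be reversed without it.
SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"

def pseudonymise(value: str) -> str:
    """Replace an identifier (name, email) with a stable, keyed code."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

customer_record = {"email": "jan@example.com", "order_value": 129.95}
training_record = {
    "customer_id": pseudonymise(customer_record["email"]),  # code instead of email
    "order_value": customer_record["order_value"],          # only the fields the model needs
}
print(training_record)
```

Keep in mind that pseudonymised data is still personal data under the GDPR; only properly anonymised data falls outside its scope.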
3. Right to Explanation (Article 22 and Recital 71)
What the GDPR says: When a decision based solely on automated processing has legal or similarly significant effects on a person, that person has the right to meaningful information about the logic behind it.
Where it clashes with AI: Complex AI models — particularly deep learning networks — are inherently difficult to explain. They produce an outcome, but the reasoning behind it is rarely easy to communicate in plain language.
What counts as "significant effects"?
- Denial of insurance or a loan
- Rejection of a job application
- Price discrimination based on profiles
- Automatic termination of a service
Practical solution:
- Prefer interpretable models (decision trees, logistic regression) over black-box models where possible
- Implement an explanation layer (sketched after this list): even if the underlying model is complex, you can display the key factors ("This decision was based on: invoice history, payment behaviour, and company size")
- Always offer a path to human review
- Read our article on what an AI agent is for more context on how autonomous AI systems make decisions
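To illustrate what such an explanation layer can look like, the sketch below uses an interpretable model (logistic regression) and lists the factors that weighed most heavily in one individual decision. The feature names and training data are illustrative assumptions, not a complete explainability solution:

```python
# Minimal sketch of an "explanation layer": surface the factors that weighed most heavily
# in an individual decision. Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["invoice_history_score", "payment_delay_days", "company_size"]
X_train = np.array([[0.9, 2.0, 50.0], [0.3, 30.0, 5.0], [0.7, 10.0, 120.0], [0.2, 45.0, 8.0]])
y_train = np.array([1, 0, 1, 0])  # 1 = approve automatically, 0 = escalate

model = LogisticRegression().fit(X_train, y_train)

def explain(x: np.ndarray) -> list[tuple[str, float]]:
    """Rank features by their (signed) contribution to this individual decision.
    In practice you would standardise features first so contributions are comparable."""
    contributions = model.coef_[0] * x
    return sorted(zip(features, contributions), key=lambda pair: abs(pair[1]), reverse=True)

applicant = np.array([0.6, 20.0, 15.0])
outcome = model.predict(applicant.reshape(1, -1))[0]
print("Decision:", "approve" if outcome == 1 else "human review")
print("Based on:", [name for name, _ in explain(applicant)[:3]])
```

For more complex models you would swap in a dedicated explainability technique, but the principle stays the same: the data subject sees the main factors, not the raw model.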
4. Right to Object and Human Intervention (Articles 21 and 22)
What the GDPR says: Data subjects have the right to object to automated decision-making. For decisions with legal or similarly significant effects, there must be an option for human intervention.
Where it clashes with AI: Fully automated processes — an AI that automatically handles complaints, generates quotes, or creates work schedules — must have a "human in the loop" for decisions that significantly affect individuals.
Practical solution:
- Build an escalation route into every AI system that makes decisions about people (see the sketch after this list)
- Make it easy for customers and employees to request human review
- Document how the objection process works and train your team on it
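A human-in-the-loop gate can be as simple as a routing rule in front of the final decision step. A minimal sketch; the threshold, field names, and review queue are assumptions to illustrate the idea:

```python
# Sketch of a "human in the loop" gate: automated decisions that significantly affect a
# person, or where the model is unsure, go to a review queue instead of being executed
# automatically. Threshold and queue name are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumption: tune per use case

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    significant_effect: bool  # e.g. credit denial, rejection of an application

def route(decision: Decision) -> str:
    if decision.significant_effect or decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review_queue"   # a person decides; the AI output is only advice
    return "automated"                # low-impact, high-confidence: execute directly

print(route(Decision("cust-123", "reject_quote", 0.91, significant_effect=True)))
# -> human_review_queue
```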
5. Privacy by Design and by Default (Article 25)
What the GDPR says: Data protection must be built into the design of systems, not added after the fact.
Where it clashes with AI: Many AI projects start with a technical prototype that is later "made compliant." That is the wrong order.
Practical solution:
- Include privacy requirements in the project brief, not the delivery phase
- Involve your Data Protection Officer (DPO) or privacy advisor in the design
- Document privacy considerations at each development phase
When Is a DPIA Mandatory?
A Data Protection Impact Assessment (DPIA) is mandatory when processing "is likely to result in a high risk to the rights and freedoms of natural persons." For AI applications, a DPIA is almost always required when one or more of the following apply (a rough self-check follows the list):
- The system applies profiling (analysing behaviour to make predictions)
- The system processes personal data systematically and on a large scale
- The data is used for automated decision-making with significant effects
- New technologies are deployed (AI is explicitly mentioned as an example)
- Special category data is processed (health, ethnicity, political opinions)
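As a rough self-check (not legal advice), you can record which of these criteria apply and let that drive the decision to start a DPIA. A minimal sketch; the criterion keys simply mirror the list above:

```python
# Rough DPIA self-check, not legal advice: if any criterion applies, involve your DPO
# and plan a DPIA. The keys mirror the criteria listed above.
DPIA_CRITERIA = {
    "profiling": "Analyses behaviour to make predictions about individuals",
    "large_scale": "Processes personal data systematically and on a large scale",
    "automated_decisions": "Feeds automated decisions with significant effects",
    "new_technology": "Deploys a new technology such as AI",
    "special_categories": "Processes special category data (health, ethnicity, ...)",
}

def dpia_self_check(applicable: set[str]) -> dict:
    triggered = {key: DPIA_CRITERIA[key] for key in applicable if key in DPIA_CRITERIA}
    return {"dpia_likely_required": bool(triggered), "triggered_criteria": triggered}

# Example: an AI lead-scoring tool that profiles customers
print(dpia_self_check({"profiling", "new_technology"}))
```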
What goes into a DPIA?
| Component | Content |
|---|---|
| Processing description | What does the AI system do, what data is processed, for what purpose? |
| Necessity and proportionality | Why is this processing needed? Are there less intrusive alternatives? |
| Risk assessment | What risks exist for data subjects? How severe and likely? |
| Mitigation measures | What technical and organisational measures do you take to limit risks? |
| DPO opinion | Advice from the Data Protection Officer |
Costs: A DPIA typically costs €2,000–€10,000, depending on complexity. It is an investment, not a formality — a well-executed DPIA protects you against fines that can reach up to 4% of annual global turnover.
The EU AI Act on Top of the GDPR
With the EU AI Act now in effect, businesses face two overlapping layers of regulation:
| Aspect | GDPR | EU AI Act |
|---|---|---|
| Focus | Protection of personal data | Safety and transparency of AI systems |
| Scope | All processing of personal data | AI systems, regardless of whether personal data is processed |
| Risk classification | Not applicable | Four levels: minimal, limited, high, unacceptable |
| Supervisory authority | Data Protection Authority | National AI supervisory authority |
| Fines | Up to 4% of turnover or €20M | Up to 7% of turnover or €35M |
The overlap: If your AI system processes personal data (GDPR) and is classified as high-risk (AI Act), you must comply with both sets of requirements. That means double documentation, double impact assessments, and double accountability obligations.
Practical implication for SMBs: Most SMB AI applications (chatbots, automation, analytics) fall into the "limited risk" category under the AI Act and are considered standard processing under the GDPR. But once you use AI for HR screening, financial decision-making, or public services, you enter the high-risk domain. Read our comprehensive overview of the EU AI Act for the full classification.
Practical Compliance Checklist for SMBs
Use this checklist before implementing an AI system:
Before starting:
- Legal basis determined (consent, legitimate interest, or contract)
- Purpose limitation documented
- Data minimisation analysis performed
- DPIA necessity assessed
- Data processing agreement with AI vendor signed
- Privacy policy updated
During development:
- Privacy by design principles applied
- Pseudonymisation or anonymisation implemented
- Explainability of decisions ensured
- Human intervention made possible
- Security measures in place
After launch:
- Objection procedure communicated to data subjects
- Monitoring for bias and discrimination set up (a minimal sketch follows this checklist)
- Regular review of data usage and storage
- Incident response plan for data breaches
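For the bias monitoring item, even a simple periodic comparison of outcomes across groups is better than nothing. A minimal sketch; the group labels, alert threshold, and data are illustrative assumptions, and real monitoring needs more metrics plus a process for acting on the findings:

```python
# Minimal bias monitoring sketch: compare positive-outcome rates across groups.
# Group labels, threshold, and records are illustrative assumptions.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for r in records:
        counts[r["group"]][0] += r["approved"]
        counts[r["group"]][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

rates = approval_rates(decisions)
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}
if max(rates.values()) - min(rates.values()) > 0.2:  # assumption: alert threshold
    print("Warning: large gap in outcomes between groups - review for bias")
```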
Want to know if your organisation is ready for AI, including the compliance side? Read our article on whether your business is ready for AI. For a step-by-step approach, also check our AI compliance checklist for SMBs.
Frequently Asked Questions
Can I use customer data to train an AI model?
That depends on your legal basis. If customers have given consent for the specific processing, yes. If you rely on legitimate interest, you must balance your interest against the customer's privacy rights. In both cases: document your choice.

Do I need to tell customers they are talking to an AI?
Under the EU AI Act: yes, if the system interacts directly with a person (chatbots, virtual assistants). Under the GDPR: yes, if the system makes automated decisions with significant effects. Transparency is always the safest choice.

What if my AI vendor is outside the EU?
Then you need a data processing agreement that guarantees GDPR compliance. If the vendor is in the US, there must be a valid transfer mechanism (currently the EU-US Data Privacy Framework). Note: sending personal data to AI APIs from providers such as OpenAI, Google, or Anthropic can constitute an international transfer that you must document.

How high are the fines?
The GDPR allows fines up to €20 million or 4% of annual global turnover, whichever is higher. The EU AI Act goes up to €35 million or 7%. For SMBs, the likelihood of maximum fines is low, but data protection authorities are increasingly issuing fines — including to smaller organisations.
Getting Started with Compliant AI
GDPR compliance and AI implementation are not contradictory goals. The rules force you to think deliberately about what data you collect, what you use it for, and how you protect it. That leads to better AI systems, not worse ones.
The key is: start with the compliance questions before you start building, not after. An AI consulting session can help you create an implementation plan that is both technically and legally sound.
Start small, document everything, and build on a solid foundation. That is the fastest path to AI that works and that you are allowed to use. Want to know who is liable when AI makes a mistake? Read our article on AI risks and liability. For the technical side of AI agents and how to deploy them responsibly, also look at our services.