The AI transparency obligation is the legal requirement for businesses to inform users, customers and affected individuals whenever they interact with an AI system or whenever AI is involved in decisions that affect them. Article 50 of the EU AI Act establishes this obligation for organisations that provide or deploy these AI systems in the European Union, regardless of company size.
Key takeaway: From 2 August 2026, you risk fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, for failing to meet transparency requirements. The good news: most obligations are practically achievable, provided you start implementation now.
When Must You Disclose AI Use to Users?
The EU AI Act distinguishes several situations where transparency is mandatory. The threshold is lower than many businesses expect.
Direct Interaction with AI (Article 50, Paragraph 1)
Every time a person communicates directly with an AI system, you must disclose this, unless it is already obvious from the context that they are dealing with AI. Examples include:
- Website chatbots: Visitors must know they are talking to AI, not a human
- Virtual assistants: Phone bots, voice bots and automated response systems
- AI-driven forms: Systems that ask questions and interpret answers
- Automated email responses: When AI generates the content of your replies
The disclosure must be made in a clear and distinguishable manner, at the latest at the time of the first interaction, never afterwards. A simple statement such as "You are communicating with an AI assistant" is sufficient.
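The safest pattern is to hard-wire the disclosure into the conversation flow itself, rather than rely on UI copy alone. A minimal sketch in Python, assuming a custom chat backend (the `ChatSession` class and its methods are illustrative, not a specific framework):

```python
# Minimal sketch: a chat backend that always sends the Article 50(1)
# disclosure before the first AI-generated reply. All names are illustrative.

AI_DISCLOSURE = "You are communicating with an AI assistant."


class ChatSession:
    def __init__(self) -> None:
        self.disclosure_sent = False

    def reply(self, user_message: str) -> list[str]:
        messages: list[str] = []
        if not self.disclosure_sent:
            # Disclose at the latest at the time of the first interaction
            messages.append(AI_DISCLOSURE)
            self.disclosure_sent = True
        messages.append(self._generate_answer(user_message))
        return messages

    def _generate_answer(self, user_message: str) -> str:
        # Call your AI model here; a placeholder keeps the sketch runnable
        return f"(AI answer to: {user_message})"


session = ChatSession()
print(session.reply("What are your opening hours?"))  # disclosure + answer
print(session.reply("And on weekends?"))              # answer only
```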
AI-Generated Content (Article 50, Paragraphs 2 and 4)
Do you generate content with AI that could be mistaken for authentic material? Additional requirements apply:
- Deepfakes and synthetic media: Images, video or audio that imitate real people must be labelled as AI-generated
- AI-generated texts: Texts about matters of public interest (news articles, policy advice) must be marked accordingly
- Synthetic voices: Audio that sounds like a real person requires disclosure
The labelling must be machine-readable: not only visible to users, but also detectable by automated systems. The European AI Office is developing technical standards for this, including digital watermarks.
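Until those standards land, one pragmatic interim approach is to embed a provenance tag in the file's metadata. A minimal sketch using the Pillow imaging library; the key names (`ai_generated`, `generator`) are our own illustration, not a prescribed standard:

```python
# Illustrative only: the official machine-readable marking standards are
# still in development. This sketch stores a provenance tag in PNG metadata
# using Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # machine-readable flag
    metadata.add_text("generator", "example-model-v1")  # hypothetical model name
    image.save(dst_path, pnginfo=metadata)              # dst_path must be .png


def is_labelled_ai_generated(path: str) -> bool:
    # Pillow exposes PNG text chunks as a dict on the opened image
    return Image.open(path).text.get("ai_generated") == "true"
```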
Emotion Recognition and Biometric Categorisation (Article 50, Paragraph 3)
Do you use AI that analyses emotions or categorises people based on biometric data? You must explicitly inform the individuals involved. Examples:
- Sentiment analysis in customer conversations
- Facial recognition for access control
- Voice analysis that detects emotions
For emotion recognition, you must also inform the individual about the type of data being processed and the purpose of the processing. This overlaps with GDPR requirements; read more about the intersection in our article on GDPR and AI rules.
What Transparency Requirements Apply Per AI System Type?
Not every AI system has the same obligations. The EU AI Act uses a risk-based approach that determines how extensive your documentation and communication must be.
| AI System Type | Risk Level | Transparency Duty | Documentation Duty | Regulator |
|---|---|---|---|---|
| Chatbot (customer contact) | Limited risk | Disclose it is AI | Limited | ACM |
| AI content generation (deepfakes) | Limited risk | Label as AI-generated | Limited | ACM |
| Emotion recognition (customer contact) | High risk | Actively inform + explain purpose | Extensive (Art. 13) | ACM + DPA |
| AI decision-making (credit, insurance) | High risk | Explainability + right to object | Extensive (Art. 13) | ACM + DPA + AFM |
| Spam filter / recommendation algorithm | Minimal risk | No legal obligation | None | None |
| AI recruitment tool | High risk | Inform + DPIA | Extensive (Art. 13) | ACM + DPA |
High-Risk Systems: Article 13 Documentation
For high-risk AI systems, extensive documentation requirements apply under Article 13. In practice, "meaningful information" covers at least:
- Intended purpose: What the system is designed for
- Accuracy and limitations: Known error margins and scenarios where the system fails
- Input and output data: What data the system uses and what it produces
- Human oversight: How human control is arranged
- Lifespan and maintenance: How long the system lasts and how it is updated
This is considerably more than the simple "this is AI" disclosure for limited-risk systems. Need help classifying your AI systems? Our AI compliance checklist for SMBs gets you started.
How Does the Transparency Obligation Overlap with the GDPR?
The EU AI Act does not exist in a vacuum. The GDPR already contains transparency obligations that apply to AI systems processing personal data.
GDPR Article 22: Automated Decision-Making
Article 22 GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them. This applies, for example, to:
- Automatic rejection of a loan application
- AI-driven price discrimination
- Automated candidate selection
For automated decision-making, the GDPR requires you to:
- Inform the data subject about the existence of automated decision-making
- Provide meaningful information about the underlying logic
- Explain the significance and expected consequences
- Offer the possibility of requesting human intervention
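One way to make these four duties hard to forget is to attach a structured notice to every automated decision. A minimal sketch; the field names and wording are illustrative, not legally prescribed text:

```python
# Minimal sketch of a notice covering the four GDPR Article 22 duties.
from dataclasses import dataclass


@dataclass
class AutomatedDecisionNotice:
    decision: str              # the outcome communicated to the data subject
    is_automated: bool         # 1. existence of automated decision-making
    logic_summary: str         # 2. meaningful information about the logic
    consequences: str          # 3. significance and expected consequences
    human_review_contact: str  # 4. how to request human intervention


notice = AutomatedDecisionNotice(
    decision="Loan application declined",
    is_automated=True,
    logic_summary="Scored on income, existing debts and payment history.",
    consequences="No credit is extended under the requested terms.",
    human_review_contact="Email review@example.com for a human assessment.",
)
```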
Combined Obligations
In practice, this means that for many AI systems you must comply with two sets of transparency requirements. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) and the Authority for Consumers and Markets (ACM) are collaborating on enforcement. The AP focuses on privacy aspects, the ACM on AI Act compliance.
Want to dive deeper into risks and liability? Read our article on AI risks and liability for businesses.
Save 8 hours per week on manual compliance documentation by structuring AI transparency from the start
How Do You Implement AI Transparency in Practice?
Theory is useful, but how do you execute? Here is a concrete implementation approach for SMBs.
Step 1: Inventory All Your AI Systems
Create a list of every AI tool your organisation uses. Do not forget the "invisible" AI:
- CRM systems with AI predictions (HubSpot, Salesforce Einstein)
- Email tools with AI-driven send optimisation (Mailchimp, ActiveCampaign)
- Customer service tools with automatic routing (Zendesk, Intercom)
- Accounting software with automated categorisation (Xero, QuickBooks)
- Marketing tools with AI targeting (Google Ads Smart Bidding, Meta Advantage+)
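A spreadsheet works, but a structured record keeps the inventory usable in the later steps. A minimal sketch; the field set is a suggested starting point, not a legal template:

```python
# Minimal sketch of a structured AI inventory. Tool names come from the
# list above; all field choices are illustrative.
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    vendor: str
    function: str        # what the AI actually does for you
    user_facing: bool    # does it interact with customers directly?
    personal_data: bool  # does it process personal data (GDPR-relevant)?


inventory = [
    AISystem("Einstein", "Salesforce", "sales predictions", False, True),
    AISystem("Intercom", "Intercom", "customer chat routing", True, True),
    AISystem("Smart Bidding", "Google Ads", "ad bid optimisation", False, True),
]
```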
Step 2: Classify by Risk Level
Use the table above to classify each system. When in doubt, default to the higher risk level: overcompliance is cheaper than a fine.
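That default rule is easy to encode. A minimal sketch, reusing the risk levels from the table above:

```python
# Sketch of the classification step. Ordering the risk levels as integers
# lets max() implement the "default to the higher risk level" rule directly.
from enum import IntEnum


class RiskLevel(IntEnum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


def classify(candidate_levels: list[RiskLevel]) -> RiskLevel:
    """When several categories could apply, pick the strictest one."""
    return max(candidate_levels)


# A recruitment chatbot is both a chatbot (limited risk) and a recruitment
# tool (high risk), so it is treated as high risk:
print(classify([RiskLevel.LIMITED, RiskLevel.HIGH]))  # RiskLevel.HIGH
```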
Step 3: Implement Disclosure Texts
For each AI interaction, you need an appropriate disclosure. Example texts:
Chatbot (limited risk):
"You are chatting with our AI assistant. A team member can take over the conversation if needed."
AI decision-making (high risk):
"We use an AI system that analyses your data when assessing your application. A team member always reviews the AI recommendation before a final decision is made. You have the right to request a fully human assessment."
AI-generated content:
"This image was generated using artificial intelligence."
Step 4: Document for High-Risk Systems
For each high-risk system, create a dossier containing:
- Technical specification of the AI model
- Training data description
- Known limitations and error rates
- Procedures for human oversight
- Log of bias audits
- Contact details of the responsible person
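Keeping the dossier as structured data instead of scattered documents makes audits and regulator requests much easier. A minimal sketch mirroring the checklist above, with illustrative field names:

```python
# Minimal sketch of a high-risk dossier as structured data. Field names
# are our own suggestion, not a prescribed format.
from dataclasses import dataclass, field


@dataclass
class HighRiskDossier:
    system_name: str
    model_specification: str               # technical specification
    training_data_description: str
    known_limitations: list[str] = field(default_factory=list)
    error_rates: dict[str, float] = field(default_factory=dict)
    human_oversight_procedure: str = ""
    bias_audit_log: list[str] = field(default_factory=list)  # dates + findings
    responsible_contact: str = ""
```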
Want to know how to build AI systems with transparency as a design principle? Custom software built compliant from day one prevents costly retrofitting.
Step 5: Train Your Team
Transparency is not just a technical matter. Your staff must:
- Know which systems contain AI
- Be able to explain to customers how AI is used
- Know escalation procedures when customers object
- Recognise signals of AI systems exhibiting unexpected behaviour
What Fines and Enforcement Can You Expect?
The ACM is set to become the primary supervisor for the EU AI Act in the Netherlands. Its fining powers are substantial:
| Violation | Maximum Fine (whichever is higher) | Example |
|---|---|---|
| Failing to disclose that a user interacts with AI | EUR 15 million or 3% of global turnover | Chatbot without AI disclosure |
| Missing technical documentation (high risk) | EUR 15 million or 3% of global turnover | AI assessment system without dossier |
| Deploying prohibited AI systems | EUR 35 million or 7% of global turnover | Workplace emotion recognition (impermissible purpose) |
| Providing incorrect information to regulator | EUR 7.5 million or 1% of global turnover | Inaccurate or incomplete reporting |
The ACM has announced it will start with "informative enforcement" in 2026 (warnings and guidelines) before moving to fines. But do not wait: businesses that get their transparency in order now avoid both the fines and a rushed remediation later.
The experience with GDPR enforcement is instructive. The Dutch DPA was relatively mild in the early years (the OLVG hospital fine for insufficient access security of patient records was EUR 440,000) but now imposes multimillion-euro fines, such as the EUR 3.7 million fine against the Dutch Tax Administration in 2022.
Prepare your organisation for the full AI legislation landscape. Our comprehensive article on the EU AI Act and AI legislation in the Netherlands gives you the complete legal framework.
How Does AI Transparency Relate to Discoverability and Trust?
Transparency is not just a legal obligation — it is a competitive advantage. Consumers and business clients have more trust in companies that are open about their AI use. According to Edelman's 2025 Trust Barometer, 67% of consumers are more likely to do business with companies that are transparent about AI.
Search engines — both traditional and AI-powered — also reward websites that clearly communicate about their methods. Transparency about AI use strengthens your E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness). Read more about discoverability in the AI era in our article on getting found in AI search engines.
Start With AI Transparency Today
The transparency obligation does not have to be a burden. Businesses that invest now in clear communication about their AI use build customer trust and stay ahead of the competition. Start by inventorying your AI systems, classify them by risk level, and implement the right disclosure texts.
Want to know which AI systems in your organisation are subject to transparency obligations and how to make them compliant? Request a free scan — we map your AI landscape and advise on the right approach.
Want to learn more about AI consulting?
View service