
EU AI Act: What It Means for Your Business in the Netherlands

March 5, 2026 · 14 min read · Pixel Management

This article is also available in Dutch

The EU AI Act is the world's first comprehensive AI law — and it applies to your business. Since August 1, 2024, the regulation has been in force, and the first obligations already apply. Whether you run an AI chatbot on your website, use automation in your sales process, or are considering AI agents: you need to know where you stand.

The EU AI Act is a European regulation that classifies AI systems into risk categories and imposes specific obligations on providers and users of those systems per category. The law applies to all businesses that develop or use AI in the EU — including SMBs.

What Is the EU AI Act?

The EU AI Act (officially: Regulation (EU) 2024/1689) is the world's first binding AI legislation. The European Union adopted this regulation to ensure that AI systems in Europe are safe, transparent, and trustworthy — without unnecessarily stifling innovation.

Timeline

  • August 1, 2024: Regulation enters into force
  • February 2, 2025: Prohibited AI practices take effect
  • August 2, 2025: Obligations for general-purpose AI models (like the models behind ChatGPT)
  • August 2, 2026: The regulation applies in full, including the obligations for most high-risk AI systems

This isn't a distant future concern. The first rules already apply, and the rest follows by August 2026.

Why it matters

The fines are serious: up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations. SMBs face proportional fines, but those can still be significant. More importantly: if customers or partners ask about your AI compliance and you don't have an answer, you lose trust.

Businesses already implementing AI should build compliance in from day one.

Who does it apply to?

The AI Act doesn't only apply to companies that build AI systems. It also applies to companies that use AI systems — and that includes most SMBs. If you have an AI chatbot on your website, you're using an AI system. If your employees use ChatGPT for customer communication, you're using an AI system. If you use a tool like HubSpot that does AI-powered lead scoring, you're using an AI system.

The regulation distinguishes between "providers" (who develop the AI system) and "deployers" (who put the system to use). As an SMB, you're almost always a deployer. Your obligations are lighter than a provider's, but they do exist. Ignorance isn't an excuse — just like with GDPR.

Risk Categories Explained

The core of the EU AI Act is a risk-based approach. Not all AI is equal — a spam filter has different rules than an AI that screens job applicants. The law classifies AI systems into four categories:

Unacceptable risk (prohibited)

These AI applications are simply banned in the EU:

  • Social scoring that leads to detrimental treatment (think China's system); the ban covers public and private actors alike
  • Manipulative AI that exploits vulnerable groups — think AI that targets toy advertisements at children using techniques that bypass their judgment
  • Real-time biometric identification in public spaces (with exceptions for law enforcement)
  • Emotion recognition in the workplace and education — a camera that analyses employees' moods to measure productivity is prohibited
  • Predictive policing based purely on profiling — AI that predicts who will commit a crime based on personal characteristics

For most SMBs, this isn't relevant — but it's worth knowing this boundary exists. It shows the EU is serious about AI regulation.

High risk

This is the category that deserves the most attention. High-risk AI systems are allowed but must meet strict requirements. Examples:

  • AI for recruitment (CV screening, candidate assessment)
  • AI for credit scoring and insurance decisions
  • AI in critical infrastructure (energy, water, transport)
  • AI for education (exam grading, admission decisions)
  • AI for access to essential services (social security, emergency services)

If your business uses AI for decisions that directly impact people, you likely fall into this category.

Concrete SMB example 1: A staffing agency that uses AI to automatically rank CVs and exclude candidates falls under high risk. The AI makes a decision with direct consequences for an individual — whether they get invited for an interview. The agency must document how the AI works, test for bias, and ensure a recruiter can override every rejection.

Concrete SMB example 2: A financial advisory firm using AI software to automatically determine which clients qualify for a loan. Because this directly affects someone's financial opportunities, it's high risk. The firm must be able to explain why the AI rejected an application, and a human advisor must be able to correct the decision.

Concrete SMB example 3: A security company deploying AI cameras that automatically identify individuals through facial recognition. Depending on the context (private premises versus public space), this could be high risk or even prohibited.

Limited risk (transparency obligations)

This is where most SMB AI applications land:

  • Chatbots — users must know they're interacting with AI
  • AI-generated content — must be labelled as such
  • Deepfakes — must be clearly labelled

If you have an AI chatbot on your website, you need to inform visitors they're communicating with AI. That's it. No extensive compliance documentation, no audits — just transparency. Read our detailed article on AI transparency requirements for businesses for the full explanation of Art. 50 obligations. Curious about what a chatbot costs? Read our article on AI chatbot costs in 2026.

Concrete SMB example: An e-commerce store with an AI-powered chatbot that helps customers with product recommendations. The only obligation: a clear notice like "You're chatting with our AI assistant, not a staff member." No audits, no documentation — just be honest with your customers. If the chatbot also makes product recommendations, that's still limited risk as long as the recommendation doesn't have a significant impact on the customer.

Minimal risk (no additional obligations)

The vast majority of AI applications fall here:

  • Spam filters
  • AI-powered product recommendations
  • Inventory optimisation
  • Internal process automation
  • AI for route optimisation
  • AI for email subject lines and A/B testing

No extra obligations — only existing legislation (GDPR, consumer law) still applies. Most forms of business process automation fall into this category.

Concrete SMB example: A logistics company using AI to optimise routes and predict inventory levels. This has no impact on individual people, so no extra obligations apply. The same goes for a marketing agency using AI for email subject lines or A/B test analysis, or an e-commerce store using AI to recommend products based on browsing behaviour.

What This Means for Your Business

Let's make it concrete. As an SMB, you're probably using AI in one or more of these ways:

AI chatbots on your website

Risk category: Limited risk. What you need to do: Clearly state that the visitor is interacting with an AI system. A notice like "You're chatting with our AI assistant" is sufficient. Also ensure your chatbot doesn't provide misleading information about products or services — that was already required under consumer law, but the AI Act reinforces it.

Where it goes wrong: Some businesses have their chatbot pose as a human employee, complete with a name and photo. That's a direct violation. The fix is simple: be honest. Customers appreciate transparency — research shows that users who know they're chatting with AI aren't less satisfied.
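To make the fix concrete, here is a minimal sketch of a compliant widget setup. It assumes a generic chat widget you configure yourself; the interface and property names are illustrative, not any specific vendor's API:

```typescript
// Illustrative widget configuration; the shape of this object is a sketch,
// not a real vendor's API.
interface ChatWidgetConfig {
  botName: string;        // avoid human names and photos for the bot persona
  initialMessage: string; // shown before the visitor types anything
  handoffLabel: string;   // always offer an escape hatch to a human
}

const widgetConfig: ChatWidgetConfig = {
  botName: "AI Assistant",
  // The transparency disclosure: state up front that this is AI, not a person.
  initialMessage:
    "You're chatting with our AI assistant, not a staff member. " +
    "I can make mistakes, so ask for a colleague at any time.",
  handoffLabel: "Talk to a human",
};
```

That is the whole intervention: one honest first message and a visible route to a human.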

Automated sales and lead scoring

Risk category: Usually minimal risk. What you need to do: As long as the AI doesn't make autonomous decisions with direct legal consequences for individuals (like automatically rejecting a quote), there's no additional obligation. Using AI to score and prioritise leads? Minimal risk. Using AI to automatically reject contracts? That moves towards high risk. Learn more about sales automation and how to set it up safely.

AI agents for business processes

Risk category: Depends on the application. What you need to do: An AI agent that answers emails or processes invoices typically falls under limited or minimal risk. An AI agent that makes HR decisions or automatically blocks customers could be classified as high risk. The key question is: does the AI have autonomy over decisions that directly affect people?

AI tools that employees use independently

Risk category: Depends on usage. What you need to do: If employees use ChatGPT or Copilot to write emails, draft reports, or advise customers, that's an AI application by your business. As an employer, you're responsible for how AI is used in your organisation. Create an AI policy that makes clear which tools are allowed, how they may be used, and which output always requires human review.

Deadline: from August 2026, all obligations for high-risk AI systems are fully enforceable. Businesses that start compliance now won't need to panic later.


Obligations per Risk Category

What do you actually need to arrange? Here's an overview per category:

For limited risk (chatbots, AI content)

  • Transparency: inform users they're interacting with AI
  • Labelling: mark AI-generated content as such

That's it. Two obligations. For most SMBs with an AI chatbot, this is everything that's needed.
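If you want to see how small these obligations are in practice, here is a sketch of the labelling side, assuming AI-drafted copy flows through your own publishing step. The function and the label wording are illustrative; the Act requires that the AI origin is clear, not a specific format:

```typescript
// Illustrative helper: attach a visible AI label to generated copy before
// it is published. Wording and placement are your choice; clarity is the
// requirement.
function labelAIGenerated(content: string): string {
  return `${content}\n\n[This text was generated with AI assistance.]`;
}

const draft = "Our spring collection has arrived, and with it ...";
console.log(labelAIGenerated(draft));
```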

For high risk (HR AI, credit assessment)

  • Risk management system: ongoing identification and mitigation of risks
  • Data governance: quality requirements for training data
  • Technical documentation: description of the system, its purpose, and how it operates
  • Logging: automatic recording of system events
  • Transparency: inform users about the system's operation and limitations
  • Human oversight: possibility for human intervention
  • Accuracy and robustness: testing for reliability and cybersecurity

This sounds like a lot, but it boils down to: document what you build, test whether it works, and ensure a human can intervene. If you're already building properly — with test phases, documentation, and a human-in-the-loop — you're halfway there.


What if you use an external AI system?

Many SMBs don't build their own AI but buy it in. In that case, the provider (the software vendor) is primarily responsible for technical compliance — documentation, testing, risk management. But as a deployer, you still have obligations:

  • You must use the system according to the provider's instructions
  • You must arrange human oversight if the system requires it
  • You must report incidents to the provider and potentially to the regulator
  • You must inform users that they're dealing with AI

The comparison with GDPR holds here too: with personal data, you remain responsible as the data controller even if you outsource the processing to a processor. Make sure your data processing agreements also include AI-specific clauses.

Common Mistakes in AI Compliance

Working with SMBs on AI compliance, we see the same mistakes repeatedly. Here's how to avoid them:

1. "We don't use AI" — while employees definitely do

The most common blind spot. Employees use ChatGPT, Copilot, or other AI tools for emails, reports, or customer communication. That also falls under the AI Act. If an employee uses ChatGPT to draft a quote and sends it without review, that's an AI application by your business.

Research shows that 68% of employees in the Netherlands use AI tools for work, but only 24% of employers are aware of it. That gap is a compliance risk you need to close now.

Solution: Inventory not just the AI tools your company officially purchased, but also what employees use on their own. Create an AI policy that clarifies which tools are permitted and how they may be used. Send a short survey: "Which AI tools do you use for work?" The answers will surprise you.

2. Forgetting transparency for your chatbot

Many businesses install an AI chatbot and forget the mandatory disclosure. The visitor chats with AI but thinks they're talking to a staff member. This is a direct violation of the transparency obligation.

Solution: Add a clear notice at the first message: "You're chatting with our AI assistant." This takes five minutes and prevents problems. Also add a disclaimer that the bot can make mistakes, and always offer the option to reach a human employee.

3. Assuming "minimal risk" means you have no obligations

The AI Act imposes no extra obligations for minimal-risk AI, but GDPR still applies. If your AI processes personal data — even for minimal-risk applications — you must comply with GDPR. Think: processing agreements, privacy statements, and a record of processing activities.

Solution: Evaluate your AI applications not only through the AI Act lens, but also through the GDPR lens. Need help mapping this out? Our AI readiness check is a good starting point.

4. Treating compliance as a one-time checkbox

The AI Act requires ongoing compliance. Your risk management system must stay up-to-date, your documentation must evolve, and you need to assess new AI applications before deploying them. A one-time audit isn't enough.

Solution: Schedule an annual AI compliance review, just like your GDPR review. Combine them if you do both — there's significant overlap.

5. Misjudging the risk category

Many businesses underestimate the risk level of their AI application. A tool that "just" filters CVs sounds harmless, but falls under high risk because it directly affects applicants' chances. Read our detailed article on AI in recruitment: rules and risks for a complete analysis of this high-risk application. An AI that "just" does customer segmentation could be high risk if that segmentation determines who does and doesn't get access to certain services.

Solution: For every AI application, ask the question: "Does this AI influence decisions that directly affect people?" If the answer is yes, or if you're unsure, treat it as high risk until you've established otherwise.


Practical Compliance Steps for SMBs

Compliance sounds like a large and intimidating project, but for SMBs it's manageable if you take it step by step. Here's a concrete action plan:

Week 1–2: AI inventory

Create a complete list of all AI applications in your business. Think broader than you'd expect:

  • Official tools: Chatbots, CRM with AI features (HubSpot, Salesforce), marketing automation, AI-powered analytics
  • Purchased software with AI components: Accounting software with automatic categorisation, HR tools with matching algorithms, e-commerce platforms with personalisation
  • Employee-driven tools: ChatGPT, Copilot, Gemini, Claude, Midjourney, DALL-E
  • Automated processes: Workflows that call AI APIs, automated decision-making

For each application, note: what does it do, who uses it, what data does it process, and what decisions does it make (or support)?
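A structured register makes this repeatable. Below is a sketch of what one entry could look like; the field names are illustrative and simply mirror the questions above:

```typescript
// Sketch of a register entry; extend the fields to fit your own inventory.
interface AIRegisterEntry {
  name: string;            // e.g. "Website chatbot"
  whatItDoes: string;      // plain-language description
  whoUsesIt: string[];     // teams or roles
  dataProcessed: string[]; // data categories, including personal data
  decisionsMade: string;   // decisions it makes or supports
  vendor?: string;         // provider, if the system is bought in
}

const register: AIRegisterEntry[] = [
  {
    name: "Website chatbot",
    whatItDoes: "Answers product questions and recommends items",
    whoUsesIt: ["Customer service"],
    dataProcessed: ["Chat transcripts", "Email addresses"],
    decisionsMade: "Recommends products; takes no binding decisions",
    vendor: "External SaaS provider",
  },
];
```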

Week 3: Risk analysis

Go through your list and determine for each application:

  1. Does it process personal data? If so, GDPR obligations apply.
  2. Does it make autonomous decisions that affect people? If so, possibly high risk.
  3. Does it communicate directly with customers? If so, transparency obligations.
  4. Is there human oversight? If not, you may need to add it.

Create a simple table with three columns: application, risk category, required action.
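If you prefer something executable alongside that table, the four questions translate directly into a first-pass classifier. This is a sketch with illustrative names; treat any "high" outcome as a prompt to verify against the Act's Annex III list, not as a final verdict:

```typescript
type RiskCategory = "high" | "limited" | "minimal";

interface RiskAnswers {
  processesPersonalData: boolean;     // question 1
  makesDecisionsAboutPeople: boolean; // question 2
  talksDirectlyToCustomers: boolean;  // question 3
  hasHumanOversight: boolean;         // question 4
}

// Deliberately conservative first pass: when in doubt, classify upward.
function classify(a: RiskAnswers): {
  category: RiskCategory;
  gdprApplies: boolean;
  actions: string[];
} {
  const actions: string[] = [];
  let category: RiskCategory = "minimal";
  if (a.makesDecisionsAboutPeople) {
    category = "high";
    if (!a.hasHumanOversight) actions.push("Add human oversight before deploying");
  } else if (a.talksDirectlyToCustomers) {
    category = "limited";
    actions.push("Add a transparency notice");
  }
  if (a.processesPersonalData) actions.push("Check GDPR obligations");
  return { category, gdprApplies: a.processesPersonalData, actions };
}
```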

Week 4: Quick wins

Most SMBs can handle 80% of their compliance in a single week:

  • Add transparency notices to all chatbots and AI interactions
  • Draft an AI policy for employees (1–2 pages is enough)
  • Add AI-specific clauses to your data processing agreements
  • Document your AI applications in a register (similar to your GDPR register)

Month 2–3: Address high-risk systems

If you use high-risk AI, start with:

  • Document the purpose, operation, and limitations of the system
  • Test for bias and discrimination (especially for HR and financial applications)
  • Ensure a human can override every AI decision
  • Set up a log that records AI decisions (see the sketch after this list)
  • Create a plan for ongoing monitoring
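The logging step in the list above can start as an append-only record per decision. A minimal sketch with illustrative field names; the essential property is that every AI decision, and every human override, leaves a trace you can retain and audit:

```typescript
interface AIDecisionLogEntry {
  timestamp: string;      // ISO 8601
  system: string;         // which AI system decided
  inputRef: string;       // a reference to the input, not the raw data
  decision: string;       // what the system decided
  overriddenBy?: string;  // set when a human changes the outcome
}

function logDecision(entry: AIDecisionLogEntry): void {
  // In practice, write to an append-only store; stdout stands in here.
  console.log(JSON.stringify(entry));
}

logDecision({
  timestamp: new Date().toISOString(),
  system: "cv-ranking-tool",
  inputRef: "application-2026-0142", // reference, not the applicant's data
  decision: "shortlisted",
  // overriddenBy is filled in when a recruiter changes the outcome
});
```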

Ongoing: annual review

Schedule an annual AI compliance review, combined with your GDPR review. Check:

  • Have new AI applications been added?
  • Are existing risk categories still correct?
  • Is the documentation up-to-date?
  • Is human oversight working properly?
  • Have there been incidents requiring action?

How to Prepare

Five concrete steps you can take this month:

1. Inventory your AI usage

Make a list of all AI applications in your business. Think about:

  • Chatbots on your website
  • AI features in your CRM or marketing tools (like HubSpot, Mailchimp)
  • Automated processes that use AI
  • AI tools that employees use individually (ChatGPT, Copilot)

That last one is often overlooked. If employees independently use AI tools for customer communication or decision-making, that also falls under the AI Act.

2. Classify by risk category

Go through your list and determine the risk category for each application. Most will be minimal or limited risk. Note which applications might be high risk — those are your priorities.

3. Implement transparency measures

For every chatbot or AI interaction: add a clear notice. This takes an afternoon and saves you considerable hassle later.

4. Document high-risk systems

If you use high-risk AI: start documenting. Describe the purpose, operation, training data, and measures for human oversight. This doesn't need to be a 100-page legal document — a clear technical document will do.

5. Schedule a compliance review

Put an annual review in your calendar. The AI Act isn't a one-time checkbox — it's ongoing compliance. Combine it with your existing GDPR review if you have one.

Want to know if your business is ready for AI? Our AI readiness check also helps identify compliance risks. Or use our AI compliance checklist for SMBs to verify step by step that you meet all requirements. More on how the legal sector itself uses AI in our article on AI for lawyers and legal professionals.

For a practical approach to AI implementation that accounts for compliance from the start, read our article on implementing AI in your business. And if you're considering automation, it helps to understand the costs of business automation and the compliance implications.

Conclusion

The EU AI Act isn't a reason to panic — it's a reason to act. For most SMBs, it comes down to three things:

  1. Know which AI you use (including tools employees adopt on their own)
  2. Be transparent with users (especially with chatbots and AI-generated content)
  3. Document and test if you use AI for decisions that affect people

Businesses that start compliance now will have a competitive advantage. They build trust with customers, avoid fines, and are better prepared for future regulation. In tenders and B2B partnerships, AI compliance is increasingly becoming a selection criterion — businesses that sort it out now will benefit later.

The key is: build compliance in from the start, not after the fact. That saves time, money, and stress. Just like with GDPR: the businesses that took it seriously early had the least work when enforcement started.

Ready to make your AI usage compliant? Follow our step-by-step AI compliance checklist. With full enforcement for high-risk systems starting August 2026, read our AI Act deadline 2026 checklist for concrete monthly action steps.


Curious how much time you could save?

Request a free automation scan. We'll analyze your processes and show you where the gains are — no strings attached.