
AI Compliance Checklist for SMBs

March 7, 2026 · 8 min read · Pixel Management

This article is also available in Dutch

You are using ChatGPT for customer service drafts, an AI tool for lead scoring, and an automated system that processes invoices. But have you ever checked whether all of that complies with the regulations? And if it does, whether you are applying them correctly?

The EU AI Act is in force. The GDPR has applied since 2018. Together, they form a web of obligations that every business using AI needs to navigate. The problem: most SMBs do not know exactly what they need to do. The legislation is complex, the guidelines are vague, and the fines are steep.

This article gives you a concrete 10-step checklist. No legal jargon — action items you can execute this month. Work through them one by one and you will know exactly where you stand.

Why you need to act now

The EU AI Act follows a phased implementation timeline. The first obligations already apply. High-risk AI systems must meet strict requirements, and enforcement is ramping up in the coming months. The fines are serious: up to 7% of global annual turnover or 35 million euros — whichever is higher. The August 2026 deadline is approaching fast — read our specific AI Act deadline 2026 checklist for a month-by-month action plan.

But compliance is not just about avoiding fines. It is about trust. Businesses that handle AI transparently and responsibly are perceived as more trustworthy by customers and partners. That translates directly to revenue.

For the full legal context, read our comprehensive overview of the EU AI Act and AI legislation in the Netherlands.

The checklist: 10 steps to AI compliance

Step 1: Inventory all your AI systems

What: Map out every AI tool and system your business uses. Everything. Including the tools employees started using "just quickly" without a formal approval process.

How:

  • Send a survey to all departments: "Which AI tools do you use in your daily work?"
  • Check your software subscriptions and invoices for AI-related tools
  • Inventory AI embedded in existing software (your CRM probably has AI features you are not consciously using)
  • Create a spreadsheet with: tool name, vendor, purpose, what data it processes, which department uses it

Why this matters: You cannot be compliant with something you do not know exists. Most SMBs we speak with discover during this step that they use two to three times more AI tools than they thought. Unauthorized AI usage by employees — known as shadow AI — is one of the most common compliance blind spots that surfaces during this inventory.

If you want to understand which AI tools are common in SMBs and how they compare, read our article on using ChatGPT for business.
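The inventory spreadsheet from step 1 can also live in code, which makes it easy to keep under version control and export for audits. The sketch below is a minimal, hypothetical example: the tool names and data descriptions are illustrative, and the columns simply mirror the ones suggested above.

```python
from dataclasses import dataclass, asdict
import csv
import io

# One row per AI tool, mirroring the suggested spreadsheet columns.
@dataclass
class AIToolRecord:
    tool_name: str
    vendor: str
    purpose: str
    data_processed: str
    department: str

# Illustrative entries -- replace with the results of your own survey.
inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "customer service drafts",
                 "customer messages", "Support"),
    AIToolRecord("LeadScoreTool", "ExampleVendor", "lead scoring",
                 "contact and behavioral data", "Sales"),
]

def to_csv(records):
    """Serialize the inventory to CSV so it can be shared as a spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    for record in records:
        writer.writerow(asdict(record))
    return buf.getvalue()

print(to_csv(inventory))
```

A structured record like this also makes the annual re-inventory in step 10 a diff rather than a rewrite.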

Step 2: Classify each system by risk level

What: The EU AI Act categorizes AI systems into four risk levels: unacceptable, high risk, limited risk, and minimal risk. Your obligations depend on the category.

How:

  • Unacceptable risk (prohibited): social scoring, manipulation of vulnerable groups, real-time biometric identification. If you use this, stop.
  • High risk: AI used for HR selection, credit assessment, access to essential services, or law enforcement. Requires conformity assessments, documentation, and human oversight.
  • Limited risk: Chatbots, deepfake generators, emotion recognition. Requires transparency obligations (users must know they are interacting with AI).
  • Minimal risk: Spam filters, AI in games, recommendation systems. No specific obligations, but general due diligence applies.

In practice: Most SMB AI applications — chatbots, automation, content generation — fall into the limited or minimal risk category. But the moment you use AI for personnel decisions or financial assessments, you enter high-risk territory. Read more about AI risks and liability and specifically about AI in recruitment — one of the most discussed high-risk applications. Legal professionals are also increasingly using AI — read more about AI in the legal sector.
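The four risk tiers above lend themselves to a first-pass triage table. The sketch below is illustrative only: the tiers and obligations follow this article, the example use cases are not exhaustive, and a real classification always needs legal review.

```python
# The four EU AI Act risk tiers as described above. Examples are a small,
# illustrative subset -- the Act itself defines the full categories.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "manipulation of vulnerable groups"],
        "obligation": "prohibited -- stop using it",
    },
    "high": {
        "examples": ["HR selection", "credit assessment"],
        "obligation": "conformity assessment, documentation, human oversight",
    },
    "limited": {
        "examples": ["chatbots", "deepfake generators", "emotion recognition"],
        "obligation": "transparency: users must know they interact with AI",
    },
    "minimal": {
        "examples": ["spam filters", "recommendation systems"],
        "obligation": "general due diligence",
    },
}

def classify(use_case: str) -> str:
    """Rough first-pass triage; anything unmatched needs manual assessment."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier
    return "unclassified -- assess manually"

print(classify("chatbots"))       # limited
print(classify("HR selection"))   # high
```

A lookup like this will never replace a proper assessment, but running your step-1 inventory through it quickly flags which systems deserve attention first.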

Step 3: Conduct a DPIA where required

What: A Data Protection Impact Assessment (DPIA) is mandatory under the GDPR when processing is likely to result in a high risk to individuals. For AI applications that process personal data at scale, this is almost always the case.

How:

  • Describe what the AI system does and what personal data it processes
  • Assess necessity and proportionality — is AI the least intrusive way to achieve this purpose?
  • Identify risks to data subjects (discrimination, privacy violation, erroneous decisions)
  • Document mitigation measures you are taking to reduce those risks
  • Involve your DPO or privacy advisor in the assessment

Cost: A DPIA costs 2,000 to 10,000 euros depending on complexity. A significant investment, but far cheaper than a GDPR fine.

Read our detailed article on GDPR and AI rules for the full overlap between privacy law and AI.

Step 4: Create an AI policy for employees

What: An internal document that specifies which AI tools employees may use, for what purposes, and under what conditions. Without such a policy, AI usage in your organization is the Wild West — everyone does whatever they want.

How:

  • Define which AI tools are approved (and which are not)
  • Specify what data may and may not be entered into AI tools (never personal data in free versions)
  • Establish that AI output must always be reviewed by a human before it is shared externally
  • Describe the process for requesting new AI tools
  • Communicate the policy to all employees and get acknowledgment signatures

Tip: Keep the policy short and practical. A two-page document that everyone reads is better than a thirty-page protocol that nobody opens.

Step 5: Document your AI systems

What: The EU AI Act requires businesses deploying high-risk AI to maintain extensive documentation. But even for limited-risk systems, documentation is a best practice that protects you during an audit.

How: For each AI system, document:

  • Purpose and intended use
  • Type of AI (classification, generative, predictive)
  • What data is used for input and training
  • What decisions the system makes or supports
  • How output is validated
  • Who is responsible for the system
  • When it was last evaluated

In practice: Use a simple template per system. You do not need to write a novel — a structured one-page document per tool is sufficient.
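The one-page template can be enforced programmatically so incomplete documentation is visible at a glance. The helper below is a hypothetical sketch: its field names simply restate the bullet list above, and the `missing` return value tells you which sections still need writing.

```python
from datetime import date

def system_doc(**fields):
    """Fill a per-system documentation template; report any missing sections."""
    template = {
        "purpose": None,
        "ai_type": None,  # classification / generative / predictive
        "input_and_training_data": None,
        "decisions_made_or_supported": None,
        "output_validation": None,
        "owner": None,
        "last_evaluated": None,
    }
    unknown = set(fields) - set(template)
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    template.update(fields)
    missing = [key for key, value in template.items() if value is None]
    return template, missing

# Partially documented example system -- 'missing' lists the open sections.
doc, missing = system_doc(
    purpose="invoice processing",
    ai_type="predictive",
    owner="Finance",
    last_evaluated=str(date.today()),
)
print(missing)
```

Run once per tool in your inventory and you have an instant overview of documentation gaps.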

Step 6: Assess your AI vendors

What: If you use third-party AI tools (and you almost certainly do), you need to assess whether those vendors comply with relevant regulations. You are co-responsible for what happens with data.

How:

  • Check whether the vendor offers a data processing agreement (mandatory under GDPR)
  • Ask where data is stored and processed (EU or elsewhere?)
  • Verify the vendor has a privacy and security policy
  • Assess whether the vendor is transparent about how the AI model works
  • Ask whether user data is used for model training (many free AI tools do this)

Red flags:

  • No data processing agreement available
  • Data processed outside the EU without adequate protection
  • No clarity on data retention and deletion
  • User data used for model training with no opt-out

Step 7: Ensure transparency toward customers

What: Under the EU AI Act, you must inform users when they are interacting with an AI system. Specifically: if you have a chatbot, visitors must know they are talking to AI.

How:

  • Add a clear notice to your chatbot: "You are chatting with an AI assistant. A human agent is available if you prefer."
  • Update your privacy policy with information about AI usage
  • Inform customers when AI plays a role in decisions that affect them (pricing, risk assessment, service allocation)
  • If you publish AI-generated content, consider adding a disclosure

Why this is also commercially smart: Transparency about AI usage increases trust. Research shows that customers respond more positively to businesses that are upfront about AI than to businesses where they later discover they unknowingly interacted with AI. For a detailed explanation of which disclosures are required per AI system type, read our article on AI transparency requirements.

Step 8: Set up human oversight

What: For every AI application that makes or influences decisions affecting people, there must be a mechanism for human intervention. This is not optional — it is a legal requirement.

How:

  • Determine per AI system what level of human oversight is needed
  • For high-risk decisions: a human must approve every AI recommendation before execution
  • For limited risk: an escalation route where a customer or employee can request human review
  • Train the people who oversee AI systems — they need to understand how the system works and where it can fail
  • Document the oversight process

Wondering whether your organization is ready for responsible AI deployment? Read Is your business ready for AI?.

Step 9: Set up an audit trail

What: An audit trail is a log of all decisions an AI system makes. It allows you to check after the fact what happened, why, and whether it was correct.

How:

  • Log per AI interaction: input, output, timestamp, and which model version was used
  • Retain logs for at least the legally required period (for most processing: at minimum the retention period of the underlying data)
  • Ensure logs cannot be altered retroactively
  • Set up monitoring for anomalous behavior: if the system suddenly shows different patterns, you want to know
  • Periodically test whether you can reconstruct why a specific decision was made

In practice: Most AI platforms offer logging as a feature. Make sure it is turned on and that the data stays in the EU.
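The "logs cannot be altered retroactively" requirement can be approximated with a hash chain: each entry stores a hash of the previous one, so any after-the-fact edit breaks verification. The sketch below is a minimal illustration of the idea, not a production logging system; field names are assumptions, and in practice you would also write the entries to append-only storage.

```python
import hashlib
import json
import time

class AuditTrail:
    """Tamper-evident log: each entry chains to the hash of the previous one."""

    def __init__(self):
        self.entries = []

    def log(self, input_data, output_data, model_version):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {
            "timestamp": time.time(),
            "input": input_data,
            "output": output_data,
            "model_version": model_version,
            "prev_hash": prev_hash,
        }
        # Hash the record body; altering any field later invalidates the chain.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

trail = AuditTrail()
trail.log("customer query", "AI draft reply", "model-v2.1")
trail.log("sample invoice", "approved", "model-v2.1")
print(trail.verify())  # True
```

This also gives you the reconstruction test from the last bullet for free: replaying `verify()` confirms the recorded chain of decisions is intact.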

Step 10: Schedule an annual review

What: AI compliance is not a one-time project. Legislation changes, your AI usage expands, and models get updated. An annual review keeps you current.

How:

  • Block one day per year for an AI compliance review
  • Re-run the full inventory (step 1) — new tools have almost certainly been added
  • Reclassify systems if functionality has changed
  • Update documentation, policies, and data processing agreements
  • Check whether new legislation or guidelines have been published
  • Evaluate whether your human oversight is still adequate

Tip: Combine your AI review with your existing privacy audit. The overlap is significant and it saves time.


The cost of non-compliance

Let us be straightforward about the risks. The fines speak for themselves:

  • GDPR: Up to 20 million euros or 4% of global annual turnover
  • EU AI Act: Up to 35 million euros or 7% of global annual turnover
  • Combined: If your AI system processes personal data and is classified as high risk, you can violate both sets of regulations simultaneously

But fines are not the only risk. Reputational damage from a data breach or a discriminatory AI system can be far more costly than the fine itself. And the loss of customer trust is something you cannot fix with money.

The investment in compliance — a few thousand euros in advice, documentation, and technical adjustments — is a fraction of the potential damage.

Common mistakes to avoid

Mistake 1: "We are too small for enforcement." Data protection authorities across Europe have increasingly fined smaller organizations in recent years. The size of your business does not protect you.

Mistake 2: "We only use free AI tools, so it is low risk." Free tools are actually higher risk. The data you enter is often used for model training. If that includes personal data, you have a GDPR problem.

Mistake 3: "Our IT vendor handles compliance." As the data controller, you bear ultimate responsibility. You can outsource execution, but not accountability.

Mistake 4: "We did a DPIA once, so we are done." A DPIA must be repeated when the processing changes significantly. If you switch to a new AI model or expand the scope of an existing system, an update is required.

Getting started

You do not need to complete all ten steps this week. Start with step 1 (inventory) and step 4 (AI policy for employees). Those take about a day and immediately give you a clear picture of your current situation.

Then work through the remaining steps at a pace that fits your business. The most important thing is to begin — compliance is a process, not a destination. With the August 2026 enforcement date approaching, also read our urgent action plan for the AI Act deadline with a month-by-month approach.

For a comprehensive guide on protecting business data when using AI — including encryption, vendor DPAs, and secure deployment models — read our article on AI data security.

Want help setting up a compliant AI strategy? Request a free consultation and we will guide you through the checklist. Or explore how to deploy AI agents responsibly within the bounds of current regulations.
