
EU AI Act August 2026: What You Must Do Now

April 1, 2026 · 9 min read · Pixel Management

This article is also available in Dutch

The EU AI Act becomes fully enforceable for high-risk AI systems on August 2, 2026. That is four months from now. If you haven't started a compliance programme, you're behind — but it's not too late. This article gives you a concrete month-by-month action plan, a risk classification table for common SMB tools, and a clear breakdown of the penalties you face.

In short: The EU AI Act requires every business that uses or develops AI systems in the EU to meet obligations based on the risk level of those systems. The heaviest requirements — conformity assessments, technical documentation, human oversight — become enforceable from August 2, 2026, with fines up to EUR 35 million.

For the full legal background, see our comprehensive guide on the EU AI Act and AI legislation in the Netherlands. For a general compliance checklist, read our AI compliance checklist for SMBs. This article is different: it is a countdown. Five months, five phases, zero excuses.

Why August 2026 Is the Tipping Point

The EU AI Act is not new legislation — it entered into force on August 2, 2024. Since then, obligations have been phased in:

  • February 2025: Ban on unacceptable AI practices (social scoring, manipulation of vulnerable groups)
  • August 2025: Obligations for general-purpose AI models (GPT-style systems)
  • August 2, 2026: Full enforcement for high-risk AI systems

That last date is what matters right now. High-risk systems — AI used for recruitment, credit scoring, medical decision-making — must be fully compliant by that date. Not "in progress." Not "we're working on it." Fully compliant.

In the Netherlands, enforcement sits with the Autoriteit Persoonsgegevens (AP) — the Dutch Data Protection Authority — and the Rijksinspectie Digitale Infrastructuur (RDI). Both agencies have announced they will actively enforce from August 2026 onwards. The AP has specifically noted that businesses unable to produce a risk assessment will be first in line for scrutiny.

Already looking for a step-by-step checklist? Our AI Act deadline 2026 checklist walks you through eight specific action items.

Month-by-Month Action Plan: April Through August 2026

April: Inventory All Your AI Systems

You cannot be compliant with something you don't know exists. The first step is a complete inventory.

Do this in April:

  • Send an inventory questionnaire to every department: which AI tools do you use?
  • Review all software subscriptions for embedded AI features (your CRM, your email tool, your accounting software — many have AI capabilities you haven't deliberately enabled)
  • Map shadow AI: tools employees have adopted without formal approval
  • Build a central register listing each system's name, vendor, purpose, data processed, and responsible department

The outcome: a complete AI register that forms the foundation for everything that follows.

Important: most SMBs underestimate how much AI they use. Inventories typically surface two to three times more systems than expected.
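
The register itself does not need to be sophisticated; a structured file that every department can review is enough. A minimal sketch in Python (the field names are illustrative, not prescribed by the AI Act):

```python
import csv
from dataclasses import dataclass, asdict, fields

# One AI register entry per system. Field names are illustrative,
# not mandated by the AI Act -- adapt them to your organisation.
@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    data_processed: str
    department: str
    risk_level: str = "unclassified"  # filled in during the May phase

register = [
    AISystem("Support chatbot", "VendorX", "Customer service",
             "chat transcripts", "Support"),
    AISystem("CV screener", "VendorY", "Applicant pre-selection",
             "CVs, personal data", "HR"),
]

# Persist the register as CSV so it can be shared and kept current.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AISystem)])
    writer.writeheader()
    writer.writerows(asdict(s) for s in register)
```

A spreadsheet works just as well; what matters is that the columns (system, vendor, purpose, data, owner) are consistent, so the May classification step can build directly on it.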

May: Classify Each System by Risk Level

With your inventory in hand, classify every system according to the AI Act's four risk categories.

Do this in May:

  • Assess each system: is it unacceptable, high-risk, limited risk or minimal risk?
  • Use the risk classification table later in this article as a starting point
  • Flag every system that processes personal data — that is where the AI Act and GDPR overlap. Read more about that intersection in our article on GDPR and AI rules
  • Unacceptable-risk systems: shut them down immediately
  • High-risk systems: mark as a priority for the next phase

The outcome: a classified overview that makes clear which systems require which approach.
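
For a first pass over a long register, the classification logic can be encoded as a simple rule set. A sketch, assuming keyword heuristics (the four tiers follow the AI Act, but the keyword rules below are illustrative only; real classification needs review of the specific use case):

```python
# First-pass risk classifier. The categories follow the AI Act's four
# tiers, but the keyword rules are illustrative heuristics -- always
# confirm the final classification against the specific use case.
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical triage"}
LIMITED_RISK_USES = {"chatbot", "content generation", "sentiment analysis"}

def classify(purpose: str) -> str:
    p = purpose.lower()
    if any(use in p for use in HIGH_RISK_USES):
        return "high risk"
    if any(use in p for use in LIMITED_RISK_USES):
        return "limited risk"
    return "minimal risk"  # default pending expert review

print(classify("CV screening for recruitment"))  # high risk
print(classify("Customer service chatbot"))      # limited risk
print(classify("Spam filtering"))                # minimal risk
```

Anything the rules flag as high risk, or cannot place at all, goes on the list for expert review rather than being waved through.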

June: Document and Conduct DPIAs

Documentation is the core of AI Act compliance. Without documentation, you cannot demonstrate compliance — and that is the same as non-compliance.

Do this in June:

  • Prepare technical documentation for each high-risk system: how does it work, what data does it process, what decisions does it make, how is human oversight arranged?
  • Conduct a Data Protection Impact Assessment (DPIA) for each system that processes personal data at scale
  • Document your risk management system: which risks have you identified, what measures are you taking, who is responsible?
  • Build a transparency register: where and how do you inform users they are interacting with AI? Review the specific requirements in our article on AI transparency requirements for businesses

The outcome: a complete documentation dossier per high-risk system that you can present during an audit.

Learn more about AI consulting?

View service

July: Test, Audit and Remediate

With documentation in place, it is time for the testing phase. Does your compliance hold up in practice?

Do this in July:

  • Run an internal audit: does the documentation match reality?
  • Test whether human oversight actually functions — can a staff member override an AI decision?
  • Check your transparency disclosures: are they in the right places, are they clear enough?
  • Test for bias in high-risk systems (especially AI used in recruitment or credit scoring)
  • Consider adopting the AIC4NL standard — the Dutch certification framework for trustworthy AI systems, developed in collaboration with the Ministry of Economic Affairs
  • Verify that your vendors (the AI system providers) have their conformity assessments in order — as a deployer, you are partly dependent on them

The outcome: a tested and validated compliance dossier, plus a list of remaining items to resolve in the final weeks.
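
The human-oversight check in particular lends itself to an automated test: can a staff member actually overrule the AI, and is that override logged? A minimal sketch; the `DecisionPipeline` class and its methods are hypothetical stand-ins for your own system:

```python
# Sketch of a human-oversight test: a reviewer must be able to
# overrule the AI's decision before it takes effect, and both the AI
# output and the override must be auditable. Hypothetical pipeline.
class DecisionPipeline:
    def __init__(self):
        self.log = []

    def ai_decision(self, applicant: str) -> str:
        decision = "reject"  # stand-in for a model prediction
        self.log.append(("ai", applicant, decision))
        return decision

    def human_override(self, applicant: str, decision: str) -> str:
        # The final decision is always the human's, and it is logged.
        self.log.append(("human", applicant, decision))
        return decision

pipeline = DecisionPipeline()
ai = pipeline.ai_decision("applicant-042")
final = pipeline.human_override("applicant-042", "accept")

assert final != ai                      # the override actually took effect
assert pipeline.log[-1][0] == "human"   # and it left an audit trail
print("human oversight check passed")
```

If a test like this cannot pass against your real system, because the AI decision is final or the override leaves no trace, that is exactly the kind of finding the July audit should surface.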

August: Sign-off and Monitoring

The deadline is August 2. Make sure everything is finalised before that date.

Do this in the first week of August:

  • Have a responsible person (director or compliance officer) formally sign off on the compliance status of each high-risk system
  • Set up an ongoing monitoring process: compliance is not a one-off activity but a continuous process
  • Schedule a semi-annual review to keep your AI register, documentation and risk assessments current
  • Communicate internally: all employees must know which rules apply and what is expected of them

The outcome: formal compliance by the deadline, with a structure that holds up afterwards.


Risk Classification: Common SMB AI Applications

Which risk level applies to the AI tools you use? This table maps common SMB applications to their risk categories.

AI Application | Risk Level | Explanation
Customer service chatbot | Limited risk | Transparency obligation: user must know it is AI
Spam filter (email) | Minimal risk | No specific obligations
AI-powered product recommendations | Minimal risk | No specific obligations
CV screening / applicant selection | High risk | Conformity assessment, documentation, human oversight required
Credit scoring / lending decisions | High risk | Direct impact on individuals' financial access
AI content generation (text, social media) | Limited risk | Must be labelled as AI-generated
Automated invoice processing | Minimal risk | No specific obligations (unless it has decision-making authority)
AI lead scoring (sales) | Limited risk | Transparency obligation where it directly affects customers
Customer service sentiment analysis | Limited risk | Transparency obligation
AI for medical triage / advice | High risk | Falls under critical health applications
Deepfake detection or generation | Limited risk | Disclosure required
Automated price optimisation | Minimal risk | No specific obligations

Note: this table is a simplification. The exact classification depends on the specific use case, context, and level of autonomy of the system. When in doubt: seek expert advice.

You can also test your AI systems via Dutch regulatory sandboxes — government-established testing environments where businesses can have their AI applications assessed before going live. This is particularly valuable for innovative applications where the risk category is not immediately clear.

Penalty Structure: What Are You Risking?

The EU AI Act uses a tiered penalty structure. The size of the fine depends on the type of violation.

Violation Type | Maximum Fine (Large Enterprise) | Maximum Fine (SMB/Startup)
Deploying a prohibited AI system | EUR 35 million or 7% of global annual turnover, whichever is higher | Proportionally reduced
Non-compliance with high-risk obligations | EUR 15 million or 3% of global annual turnover, whichever is higher | Proportionally reduced
Providing incorrect information to regulators | EUR 7.5 million or 1% of global annual turnover, whichever is higher | Proportionally reduced
Non-compliance with transparency obligations | EUR 15 million or 3% of global annual turnover, whichever is higher | Proportionally reduced

For SMBs: the regulation provides for proportional fines for small and medium-sized enterprises and startups. Where a large enterprise faces whichever of the fixed amount or the turnover percentage is higher, an SMB's cap is whichever of the two is lower. That does not mean you are safe: the fine is scaled to your business size, but it will still come. The Autoriteit Persoonsgegevens has indicated it will issue warnings for first offences, but has also made clear that repeated non-compliance will be fined directly.
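
The combination of a fixed amount and a turnover percentage makes the cap depend on company size. A worked sketch using the prohibited-practice tier from the table above (the higher-of rule for large enterprises and the lower-of rule for SMEs reflect the Act's proportionality approach; the function itself is illustrative, not legal advice):

```python
def max_fine(turnover_eur: int, fixed_cap: int, pct: int, is_sme: bool) -> float:
    """Fine cap: a large enterprise faces whichever of the fixed amount
    or the turnover percentage is HIGHER; for an SME or startup it is
    whichever is LOWER (the Act's proportionality rule)."""
    pct_cap = turnover_eur * pct / 100
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# Prohibited-practice tier: EUR 35 million or 7% of global turnover.
print(max_fine(2_000_000_000, 35_000_000, 7, is_sme=False))  # 140000000.0
print(max_fine(5_000_000, 35_000_000, 7, is_sme=True))       # 350000.0
```

For a large enterprise with EUR 2 billion turnover, the 7% figure dominates; for a EUR 5 million SMB, the cap drops to EUR 350,000. Still ruinous enough to take seriously.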

Dutch Context: AP, AIC4NL and Regulatory Sandboxes

The Netherlands has taken a relatively active role in AI Act implementation. A few specifics:

Autoriteit Persoonsgegevens (AP) has been designated as the primary supervisory authority for the AI Act in the Netherlands. The AP already has extensive enforcement experience from the GDPR and is applying that expertise to AI oversight. They have established a dedicated AI team that will begin active enforcement from August 2026.

AIC4NL is the Dutch standard for trustworthy AI systems, based on the German AIC4 standard and adapted for the Dutch market. AIC4NL certification is not legally required, but it provides a concrete framework to demonstrate that your AI systems meet AI Act requirements. An increasing number of business clients and public tenders are asking for it.

Regulatory sandboxes are government-established testing environments where businesses can assess their AI applications in a controlled setting. The Dutch Digital Trust Center (DTC) coordinates these. For SMBs uncertain about how their AI systems should be classified, sandboxes offer a low-barrier way to gain clarity.

You can improve your AI systems' compliance by setting up business automation the right way — with built-in logging, human oversight, and transparency disclosures.
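
In practice, "built-in" means the logging and the disclosure happen automatically on every AI call rather than relying on staff to remember them. A minimal sketch of that pattern as a Python decorator (the function names and log format are illustrative assumptions, not a prescribed implementation):

```python
import functools
import json
import time

def ai_audited(func):
    """Wrap an AI call so every invocation is appended to an audit log
    and the output carries a transparency disclosure. Illustrative only;
    adapt the log destination and disclosure text to your system."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        entry = {"ts": time.time(), "function": func.__name__,
                 "inputs": repr((args, kwargs)), "output": repr(result)}
        with open("ai_audit_log.jsonl", "a") as f:
            f.write(json.dumps(entry) + "\n")
        return f"{result}\n[This response was generated by AI]"
    return wrapper

@ai_audited
def draft_reply(ticket: str) -> str:
    return f"Suggested reply for: {ticket}"  # stand-in for a model call

print(draft_reply("Where is my order?"))
```

The same idea applies regardless of language or stack: one wrapper at the integration point gives you the audit trail and the disclosure for free on every call.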

What If You Are Not Ready in Time?

Let's be honest: many SMBs will not meet the deadline. What do you do then?

Scenario 1: You are mostly compliant but missing documentation. The AP will likely issue a warning first and offer a remediation period. But: you must be actively working on it. "We have started" is a better position than "we didn't know."

Scenario 2: You have done nothing. This is the risk scenario. The AP has announced that businesses with no preparations whatsoever will be prioritised for enforcement. Start today, even if you won't make the deadline — every step you take improves your position.

Scenario 3: You do not use high-risk AI. Then your obligations are limited to transparency disclosures for chatbots and AI-generated content. That is relatively straightforward to implement, but you still need to do it. Check your situation using our AI Act deadline 2026 checklist.

Next Steps

The clock is ticking. Four months is not much, but it is enough if you start now and work systematically. You have an action plan, a classification framework, and an overview of the consequences. The question is not whether the AI Act affects you — it does. The question is whether you are prepared.

Start with step one: the inventory. You can do that this week.


Curious how much time you could save?

Request a free automation scan. We'll analyze your processes and show you where the gains are — no strings attached.