The AI Act deadline of August 2, 2026, is the date when the EU AI Act becomes fully enforceable for high-risk AI systems — including mandatory conformity assessments, documentation requirements and human oversight obligations. For every business in the Netherlands and across the EU that uses AI, this is no longer abstract regulation from Brussels. It's a concrete date with concrete consequences.
Key takeaway: Businesses that aren't compliant by August 2, 2026, face fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Fewer than 20% of Dutch SMBs have started preparations (Nederland Digitaal, 2026).
For the full legal context, see our comprehensive overview of the EU AI Act and AI legislation in the Netherlands. This article is different: it's an action list. Not a legal essay, but steps you can take this month.
What Changes on August 2, 2026?
The AI Act uses a phased implementation schedule. The first prohibitions (unacceptable risk) have been in effect since February 2025. Obligations for general-purpose AI models apply since August 2025. But the biggest impact comes on August 2, 2026: full enforcement of rules for high-risk AI systems.
This means:
- Conformity assessments become mandatory for high-risk AI
- Technical documentation must be in order
- Human oversight must be demonstrably implemented
- Risk management systems must be actively running
- Transparency obligations apply across all AI categories
Enforcement shifts from purely national (data protection authorities, sector regulators) to a combination of national and European oversight. The European AI Office coordinates and can independently enforce rules for general-purpose AI models.
The Checklist: 8 Steps Before August 2, 2026
1. Inventory All AI Systems in Your Organisation
You can't be compliant with something you don't know exists. Create a complete overview of every AI tool, model and automated decision in your business. Don't forget the tools employees have adopted on their own initiative — shadow AI is one of the most underestimated compliance risks for SMBs.
What to record per system:
- Name and vendor
- Purpose and use case
- What data it processes (personal data? financial data?)
- Which department uses it
- Whether a data processing agreement exists
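The inventory above is easiest to keep as a structured register rather than a loose spreadsheet. A minimal sketch in Python, with one record per AI system; the vendor names and entries are invented examples, not real products:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the AI system register (step 1)."""
    name: str
    vendor: str
    purpose: str
    data_categories: list[str]  # e.g. ["personal data", "financial data"]
    department: str
    has_dpa: bool  # data processing agreement in place?

# Hypothetical example entries
register = [
    AISystem("Support chatbot", "ExampleBot BV", "Customer support",
             ["personal data"], "Customer Service", has_dpa=True),
    AISystem("CV screener", "HireTool", "Pre-selecting job applicants",
             ["personal data"], "HR", has_dpa=False),
]

# Flag systems that still lack a data processing agreement
missing_dpa = [s.name for s in register if not s.has_dpa]
print(missing_dpa)  # ['CV screener']
```

A register like this also gives you the raw material for step 6 (technical documentation) and step 7 (vendor audits), since every field maps to a question you'll need to answer anyway.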
2. Classify Each System by Risk Level
The AI Act works with four risk categories. Obligations differ per category. Our AI compliance checklist for SMBs contains a detailed explanation per category, but here's the quick version:
| Risk Category | Examples | Obligations |
|---|---|---|
| Unacceptable | Social scoring, manipulation of vulnerable groups | Prohibited — stop immediately |
| High risk | HR selection, credit scoring, medical diagnostics | Conformity assessment, documentation, human oversight |
| Limited risk | Chatbots, content generation, sentiment analysis | Transparency obligations (disclosure) |
| Minimal risk | Spam filters, recommendation systems, game AI | No specific obligations |
Most SMB AI applications fall into the limited or minimal risk category. But watch out: as soon as AI influences personnel decisions or financial assessments, you're in high-risk territory.
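For a first pass over a long inventory, the examples in the table can be turned into an indicative lookup. This sketch is an illustration only, not a legal classification; the definitive test is Annex III of the AI Act, and any borderline case needs a proper assessment:

```python
# Simplified mapping based on the risk table above. Indicative only:
# always verify borderline cases against Annex III of the AI Act.
RISK_EXAMPLES = {
    "unacceptable": ["social scoring", "manipulation of vulnerable groups"],
    "high": ["hr selection", "credit scoring", "medical diagnostics"],
    "limited": ["chatbot", "content generation", "sentiment analysis"],
    "minimal": ["spam filter", "recommendation system", "game ai"],
}

def indicative_risk(use_case: str) -> str:
    """Return an indicative risk category for a use case, or 'unknown'."""
    uc = use_case.lower()
    for category, examples in RISK_EXAMPLES.items():
        if any(example in uc for example in examples):
            return category
    return "unknown"

print(indicative_risk("HR selection tool"))  # high
print(indicative_risk("Website chatbot"))    # limited
```

Anything that comes back "unknown" or "high" goes to the top of the review pile; that is where the assessment effort belongs.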
3. Check Your Transparency Obligations
Regardless of risk category: if customers or users interact with AI, they need to know. This applies to chatbots, automated emails, AI-generated content and synthetic media. Read the specific requirements in our article on AI transparency requirements for businesses.
In practice:
- Add a clear notice to your chatbot: "You are communicating with an AI assistant"
- Label AI-generated content accordingly
- Document how and where you've implemented AI disclosure
4. Conduct a Risk Assessment for High-Risk Systems
Do you have AI systems classified as high risk? You'll need to set up a formal risk management system. That sounds heavier than it is. The core: identify what can go wrong, how likely it is, and what the impact would be.
Document per high-risk system:
- Potential failures and their impact
- Measures to mitigate risks
- Who's responsible for monitoring
- How you intervene when something goes wrong
To understand the legal consequences of AI failures, read our article on AI risks and liability.
5. Ensure Human Oversight
The AI Act requires that high-risk AI systems have built-in human oversight. This doesn't mean someone has to approve every output. It does mean:
- There's a designated person responsible for the system
- That person understands the system's capabilities and limitations
- There's a procedure to stop or override the system
- Decisions with significant impact on individuals aren't fully automated
This is also relevant for limited-risk systems. An AI chatbot advising customers about financial products must have an escalation procedure to a human agent.
6. Prepare Technical Documentation
For high-risk AI, you must maintain documentation proving your system meets the requirements. The AI Act prescribes specific elements:
- Description of the system and its intended purpose
- Data categories the system processes
- Performance metrics and known limitations
- Instructions for use
- Log of changes and incidents
Start today. Writing documentation retroactively costs ten times more than maintaining it from the beginning.
7. Audit Your Vendors
Using AI through SaaS platforms? As a deployer, you're responsible for correct usage, but the provider must supply conformity declarations for high-risk systems. Ask your vendors:
- Does the system comply with the EU AI Act?
- Is a conformity declaration available?
- What risk category applies to the system?
- How is human oversight facilitated?
- What does the data processing agreement say about AI-specific obligations?
If your vendor can't answer these questions, that's a red flag.
8. Create a Compliance Calendar
Compliance isn't a one-time effort. Schedule recurring reviews:
- Monthly: Check for new AI tools employees have started using
- Quarterly: Review risk assessments and documentation
- Every six months: Audit transparency measures and disclosure texts
- Annually: Full compliance audit, potentially with an external party
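The calendar above can be kept in whatever tool your team already uses; as a minimal sketch, here is how the recurring due dates could be computed from the last completed cycle (task names and day counts are illustrative approximations of the intervals above):

```python
from datetime import date, timedelta

# Review intervals from the calendar above (approximate month lengths)
REVIEW_INTERVALS = {
    "shadow-AI scan": timedelta(days=30),          # monthly
    "risk assessment review": timedelta(days=91),  # quarterly
    "transparency audit": timedelta(days=182),     # every six months
    "full compliance audit": timedelta(days=365),  # annually
}

def next_reviews(last_done: date) -> dict[str, date]:
    """Given the date of the last completed cycle, return the next due dates."""
    return {task: last_done + interval
            for task, interval in REVIEW_INTERVALS.items()}

for task, due in next_reviews(date(2026, 8, 2)).items():
    print(f"{task}: due {due.isoformat()}")
```

The point is not the code but the discipline: each review gets an owner and a due date, so compliance doesn't silently decay after the initial push.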
What if You're Not Ready in Time?
Enforcement starts on August 2, 2026, but don't expect a day-one crackdown. The European Commission has indicated that the initial phase will focus on awareness and guidance. That said, complaints from consumers, employees or competitors will be investigated immediately.
The fine structure has three tiers:
- Up to EUR 35 million or 7% of global turnover, whichever is higher: for prohibited AI practices
- Up to EUR 15 million or 3% of global turnover, whichever is higher: for violations of high-risk requirements
- Up to EUR 7.5 million or 1% of global turnover, whichever is higher: for supplying incorrect, incomplete or misleading information to authorities
For SMEs and start-ups, each tier instead caps at the lower of the two amounts, but even a proportional fine can run to tens of thousands of euros. More important still is the reputational damage: a business publicly fined for AI non-compliance loses the trust of customers and partners.
Action Items for This Month
You don't need to do everything at once. Start with these three actions:
- Week 1: Inventory all your AI systems (step 1). Send an email to all departments.
- Week 2: Classify each system by risk level (step 2). Flag the high-risk systems.
- Week 3-4: Implement transparency measures for all your chatbots and AI-generated content (step 3).
After that, you have a foundation. Build the remaining steps over the following months.
Want professional guidance through the compliance process? An AI consulting engagement gets you fully prepared for the deadline within four to six weeks. Also read our urgent action plan EU AI Act August 2026: What You Must Do Now for a month-by-month countdown.