AI in recruitment is the use of artificial intelligence to source, screen and evaluate candidates — from automated CV filtering and chatbot interviews to assessment tools and predictive models for candidate success. The EU AI Act classifies these applications as high-risk, meaning employers in the Netherlands must meet strict requirements before deploying AI in their hiring process.
That classification is well-founded. In 2024, the Netherlands Institute for Human Rights published a report showing that 40% of tested AI recruitment tools structurally disadvantaged women, older workers or candidates with a migration background. Amazon scrapped an internal AI hiring system in 2018 after it systematically scored women lower. And in New York City, employers have been legally required since 2023 to have AI recruitment tools audited for bias annually.
This article explains what rules apply, what risks you face and how to use AI responsibly in your hiring process.
Why Does the EU AI Act Classify Recruitment as High-Risk?
The EU AI Act distinguishes four risk levels for AI systems. Recruitment falls in the high-risk category (Annex III, point 4) because AI decisions in hiring have direct impact on individuals' lives and careers.
Specifically, these applications are designated as high-risk:
- CV screening: Automatically filtering or ranking applicants
- Chatbot interviews: AI systems that interview and evaluate candidates
- Assessment tools: AI-driven tests that measure suitability, personality or competencies
- Predictive models: AI that predicts whether a candidate will be successful in the role
- Automated pre-selection: Any AI that determines which candidates advance to the next round
What this means for employers:
| Obligation | What You Must Do | Deadline |
|---|---|---|
| Conformity assessment | Have the AI system assessed for safety and non-discrimination | Before deployment |
| Technical documentation | Document how the system works, what data it was trained on and how decisions are made | Ongoing |
| Human oversight | Ensure a human can override every AI-supported decision | Ongoing |
| Bias monitoring | Periodically test the system for discrimination | At least annually |
| Transparency | Inform candidates that AI is used in the selection process | With every use |
| DPIA | Conduct a Data Protection Impact Assessment | Before deployment |
| Registration | Register the high-risk AI system in the EU database | Before deployment |
Read our comprehensive overview of AI legislation in the Netherlands and the EU AI Act for the full legal context.
What Discrimination Risks Does AI in Hiring Bring?
AI discrimination in recruitment is not a hypothetical scenario. It happens now, structurally, and often invisibly.
How Does Bias Arise in Recruitment AI?
AI models learn patterns from historical data. If a company has predominantly hired men for technical roles over the past ten years, the model learns that male candidates are a "better fit." Not because that's true — but because the data shows that pattern.
The most common forms of bias:
- Gender bias: Women are scored lower for roles where men historically dominate
- Age bias: Older candidates are disadvantaged because the model associates "dynamic" with younger age
- Ethnic bias: Names, postal codes or language use are unintentionally factored in
- Socio-economic bias: Candidates from certain universities or with certain hobbies are favored
- Disability bias: CVs with gaps (due to illness or disability) are ranked lower
Concrete Examples
Amazon (2018): The internal AI hiring system penalized CVs containing the word "women's" (such as "women's chess club captain"). The system was trained on ten years of hiring decisions that were predominantly male.
HireVue (2021): A complaint filed with the FTC alleged that the company's video interview tool used facial analysis that systematically disadvantaged people with darker skin tones. HireVue subsequently removed the facial analysis feature.
iTutorGroup (2023): The EEOC reached a $365,000 settlement after the company used an AI system that automatically rejected female applicants aged 55 and over and male applicants aged 60 and over.
The financial and legal risks of AI discrimination in hiring are significant. Read more about liability and protection in our article on AI risks and liability.
What Dutch Legislation Is Relevant?
In addition to the EU AI Act and the GDPR, specific Dutch employment law rules apply:
- Article 1 of the Constitution: Prohibition of discrimination
- General Equal Treatment Act (AWGB): Prohibits distinctions based on religion, belief, political opinion, race, gender, nationality, sexual orientation and marital status
- Age Discrimination in Employment Act (WGBL): Specific prohibition of age discrimination
- Equal Treatment on the Basis of Disability or Chronic Illness Act (WGBH/CZ): Protection against disability discrimination
The Netherlands Institute for Human Rights adjudicates discrimination complaints and can also assess AI-driven discrimination. An employer that deploys a discriminatory AI system can be held liable — even if the employer did not know the system was discriminating.
How to Use AI Responsibly in Hiring
1. Conduct a DPIA Before You Start
A Data Protection Impact Assessment is mandatory for high-risk AI systems. Document:
- What personal data is processed
- What decisions are (partly) made by AI
- What risks exist for candidates
- What measures you take to mitigate those risks
2. Test for Bias Before You Launch
Use structured bias audits:
- Test with synthesized CVs that vary only on gender, age or ethnicity
- Compare scores and look for statistically significant differences
- Repeat the test at least annually and after every model update
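The paired-CV test above can be sketched in a few lines of Python. This is a minimal illustration, not a production audit: `score_cv` is a hypothetical placeholder standing in for the vendor's scoring API (deliberately biased here so the audit has something to detect), and a sign-flip permutation test checks whether the observed score gap between otherwise-identical CVs could plausibly be chance.

```python
import random
import statistics

def score_cv(cv: dict) -> float:
    # Hypothetical stand-in for the AI tool under audit. Deliberately
    # biased (+5 for one gender) so the test below has a gap to find.
    return cv["years_experience"] * 10 + (5 if cv["gender"] == "m" else 0)

def make_matched_pair(years: int) -> tuple[dict, dict]:
    """Two CVs identical on every field except the protected attribute."""
    return ({"years_experience": years, "gender": "m"},
            {"years_experience": years, "gender": "f"})

def permutation_test(diffs: list[float], n_iter: int = 10_000, seed: int = 0) -> float:
    """p-value for the null hypothesis that the mean paired score
    difference is zero (sign-flip permutation test)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(diffs))
    hits = sum(
        1 for _ in range(n_iter)
        if abs(statistics.mean([d if rng.random() < 0.5 else -d for d in diffs]))
        >= observed
    )
    return hits / n_iter

pairs = [make_matched_pair(y) for y in range(1, 31)]
diffs = [score_cv(m) - score_cv(f) for m, f in pairs]
p = permutation_test(diffs)
print(f"mean score gap: {statistics.mean(diffs):.1f}, p = {p:.4f}")
```

In a real audit you would replace `score_cv` with calls to the tool under test, vary one attribute at a time (gender, age, name, postal code), and rerun the audit after every model update.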
3. Build in Human Oversight
The EU AI Act requires "meaningful human oversight" — that doesn't mean a human blindly approves the AI recommendation, but that a human has the authority and information to override the AI.
How to implement this:
- Display the AI score as an aid, not a decision
- Give the recruiter insight into the factors behind the score
- Document when and why a recruiter deviates from the AI recommendation
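The documentation requirement in the last bullet can be enforced in code. The sketch below is one possible shape for an override log, assuming hypothetical field names; the key design choice is that a record cannot be written without a non-empty justification, so "meaningful" oversight leaves an audit trail by construction.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class OverrideRecord:
    """One recruiter decision that deviates from the AI recommendation."""
    candidate_id: str
    ai_score: float
    ai_recommendation: str   # e.g. "reject"
    recruiter_decision: str  # e.g. "advance"
    reason: str              # free-text justification, required
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_override(record: OverrideRecord, path: str = "override_log.jsonl") -> None:
    # Refuse to log an override without a documented reason.
    if not record.reason.strip():
        raise ValueError("A documented reason is required for every override")
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

An append-only JSON Lines file keeps each override self-contained and easy to hand to an auditor; in practice you would write to your ATS or a database instead.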
4. Inform Candidates
Transparency is mandatory. Candidates must know:
- That AI is used in the selection process
- What data is processed
- How they can object to the automated assessment
- That they have the right to human reconsideration
5. Choose the Right Tools
Not all AI recruitment tools are equal. Check with your vendor:
- Has an independent bias audit been conducted?
- What data was used to train the model?
- Does the tool offer explainability (can it show why a candidate scored high or low)?
- Does the tool comply with EU AI Act requirements for high-risk systems?
Considering a broader digitalization effort? Read our digitalization guide for SMBs for a structured approach.
Which Recruitment Tools Are Affected?
An overview of popular AI recruitment tools and their risk classification:
| Tool | Function | EU AI Act Risk Level | Bias Audit Available? |
|---|---|---|---|
| HireVue | Video interviews, assessments | High risk | Yes (post-FTC investigation) |
| Pymetrics/Harver | Game-based assessments | High risk | Yes |
| Textio | Job posting optimization | Limited risk | Yes |
| LinkedIn Recruiter | Candidate sourcing, ranking | High risk (with automated filtering) | Limited |
| ChatGPT/Claude | CV analysis, screening | High risk (when informing selection decisions) | No (not built for HR) |
| Greenhouse/Lever | ATS with AI scoring | High risk (with automated ranking) | Varies by feature |
Important: using generic AI tools (ChatGPT, Claude) for CV screening also falls under the high-risk classification if the output influences selection decisions.
Find the step-by-step compliance approach in our AI compliance checklist for SMBs.
The Next Step
AI can make recruitment more efficient and more objective — but only if you do it right. The risks of uncontrolled AI in hiring are too significant to ignore: discrimination claims, fines of up to 7% of global annual turnover, and reputational damage that lasts for years.
The key is balance: use AI to improve the recruitment process, but build in the right safeguards. Test for bias, document your decisions, inform candidates and ensure a human makes the final call.
Want to know how to responsibly deploy AI in your hiring process? An AI consulting session helps you set up a compliant and effective recruitment AI framework. For the technical implementation of AI agents that meet high-risk requirements, we'll build a solution together that stands on solid legal ground.