A Law That Changes the Rules for AI in Hiring
The EU AI Act is the world's first comprehensive legal framework specifically regulating artificial intelligence. It came into force in August 2024, and its most critical requirements for employment AI take effect in August 2026. If you use AI tools in your hiring process and you operate in, or hire candidates from, the European Union, this law applies to you.
Fines for the most serious violations (prohibited AI practices) can reach 35 million euros or 7% of global annual turnover, whichever is higher. Breaches of the high-risk obligations that cover hiring tools can draw fines of up to 15 million euros or 3% of global annual turnover.
How the Law Classifies AI Hiring Tools
AI systems used for recruitment and candidate selection are explicitly classified as high-risk. That covers resume screening tools, AI interview platforms, automated scheduling assistants that make prioritization decisions, and psychometric assessment tools.
What You Need to Document
For each high-risk AI system you use, you need to maintain technical documentation covering the system's purpose, the training data used, the performance metrics and accuracy levels, the testing conducted before deployment, and the human oversight mechanisms in place.
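One practical way to keep that documentation consistent across tools is a structured record per system. The sketch below is purely illustrative, not a format the law prescribes; every field name here is an assumption:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class HighRiskSystemRecord:
    """Hypothetical technical-documentation record for one AI hiring tool."""
    system_name: str
    intended_purpose: str
    training_data_summary: str           # provenance and scope of training data
    performance_metrics: dict            # e.g. accuracy, false-positive rate
    pre_deployment_tests: list = field(default_factory=list)
    human_oversight_owner: str = ""      # named person responsible for oversight

# Example entry; all values are invented for illustration.
record = HighRiskSystemRecord(
    system_name="resume-screener-v2",
    intended_purpose="Rank applications for recruiter review",
    training_data_summary="2019-2023 anonymized applications, EU roles only",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.06},
    pre_deployment_tests=["bias audit 2026-01", "accuracy benchmark 2026-02"],
    human_oversight_owner="Head of Talent Acquisition",
)
print(asdict(record)["system_name"])  # serializes to a plain dict for export
```

A record like this forces each required item (purpose, data, metrics, testing, oversight) to be filled in before a tool goes live, and serializes cleanly for audits.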
You also need to keep logs of the system's operation that allow you to trace how decisions were made. "The algorithm decided" is not an acceptable answer under this law.
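Traceability means any automated decision can be reconstructed after the fact. A minimal sketch of an append-only decision log, assuming JSON-lines output and hypothetical field names:

```python
import datetime
import io
import json

def log_decision(stream, candidate_id, system_name, model_version,
                 inputs_used, decision):
    """Append one traceable decision record as a JSON line."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "system": system_name,
        "model_version": model_version,   # which version made the call
        "inputs_used": inputs_used,       # which candidate data the system saw
        "decision": decision,             # what the system concluded
    }
    stream.write(json.dumps(entry) + "\n")
    return entry

# Illustrative use with an in-memory stream; a real deployment would write
# to durable, tamper-evident storage.
log = io.StringIO()
log_decision(log, "cand-0042", "resume-screener-v2", "2026.01",
             ["cv_text", "skills_list"], "advance_to_interview")
```

The point is not the format but the contents: each line answers who was assessed, by which system and version, on what inputs, with what outcome.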
What Candidates Must Be Told
Candidates must be informed that an AI system is being used in their hiring process. They must understand what the system does and what data it processes. They must be given the opportunity for human review of any significant decision made by the AI.
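A disclosure can be generated per tool so the three elements above (that AI is used, what it does, and the right to human review) are never omitted. The wording below is illustrative only, not legal text:

```python
def disclosure_notice(tool_name, what_it_does, data_processed):
    """Render a plain-language disclosure shown before AI-assisted screening.

    Template wording is a placeholder; actual text should be reviewed
    by legal counsel.
    """
    return (
        f"An AI system ({tool_name}) will be used in your application process. "
        f"It {what_it_does} using {data_processed}. "
        "You may request human review of any significant decision it makes."
    )

notice = disclosure_notice(
    "resume-screener-v2",
    "ranks applications for recruiter review",
    "your CV text and listed skills",
)
print(notice)
```

Generating the notice from the same record that drives your technical documentation keeps the candidate-facing description in sync with what the tool actually does.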
Practical Steps to Get Compliant
Start with an inventory of every AI tool involved in your hiring process. Talk to your vendors now. Ask for their compliance roadmap and bias test results. Appoint a human oversight owner for each AI tool. Build candidate-facing disclosures into your process before candidates start any AI-assisted screening.
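The inventory step can start as something very simple: a structured list that forces the key questions to be answered per tool, and flags gaps. All names and values below are invented for illustration:

```python
# Hypothetical starting inventory; one entry per AI tool in the hiring funnel.
inventory = [
    {"tool": "resume-screener-v2", "vendor": "Acme HR AI",
     "high_risk": True, "oversight_owner": "Head of Talent Acquisition",
     "vendor_bias_report": None},       # requested from vendor, not yet received
    {"tool": "interview-scheduler", "vendor": "CalBot",
     "high_risk": False,                # pure calendaring, no prioritization
     "oversight_owner": None, "vendor_bias_report": None},
]

# Flag every high-risk tool still missing an oversight owner or bias report.
gaps = [t["tool"] for t in inventory
        if t["high_risk"]
        and not (t["oversight_owner"] and t["vendor_bias_report"])]
print(gaps)  # → ['resume-screener-v2']
```

Even a spreadsheet-grade inventory like this makes the vendor conversations concrete: each row tells you exactly what to ask for and who owns the answer.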