The Problem Nobody Wants to Talk About
Companies adopt AI hiring tools partly to remove human bias from the process. The pitch makes sense: a machine doesn't care if a candidate went to the wrong school or has an unfamiliar name. It just scores the data.
The reality is messier. AI systems can reproduce and even amplify the biases already embedded in historical hiring data. The machine doesn't discriminate on purpose. It doesn't have to.
How Bias Gets Into AI Hiring Systems
The most common entry point is training data. If a company trains a hiring model on ten years of its own successful hires, and those hires were overwhelmingly from a particular demographic, the model learns to prefer that demographic. Stripping out protected attributes doesn't solve this: the model can reconstruct them from proxies like names, zip codes, club memberships, or word choice.
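To make the mechanism concrete, here is a toy sketch with synthetic data (it assumes numpy and scikit-learn are available; nothing here comes from any real hiring system). A model trained on biased historical "hired" labels learns to penalize a proxy feature even though the protected attribute itself is never shown to it.

```python
# Toy illustration: biased labels leak into a model through a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)              # never given to the model
# A proxy feature (think: a club membership or zip code) that agrees
# with the protected attribute 90% of the time.
proxy = (protected + (rng.random(n) < 0.1)) % 2
skill = rng.random(n)
# Historical hiring was biased: qualified candidates with protected == 1
# were hired only half the time.
hired = ((skill > 0.5) & ~((protected == 1) & (rng.random(n) < 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on proxy:", model.coef_[0][1])   # expect a negative weight
```

The model was only ever shown skill and the proxy, yet it learns to downweight the proxy, quietly reproducing the bias in the labels.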
Amazon's experience became public in 2018, when Reuters reported that its internal resume-screening tool, trained on past hiring data, had systematically downranked resumes that included the word "women's" (as in "women's chess club captain"). The company scrapped the tool.
What the Law Now Requires
New York City's Local Law 144 requires employers to conduct annual bias audits of automated employment decision tools and publish summary results. The EU AI Act classifies AI recruiting tools as high-risk systems with mandatory risk assessments, human oversight requirements, and transparency obligations.
What Companies Should Actually Be Doing
Ask your vendors hard questions. Don't accept "our tool is unbiased" as an answer. Ask for third-party bias audit results, broken down by race, gender, and disability status.
Test your own data. Even if a vendor's tool passes external audits, it may perform differently when applied to your specific candidate pool. Run adverse impact analyses on your screening outcomes at least quarterly.
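The standard test here is the EEOC's four-fifths rule: compare each group's selection rate to the highest-selecting group's, and treat any ratio below 0.8 as a flag for investigation. A minimal sketch, assuming your applicant tracking system can export screening outcomes as (group, selected) pairs; the group labels and numbers below are hypothetical.

```python
# Quarterly adverse impact check using the four-fifths rule.
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """Each group's selection rate relative to the highest-selecting group."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        passed[group] += int(was_selected)

    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes for one quarter: group A passes screening at 40%,
# group B at 25%.
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 25 + [("B", False)] * 75

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "  <- below 0.80, investigate" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```

Treat 0.8 as a screening heuristic, not a legal safe harbor: a flagged ratio means dig deeper, not that discrimination has been established.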
Keep humans meaningfully in the loop. Have a recruiter or hiring manager who actually reviews edge cases, flags anomalies, and can override the system with documented reasoning.
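"Documented reasoning" is easier to enforce when the system simply refuses to record an override without it. A minimal sketch of what such a record might look like; the field names and the log_override helper are hypothetical, not from any particular vendor's API.

```python
# Sketch of an override record that requires written justification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    candidate_id: str
    model_score: float   # what the tool recommended
    reviewer: str        # who made the call
    decision: str        # e.g. "advance" or "reject"
    reasoning: str       # required free-text justification
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def log_override(record: OverrideRecord, audit_log: list) -> None:
    """Append an override to the audit log, refusing empty reasoning."""
    if not record.reasoning.strip():
        raise ValueError("Overrides require documented reasoning.")
    audit_log.append(record)

# Usage: a reviewer overrides a low model score with a written rationale.
log = []
log_override(OverrideRecord(
    candidate_id="c-123", model_score=0.31, reviewer="jdoe",
    decision="advance",
    reasoning="Score penalized a two-year career gap; skills assessment was strong."),
    log)
```

An audit trail like this also feeds back into the quarterly analysis above: frequent overrides clustered in one group are themselves a signal worth investigating.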