
By Ryan Zhang, On Labor

In 2018, news broke that Amazon had scrapped an experimental AI hiring tool. What began as a promise to revolutionize how the company identified talent had devolved into an algorithm that “did not like women.” The model, trained on a decade of resumes submitted mostly by men, penalized references to women’s organizations and graduates of women’s colleges. Amazon abandoned the tool, but the incident revealed a more fundamental problem: in automating hiring, employers are also automating bias. Today, AI plays a major role in hiring, yet the U.S. has failed to establish coherent guardrails even as jurisdictions like the European Union have acted decisively. To better protect workers, the U.S. should follow Europe’s lead in codifying fairness and transparency into AI-driven hiring.

AI is no longer experimental but ubiquitous: 87 percent of companies now use AI at some stage of the hiring process. Many start by deploying algorithms that use data from past “successful hires” to target job ads towards people with similar profiles, favoring candidates who mirror the existing workforce and perpetuating its makeup. AI tools then screen candidates by extracting data from resumes and ranking them, again relying on historical patterns to predict who is likely to succeed. Some systems go further, using AI-powered video interviews that analyze “non-verbal cues, communication style, professionalism, and listening” to assess candidates’ interpersonal skills — criteria that can disadvantage candidates whose cultural backgrounds fall outside the model’s norm. In many cases, applicants are screened out before ever reaching a human reviewer, leaving AI tools to exert significant yet invisible influence over the hiring process.


Recent lawsuits and enforcement actions mark the first wave of legal challenges testing how existing anti-discrimination laws apply to AI-driven hiring systems. In EEOC v. iTutorGroup, the Equal Employment Opportunity Commission brought its first publicly known AI-bias enforcement action, alleging that iTutorGroup’s software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older, without human review. In August 2023, the company agreed to pay $365,000 to settle the claims. In Mobley v. Workday, a federal court allowed age discrimination claims to proceed on the theory that Workday’s AI software could act as an “agent” of employers who delegate hiring functions to it. The decision signaled that AI hiring vendors could face direct liability—a move that could prevent developers from offloading responsibility for discrimination onto employers.

Several additional cases are in early stages of litigation. In Harper v. Sirius XM Radio, a job seeker alleged that an AI-powered hiring system unfairly downgraded his applications by relying on proxies like education, zip code, and work history that disproportionately disadvantage Black candidates. And in ACLU v. Aon Consulting, the ACLU filed complaints with the FTC and EEOC alleging that Aon’s “personality” assessment test and automated video interviewing tool, which were marketed as “bias-free,” screened out racial minorities and people with disabilities at disproportionate rates. Together, these actions suggest that while existing anti-discrimination statutes can reach algorithmic bias, such claims remain rare relative to AI’s widespread use.

U.S. policymakers have yet to respond coherently. With Congress silent, some states and cities have stepped in to fill the gap. New York City’s Local Law 144, for instance, mandates annual bias audits and disclosure for employers using AI hiring tools. California requires employers to retain four years of decision-making data from AI systems and to provide 30 days’ notice when AI systems are used in hiring, measures designed to facilitate worker complaints and investigations. These laws push in the right direction but remain a geographic patchwork, leaving most workers without protection.

Europe, by contrast, has acted decisively. In 2024, the European Union passed the Artificial Intelligence Act, a landmark law that classifies any AI software used in hiring as “high-risk.” That designation triggers stringent obligations for both the developers who build these systems and the employers who deploy them.

Under the EU’s law, developers bear the heaviest responsibilities. They must test their models, identify any disparate outcomes, and ensure that training data are “relevant, sufficiently representative and . . . free of errors,” preventing the replication of historical bias. They must also document how their algorithms were built and the data sources they rely on, so that regulators can audit models suspected of producing discriminatory results. If they become “aware of the risk” that a system is unfair, they must “immediately investigate the causes” and take “necessary corrective actions,” including retraining or recalling the model. These requirements establish fairness as a precondition for participating in the European market.

Once AI technologies are on the market, employers share responsibility for their use. The Act requires employers to maintain detailed “logs automatically generated by the system,” recording every instance of an AI tool’s use. These logs create an audit trail, enabling regulators and rejected applicants to reconstruct hiring decisions and test for discriminatory outcomes. Employers must also provide human oversight: AI systems must be overseen by “at least two natural persons with the necessary competence, training and authority” who can “properly interpret” the system’s output and intervene if outcomes appear biased or unreliable. In such cases, the model must be retrained or withdrawn entirely.

Beyond defining obligations, the Act also provides for enforcement. Each EU member state must designate a national supervisory authority responsible for monitoring compliance, while the newly established European Artificial Intelligence Board coordinates oversight across the bloc. Moreover, developers must affix a “CE marking,” a certification label that signals compliance with EU standards to business customers.

Most importantly, the Act has real teeth. Developers and employers who fail to “take the necessary corrective actions” face fines of up to €35 million or 7 percent of global annual revenue for the most serious violations. The Act also operates in tandem with the General Data Protection Regulation, which guarantees individuals the right “not to be subject to a decision based solely on automated processing.” In practice, this allows job applicants to ask companies how an algorithm affected their outcome and to request a human re-evaluation, creating accountability that U.S. law currently lacks. Ultimately, the Act forms the world’s most comprehensive framework of algorithmic governance, combining ex ante rules that demand fairness before deployment with ex post rights that let individuals challenge unfair results.

Although researchers have not yet been able to assess the Act’s impact given its recency — it is in partial effect and is slated to take full effect in August 2026 — its influence is already visible. The StepStone Group, a major online jobs platform, publicly audited its AI recommendation engine for bias, an effort legal experts have praised as a model for responsible compliance, while TechWolf, a Belgium-based HR tech firm, described the law as “manag[ing] risks explicitly and safeguard[ing] fairness and transparency.” Other major companies, such as Amazon, Microsoft, and OpenAI, have also voiced support for the Act as a whole.

Adopting a comprehensive U.S. regulatory regime for AI-assisted hiring will not be easy. Debates will likely ensue over how to balance innovation with regulation and preserve competitiveness. Given the current political climate, federal action may be infeasible, so states should lead the way. Still, to ensure that workplaces are free from discrimination and that employment rewards ability rather than entrenching bias, American policymakers must act with urgency and resolve.