Artificial Intelligence in Hiring: Diverging Federal and State Perspectives on AI in Employment
Highlights
- President Donald Trump's executive order (EO), Removing Barriers to American Leadership in Artificial Intelligence, cleared away policies seen as hindering innovation, signaling a major shift in U.S. artificial intelligence (AI) policy.
- The order follows President Trump's earlier action to reverse several Biden-era orders, including the Oct. 30, 2023, EO that addressed discrimination and bias in AI and suggested that AI systems for recruiting and hiring could worsen existing inequalities.
- Shortly after the order was issued, the U.S. Equal Employment Opportunity Commission (EEOC) removed several guidance documents from its website, including recommendations for employers on using AI tools responsibly in hiring.
- Although federal guidance on AI use in the workplace seems to have been revoked (or at least removed from agency websites), employers must still comply with existing federal, state and local laws when implementing AI.
The Trump Administration has moved quickly to roll back Biden-era protections related to artificial intelligence (AI) in federal hiring practices. On Jan. 20, 2025, President Donald Trump issued an executive order (EO), Initial Rescissions of Harmful Executive Orders and Actions, that revoked President Joe Biden's EO 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
On the same day, President Trump introduced his own EO, Reforming the Federal Hiring Process and Restoring Merit to Government Service. This order states that its goal is to shift federal hiring toward assessing candidates based on their skills, experience and commitment to the U.S. Constitution rather than factors such as race, sex or religion.
Three days later, on Jan. 23, 2025, President Trump signed EO 14179, Removing Barriers to American Leadership in Artificial Intelligence. This order requires federal agencies to review and roll back existing AI policies and regulations.
In response, federal agencies, including the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Labor, aligned with the new administration's goals by retracting their guidance on AI and workplace discrimination. Notably, the EEOC's 2023 guidance on responsible AI use in employment selection and the Office of Federal Contract Compliance Programs' (OFCCP) guidance on AI and equal employment opportunity for federal contractors were removed from their respective websites.
Though federal guidance on AI use in the workplace appears to have been revoked (or removed from agency websites), employers are still required to comply with current federal, state and local laws when implementing AI.
State-Level Responses to AI in Employment: Protecting Jobseekers from Discrimination
In contrast to President Trump's approach, state legislators nationwide have become increasingly concerned about the use of AI in hiring, particularly because of its potential to discriminate based on gender, race or other protected characteristics. To address these concerns, several states and localities have introduced or passed legislation aimed at mitigating the potential discriminatory impact of AI in employment practices.
At the forefront of these efforts is New York City, which implemented Local Law 144 (the NYC AI Bias Law) in July 2023. This law requires employers and employment agencies operating in the city that use automated employment decision tools (AEDTs) for hiring or promotion decisions to conduct annual independent bias audits.
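To make concrete what such an audit measures: the rules implementing Local Law 144 center on selection rates by demographic category and an "impact ratio," meaning each category's selection rate divided by the rate of the most-selected category. The Python sketch below illustrates only that core calculation; the function name, category labels and counts are hypothetical, and a real audit carries additional requirements (such as intersectional categories and an independent auditor).

```python
# Minimal sketch of the impact-ratio calculation at the heart of a
# Local Law 144 bias audit. All names and numbers are hypothetical.

def impact_ratios(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate per category, divided by the highest category's rate."""
    rates = {cat: selected[cat] / assessed[cat] for cat in assessed}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Hypothetical applicant pool: candidates an AEDT advanced ("selected")
# out of all candidates it assessed, broken out by demographic category.
selected = {"category_a": 120, "category_b": 75, "category_c": 30}
assessed = {"category_a": 300, "category_b": 250, "category_c": 150}

for category, ratio in impact_ratios(selected, assessed).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
# category_a: 1.00, category_b: 0.75, category_c: 0.50
```

Local Law 144 does not itself set a numeric pass/fail threshold, but practitioners often compare impact ratios against the four-fifths (0.8) benchmark from the EEOC's Uniform Guidelines as a screening heuristic, so categories falling well below that line typically prompt closer review.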
At the time of its introduction, the NYC AI Bias Law was unprecedented in the U.S., but it has since spurred other cities and states to propose or enact similar legislation:
- California (Proposed): Assembly Bill 2930 would require employers (or AI developers) using automated decision systems to conduct an impact assessment before deploying the system and annually thereafter. These assessments must be submitted to the California Civil Rights Department. Separately, the California Privacy Protection Agency (CPPA) is currently working on draft regulations under the California Consumer Privacy Act (CCPA), under which businesses would need to conduct a risk assessment when using "automated decision-making technology" in various contexts, including employee hiring, work assignments, compensation, promotion and termination.
- Colorado (Enacted): Effective Feb. 1, 2026, Senate Bill (S.B.) 24-205 requires employers to comply with standards for high-risk AI systems, including conducting bias audits of AI used in employment decisions. Employers must use reasonable care to protect against "algorithmic discrimination" and implement risk management policies. The law reaches a broad range of applications, including employment, financial services, housing, healthcare, education, insurance and legal services.
- Illinois (Enacted): The Artificial Intelligence Video Interview Act (820 ILCS 42/1), effective January 2020, requires employers that use AI to analyze video interviews of applicants to notify the applicants, explain how the AI works and obtain their consent before the interview. Separately, House Bill (H.B.) 3773, effective Jan. 1, 2026, amends the Illinois Human Rights Act to prohibit employers from using AI tools in recruitment, hiring, promotion or other employment-related decisions when the use of AI leads to discrimination based on protected characteristics.
- Maryland (Enacted): S.B. 446 prohibits an employer from using certain facial recognition services during an applicant's interview for employment unless the applicant consents under certain conditions.
- New York State (Proposed): Bill A00567/2025 would require employers to share a summary of bias audit results with the state's Department of Labor and would permit the required audits to be conducted by an internal auditor under certain circumstances.
- Texas (Proposed): House Bill (H.B.) 1709 would establish a comprehensive legal framework to prevent and remedy algorithmic discrimination against people based on protected characteristics. The bill would require "developers" and "deployers" to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from high-risk AI systems. It is similar to California's Assembly Bill 2930 in that it offers consumers broad protections when, among other things, they are subject to the use of AI for "consequential decisions" such as employment opportunities.
- Virginia (Proposed): H.B. 2094 would prevent the use of AI systems that result in "differential treatment or impact that disfavors an individual or [a] group" based on protected characteristics. Like Texas' H.B. 1709 and California's Assembly Bill 2930, Virginia's proposed bill offers consumers broad protections against the use of AI in "consequential decisions" such as employment.
These state laws aim to safeguard jobseekers from the potential discriminatory effects of AI, representing a clear contrast to the Trump Administration's approach.
What Does This Mean for Employers?
Despite the new administration's orders on AI, employers must navigate the complex intersection of technology and employment law to avoid significant liability. Federal statutory protections remain in place, including Title VII of the Civil Rights Act of 1964, which prohibits both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact) in employment based on race, color, religion, sex and national origin. These protections apply to the use of AI in recruiting and hiring processes. Moreover, the growing patchwork of state and local laws regulating AI remains unaffected by the EOs and requires careful attention.
Additionally, case law in this area is already developing. For example, in a landmark July 2024 ruling, the U.S. District Court for the Northern District of California allowed unlawful discrimination claims to proceed against Workday Inc. over its AI-driven recruitment software, which is used by thousands of companies. The plaintiff in that case is now seeking to pursue it on a class action basis. The ruling signals that courts are unlikely to let companies avoid liability for decisions made by their AI systems, and it suggests that AI tool developers, in addition to the businesses that deploy the tools, could be held accountable for discriminatory outcomes under existing laws.
Given these developments, companies must stay vigilant, monitoring and complying with all relevant federal, state and local laws and regulations as they change. The recent executive orders from the Trump Administration, together with evolving state laws, have created an increasingly intricate regulatory environment for businesses.
Additionally, companies using AI tools for recruiting and hiring should take proactive steps, such as ensuring the tools have undergone the impact assessments and bias audits required by applicable laws and, more generally, working to identify and address potential biases in the underlying algorithms. Ongoing internal training is also essential so that human resources teams understand the technology they are using and its legal implications.
Holland & Knight's Litigation and Dispute Resolution, Labor, Employment and Benefits, Data Strategy, Security & Privacy and Artificial Intelligence teams will continue to provide updates on the evolving regulatory landscape governing AI use in employment and assist clients in navigating these complexities.
Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.