US states push forward with new AI regulation wave


Artificial intelligence regulation has rapidly developed into a global priority, with lawmakers increasingly focused on tackling algorithmic bias, strengthening transparency, and safeguarding workers’ rights.

While much of the recent attention has centred on government-led initiatives such as the EU AI Act and AI-focused executive actions at the federal level in the US, a parallel shift is also accelerating at the state level, according to Saifr.

Across the country, states are designing legal frameworks to ensure AI is deployed responsibly—especially in employment practices.

Until recently, AI oversight was largely discussed in national and international contexts. The European Union introduced its flagship AI Act, and the US has, at various points, issued executive guidance, including under the Biden Administration. The current administration has also created an AI-focused task force, demonstrating that AI governance has now become an issue of federal concern.

Yet federal action is only part of the picture. US states have become considerably more active in drafting and enforcing their own AI regulations. Some have fully enacted laws, while others are evaluating formal rulemaking. A Stanford study found that legislative mentions of AI have grown by 21.3% across 75 countries, highlighting a global wave of governance aimed at promoting ethical and transparent AI adoption.

So far, seven states have implemented rules to oversee the use of AI, particularly in employment decisions: California, Colorado, New York, Maryland, Utah, Texas and Illinois. The core themes emerging across these legislative efforts include safety, consumer protection, digital identity rights, and stringent anti-bias expectations. Below are examples of how these states are reshaping AI policy for the workplace.

In California, current legislation (CA A.B. 2602) states that employment agreements cannot permit the creation or use of a digital replica of a worker’s voice or likeness without worker or union consent. In addition, a regulation set to take effect on 1 October 2025 expands the Fair Employment and Housing Act to prohibit discriminatory outcomes when employers use automated decision systems.

Colorado has enacted S.B. 205, which takes effect on 30 June 2026. It requires employers using high-risk AI systems in employment and insurance to meet strict bias auditing requirements, offering a structured framework for risk reduction.

Illinois has amended the Illinois Human Rights Act through H.B. 3773, which will become effective on 1 January 2026. It bars employers from using AI in hiring, promotion, training selection, discipline or other core employment areas if it results in discrimination against protected classes.

Maryland requires employers to obtain consent before using AI-driven facial recognition in hiring, and mandates transparency and ethical governance. Meanwhile, New York state and New York City have passed separate measures that restrict digital replicas in employment agreements and require bias audits for automated employment decision tools.

Texas has enacted H.B. 149, which comes into force on 1 January 2026. It prohibits employers from using AI systems to intentionally discriminate against protected classes and restricts the use of AI technologies for behavioural manipulation, social scoring, or uniquely identifying individuals without consent.

At present, employment decision-making is the main target for state-level AI rulemaking. However, observers expect the regulatory focus to expand swiftly across a wide range of industries. If all 50 states introduce distinct compliance requirements, organisations operating across the US may eventually face complex multi-state regulatory obligations.

In sectors with more established oversight, such as financial services, regulators including FINRA and the SEC are evaluating potential AI standards, despite earlier proposals being withdrawn at a federal level. This signals growing regulatory alignment between emerging AI risk frameworks and traditional compliance systems.

What remains unclear is the shape of a national AI regulatory framework, how state and federal rules will coexist, and what compliance models firms will need to adopt to operate across the entire US market. However, it is evident that both federal and state governments understand the immense societal impact of AI and recognise the need to protect citizens as technologies evolve.
