State-by-State AI Rules for Businesses in 2026

By Kevin Welch, CEO & Founder, Journey Payroll & HR
Published: February 2026
Last updated: February 2026

A Practical Guide to What’s Live, What’s Coming, and What to Ask Your Vendors

Most business owners still hear “AI regulation” and assume it’s a problem for developers.

But in 2026, the risk is increasingly tied to the business that uses AI, especially in hiring, promotion and performance workflows, customer interactions, and other automated decision systems.

Here’s the reality that catches people off guard: you don’t have to be headquartered in a state for its rules to matter. If your AI-influenced decisions impact an employee, applicant, or customer in a place with specific requirements, you may need to account for them.

There is no single comprehensive federal AI statute governing private-sector AI use. Instead, regulation is a mix of sector-specific laws, agency enforcement, and state and local requirements.

Key Takeaways

  • There’s no single comprehensive federal AI statute, but state and local rules already impact businesses using AI in real decisions, especially in hiring.
  • NYC is enforcing hiring-AI requirements, including bias audits, notices, and posting requirements.
  • Colorado’s high-risk AI framework is coming, effective June 30, 2026, following an extension under SB25B-004.
  • The fastest way to reduce surprise risk is not to become a compliance expert. It is to get clear, written answers from vendors, including documentation, logs, and disclosures.

Definitions

  • AI or automated decision tool: Software that scores, ranks, recommends, or materially influences decisions about people, including employees, applicants, and consumers.
  • Deployer: The business that uses the tool in real decisions, even if it didn’t build it.
  • Tiering: Grouping jurisdictions by how directly their rules affect operations.
      • Tier 1: Broad AI governance
      • Tier 2: Employment-specific AI rules
      • Tier 3: Consumer privacy rules affecting profiling and automated decisions

The Signal: This Isn’t Theoretical

  • In 2023, the EEOC announced a $365,000 settlement with iTutorGroup tied to allegations that hiring software automatically rejected older applicants.
  • In Mobley v. Workday, a federal court order dated July 12, 2024 granted in part and denied in part a motion to dismiss, allowing portions of the claims tied to alleged algorithmic screening discrimination to proceed.

What this signals for the next 3 to 12 months is simple: awareness is rising faster than most companies’ documentation. That’s why questions are increasing, both internally from employees and externally from customers and regulators.

State Index by Tier

This is a practical shortlist of high-impact laws for many employers and consumer-facing businesses. It is not an exhaustive 50-state survey.

Tier 1: Broad AI governance

  • Colorado: High-risk AI framework under SB24-205, with effective date extended to June 30, 2026.

Tier 2: Employment-specific AI rules (hiring and interviews)

  • New York City: Automated Employment Decision Tools (AEDT) rules; enforcement began July 5, 2023.
  • Illinois: HB3773 employment AI amendments; effective January 1, 2026.
  • Illinois: Artificial Intelligence Video Interview Act; effective January 1, 2020.
  • Maryland: Interview facial recognition consent requirement (HB1202); effective October 1, 2020.

Tier 3: Consumer Privacy Laws Affecting Profiling and Certain Automated Decisions

These are consumer privacy frameworks. Whether and how they apply to employee or B2B contexts varies by state and effective date, but they matter immediately for consumer-facing AI decisioning. The National Conference of State Legislatures tracks the expansion of state consumer privacy laws.

A clear statutory example of “profiling plus significant effects” language is Virginia’s consumer data protection law, which provides an opt-out right for profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

What Each Tier Means and Why It Matters

Tier 1: Broad AI governance: Colorado

This is the big framework tier. It is designed to address AI that can create consequential harms, including algorithmic discrimination risk.

What this means for businesses: If AI is a meaningful factor in consequential decisions affecting Colorado residents, you should expect higher expectations around governance, documentation, impact assessments, and risk controls. Colorado’s effective date is June 30, 2026.

Tier 2: Employment-specific AI rules: NYC, Illinois, Maryland

This is the day-to-day HR workflow tier. It covers hiring tools, interviews, screening, and notices.

Why this tier matters most: It is the fastest path to employee questions because it touches hiring and promotion decisions directly, especially in NYC where requirements are already enforced.

Tier 3: Consumer privacy automated decision rules

This tier matters most when you use AI to make or influence decisions about customers, including eligibility, access, denials, personalization, fraud scoring, and similar automated decision workflows.

Important note: These laws are primarily consumer-focused; employee coverage varies by state and effective date. But if your AI affects consumers, Tier 3 is already relevant.

What to Do Next: Baseline Posture

Before you try to “comply with every law,” do the simple things that reduce risk across all tiers:

  1. Inventory tools that use AI or predictive analytics, such as ATS and HRIS platforms, scheduling, performance, CRM, and support chat.
  2. Map impacted people by state, including employees, applicants, and customers.
  3. Document what the tool is doing, including what decisions it influences and what data it uses.
  4. Disclose when AI materially influences meaningful decisions, especially hiring and consumer decisioning.
  5. Keep evidence, including decision logs, audit outputs where required, and vendor commitments.
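For teams that want to operationalize steps 1 and 2, the inventory and state mapping can be kept as simple structured records rather than a spreadsheet nobody updates. Below is a minimal Python sketch; the tool names, categories, and states are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in the AI / predictive-analytics tool inventory (step 1)."""
    name: str                       # hypothetical tool name
    category: str                   # e.g. "ATS", "CRM", "support chat"
    decisions_influenced: list      # step 3: which decisions the tool touches
    impacted_states: set           # step 2: where affected people live

# Hypothetical inventory; entries are examples only.
inventory = [
    AITool("ExampleATS", "ATS", ["resume screening"], {"NY", "CO"}),
    AITool("ExampleCRM", "CRM", ["lead scoring"], {"VA"}),
]

def tools_touching(state: str) -> list:
    """Step 2 in reverse: which inventoried tools affect people in a state?"""
    return [t.name for t in inventory if state in t.impacted_states]
```

A record like this also makes vendor follow-up concrete: each `decisions_influenced` entry maps directly to question 1 of the questionnaire below, and each state in `impacted_states` maps to question 2.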

Vendor Questionnaire (Copy and Paste)

We’re completing a multi-state review of tools that may use AI or predictive analytics. Please provide written responses:

  1. AI features in scope: Which parts of your product score, rank, recommend, approve or deny, or materially influence decisions about people?
  2. Where impacted people live: In which states do these features affect employees, applicants, or consumers?
  3. State-by-state compliance mapping: Provide a written compliance matrix for relevant requirements, including NYC AEDT, Illinois employment AI updates, and Colorado SB24-205 obligations.
  4. Audit or assessment support: If our use triggers audits or assessments, what do you provide, including methodology, outputs, and frequency?
  5. Decision logs: Can you produce decision logs, including inputs, outputs, timestamps, and versioning, for a specific decision within X business days?
  6. Disclosure controls: Do you support configurable notices or disclosures by jurisdiction, including candidate notices and consumer disclosures?
  7. Escalation protocol: If we receive a complaint or regulator inquiry, what’s your SLA and what documentation do you provide?
  8. Contract terms: Do you warrant compliance representations, and what indemnities apply if those representations are inaccurate?

FAQ

Do AI laws apply if my vendor provides the AI?
Often yes, because obligations can attach to the deployer, meaning the business using the tool, especially around notices, audits, and documentation. NYC is a clear example of employer obligations.

Is NYC really enforcing hiring-AI rules?
Yes. NYC notes enforcement began July 5, 2023.

Do consumer privacy laws cover employees?
It varies by state and effective date. Many are primarily consumer-focused. The National Conference of State Legislatures tracks state consumer privacy laws.

What does “profiling” mean in this context?
Virginia’s statute ties profiling to decisions that produce legal or similarly significant effects concerning the consumer and provides an opt-out right in that context.

What’s the biggest Tier 1 state to watch?
Colorado, because it has a broad framework and a clear effective date of June 30, 2026, extended under SB25B-004.

About Kevin Welch

Kevin Welch is the CEO, Owner, and Founder of Journey Payroll and HR. He helps business owners reduce risk, stay compliant, and build practical systems that protect both employers and employees without turning HR into a bureaucratic mess.
