In January 2025, the Biden administration's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which had established the most ambitious federal framework for AI governance in American history, was revoked within hours of the new administration taking office. In its place: a direction to federal agencies to prioritise AI innovation, and a philosophical pivot from precaution to acceleration.

For organisations operating in the United States, the lesson was not simply political. It was structural. A regulatory framework built on an Executive Order has no more permanence than the signature on a single page. Without congressional legislation, the United States' AI regulatory landscape will continue to shift with each administration.

The real AI regulation in America is not happening in Washington. It is happening in Sacramento, Austin, Denver, and Springfield — and the rules are not the same.

The federal position: innovation first

As of mid-2026, there is no comprehensive federal AI statute in the United States. The current administration has made its position clear: federal policy will prioritise the competitiveness of American AI development, and regulatory friction will be minimised. NIST's AI Risk Management Framework provides a voluntary guidance structure that many organisations have adopted as a de facto standard — but adoption is not mandated, and enforcement is non-existent.

The Federal Trade Commission retains authority to address deceptive or unfair AI practices under Section 5 of the FTC Act; the EEOC has issued guidance on AI use in hiring; and the CFPB has addressed AI in credit decisions, including adverse action notice requirements. But these are sector-specific overlays on existing statutory frameworks, not a comprehensive AI governance regime.

The state layer: where regulation actually lives

More than thirty American states have introduced or enacted AI-related legislation since 2023. The result is not a coherent national approach; it is dozens of parallel experiments, running simultaneously, with different definitions, different scope triggers, different risk thresholds, and different enforcement mechanisms.

Colorado: the first mover

Colorado's AI Act, operative in 2026, applies to developers and deployers of high-risk artificial intelligence systems: systems that make consequential decisions in areas including employment, education, financial and lending services, healthcare, housing, insurance, legal services, and essential government services. Covered entities must conduct impact assessments, disclose AI use to affected consumers, provide a right to human review of adverse decisions, and report discovered algorithmic discrimination to the Colorado Attorney General. The framework consciously parallels the EU AI Act's risk-based approach.
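To see how the scope trigger operates in practice, consider a first-pass triage check. The sketch below is purely illustrative: the COVERED_DOMAINS set paraphrases the statute's covered decision areas, and is_high_risk is a hypothetical helper for internal triage, not a legal test.

```python
# Hypothetical first-pass scope check for a Colorado-style "high-risk" test.
# The domain list paraphrases the statute's covered decision areas; the
# function is an illustrative triage filter, not a legal determination.
COVERED_DOMAINS = {
    "employment", "education", "financial_services", "healthcare",
    "housing", "insurance", "legal_services", "essential_government_services",
}

def is_high_risk(makes_consequential_decision: bool, domain: str) -> bool:
    """Return True if a system plausibly falls in scope and needs full review."""
    return makes_consequential_decision and domain in COVERED_DOMAINS

# An AI resume screener makes a consequential employment decision: in scope.
assert is_high_risk(True, "employment")
# An ad-targeting model likely sits outside the covered domains.
assert not is_high_risk(True, "advertising")
```

A check like this only routes systems to full review; the substantive obligations still turn on a lawyer's reading of the statute.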

California: the battleground

California's suite of enacted measures includes AB 2013 (generative AI training-data disclosure), SB 942 (the AI Transparency Act, covering detection tools and provenance disclosures for AI-generated content), and AB 1008 (clarifying that the CCPA reaches personal information held in AI systems). SB 1047, which would have imposed significant safety obligations on frontier AI developers, was vetoed by Governor Newsom in September 2024; a narrower frontier-model transparency law, SB 53, followed in 2025. The cumulative effect for organisations serving California is a layered compliance obligation that requires careful mapping against the full CCPA/CPRA framework.

Illinois: employment focus

Illinois has taken a targeted approach focused on AI in employment. The Artificial Intelligence Video Interview Act requires employers using AI to analyse video interviews to disclose this use, explain how the system works, and obtain the applicant's consent. Illinois has also amended its Human Rights Act to restrict discriminatory uses of AI in employment decisions, and its Biometric Information Privacy Act remains a potent source of litigation exposure wherever AI-assisted hiring touches biometric data.

Texas: the pro-innovation counter

Texas passed the Texas Responsible AI Governance Act in 2025, taking a markedly different approach: enforcement-oriented, focused on intentional misuse and discrimination rather than on process requirements, and sceptical of pre-deployment compliance mandates. It reflects the broader political economy of AI regulation in conservative-governed states and suggests that divergence along political lines will persist.

What this means for compliance strategy

The most prudent approach for internationally active organisations is to establish a baseline AI governance framework calibrated to the most demanding jurisdiction in which they operate — typically the EU AI Act for European operations and Colorado or California for American operations — and to build state-specific overlays on that foundation.
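To make the baseline-plus-overlay structure concrete, here is a minimal sketch of how a governance team might encode it in machine-readable form. Everything in it is hypothetical: the GovernanceFramework class, the obligation names, and the state overlays are simplifications for illustration, not statements of what any statute actually requires.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    """A single governance requirement, e.g. an impact assessment."""
    name: str
    description: str

@dataclass
class GovernanceFramework:
    """Baseline obligations plus jurisdiction-specific overlays."""
    baseline: list[Obligation]
    overlays: dict[str, list[Obligation]] = field(default_factory=dict)

    def obligations_for(self, jurisdictions: list[str]) -> list[Obligation]:
        """Union of the baseline and every applicable state overlay."""
        result = list(self.baseline)
        for j in jurisdictions:
            result.extend(self.overlays.get(j, []))
        return result

# Baseline calibrated to the most demanding regime the organisation faces
# (loosely modelled here on EU AI Act / Colorado-style process duties).
framework = GovernanceFramework(
    baseline=[
        Obligation("impact_assessment", "Pre-deployment risk assessment"),
        Obligation("use_disclosure", "Disclose AI use to affected individuals"),
        Obligation("human_review", "Provide a route to human review"),
    ],
    overlays={
        "CA": [Obligation("training_data_disclosure",
                          "Publish a training-data summary (AB 2013-style)")],
        "IL": [Obligation("video_interview_consent",
                          "Notice and consent for AI video-interview analysis")],
    },
)

# A hiring tool deployed in California and Illinois inherits the baseline
# plus both state overlays.
for ob in framework.obligations_for(["CA", "IL"]):
    print(f"- {ob.name}: {ob.description}")
```

The design point is that state-specific requirements compose onto a single baseline rather than spawning parallel frameworks, which is what keeps ongoing state-level divergence manageable.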

The organisations that treat US AI compliance as a single exercise will be continuously surprised. Those that build flexible, jurisdiction-aware governance frameworks will find the ongoing adaptation manageable.

The question of federal preemption remains genuinely open. Whether the current administration will support comprehensive federal AI legislation is uncertain. Organisations should plan on continued fragmentation for at least the next two to three years and build governance infrastructure accordingly.