The United Kingdom made a deliberate strategic choice. When the European Union was building the AI Act — a comprehensive, horizontal, risk-based statutory framework — the UK government announced that it would not legislate. Instead, existing sector regulators would apply existing rules to AI systems within their domains.
This was not regulatory inaction. It was a considered bet: that the UK could attract AI investment by offering a lighter, more flexible regulatory environment than Brussels — while still maintaining meaningful oversight through sector-specific expertise. Two years on, the scorecard is more complicated than either side of that debate has acknowledged.
What the sector-based approach has produced
The advantages are real. Sector regulators bring domain expertise that a horizontal AI regulator may lack. The ICO understands data protection. The FCA understands financial services. The CMA's foundation models work ranks among the most analytically sophisticated regulatory analyses of AI market dynamics produced by any authority globally.
The disadvantages are equally real. The sectors do not align neatly with how AI is actually deployed. An AI system used in HR recruitment at a financial services firm sits at the intersection of the FCA's, ICO's, EHRC's, and Acas's remits, with no single authority having primary oversight. General-purpose AI systems have no natural regulatory home. And the absence of a horizontal framework means there is no consistent definition of high-risk AI, no common impact assessment methodology, and no unified enforcement approach.
The UK's sector-based approach does not mean less regulation. It means less coherent regulation — which is harder to navigate, not easier.
The AI Bill that has not arrived
An AI Bill appeared in the King's Speech of 2024. It did not materialise during 2025. As of mid-2026, the government's position is that existing law already applies to AI and that the priority is supporting AI growth through AI Growth Zones rather than creating new legislative obligations. The AI Security Institute continues its evaluation work on frontier models, but it has no enforcement powers.
This creates a significant gap. Organisations seeking clarity on what the UK actually requires of them when deploying AI systems face a more uncertain answer than the government's communications suggest.
Copyright, training data, and the creative industries dispute
The most contested active issue in UK AI regulation is copyright. The government's 2024 consultation presented options for handling the use of copyrighted works in AI training. The creative industries responded with extraordinary force. The government's preferred option — a text and data mining exception with an opt-out mechanism — received support from fewer than 4% of consultation respondents.
The government is required under the Data (Use and Access) Act 2025 to publish its response and an economic impact assessment. As of publication, that response has not yet emerged. Getty Images v Stability AI is proceeding through the UK courts. Whatever the Court of Appeal decides will establish precedent — but will not resolve the policy question of whether Parliament intends to create a broader training data exception.
The relationship with EU law post-Brexit
A critical and underappreciated complexity for UK organisations is that the EU AI Act applies extraterritorially: it reaches firms outside the EU that place AI systems on the EU market or whose AI outputs are used within the EU. A UK firm providing AI-powered services to French or German clients is within scope of the EU AI Act.
For organisations straddling both markets, the most efficient approach is to adopt EU AI Act-aligned governance as the baseline, given that it is more demanding and more clearly specified, and then map UK-specific obligations against that foundation. This avoids designing a UK-only governance architecture and subsequently discovering it is insufficient for EU market access.
The practical reality for GCs and boards
The absence of a UK AI Act does not mean the absence of UK AI obligations. UK GDPR Article 22 creates meaningful constraints on automated decision-making. The Equality Act 2010 applies to AI-driven discrimination with the same force as any other form. FCA, ICO, and CMA expectations are accumulating through guidance and enforcement action even without primary legislation.
What the absence of a comprehensive framework does mean is that organisations must do more mapping work themselves — identifying which regulators have jurisdiction over which AI use cases, assembling the relevant guidance, and building governance frameworks that are coherent across jurisdictions. The organisations investing in that work now will be significantly better positioned when legislation does arrive.