The EU Artificial Intelligence Act entered into force in August 2024. Its prohibitions on certain AI practices became applicable from February 2025. Article 4, requiring organisations to ensure that staff deploying AI systems have adequate AI literacy, became applicable at the same time. The obligations that most organisations consider the real AI Act — the requirements applying to high-risk AI systems — were scheduled to become applicable in August 2026.

They will be delayed. The European Commission's Digital Omnibus package proposes pushing the high-risk AI system requirements back to December 2027 at the earliest, conditional on the availability of harmonised standards and implementation guidance. This is not an abandonment of the framework. It is an acknowledgement that the technical standards organisations need do not yet exist.

A delay to the high-risk AI system requirements is not a reprieve. It is additional time that organisations already behind will waste, and that organisations already investing in compliance will use to build competitive advantage.

What is already in force

The prohibitions are in force. Organisations deploying AI systems that manipulate individuals through subliminal techniques, exploit vulnerabilities based on age or disability, score individuals based on social behaviour, or conduct real-time remote biometric identification in publicly accessible spaces for law enforcement purposes are already in breach of the Act. Fines for prohibited practices reach up to thirty-five million euros or seven percent of global annual turnover, whichever is higher.

The Article 4 AI literacy requirement is in force. Organisations must ensure that staff involved in deploying AI systems possess sufficient AI literacy to understand and work with those systems at the level required by their role. This is a targeted, role-specific requirement that must be documented — not a generic digital skills obligation.
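What a documented, role-specific literacy obligation looks like in practice is a structured record per person and role. Below is a minimal sketch; the schema, role descriptions, and review cadence are illustrative assumptions, since the Act requires documented literacy but prescribes no format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LiteracyRecord:
    """One staff member's AI literacy evidence for a specific role.

    Illustrative only: the Act requires documented, role-appropriate
    literacy but does not prescribe a schema like this one.
    """
    staff_id: str
    role: str                 # e.g. "recruiter operating a CV-screening system"
    systems_used: list[str]   # the AI systems this role actually deploys
    training_completed: list[str] = field(default_factory=list)
    assessed_on: date | None = None
    next_review: date | None = None

    def is_current(self, today: date) -> bool:
        # Literacy evidence lapses once the scheduled review date passes.
        return self.next_review is not None and today <= self.next_review

record = LiteracyRecord(
    staff_id="E-1042",
    role="HR recruiter operating a CV-screening system",
    systems_used=["cv-screening-v3"],
    training_completed=["AI literacy foundation", "CV-screening oversight module"],
    assessed_on=date(2025, 3, 1),
    next_review=date(2026, 3, 1),
)
print(record.is_current(date(2025, 9, 1)))  # True: review not yet due
```

The useful property is the review date: literacy evidence that can never lapse is evidence no one maintains.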

The obligations on providers of general-purpose AI models are also in force, with additional requirements for models posing systemic risk. When you procure a GPAI model from a provider subject to the Act, that provider must give you the documentation and transparency information you need to meet your own downstream obligations. Reviewing AI vendor contracts for compliance with these provisions should be a current priority.

High-risk AI: what the delayed obligations require

The high-risk AI system framework applies to AI in eight areas: biometric identification; critical infrastructure; education and vocational training; employment; access to essential private and public services; law enforcement; migration, asylum and border control; and the administration of justice and democratic processes.

For high-risk AI systems, providers must implement a risk management system covering the entire lifecycle, establish data governance practices, prepare technical documentation, implement automatic logging, ensure transparency for deployers, enable human oversight by design, undergo conformity assessment, and, where applicable, register the system in the EU AI database.
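Of those obligations, automatic logging is the most concretely technical. The sketch below shows one way to capture a traceable event per AI output, assuming an append-only JSON-lines file; the field names and format are illustrative, not mandated by the Act.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_system_events.jsonl")  # illustrative location

def log_event(system_id: str, model_version: str, input_ref: str,
              output_summary: str, reviewer: str | None = None) -> None:
    """Append one decision event as a JSON line.

    Fields are illustrative. The point is traceability: each output
    links back to a timestamp, a model version, and an input reference.
    """
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,       # a reference, not the raw personal data
        "output_summary": output_summary,
        "human_reviewer": reviewer,   # None if no human was in the loop
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_event("credit-scoring-v2", "2025.06", "application:88231",
          "declined, score 412", reviewer="analyst-17")
```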

Deployers have their own obligations: conduct a fundamental rights impact assessment where the Act requires one, implement human oversight, monitor the system in operation, report serious incidents, and inform affected individuals of AI use where required. These are not paperwork exercises; they require genuine operational infrastructure and designated human accountability.
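A fundamental rights impact assessment, in particular, is a concrete artefact. The record sketched below loosely tracks the kinds of elements the Act describes; the field names are assumptions, not an official template.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative FRIA record; field names are assumptions,
    not the Act's official template."""
    system_id: str
    deployment_process: str       # how and where the system is used
    period_of_use: str
    affected_groups: list[str]    # categories of persons likely affected
    identified_risks: list[str]   # specific risks of harm to those groups
    oversight_measures: list[str]
    mitigation_if_risk_materialises: list[str]

fria = FundamentalRightsImpactAssessment(
    system_id="benefits-eligibility-v1",
    deployment_process="Pre-screening of benefit applications before caseworker review",
    period_of_use="2026-01 onwards, reviewed annually",
    affected_groups=["benefit applicants", "applicants with disabilities"],
    identified_risks=["wrongful denial of essential support"],
    oversight_measures=["caseworker must confirm every denial"],
    mitigation_if_risk_materialises=["suspend automated pre-screening",
                                     "manual re-review of affected cases"],
)
```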

What boards still do not understand

The most common board-level misconception is that the EU AI Act is primarily a compliance issue, to be delegated to the legal function and managed as a cost centre. This is incorrect in a way that creates material governance risk.

The Act creates board-level accountability directly. The requirement for human oversight of high-risk AI systems must be operationally effective — not performed through a policy document that no one reads. Governance theatre — a tick-box human review that rubber-stamps AI outputs without genuine scrutiny — does not satisfy the Act and will not satisfy an enforcement authority.

The EU AI Act does not ask boards to understand the mathematics of neural networks. It asks them to demonstrate that their organisation exercises genuine human judgment at points where AI affects people's lives. That is a governance question, not a technical one.
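Governance questions still produce measurable evidence. One way to distinguish genuine oversight from a rubber stamp is to instrument the review step itself: record every human decision with its rationale, and flag reviews too quick to constitute scrutiny. A sketch, with an illustrative time threshold:

```python
from datetime import datetime, timezone

MIN_REVIEW_SECONDS = 30  # illustrative threshold, to be calibrated per system

class OversightGate:
    """Wraps an AI recommendation in a recorded human decision.

    A reviewer who approves the AI's recommendation faster than the
    threshold is flagged for audit: rapid uniform approvals are the
    signature of rubber-stamping.
    """
    def __init__(self):
        self.audit_trail = []

    def open_review(self, case_id: str, ai_recommendation: str) -> dict:
        return {"case_id": case_id, "ai_recommendation": ai_recommendation,
                "opened_at": datetime.now(timezone.utc)}

    def close_review(self, review: dict, decision: str, rationale: str) -> dict:
        elapsed = (datetime.now(timezone.utc) - review["opened_at"]).total_seconds()
        entry = {
            **review,
            "decision": decision,
            "rationale": rationale,
            "seconds_spent": elapsed,
            "flagged_as_rubber_stamp": (
                elapsed < MIN_REVIEW_SECONDS
                and decision == review["ai_recommendation"]
            ),
        }
        self.audit_trail.append(entry)
        return entry

gate = OversightGate()
review = gate.open_review("case-77", "deny")
entry = gate.close_review(review, decision="deny",
                          rationale="income below threshold")
print(entry["flagged_as_rubber_stamp"])  # True: closed seconds after opening
```

Aggregate statistics over that audit trail (override rates, median review time, the proportion of flagged reviews) are exactly the evidence a board can use to test whether oversight is operationally effective.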

The second board-level issue is incident reporting. For organisations with high-risk AI systems in multiple EU member states, identifying the relevant authority in each, establishing internal incident detection and reporting workflows, and ensuring incidents are reported within the required timescales all require infrastructure that most organisations do not currently have.
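The workflow becomes tractable once incidents are structured records with a competent authority and a deadline attached. In the sketch below, the severity categories and day counts are placeholders: the Act's actual reporting timescales vary with the nature of the incident and must be confirmed against the applicable provisions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative deadlines by severity; confirm against the Act's actual
# timescales, which differ by incident type.
REPORTING_DEADLINES = {
    "critical": timedelta(days=2),
    "serious": timedelta(days=15),
}

@dataclass
class SeriousIncident:
    incident_id: str
    system_id: str
    member_state: str      # determines the competent national authority
    severity: str          # key into REPORTING_DEADLINES
    became_aware_at: datetime

    def report_due_by(self) -> datetime:
        # The clock runs from awareness of the incident, not its occurrence.
        return self.became_aware_at + REPORTING_DEADLINES[self.severity]

    def is_overdue(self, now: datetime) -> bool:
        return now > self.report_due_by()
```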

The third issue is supply chain due diligence. Most organisations procure AI from third parties. The Act creates obligations that flow through this supply chain, and deployers cannot simply rely on their AI vendors' assurances. Existing AI procurement contracts should be reviewed and renegotiated where necessary.
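In practice, that review reduces to a checklist of clauses that must be present and evidenced in each vendor contract. The clause list below is an illustrative starting point, not an exhaustive or authoritative one:

```python
# Illustrative contract-review checklist for AI procurement; this clause
# list is a starting point, not legal advice.
REQUIRED_CLAUSES = [
    "access to technical documentation and instructions for use",
    "provider duty to notify deployer of serious incidents",
    "notice of model changes that could amount to substantial modification",
    "cooperation with deployer monitoring and logging obligations",
    "audit or evidence rights over the provider's compliance claims",
]

def review_contract(contract_clauses: set[str]) -> list[str]:
    """Return the required clauses missing from a vendor contract."""
    return [c for c in REQUIRED_CLAUSES if c not in contract_clauses]

missing = review_contract({
    "access to technical documentation and instructions for use",
    "provider duty to notify deployer of serious incidents",
})
print(missing)  # three gaps to renegotiate
```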

The competitive reality

The organisations investing in EU AI Act compliance now are developing governance infrastructure — risk management systems, documentation practices, human oversight mechanisms — that will be required by any mature AI governance framework. They are also developing the institutional knowledge needed to deploy AI responsibly in high-stakes domains.

The EU AI Act is the most significant AI governance development in global regulatory history. The organisations that understand it most clearly and build governance frameworks that genuinely satisfy its requirements will be the organisations best positioned to deploy AI in the domains where it creates the most value — because they will have demonstrated that their deployment practices can be trusted.