EU AI Act: A Global Benchmark for Governing Artificial Intelligence
Brussels has done what Washington could not and Beijing would not. The EU AI Act, which entered into force in August 2024 and is now midway through a staggered implementation timeline, represents the first binding, horizontal regulatory framework for artificial intelligence anywhere in the world. For compliance professionals, the question is no longer whether this legislation matters. It is whether their organisations are moving fast enough to meet a set of obligations that are more operationally demanding, and more technically granular, than much of the early commentary suggested.
The regulation is not a repackaged GDPR for algorithms. It introduces a risk-based classification system, imposes lifecycle obligations on providers and deployers alike, creates new institutional machinery at both the European and national level, and extends its reach well beyond the bloc’s borders. What follows is a working overview of the EU AI Act as it stands, with attention to the areas most likely to consume compliance resources over the next 18 months.
How the EU AI Act’s Risk Classification Works in Practice
The four-tier model (unacceptable, high, limited and minimal risk) is by now well understood in outline. What deserves closer attention is how classification operates at the edges. A system is not high-risk simply because it touches a sensitive sector. It must fall within one of the specific use cases enumerated in Annex III, covering biometric identification, critical infrastructure, education, employment, access to essential services (including credit scoring), law enforcement, migration and the administration of justice. The list is exhaustive, not illustrative, though the Commission retains the power to amend it by delegated act.
Critically, the regulation also captures AI systems that serve as safety components of products governed by existing EU harmonised legislation, such as the Machinery Regulation and the Medical Devices Regulation. Where a product requires third-party conformity assessment, the AI component inherits the same obligation. Compliance teams in regulated industries should be mapping these overlaps now, because the conformity assessment route (self-assessment under Annex VI versus notified body involvement under Annex VII) has significant cost and timeline implications.
Consider a practical example. A bank deploys an AI system to automate credit-scoring decisions. That system falls squarely within Annex III as a high-risk use case governing access to essential financial services. The bank, as deployer, must conduct a fundamental rights impact assessment, ensure meaningful human oversight of automated decisions, and notify the market surveillance authority. If the bank has customised the underlying model to the point where it has altered the system’s intended purpose, it risks being reclassified as a provider, inheriting the full suite of conformity assessment, technical documentation and post-market monitoring obligations. That reclassification trigger, which also applies where a deployer places its own name or trademark on a system, is a procurement landmine that compliance officers need to flag early in any vendor relationship.
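For teams that track classification decisions in code, the logic sketches roughly as follows. This is an illustrative simplification under assumed names, not legal reasoning: the category labels, flags and function names are invented for the example, and real classification requires case-by-case analysis against the Annex III wording.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Simplified labels for the Annex III high-risk areas named above
# (assumption: the regulation's actual categories are more precisely drawn).
ANNEX_III_USE_CASES = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services_credit_scoring",
    "law_enforcement",
    "migration",
    "administration_of_justice",
}

class Role(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()

@dataclass
class AISystem:
    use_case: str
    deployer_rebranded: bool = False        # deployer's own name or trademark on the system
    intended_purpose_changed: bool = False  # customisation altered the intended purpose

def is_high_risk(system: AISystem) -> bool:
    """Annex III check only; the safety-component route under existing
    product legislation would need a separate check."""
    return system.use_case in ANNEX_III_USE_CASES

def effective_role(system: AISystem, contractual_role: Role) -> Role:
    """The reclassification trigger, heavily simplified: a deployer that
    rebrands a system or changes its intended purpose steps into the
    provider's obligations."""
    if contractual_role is Role.DEPLOYER and (
        system.deployer_rebranded or system.intended_purpose_changed
    ):
        return Role.PROVIDER
    return contractual_role

# The bank's credit-scoring example from the text:
credit_scoring = AISystem(use_case="essential_services_credit_scoring",
                          intended_purpose_changed=True)
print(is_high_risk(credit_scoring))                   # True
print(effective_role(credit_scoring, Role.DEPLOYER))  # Role.PROVIDER
```

The value of even a toy model like this is procedural: it forces the rebranding and intended-purpose questions to be answered explicitly for every system in the inventory.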
Conformity Assessment, CE Marking and the Standards Gap
For high-risk systems, market access requires completing a conformity assessment before the system is placed on the market or put into service. Most will follow the internal control procedure in Annex VI. Biometric systems are the exception: where harmonised standards or common specifications have not been applied in full, they must take the heavier Annex VII route involving a notified body; where the provider has applied them in full, it may choose between the two routes.
Here lies one of the regulation's most pressing practical problems. Conformity assessment leans on harmonised standards, and the bodies responsible for producing them (CEN and CENELEC) have yet to deliver. The Commission issued a standardisation request in May 2023, but finalised standards remain some way off. In the interim, providers must demonstrate compliance against the essential requirements in the regulation itself, without the presumption-of-conformity safe harbour that harmonised standards would provide. Organisations should document their interpretive choices carefully so they can defend them if challenged.
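Read together with the biometrics exception above, the route selection reduces to a small decision. The sketch below uses invented flag names and omits the product-legislation overlap and common specifications entirely; it is a simplified reading, not a substitute for the regulation's own text.

```python
def conformity_route(is_biometric: bool,
                     standards_applied_in_full: bool) -> str:
    """Simplified route selection for Annex III high-risk systems.
    Omits the product-legislation overlap and common specifications."""
    if is_biometric and not standards_applied_in_full:
        # No harmonised standards to rely on: a notified body must be involved.
        return "Annex VII (notified body)"
    if is_biometric:
        # Standards applied in full: the provider may choose either route.
        return "Annex VI or Annex VII, at the provider's choice"
    return "Annex VI (internal control)"

# With the standards gap unresolved, biometric providers default to Annex VII:
print(conformity_route(is_biometric=True, standards_applied_in_full=False))
```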
Obligations That Compliance Teams Tend to Underestimate
Three requirements deserve more attention than they typically receive.
The first is Article 4’s AI literacy obligation. This applies to all organisations that provide or deploy AI systems, regardless of risk tier. Staff and other persons dealing with AI on the organisation’s behalf must have a sufficient level of AI literacy appropriate to their role and context. This is not a high-risk-only requirement. It took effect in February 2025, meaning most organisations are already subject to it.
The second is the fundamental rights impact assessment for deployers of high-risk systems, as outlined in the credit-scoring example above. This sits squarely on the deployer’s shoulders, separate from the provider’s own risk management obligations.
The third is the interaction with the GDPR. The EU AI Act does not replace existing data protection law. The data governance requirements in Article 10 (on training, validation and testing data) must be read alongside the GDPR’s purpose limitation and data minimisation principles. In practice, this dual compliance burden will be one of the most technically complex areas to navigate, particularly for systems trained on large datasets.
General-Purpose AI: Where the EU AI Act Breaks New Ground
The provisions on general-purpose AI (GPAI) models are among the regulation’s most novel features. All GPAI providers must maintain technical documentation, implement a copyright compliance policy, and publish a sufficiently detailed summary of training data content. Models assessed as posing systemic risk (defined by a computational threshold of 10²⁵ FLOPs, though the Commission may revise this) face additional obligations: adversarial testing, systemic risk assessment, incident reporting to the AI Office, and cybersecurity protections.
The open-source dimension matters here. The EU AI Act broadly exempts open-source AI models from provider obligations, a carve-out that reflects sustained industry lobbying. But the exemption vanishes for GPAI models with systemic risk. For compliance teams evaluating whether to build on open-source foundations, that boundary is material: an open-source model that crosses the systemic risk threshold brings its provider back into the full regulatory perimeter.
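To put rough numbers on that boundary: training compute is commonly estimated with the rule of thumb of about six FLOPs per parameter per training token. That heuristic is an industry approximation, not a counting method prescribed by the regulation, and the figures below are hypothetical.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold; the Commission may revise it

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common ~6 * N * D estimate for dense-model training compute (an
    approximation, not the regulation's own method)."""
    return 6.0 * parameters * training_tokens

def gpai_posture(parameters: float, tokens: float, open_source: bool) -> str:
    if estimated_training_flops(parameters, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        # Above the threshold the open-source carve-out falls away.
        return "systemic-risk GPAI: full obligations apply"
    if open_source:
        return "open-source GPAI below threshold: broadly exempt from provider obligations"
    return "standard GPAI obligations: documentation, copyright policy, training-data summary"

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
# lands at roughly 6.3e24 FLOPs, just under the 1e25 presumption.
print(f"{estimated_training_flops(70e9, 15e12):.2e}")   # 6.30e+24
print(gpai_posture(70e9, 15e12, open_source=True))
```

A model hovering near the threshold is precisely the case where the compute estimate, and the assumptions behind it, should be documented.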
The AI Office has been developing codes of practice to give GPAI obligations practical shape. Compliance with an approved code creates a presumption of conformity, making participation in the drafting process strategically valuable.
Governance, Enforcement and the Extraterritorial Dimension
Enforcement operates through a layered architecture. The AI Office holds exclusive competence over GPAI models. National market surveillance authorities handle high-risk systems. The European Artificial Intelligence Board coordinates cross-border matters and issues guidance. Member states must also establish at least one AI regulatory sandbox, a facility compliance teams should explore where novel systems are being developed.
The national implementation picture, however, is uneven. Member states were required to designate competent authorities, but progress varies markedly across the bloc. For organisations operating in multiple jurisdictions, this patchwork creates practical uncertainty about who will be supervising what, and how consistently the rules will be applied. It is worth recalling that the GDPR experienced similar teething problems, with enforcement quality diverging sharply between data protection authorities. The EU AI Act’s enforcement landscape is likely to follow a comparable pattern.
Penalties are calibrated to bite. Deploying a prohibited system attracts fines of up to €35 million or 7% of global annual turnover, whichever is higher. Breaching high-risk obligations carries a ceiling of €15 million or 3%, on the same higher-of basis. The regulation's extraterritorial reach mirrors the GDPR: it applies to providers and deployers outside the EU where the system's output is used within it. Non-European providers must appoint an authorised representative in the Union.
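The higher-of mechanics mean that for any sizeable firm the turnover percentage, not the fixed cap, sets the ceiling. A quick illustration with a hypothetical turnover figure:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fines take the higher of a fixed cap and a share of
    global annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 20e9  # hypothetical: EUR 20 billion global annual turnover

# Prohibited-practice tier: up to EUR 35M or 7%, whichever is higher.
print(f"EUR {fine_ceiling(turnover, 35e6, 0.07):,.0f}")  # EUR 1,400,000,000

# High-risk obligations tier: up to EUR 15M or 3%, whichever is higher.
print(f"EUR {fine_ceiling(turnover, 15e6, 0.03):,.0f}")  # EUR 600,000,000
```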
What to Do Now
The prohibitions and AI literacy obligation are already in force. GPAI obligations applied from August 2025. The high-risk requirements land in August 2026, with certain product-safety-linked systems following in August 2027. For compliance professionals, the immediate priorities are clear: complete an AI inventory, map systems to risk tiers and applicable product legislation, identify conformity assessment routes, close the AI literacy gap, stress-test procurement contracts for provider-deployer allocation, and monitor the AI Office’s evolving guidance closely. Those who treat the EU AI Act as a box-ticking exercise will find, as many did with the GDPR, that the regulation’s real force lies not in its penalties but in the operational discipline it demands.
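As a concrete starting point for that inventory, a minimal record structure might capture the dimensions the checklist touches. The field names here are assumptions made for the sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row of an AI inventory; field names are illustrative, not a
    prescribed schema."""
    system_name: str
    vendor: str
    role: str                             # "provider" or "deployer"
    risk_tier: str                        # "prohibited" | "high" | "limited" | "minimal"
    annex_iii_use_case: str | None = None
    product_legislation: list[str] = field(default_factory=list)
    conformity_route: str | None = None   # "Annex VI" / "Annex VII" / not applicable
    fria_required: bool = False           # fundamental rights impact assessment
    literacy_training_done: bool = False  # Article 4, in force since February 2025
    key_deadline: str = ""                # e.g. "2026-08" for most high-risk systems

example = InventoryEntry(
    system_name="credit-scoring-v2",
    vendor="example-vendor",
    role="deployer",
    risk_tier="high",
    annex_iii_use_case="access to essential services (credit scoring)",
    conformity_route="Annex VI",
    fria_required=True,
    key_deadline="2026-08",
)
print(example)
```

Even a structure this simple forces the provider-deployer question, the conformity route and the deadline to be settled per system, which is where the operational discipline the regulation demands actually begins.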
