
EU AI Act 101 – An In-depth Analysis of Europe’s AI Regulatory Framework

28 March 2024

In this article, our Data, Privacy & Cyber team provide an in-depth analysis of Europe’s AI Regulatory Framework.

Amid much excitement, the EU Parliament’s plenary vote on the EU AI Act (the “Act”) took place on 13 March 2024. The vote was overwhelmingly in favour, and the Act is being hailed as the world’s first artificial intelligence legislation aimed at “putting a clear path towards a safe and human-centric development of AI”.

The Council of the EU should formally endorse the final text at some point in April 2024. Then, following further linguistic work, the Act will be published in the Official Journal of the EU and enter into force 20 days after publication, with a staggered implementation allowing different provisions to come in over a three-year period (see below for more detail).

So, whilst there is still a little way to go before the Act becomes “law”, we are nearly at the finish line (or the starting line depending on your point of view), and it is important to understand the impact it might have on your organisation and the steps you can take now in order to prepare.

Who does it apply to?

In summary, the Act applies to “providers”, “importers”, “distributors” and “deployers” (both public and private) of “AI systems” which are placed on the EU market or which affect those located in the EU; i.e. in a similar way to GDPR it purports to have a wide jurisdictional “reach”. Brussels, as ever, is keen to promote the “Brussels effect”.

AI systems

There has been much debate throughout the Act’s passage as to what an AI system means. The Act has now aligned its definition with the OECD’s definition, i.e.

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

The Act makes it clear we are not talking about automation, rather the key distinguishing factor is the AI system’s capability to infer something; basic and very commonly used automation tools are therefore out of scope.

Key role definitions

The Act defines what it means by “providers”, “importers”, “distributors” and “deployers”. Note that we will need to see how these definitions are interpreted via guidance, commentary and case law, but for now the Act sets out that:

  • “Provider” means “a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge”.

    Think: OpenAI, Google and other well-known AI leaders, but do be aware that many unexpected players will be inadvertently caught by this definition (and some mere “Deployers” will be caught as “Providers” depending on how they implement AI systems).

  • “Importer” means “any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union”.

  • “Distributor” means “any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties”.

    The boundary between Importer and Distributor is grey, and we will need to see who falls into each camp. Think: third-party intermediaries selling Provider systems into the EU.

  • “Deployer” means “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”.

    Think: the user (be it a business or an individual) of an AI system; but for those of you who, like one of the authors, use ChatGPT to create bedtime stories for their children, do note the non-professional activity exemption (i.e. no need for privacy notices for your children!).

It is important to understand your role in order to understand the obligations that will apply as there are different levels of obligation depending on your role; and we talk more below about other key steps you can take with regard to the Act.

The Act does not in general apply to providers of free and open-source models (some exemptions apply, e.g. systemic general purpose AI (“GPAI”) models will be subject to regulation even if open source), to AI systems used for national security, military or defence purposes, or to research, development and prototyping activities (prior to release on the EU market).

When does the EU AI Act come into force?

Once the Act is published in the Official Journal of the European Union, it will come into force after 20 days; and there is then a staggered implementation allowing different provisions to come in over a three-year period depending on the type of AI system (see below for more information on these systems) in question:

  • 6 months after coming into force, the bans on prohibited practices will apply;
  • 9 months after coming into force, the codes of practice should be ready;
  • 12 months after coming into force, the obligations for general purpose AI, including governance, will apply and penalties will come into force;
  • 24 months after coming into force, the Act is broadly fully applicable, including obligations for many high-risk systems (those listed in Annex III – see below for more information as to what type of systems this covers); and
  • 36 months after coming into force, the obligations for other high-risk systems (those defined in Annex II – see below for more information as to what type of systems this covers) will apply.

See below for more detail on what these terms, e.g. “prohibited practices” and “high-risk systems”, mean, but in summary this is a long run-in before the Act is fully in force, during which time a lot might change in the world of AI.
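For those mapping these deadlines onto a project plan, here is a minimal sketch (in Python) that turns the staggered periods into calendar dates. The entry-into-force date used is purely an assumption for illustration; the real date depends on when the Act is published in the Official Journal, plus 20 days.

```python
# Illustrative only: compute the staggered application dates from a
# hypothetical entry-into-force date.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

entry_into_force = date(2024, 8, 1)  # assumption for illustration only

milestones = {
    "Bans on prohibited practices": 6,
    "Codes of practice ready": 9,
    "GPAI obligations, governance and penalties": 12,
    "Broad application, incl. Annex III high-risk systems": 24,
    "Annex II high-risk systems": 36,
}

for label, months in milestones.items():
    print(f"{label}: {entry_into_force + relativedelta(months=months)}")
```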

Expedited compliance – the AI Pact?

Recognising the timescales involved, the fast pace of the technological developments and the increase in public awareness and usage of AI systems, the European Commission wants something to plug the gap. The so-called AI Pact is designed to do this.

It is “a scheme that will foster early implementation of the measures foreseen by the AI Act”. The commitments will be pledges to work towards compliance with the Act (even before the time frame above comes into being), including details about how the obligations are being met. The Commission will publish these pledges to “provide visibility, increase credibility, and build additional trust in the technologies developed by companies taking part in the Pact”.

In November 2023, the Commission launched a call for interest for organisations willing to get actively involved in the AI Pact. The AI Pact will be officially launched following the formal adoption of the EU AI Act and organisations will be “invited to make their first pledges public”.

It is currently a “watch this space” situation to see how many companies actually make these pledges. Might deployer customers start forcing provider vendors to sign up to the AI Pact as part of commercial negotiations? Might it become a due diligence question in AI procurement? Might it become a provider marketing tool, i.e. “We are already compliant with the EU AI Act via the AI Pact”?

Types of AI and different risk and obligations

The Act takes a risk-based approach, identifying four categories of risk (i.e. the higher the risk the greater the obligation), as well as the more recently included specific obligations placed on Providers of GPAI models.

The different categories of risk are:

  • Unacceptable risk (prohibited) – AI systems “considered a threat to people and will be banned”.

    These systems are prohibited and so must be phased out, at the latest, six months after the Act comes into force.

    Examples include AI systems that pose a significant risk to fundamental rights, safety or health, e.g.:

      • social credit scoring systems;
      • emotion-recognition systems using biometric data in the workplace and education institutions (exceptions apply);
      • untargeted scraping of facial images for facial recognition;
      • behavioural manipulation and circumvention of free will (note that the recital expressly states that this does not cover advertising);
      • exploitation of the vulnerabilities of persons, e.g. due to age or disability;
      • biometric categorisation of natural persons to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation;
      • certain specific predictive policing applications; and
      • real-time remote biometric identification in public by law enforcement (exceptions apply).

  • High risk – AI systems “that negatively affect safety or fundamental rights will be considered high risk”.

    This is where the majority of time, resource and focus will be needed, as most of the text of the Act addresses these systems and this is where the most stringent “day-to-day” obligations apply (outside the outright ban on prohibited AI systems). It is essential to understand your role in order to understand which obligations apply, e.g. are you a Provider (Articles 16 – 25) or a Deployer (Article 29)? (See below for more on these differing roles and their obligations.)

High-risk systems are split into two:

  • the list in Annex II, i.e. AI systems considered to be high-risk because they are covered by certain EU harmonisation legislation (an AI system in this Annex will be considered high-risk when (1) it is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the EU harmonisation legislation; and (2) the product or system has to undergo a third-party conformity assessment under the EU harmonisation legislation); and
  • the list in Annex III of AI systems which are classified as “high risk” by the Act itself.

Examples of high risk systems in Annex III include:

  • certain non-banned biometric identification systems (excluding biometric verification systems that confirm a person is who they claim to be);
  • certain critical infrastructure systems;
  • AI systems intended to be used in relation to education and vocational training; and
  • AI systems intended to be used in relation to access to and enjoyment of essential private and public services.

In addition, and very importantly as this will likely touch every company with employees in the EU, Annex III makes clear that AI systems intended to be used in many employment contexts (amongst other things, recruitment or selection of individuals, or monitoring and evaluating the performance and/or behaviour of employees) are considered high risk.

Do note that Providers who believe their systems sit outside these high-risk parameters (because, for example, they only perform “a narrow procedural task” or “improve the result of a previously completed human activity”) must document why they believe the high-risk rules do not apply. We will see how many Providers take up this exemption, as it will be subject to significant scrutiny from both the relevant authorities and Deployers.

  • Limited risk – “refers to the risks associated with lack of transparency in AI usage”.

    AI systems that are not high-risk but pose transparency risks will be subject to specific transparency requirements under the AI Act.

    Providers must ensure that users are aware that they are interacting with a machine, that AI-generated content is identifiable, and that solutions are effective, interoperable, robust and reliable. Deployers must ensure that AI-generated text published with the purpose of informing the public on matters of public interest is labelled as artificially generated, and that AI-generated audio and video content constituting deep fakes is also labelled as artificially generated (see the sketch after this list for a toy illustration of the labelling idea). Examples of AI systems in this category include chatbots, text generators and audio and video content generators.

  • Minimal risk – “the AI Act allows the free use of minimal-risk AI”.

    There are no additional requirements mandated by the Act. Examples of AI systems in this category are AI-enabled video games, spam filters, online shopping recommendations, weather forecasting algorithms, language translation tools, grammar checking tools and automated meeting schedulers.
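On the limited-risk labelling point above: the Act requires that AI-generated content be identifiable, including in machine-readable form. The function below is a purely hypothetical sketch of the idea (the names and structure are ours), not a compliant implementation; real implementations will need to follow whatever technical standards emerge, such as watermarking or content-provenance schemes.

```python
# Hypothetical sketch: attach a machine-readable "AI-generated" label
# (a simple provenance record) to generated content.
import json
from datetime import datetime, timezone

def label_ai_generated(content: str, model_name: str) -> dict:
    """Bundle content with a basic provenance record."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labelled = label_ai_generated("Once upon a time...", "example-model")
print(json.dumps(labelled, indent=2))
```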

Providers of General purpose AI (“GPAI”) models

In addition to the above categories of “risk”-assessed AI systems, the Act imposes specific obligations on providers of generative AI models on which general purpose AI systems, like ChatGPT, are based. Providers are required to: perform fundamental rights impact assessments and conformity assessments; implement risk management and quality management systems to continually assess and mitigate systemic risks; inform individuals when they interact with AI (e.g. AI content must be labelled and detectable); and test and monitor for accuracy, robustness and cybersecurity.

Where GPAI models are considered systemic (i.e. when the cumulative amount of computing power used for a model’s training is greater than 10^25 floating point operations), providers will be subject to additional requirements to assess and mitigate risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity, provide information on the energy consumption of these models and engage with the European AI Office (see below for more information about this newly formed body) to draw up codes of conduct.
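To put that threshold in context: it measures cumulative training compute, not processing speed. As a purely illustrative sketch, the snippet below uses a common back-of-envelope rule of thumb (roughly 6 floating point operations per parameter per training token – an industry approximation, not anything the Act prescribes) to check a hypothetical model against the threshold.

```python
# Rough, illustrative check against the Act's 10^25 FLOPs systemic-risk
# threshold. The 6 * params * tokens approximation is a common rule of
# thumb for training compute, not a method set out in the Act.
SYSTEMIC_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Back-of-envelope estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(100e9, 10e12)
print(f"~{flops:.1e} FLOPs; systemic: {flops > SYSTEMIC_THRESHOLD_FLOPS}")
```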

Differing roles, differing obligations?

Understanding your role in relation to the Act is crucial, as different obligations attach to different roles. This is probably best illustrated by looking at the key provider and deployer obligations for high-risk systems, where we believe the main compliance focus under the Act will fall:

High-risk AI systems

Provider obligations include:

  • designing the systems to allow for effective human oversight;
  • designing the systems to ensure an appropriate level of accuracy, robustness and cybersecurity;
  • drafting and maintaining technical documentation for the AI system;
  • establishing, implementing, documenting and maintaining a risk management system and quality management system;
  • meeting data governance requirements, including bias mitigation;
  • record-keeping, logging and traceability obligations;
  • complying with registration obligations;
  • ensuring the relevant conformity assessment procedure is undertaken;
  • making the provider’s contact information available on the AI system, packaging or accompanying documentation;
  • drawing up the EU declaration of conformity promptly; and
  • ensuring the “CE marking of conformity” is affixed to the AI system.

Deployer obligations include:

  • informing individuals that the deployer plans to use a high-risk AI system to make decisions, or assist in making decisions, relating to such individuals (in an employment context, deployers must inform workers’ representatives and the impacted workers that they will be subject to a high-risk AI system);
  • using information from the provider to carry out a DPIA (likely to be required for a high-risk system);
  • undertaking a fundamental rights impact assessment for certain deployers and high-risk systems, e.g. if evaluating the creditworthiness of individuals or establishing their credit score, or for life and health insurance when used for risk assessment and pricing in relation to individuals;
  • assigning human oversight of the AI system to a person with the necessary “competence, training, and authority”;
  • if the deployer controls input data, ensuring that the data is relevant and sufficiently representative; and
  • if a decision generated by the AI system results in legal or similarly significant effects, providing a clear and meaningful explanation of the role of the AI system in the decision-making process and the main elements of the decision.

Enforcement

What are the relevant Institutions?

There are myriad different institutions, some old, some new, that will take an interest in enforcing the Act.

The AI Office will be established, sitting within the Commission, to monitor the effective implementation of the Act and compliance by GPAI model providers, as well as, in summary, being responsible for the production of codes of practice (in conjunction with other relevant parties).

In addition to the EU AI Office, the European Artificial Intelligence Board (analogous to the European Data Protection Board) will comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor, and the European Commission. Its role is to facilitate a smooth, effective, and harmonised implementation of the Act, to co-ordinate between national authorities and to issue recommendations and opinions.

Member State authorities (and it will not necessarily be current data regulators but may be a combination of regulators) will be responsible for local enforcement and indeed the Act allows for individuals to lodge an infringement complaint with a national competent authority.

With so many different bodies involved expect either perfect harmony or potentially contradictory and confusing guidance, decisions and commentary.

What are the penalties for non-compliance?

The penalties set out in the Act are as follows:

  • Non-compliance with prohibited AI practices: up to 7% of global annual turnover or €35 million, whichever is higher.
  • Non-compliance with various other obligations under the Act: up to 3% of global annual turnover or €15 million, whichever is higher.
  • Supplying incorrect information to authorities: up to 1% of global annual turnover or €7.5 million, whichever is higher.
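As a purely illustrative sketch of how these caps scale with turnover (on the basis that the higher of the fixed amount and the turnover percentage applies), the hypothetical helper below computes the maximum exposure per tier; the tier names are ours, not the Act’s.

```python
# Illustrative fine caps: for each tier, the cap is the higher of a fixed
# euro amount and a percentage of global annual turnover.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * global_annual_turnover_eur)

# Hypothetical company with EUR 2bn turnover breaching a prohibition:
print(f"EUR {max_fine('prohibited_practices', 2e9):,.0f}")  # EUR 140,000,000
```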

It will also be important to follow the progress of the AI Liability Directive, which aims to introduce rules specific to damages caused by AI systems to “ensure that victims of harm caused by AI technology can access reparation”. The current proposal introduces the “presumption of causality”, which means victims will not have to “explain in detail how the damage was caused by a certain fault or omission” and when dealing with high risk AI systems they will have access to evidence from suppliers and companies. This is an attempt to level the playing field for individuals who have suffered harm as having to meet the burden of proof under the existing fault-based liability regime could make it “excessively difficult if not impossible for a victim”.

There is also the revised Product Liability Directive to consider. It updates the EU strict product liability regime and will “apply to claims against the manufacturer for damage caused by defective products; material losses due to loss of life, damage to health or property and data loss”. It is limited to claims made by individuals, but it is clear why those working in the AI space will need to be aware of the changes and how they affect their business.

More familiar data and privacy claims under the GDPR have to be considered as well (i.e. many AI use cases involve voluminous personal data processing under the GDPR); as such, the rights of individuals have to be considered in the round, and there are multiple routes to enforcement and avenues for claims that will be important to understand.

So what should you be doing now?

There is a lot to think about. Whether you already have AI systems in place or are considering the use of new AI systems, in very broad summary and just as initial starting-point thoughts, you need to think about the following [note this is from the point of view of a Deployer – Providers et al. will have different considerations]:

1. AI Systems Audit: What AI systems do we use or are we planning to use?

2. EU AI Act Applicability Audit:

  • Does the Act apply to any of these?
  • What risk categories apply?
  • What is our role under the Act? Are we a Provider, a Deployer, etc.?

3. Leverage off Providers:

  • What due diligence have we done on Providers, specifically regarding their compliance in relation to the Act?
  • How can they help us with our own compliance with the Act?

4. Data Governance (current systems and new processes):

  • What is the best governance method for us to cover all our obligations under the Act? (e.g. do we need to consider a mixture of procurement due diligence on Providers and our own internal DPIA process running through all our obligations?)
  • Think about the data & privacy and procurement compliance processes we already have in place that can take up some of the slack.
  • How are we going to repeat this again and again to ensure continued compliance?
  • Do we need, if we do not have one already, a multi-stakeholder AI Governance committee?
  • Should we put in place AI audit systems to ensure such continuing compliance?

These are just ideas, and every business will have its own approach. But if a business initially thinks about AI Governance Considerations, does an AI Systems Audit and an EU AI Act Applicability Audit, considers how best to leverage off Providers, and then thinks about what current compliance systems it can redeploy/re-tool to assist – it will be on the right track.
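For organisations looking to operationalise the audits above, a minimal sketch of what an entry in an internal AI system register might look like is set out below. The field names, categories and example entry are entirely hypothetical and illustrative; they are not drawn from the Act’s text.

```python
# Hypothetical shape for an internal AI system register, covering the
# audit questions above (scope, role, risk category, due diligence).
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    DEPLOYER = "deployer"

class RiskCategory(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    use_case: str
    in_scope_of_eu_ai_act: bool
    our_role: Role
    risk_category: RiskCategory
    provider_due_diligence_done: bool = False
    dpia_reference: str | None = None  # link into an existing GDPR process
    review_notes: list[str] = field(default_factory=list)

# Example entry (hypothetical vendor and system):
register = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleVendor Ltd",
        use_case="recruitment shortlisting",
        in_scope_of_eu_ai_act=True,
        our_role=Role.DEPLOYER,
        risk_category=RiskCategory.HIGH,  # Annex III employment use case
    ),
]
```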

Jurisdiction agnosticism?

This article has focused on the EU AI Act, which, although wide in jurisdictional scope, is just an EU piece of legislation. There are other approaches around the world; for instance, see our article comparing the UK vs EU vs US.

For international organisations a decision needs to be made – are you going to take a jurisdiction-by-jurisdiction approach to AI compliance and governance? Or will you attempt to create a jurisdiction-agnostic approach (i.e. a compliance and governance structure that takes the best (or worst, as the case may be) of the legislation around the world and can flex as and when legislation changes, emerges or disintegrates)?

If you would like to know more about the EU AI Act, please join us for our next In-House Data Club event on 30 April 2024 at 4.00pm on Zoom or get in touch with your usual Lewis Silkin contact.
