AI 101: The Regulatory Framework

20 February 2023

This is the fourth article in our “AI 101” series, where the team at Lewis Silkin unravel the legal issues involved in the development and use of AI text and image generation tools. In the previous article of the series, we looked at the infringement risks of using AI-generated works. In this article, we consider the regulatory framework for AI being proposed by the European Commission and how the UK might follow suit.

The Draft European AI Regulation

Back in April 2021, the European Commission published its proposal for the Artificial Intelligence Regulation (the “AI Regulation”), which is currently making its way through the European legislative process. The draft AI Regulation seeks to harmonise rules on artificial intelligence by ensuring AI products are sufficiently safe and robust before they enter the EU market.

The AI Regulation is intended to apply to what the EU terms “AI systems”. The most recent iteration of this concept is defined (in summary) as all systems developed through machine learning approaches, or logic- and knowledge-based approaches. This is a deliberately wide definition, intended to accommodate future developments in AI technology, but it also extends to much of today’s AI software.

The broad scope of this definition is narrowed by the operational impact of the draft legislation, as the AI Regulation takes a ‘risk-based approach’ to governing AI systems: not all AI systems will be subject to obligations under the AI Regulation. The AI Regulation divides AI systems into different tiers of risk, based on the intended use of the system (a short illustrative sketch follows the list):

  • Prohibited Practices: AI systems that use social scoring (i.e. creating a social score for a person that leads to unfavourable treatment), facial recognition, manipulation (by exploiting any vulnerabilities of specific groups of people, e.g. due to their age, to distort their behaviours) and dark pattern AI.
  • High-Risk AI Systems: AI systems with use cases in areas such as education, employment, justice and immigration law, among others.
  • Limited Risk AI Systems: this includes, at the time of writing, chatbots, emotion-recognition and biometric-categorisation systems, and systems generating ‘deep fake’ or synthetic content.
  • Minimal Risk AI Systems: this includes spam filters or AI-enabled video games.
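
To make the tiered structure concrete, here is a minimal illustrative sketch in Python. The tier names mirror the draft Regulation, but the example use cases and the lookup-table mapping are our own simplification and carry no legal weight:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers under the draft AI Regulation (simplified)."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical lookup table pairing example intended uses (drawn from
# the list above) with their tiers -- purely illustrative.
EXAMPLE_USES = {
    "social scoring": RiskTier.PROHIBITED,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(intended_use: str) -> RiskTier | None:
    """Return the tier for a known example use, else None.

    A real classification turns on the Regulation's detailed annexes
    and the system's actual intended purpose, not a lookup table.
    """
    return EXAMPLE_USES.get(intended_use)

print(classify("customer service chatbot"))  # RiskTier.LIMITED
```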

Providers of an AI system will be under an obligation to ensure that the system complies with the requirements corresponding to its risk classification. For example, a provider of a “High-Risk AI System” will become subject to a whole host of requirements relating to risk management; the quality of data sets used to train the AI; performance testing; record keeping; cybersecurity; and effective human oversight of the AI.

Equally, users of “High-Risk AI Systems” will be required to use the AI system in accordance with the provider’s instructions (including with regard to the implementation of human oversight measures); ensure that the input data is relevant for the intended purpose; monitor the operation for incidents or risks; “interrupt” the system in the case of serious incidents (or suspend its use if they consider that use may result in such a risk); and keep logs generated by the AI system. They will also be required to carry out a data protection impact assessment (DPIA) under the GDPR before using a high-risk AI system (although it feels the horse may have bolted on this front given the widespread public use of ChatGPT and other “GPAIS” tools already – see below).

The AI Regulation provides for substantial fines in the event of non-compliance, as well as other remedies; in the most serious cases these can scale up to the higher of EUR 30 million and 6% of total worldwide annual turnover.
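
Because the ceiling is expressed as “the higher of” two figures, the maximum exposure is a simple maximum. A minimal sketch, assuming turnover is known in euros:

```python
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious infringements: the higher
    of EUR 30 million and 6% of total worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 1bn turnover faces a ceiling of EUR 60m;
# one with EUR 100m turnover still faces the EUR 30m floor.
print(f"EUR {max_penalty_eur(1_000_000_000):,.0f}")  # EUR 60,000,000
print(f"EUR {max_penalty_eur(100_000_000):,.0f}")    # EUR 30,000,000
```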

The draft AI Regulation is intended to have broad territorial scope, reaching far beyond the borders of the EU – it is envisaged to apply to the following (restated in a short sketch after the list):

  • providers that place an AI system on the market or put it into service in the EU, regardless of whether the providers are located inside or outside the EU;
  • users of AI located within the EU; and
  • providers and users located outside the EU, if the output produced by the system is used within the EU.
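
Restated as a rough predicate (illustrative only; the draft text contains qualifications and exemptions that three boolean flags cannot capture):

```python
def draft_regulation_applies(
    supplies_or_deploys_in_eu: bool,  # limb 1: placed on the market / put into service in the EU
    user_in_eu: bool,                 # limb 2: user located within the EU
    output_used_in_eu: bool,          # limb 3: system output used within the EU
) -> bool:
    """Rough restatement of the three territorial limbs listed above.

    If any one limb is satisfied, the draft Regulation is engaged,
    regardless of where the provider or user is established.
    """
    return supplies_or_deploys_in_eu or user_in_eu or output_used_in_eu

# A non-EU provider whose system's output is used in the EU is caught:
print(draft_regulation_applies(False, False, True))  # True
```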

Will the draft AI Regulation impact “generative AI” tools (like ChatGPT)?

Particularly relevant to the AI tools we have discussed so far in this blog (i.e. text- and image-generating AI) are the amendments made to the draft AI Regulation in 2022, which introduced the concept of a General Purpose AI System (“GPAIS”): any AI system that can be used for many different purposes and tasks.

Again, this wide definition captures a variety of AI tools, including AI models for image and speech recognition, pattern detection and translation, as well as text- and image-generating AI (like OpenAI’s ChatGPT and DALL-E). It is difficult to predict the potential applications of a GPAIS because, compared with ‘narrow’ AI systems that have specific intended use cases, these systems are versatile and can complete a wide variety of tasks. For example, a text-generating AI tool might be used to draft patient letters for medical professionals, processing sensitive patient data, even if this was not its originally intended use. Whilst a GPAIS might be considered a great technological development by AI enthusiasts, from the EU law-making perspective such unpredictable applications are considered “high-risk”.

The AI Regulation previously designated an AI system as “high-risk” only where its intended purpose was high-risk. Bringing GPAIS within the scope of the “high-risk” classification because of the (however unlikely) chance of a high-risk application means such systems are likely to become subject to tough compliance requirements and the associated cost consequences.

The concern with this amendment is that providers will face impractical requirements, such as having to list all possible applications of a tool and to develop mitigation strategies for each of them. Some commentators have suggested that the full force of the high-risk provisions of the AI Regulation should apply only if a GPAIS is in fact used for high-risk purposes, rather than merely being capable of such use.

What about in the UK?

As mentioned above, the EU’s draft AI Regulation will likely extend beyond the borders of the EU and may apply to providers and users based within the UK. Therefore, Brexit won't allow UK-based developers to avoid its effect completely.

Domestically, by way of its National AI Strategy, the UK government set out an ambitious ten-year plan for the UK to remain a global AI superpower – seeking to harness the enormous economic and societal benefits of AI while also addressing the complex challenges it presents.

Even though the UK has not yet outlined its regulatory framework for AI, the Government’s AI policy paper published last year (“Establishing a pro-innovation approach to regulating AI”) does provide cause for optimism (if you are a developer). It sets out a new pro-innovation approach that is “context-specific, risk-based, coherent, proportionate and adaptable” – all buzzwords that imply a different approach to regulation when compared with the staunch regulatory rhetoric of the EU.

Content moderation

Content moderation has been a hot topic given the recently adopted EU Digital Services Act (“DSA”), which has redesigned the rules for offering online content, services and products to consumers in the EU. The UK’s parallel but contrasting domestic proposal, the Online Safety Bill (“OSB”), touches on many of the same aspects as the DSA.

The DSA has been created with the intention of setting a new standard for greater accountability of online platforms regarding illegal or potentially harmful online content and is due to take effect in 2024.

However, the DSA primarily applies to intermediary services (internet access providers, caching services and hosting services) that store or transmit information provided by, and at the request of, a user. Generative AI tools themselves generate harmful content rather than hosting content created by users, so the DSA provisions on intermediary liability are ill-suited to deal with the resulting harms. This may leave room for the EU to legislate (or adapt existing legislation) to capture the generation of harmful content using AI tools.

Data Protection

The draft EU AI Regulation will also overlap with the protections offered by the General Data Protection Regulation ("GDPR"). Our next blog will delve further into the applicability of the GDPR and privacy regulation to AI tools.

Read the next article in our series 'AI 101: What are the key data privacy risks and rewards for this new tech?'.
