What is the EU AI Act?
Described by the EU as “the first-ever comprehensive legal framework on AI worldwide”, the EU AI Act has now been formally adopted by the European Parliament.
It is expected to be adopted by the Council this month and will then enter into force 20 days after publication in the EU Official Journal, with implementation staggered over the following two to three years.
The AI Act is intended to ensure that AI systems are used safely and ethically, prohibiting certain practices that pose an unacceptable level of risk and setting out clear requirements for other AI systems with potentially harmful outcomes.
We have published a detailed guide to the AI Act here, answering questions such as what an AI system is and when the Act will be implemented, and looking in depth at some of the defined “roles” under the Act. This article focuses on the implications of the AI Act for the use of AI in employment.
Below we explore what the AI Act means for employers.
We’re no longer in the EU – why does it matter?
The intention of the AI Act is to protect people in the EU who are affected by AI systems. This means that the Act not only applies to employers located in the EU that use AI systems, but also applies to those located in non-EU member states where the output of that system is used in the EU.
In the employment context, in-scope examples might include recruitment exercises managed from the UK that use AI tools for sifting and are open to applicants from the EU, or cross-jurisdictional teams managed from the UK using AI-supported performance evaluation software.
However, the impact of the AI Act will go beyond its technical legal reach. Many employers operating internationally will want to ensure that any AI system they use complies with the AI Act, rather than considering each use case individually and having some processes compliant and others not. The general expectation is that this is likely to become the international default, like the GDPR.
What are the levels of regulation based on risk?
The AI Act classifies AI systems (see more about this definition here) into four categories of risk: unacceptable, high, limited, and minimal. The rules that apply to the system will turn on that classification:
- Unacceptable risk: systems that contravene the EU's values and fundamental rights; their use is prohibited.
- High risk: these pose a significant threat to the health, safety, or fundamental rights of individuals or groups and are subject to strict requirements which we explain further below.
- Limited risk: these pose a moderate threat to the rights or interests of individuals or groups and are subject to transparency obligations.
- Minimal risk: these pose a negligible or no threat and are not subject to mandatory requirements.
Although these distinctions might seem clear-cut, the AI Act is certainly not without legal uncertainty around some of its key definitions. Additional guidance from the Commission is expected over the coming months (for example, Codes of Practice are anticipated after nine months), and this will be key to preparing effectively for the Act coming into force.
Where will employment use cases sit within the risk categories?
We are already seeing a proliferation of AI in the employment context, with recruitment, performance management, and monitoring and surveillance being key areas of use (we consider these in further detail below).
The AI Act recognises the risk of these common use cases by categorising the following as automatically high risk:
- AI systems intended to be used for recruitment and selection, with job application analysis and candidate evaluation tools given as examples; and
- AI systems used to make decisions affecting the working relationship, covering a range of key management decisions including promotion, termination and performance evaluation.
Similarly in the UK, the Department for Science, Innovation and Technology has recently published detailed guidance on Responsible AI in Recruitment. This notes the “novel risks” that AI-enabled tools pose when used in the HR and recruitment contexts and sets out a range of recommended assurance measures to operationalise the UK’s AI regulatory principles.
Are there any potential exceptions to this?
If an AI system does not ‘pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making’, it may be considered to fall outside of the high-risk category.
This depends on one of a number of criteria being fulfilled, which include, for example, lower-risk scenarios such as the AI system being intended to perform a narrow procedural task or to improve the result of a previously completed human activity. This exemption will apply by self-assessment; providers will need to document why they believe the use case is out of scope.
Clearly, this language is open to interpretation and the scope of the exemption remains uncertain. In the employment context, would AI-supported shift scheduling or administrative tasks progressing a recruitment process be in scope? This remains unclear but, hopefully, clarity will be provided in guidance prior to the implementation date.
What type of AI systems are we seeing in the workplace?
The scope of the employment-related ‘high-risk’ category in Annex III of the AI Act is likely to be fairly all-pervasive in terms of impactful decisions made in the workplace. It catches “recruitment and selection”, the somewhat vague but broadly phrased “decisions affecting terms of work relationships”, and also more familiar functions such as “promotion”, “termination” and evaluating “performance and behaviour”.
This breadth reflects the wide range of AI systems already in use as workplace tools. These are interesting to consider when assessing the potential reach of the AI Act:
- Recruitment: Each stage of the recruitment process can now be supported by AI systems. Generative AI can draft job descriptions; algorithms can determine how to target adverts; and candidates might interact with a chatbot when submitting their application. In terms of selection, AI-supported screening and shortlisting have the capacity to hugely reduce the time spent sifting CVs, but of course present legal and ethical risks – something we have written about here. As the process progresses, assessments and even interviews may now have less human input.
- Performance management: The collection and objective analysis of employee data means that AI is already widely used as a performance management tool. Algorithms can undertake comparisons and detect opportunities for improvement and efficiency, and generative AI could even draft a performance review. This kind of information has the potential to make management decisions better informed and more objective, but of course the role of regulation – such as the AI Act – is to put safeguards in place so that this is done ethically and fairly.
- Monitoring: The government’s 2023 paper on AI and employment law cites monitoring and surveillance as “[p]erhaps the most high profile use of AI in the workplace”. This acknowledges monitoring technology that has the potential to provide a safer workplace (for example, tracking delivery drivers’ use of seatbelts and driving speed), but also technology that could be overly intrusive and erode trust (for example, monitoring keystrokes and work rate).
What obligations will an employer be subject to in high-risk use cases?
It’s clear that employment use cases are very likely to fall into the more rigorous end of the AI Act’s requirements, but the obligations that flow from that turn on whether the employer is a ‘provider’ or ‘deployer’ of the AI system. We have considered these definitions in detail here.
In the vast majority of cases, employers will be considered a deployer, with the provider being the company that developed or procured the AI tool with a view to placing it on the market.
This is key because the majority of obligations (set out in Chapter III of the Act) fall on providers. As we have written previously, these obligations are extensive, with key examples being:
- Implementing, documenting and maintaining a risk management system.
- Ensuring that the training data meets quality criteria.
- Providing for logs that enable monitoring of the system.
- Designing tools so that they can be overseen by people.
- Registering the tool on an EU-wide database.
In addition, general-purpose AI (GPAI) models are subject to further obligations – primarily around training data – see more about this here.
The requirements for deployers are less onerous (and less costly) but will still require significant planning. These include the following:
- Using the system in accordance with the instructions.
- Assigning someone to oversee the AI system who is trained, competent, and has the support and authority they need.
- Ensuring input data over which they have control is relevant and sufficiently representative.
- Monitoring the system according to the instructions and flagging incidents to the provider.
- If possible, keeping the logs automatically generated by the system.
- Informing workers’ representatives and affected workers that they will be subject to the system before it’s put into use.
- In certain cases, carrying out a fundamental rights impact assessment prior to use (if the deployer is a public body, providing a public service or operating in banking or insurance).
Deployers may also be required to comply with a request from an affected person for an explanation of the role the AI system has played in a decision which has impacted them in a way that is detrimental to their health, safety or fundamental rights. Although the ICO has provided guidance on explaining AI decisions, this is still an emerging concept, and the lack of settled practice as to what this kind of ‘explanation’ looks like may make compliance more difficult.
There are rules under which a deployer might be deemed to be a provider. This would apply if the deployer:
- substantially modifies the AI system;
- modifies the intended purposes of the system; or
- puts their name or trademark on it.
The risk of being drawn into this more onerous group of obligations by how the tool is changed or put into use is therefore something for employers to be alert to.
Are all work-related use cases high risk?
As we’ve noted, the high-risk categories set out above are likely to catch many of the core uses of AI in the workplace. But there are also some work-related use cases which would be deemed lower risk, and some potential uses that are prohibited completely:
- Limited risk use cases: lighter transparency obligations apply to use cases deemed ‘limited risk’. This includes the requirement that users are informed that they are interacting with an AI tool, which could be relevant when an HR team uses an in-house chatbot, for example.
- Prohibited uses: certain categories of use are banned due to the detrimental risk that they pose to individuals. Of potential relevance to the workplace or recruitment are the practices of biometric categorisation and of inferring emotions in the workplace. Biometric categorisation is defined quite narrowly: it means a system that goes beyond verifying that a person is who they say they are and uses personal physical characteristics to determine things like race, political views, religion or sexual orientation.
What about addressing discrimination and bias within AI systems?
The potential for AI systems to ‘bake in’ discrimination and bias is well recognised. Indeed, in its recent paper on Using AI in the Workplace, the OECD highlights this as a key risk when using AI in this context, given the capacity for AI systems to “replicate and systematise human biases”.
Taking the recruitment use case, there is a risk of unfair bias and discrimination from sourcing, through screening and interviewing, and of course ultimately at selection. Decisions made at different stages - from design through to deployment – could therefore result in outcomes that are open to legal challenge.
Although the AI Act draws a distinction between providers and deployers, all stakeholders must take steps to avoid discriminatory outcomes in the use of AI systems; detecting and addressing this risk is a multi-stakeholder issue.
From the provider’s side, steps required by the AI Act that should address this risk include undertaking a conformity assessment process before the systems are supplied and designing them to permit appropriate human oversight. From the deployer’s perspective, information and assurances about how the system has been trained and about the safety and fairness of the data will be a key part of the procurement process.
In terms of putting the AI system into use, under the AI Act, deployers will be obliged to use systems according to their instructions and ensure that input data is representative and relevant. However, the risks of bias and discrimination may mean that many employers will go further than this, putting in place assurance mechanisms such as bias audits and performance testing to mitigate these risks.
Similarly, ensuring that AI-supported decisions can be adequately explained is critical to maintaining trust in AI systems and enabling individuals to effectively contest decisions based on AI profiling. The obligations on providers and deployers under the AI Act might not go far enough in ensuring that this need is met, and clarifying this will be very important in determining the scope and effectiveness of the regulations.
The importance of explaining decisions and building trust in AI is something we have explored further in our recent podcast on AI and employment. Explainability is also a key focus of the TUC’s recently published Artificial Intelligence (Regulation and Employment Rights) Bill, which sets out a blueprint for a potential legal framework to regulate the use of AI in the workplace. As well as automatic unfair dismissal protection and provisions shifting the burden of proof in AI-based discrimination claims, the Bill proposes a right to a personal statement explaining how an AI-supported decision about the individual was made. The extent to which this will form the basis of future Labour policy remains to be seen.
What happens if we don’t comply?
Fines for non-compliance are potentially significant:
- Up to the higher of €35 million or 7% of annual worldwide turnover for violations relating to the banned AI applications
- Up to the higher of €15 million or 3% of annual worldwide turnover for violations of the AI Act’s other obligations
- Up to the higher of €7.5 million or 1% of annual worldwide turnover for the supply of incorrect information
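By way of illustration of how these “higher of” caps operate in practice, the short sketch below (a Python example using a hypothetical €2 billion annual worldwide turnover, a figure assumed purely for demonstration and not taken from the Act) compares the fixed amount with the percentage of turnover for the top tier of fines.

```python
# Illustrative sketch only: how a cap expressed as "the higher of a fixed
# amount or a percentage of annual worldwide turnover" resolves for a
# hypothetical undertaking. The turnover figure is assumed for demonstration.

def fine_cap(fixed_amount_eur: int, turnover_share: float, annual_worldwide_turnover_eur: int) -> int:
    """Return the applicable maximum fine: the higher of the fixed amount
    or the stated share of annual worldwide turnover."""
    return max(fixed_amount_eur, int(turnover_share * annual_worldwide_turnover_eur))

# Top tier (banned AI applications): the higher of EUR 35 million or 7% of turnover.
# For a hypothetical EUR 2 billion turnover, the 7% figure (EUR 140 million) applies.
print(fine_cap(35_000_000, 0.07, 2_000_000_000))  # -> 140000000
```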
The AI Act also establishes a governance framework for its oversight and enforcement, involving national competent authorities, a European Artificial Intelligence Board, and the European Commission.
Of course, failures might also amount to breaches of other regulations including data privacy and equality laws.
What should employers do to prepare?
We have written before about what deployers should be doing now, including:
- Understand what’s in scope. Conducting an audit of tools, both in use and planned, is a crucial first step. This should look at both the use case and whether there is, or might be in the future, an EU connection bringing the tool within the territorial reach of the legislation.
- Identify what needs to change. AI systems may already be used in high-risk areas such as recruitment. Understanding existing practices and processes, and whether anything needs to change to comply with the AI Act, will be a key step.
- Update policies and procedures. The obligations on deployers require a number of proactive steps to be taken – whether that’s the need for record keeping or ensuring that those who oversee the relevant tools are trained. Internal policies need to reflect these requirements. It’s likely that detailed data protection policies are already in place addressing obligations under the GDPR. Considering the overlap between these processes and those under the EU AI Act will be key.
- Train and raise awareness. Do those using AI systems understand how to use the tools properly and what their obligations are under the new rules?
- Conduct due diligence. Will new and existing tech providers – which will of course be subject to the more stringent requirements that sit at that end of the chain – be compliant? Do contracts need amending?
- Inform employee representatives. Under the Act, workers and their representatives must be informed that they are subject to an AI system.
- Understand the AI preferences. We have written before about the importance of explainability and understanding the preferences generated by AI. Knowing the extent to which an individual AI decision can be explained will be an important part of combating the risk of bias when using an AI system.