AI in the workplace: mind the regulatory gap?
27 April 2023
The world of work looks set to be revolutionised by AI technology, with recent research suggesting that up to 300 million full-time jobs globally could become automated. Yet strong voices of caution are sounding about the pace of change. Are legislatures stepping up to fill the regulatory gap? And what are the considerations for employers looking to step in and codify employees’ use of new technology themselves?
The end of March saw an unfortunate clash of approaches on the question of regulation: on the same day as the UK government published its pro-innovation (read, regulation-lite) White Paper on AI, the Future of Life Institute published an open letter calling for the development of the most powerful AI systems to be paused to allow for the dramatic acceleration of robust AI governance systems. Although the veracity of its signatory list was subsequently challenged, the letter attracted tech figures such as Elon Musk and Steve Wozniak.
The UK government’s White Paper is unlikely to satisfy this letter’s plea. The approach proposed is to empower existing regulators through the application of a set of overarching principles. As yet, no new legislation or statutory enforcement duty is proposed. This sits in stark contrast with the direction of travel in the EU, where the introduction of more stringent regulation is proposed.
AI, and specifically generative AI, has shot to the forefront of public consciousness since the launch of ChatGPT last November. Generative AI is now freely available and could have many beneficial uses in the workplace. But in the absence of clear rules from legislatures, employers would be wise not to leave the day-to-day use of this technology in their workplace to chance. We consider below the key issues and what could usefully be addressed in an AI policy.
Management by algorithm – the TUC’s concerns
Calls for stricter oversight of such developing technologies in the UK workplace have also recently been sounded by the TUC. The TUC argues that AI-powered technologies are now making “high risk, life changing” decisions about workers’ lives – such as decisions relating to performance management and termination. It cautions that, unchecked, the technology could lead to greater discrimination at work. The TUC is calling for a right of explainability to ensure that workers can understand how technology is being used to make decisions about them, and for the introduction of a statutory duty for employers to consult before new AI is introduced.
These comments on the oversight of AI in the workforce come two years after the TUC published three major reports into Work and the AI Revolution. In these reports, the TUC warned that the development of this technology could result in a loss of transparency and accountability in decision making, and flagged the dangers of a potential increase in alienation through the loss of human interaction. The recent explosion in generative AI technology, and the scope for further developed machine learning algorithms to be deployed, surely further underlines these risks.
Legal guardrails: existing and potential
A focus of one of these reports was an analysis of the legal implications of AI systems in the post-pandemic workplace, bearing in mind that the use of AI and ADM (automated decision-making) to recruit, monitor, manage, reward and discipline staff had proliferated. The report identified the extent to which existing laws already regulate the use of this technology, together with what the TUC felt were significant deficiencies that need to be filled.
For example, the common law duty of trust and confidence arguably requires employers to be able to explain their decisions, and for those decisions to be rational and made in good faith. In terms of statutory rights, protection against unfair dismissal, data protection rights and the prohibition of discrimination under the Equality Act (amongst other things) all have relevance to how this technology is used at work. However, the 2021 report went on to identify 15 “gaps” that would remain if AI systems in the workplace were regulated by existing laws alone, and made a number of specific recommendations for legislative change to plug these perceived shortcomings. For example, it proposed introducing a requirement that employers provide information on any high-risk use of AI and ADM in section 1 employment particulars. But, as we will go on to consider, the approach taken by the government in the White Paper means that any such plugs are likely to be far from watertight.
The AI White Paper
The government describes the approach it is taking to the regulation of AI in the White Paper published last month as a "proportionate and pro-innovation regulatory framework", focussing on the context in which AI is used, rather than on specific technologies. We have written about the proposed legislative framework in detail here.
In summary, this approach is openly light touch and cautions against a rush to legislation, which might place undue burdens on businesses. What is proposed instead is a principles-based strategy, identifying five principles to “guide and inform” the responsible development and use of AI. These are:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Existing regulators are expected to implement these principles through existing laws and regulations, taking an approach that is suitable for their specific sector. “Teeth” could be added to this expectation through the introduction of a statutory duty on regulators to have regard to the principles, but this will only be brought in if it appears to be needed further down the line.
With no new rules, employers could be left unclear as to how the principles-based approach will affect their proposed use of AI, including generative AI, at work. Indeed, commentators have noted that the White Paper sets out broad principles for AI use but makes minimal reference to general purpose models such as GPT-4 (the most recent model underpinning ChatGPT).
One way in which employers are likely to see this being put into practice is via guidance produced collaboratively by regulators. Collaboration is key, given that the regulation of AI technology cuts across so many different areas of the law. In the employment context, this could see a practical framework being issued by the Information Commissioner and the Equality and Human Rights Commission, for example. This is illustrated in the White Paper in a case study regarding the use of AI systems in recruitment. But it remains to be seen when such guidance will be published (particularly as reports suggest that no additional funding accompanied these proposals) and what approaches different regulators may take.
EU Regulation
The EU proposes to take a tougher line on the regulation of AI, with the Artificial Intelligence Act, currently under discussion in the European Parliament, described as the world’s most restrictive regime on the development of AI. This would take a risk-based approach to regulation, with non-compliance subject to potentially significant fines.
We have written in detail about these proposals here, but in summary the AI Act proposes a categorisation system which determines the level of risk different AI systems could pose to fundamental rights and to health and safety. The restrictions imposed on the technology depend on which of the four risk tiers – unacceptable, high, limited and minimal – the technology is placed in.
The list of high-risk uses includes some recruitment and employment use cases, such as CV-scanning tools or AI-driven performance management tools. So classified, these use cases would be subject to a range of more detailed compliance requirements. These include the need for:
- a comprehensive risk management system
- relevant, representative and accurate data to train and validate the system
- transparency
- human oversight
Of course, the UK is no longer directly bound by new EU regulation such as this, but UK businesses will not be beyond its reach (as is explained in more detail here).
For now, there is clear anxiety over the current level of regulation. Until the AI Act takes effect, perhaps we will see other countries following Italy’s lead and temporarily blocking ChatGPT due to privacy concerns.
Time for a ChatGPT policy?
With the UK government looking to take a light-touch approach to regulation, and more detailed guidance not expected soon, employers should ensure that they understand both how AI is currently being used in their organisation and how they want it to be used.
Focussing on generative AI technology, the fact that this is now readily accessible for individual use means that its use in the workplace could easily slip under the radar. Workplace policies regulating the use of technology such as mobile phones, social media or third-party systems are commonplace; extending these to cover when and how programmes such as ChatGPT should be used at work makes sense.
In that case, what are the key risks to address in a generative AI policy that defines acceptable use?
- What it’s used for: It would be naive to assume that your employees are making no use of these technologies. What is and isn’t acceptable will depend on the nature of the work and workplace, but clear guidelines would be beneficial. Controlling the use of generative AI is not just a consideration for existing employees but for job applicants too. Are applications being fairly considered if candidates have used this technology in putting application forms and covering letters together? Recognising this risk, Monzo has taken the “pre-emptive and precautionary” measure of warning candidates that applications using external support – including ChatGPT – will be disqualified.
- Deskilling: Even if generative AI can undertake a task previously done by a person, is this desirable from a skills perspective, or is there a risk of staff becoming deskilled?
- Confidentiality: As generative AI systems may retain and learn from user inputs, there could be risks in entering confidential information into an open system. In part, this risk arises because the inputted data could be stored in the system’s memory and then be accessible to third parties, or be used by the AI model at a later time. This could be a particular risk if ChatGPT were being used for HR matters, for example. Additionally, inputting employee data into ChatGPT could breach data protection obligations, something explored in more detail here.
- Copyright infringement: Employers should consider the risk that the system might use material that is protected by copyright, which could affect how any AI-generated output can be used. The ownership of content created by AI is a complex issue that we looked at here - seemingly another area in which regulation might need to play catch-up with technology. We also consider ways to minimise infringement risks here.
- Accuracy: Whilst the technology is astonishing, a human filter is still essential. A policy can require output to be checked for accuracy, bias and suitability for the specific context. This is all the more important in the employment context, where the human impact of a decision can be high and where more intangible human factors (such as ethics and empathy) are often so important. It’s also important to remember that generative AI is designed to produce the most plausible output, which is not necessarily the most truthful or accurate.
Whilst the positives of this technology can usefully be embraced, a tailored policy will ensure that this happens on the employer’s terms. At a time when the regulation of the technology more generally has been described as “little more than waving a small red flag at an accelerating train”, this could be critical.