Global HR Lawyers

Algorithms and employment law

25 August 2020

This article explains why claims about algorithms and discrimination are likely to become more common in the years ahead, whilst UK employment law and enforcement mechanisms are ill-equipped to deal with them.

The last few weeks have not been kind to the reputation of algorithms. The controversy over their use in determining school examination results has included claims by the shadow attorney general, Lord Falconer, that the algorithm discriminated unlawfully. These developments are highly relevant to employers.

Recent years have seen a rapid growth in the use of algorithms in employment, particularly in recruitment. Algorithms are now being used in interviews, for example to assess candidates’ facial and vocal expressions. Chatbots are replacing people in conducting interviews and textbots are communicating with candidates by SMS or email. The use of algorithms and AI is spreading beyond initial screening to selection decisions and to other HR decisions such as redundancies, performance dismissals, promotions and reward. Algorithms are also being used for increasingly senior roles.

Do algorithms reduce or embed bias?

Academics, especially in the US, debate extensively whether algorithms increase or diminish bias and unlawful discrimination in employment decisions. Proponents point out that, whilst some bias is inevitable, algorithms reduce the subjective and sub-conscious bias involved in decisions made by humans.

There is evidence that algorithms are capable of making better, quicker and cheaper decisions than humans. On the face of it, algorithms bring objectivity and consistency to decision-making. However, the Ofqual debacle highlights the potential for automated decisions to go badly wrong. Just because algorithms are capable of making better decisions does not mean that they always will.

More than 30 years ago, St George’s Hospital Medical School in London developed an algorithm designed to make admission decisions more consistent and efficient (sound familiar?). It was found to discriminate against non-European applicants. Interestingly, the school nonetheless had a higher proportion of non-European students than most other London medical schools, suggesting that the traditional recruitment methods used by those other schools discriminated even more.

Amazon attracted a lot of attention when, in 2018, it abandoned an AI-developed recruitment tool that reportedly favoured male candidates. The tool had been developed over the previous four years and trained on ten years of hiring data, and reportedly taught itself to favour terms used by male candidates. It has been said that, even though the algorithm was not given a candidate’s gender, it identified explicitly gender-specific words such as “women’s” (as in “women’s sports”) and, when these were excluded, moved on to implicitly gender-based words such as “executed” and “captured”, which are apparently used much more commonly by men than by women.

How do algorithms work?

An algorithm is merely computer code used to navigate, and often develop, a complex decision tree very quickly. Algorithms used in recruitment can be “off the shelf”, which is appropriate when recruiting for jobs where the characteristics of successful candidates are clear and do not vary from employer to employer. Alternatively, a recruitment algorithm can be created specifically for a client, based on a dataset taken from that client and customised to take account of the client’s own experiences and priorities.

The algorithms used by employers in making employment decisions are usually developed by third-party specialist technology businesses. To do this, the developer goes through several stages.

Firstly, it collects a dataset from which to develop its model. This is known as the “training set”. With bespoke recruitment algorithms, this dataset is usually based on previous applicants for a particular post.

Secondly, it must agree the outcome that the algorithm is intended to achieve. It could look at those recruited to define successful candidates or could use a sub-set of those recruited who are perceived to have been successful hires. Alternatively, the attributes of a successful candidate could be defined.

Thirdly, it will use the computer’s power to identify the best predictors of that outcome from the information contained in the dataset.

Finally, there will be testing and verification of the algorithm. This will use a different dataset from the training dataset to verify that the algorithm is generating good results. 
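
To make these stages concrete, here is a minimal sketch in Python using scikit-learn and purely synthetic data. The features, the “successful hire” label and the model choice are all illustrative assumptions, not any particular vendor’s method.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Stage 1: collect a "training set". Here, synthetic records standing in
    # for past applicants (e.g. experience, test score, interview rating).
    X = rng.normal(size=(1000, 3))

    # Stage 2: agree the outcome. Here, a binary "successful hire" label,
    # derived from the features plus noise purely for illustration.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # Hold back a quarter of the records for the final check.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Stage 3: let the computer find the best predictors of the outcome.
    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # Stage 4: test and verify on data the model has never seen.
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))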

Algorithms vary from basic decision trees (eg the NHS 111 “pathways” or the IR35 CEST tool) to complex, opaque programmes which can incorporate AI “machine learning”, where the algorithm teaches itself, adjusting its own decision rules (in practice, its parameters rather than its code) in order to better achieve the objectives set for it.
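
The difference is easy to see in code. A basic decision tree is explicit branching logic that anyone can read and audit, whereas a machine-learning model’s rules are learned from data and cannot be read off in the same way. Below is a hypothetical sketch of the former; the questions and outcomes are invented, not the real NHS 111 pathways.

    # A hand-written decision tree: every rule is visible and fixed.
    # The questions and outcomes are invented for illustration only.
    def triage(chest_pain: bool, breathless: bool) -> str:
        if chest_pain:
            return "call 999"
        if breathless:
            return "speak to a clinician"
        return "self-care advice"

    print(triage(chest_pain=False, breathless=True))  # speak to a clinician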

Bias and, indeed, unlawful discrimination can occur by reason of the objectives set for the algorithm; the data used to train it; the correlations it identifies (which may be spurious rather than genuinely causal); or the data fed into it when it is run.

For example, something clearly went awry in one reported case, where a CV-screening tool identified being called Jared and having played lacrosse at high school as the two strongest predictors of high performance in the job.
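
One practical safeguard is to audit the algorithm’s output for disparities between groups before relying on it. A minimal sketch of such a check follows; the data are invented, and the 0.8 (“four-fifths”) threshold is a US regulatory heuristic rather than a UK legal test.

    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, was_selected) pairs."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in outcomes:
            total[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / total[g] for g in total}

    # Invented screening results: group A selected at 40%, group B at 25%.
    results = ([("A", True)] * 40 + [("A", False)] * 60
               + [("B", True)] * 25 + [("B", False)] * 75)

    rates = selection_rates(results)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, "impact ratio:", round(ratio, 2))  # 0.62, below 0.8: a warning sign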

The use of algorithms to make employment-related decisions also raises difficult data privacy issues. The ICO has recently published fresh guidance on AI and data protection, highlighting the importance of processing personal data fairly, transparently and lawfully and, hence, in a non-discriminatory manner. The guidance illustrates how discrimination can occur if the data used to train a machine-learning algorithm is imbalanced or reflects past discrimination.
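
The kind of imbalance the ICO describes can often be spotted before any model is trained, simply by counting how each group is represented in the training data. A trivial illustration, with invented labels and numbers:

    from collections import Counter

    # Invented training set: 90% of past-hire records come from one group,
    # so the model will mostly learn what "success" looks like for that group.
    training_records = ["group_a"] * 900 + ["group_b"] * 100
    counts = Counter(training_records)
    shares = {g: n / sum(counts.values()) for g, n in counts.items()}
    print(shares)  # {'group_a': 0.9, 'group_b': 0.1}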

What is the likelihood of legal claims over the use of algorithms?

Legal cases in the UK, or even the US, challenging algorithm-based employment decisions have been very rare to date. However, that promises to change in the years ahead and UK employment laws and the UK legal system are ill-prepared to deal with this.

Cases are likely to become more common for a number of reasons, and not just on account of the increased use of, and attention paid to, algorithm-based decisions:

  • Algorithms, to date, have been used most often in recruitment decisions and these are less commonly challenged than decisions relating to pay, promotions or dismissals. As their use expands beyond recruitment, litigation will be more common.
  • The true basis on which a decision has been made can normally be determined, albeit not always easily, where it is data-based. Unpicking the true motivations behind human-based decisions is often not possible.
  • At least to date, there is evidence that people are more likely to mistrust a computer-based recruitment decision than a human-made one, a phenomenon known as “algorithm aversion”. People are more likely to challenge decisions which they do not understand. That said, human decisions are not as transparent as they might initially seem: whatever explanation is given, there is plenty of evidence that employment decisions made by humans are influenced by sub-conscious factors and rationalised after the event.

Algorithm-based decisions are particularly vulnerable to discrimination claims and UK discrimination and employment laws were not designed to meet this challenge and are ill-equipped to do so.

Disadvantaged candidates or employees might argue that an algorithm-based decision unlawfully directly or indirectly discriminated against them. The obligation to make reasonable adjustments under disability laws poses further challenges. The employer may need to prove that it did not discriminate or that the indirectly discriminatory impact of the algorithm is objectively justified. In many cases, the employer will not understand how an algorithm works (or even have access to the source code).

How will an employer satisfy these tests? Many suppliers of algorithms reassure clients that their code has been stress-tested to ensure that it does not discriminate. An employment tribunal is unlikely to accept a supplier’s word for this. Would independent verification be enough? US verification is unlikely to suffice in the UK, as UK and European discrimination laws are very different from US ones.

Would a tribunal order disclosure of testing and verification data, or even the code itself? Algorithm suppliers would no doubt regard these as important trade secrets to be withheld at all costs. Will experts be needed to interpret this information? Could the algorithm supplier be sued for causing or inducing a breach of equality laws, or for helping the employer to breach them? The supplier will often be based in the US, introducing practical and legal complications.

Like it or not, the use of AI and algorithms in employment will inevitably increase and the conflict with existing laws and enforcement mechanisms will only become more evident.

James co-chairs the Employment Lawyers Association’s working group on AI and employment.

 
