
Discrimination and bias in AI recruitment: a case study

31 October 2023

Barely a day goes by without the media reporting the potential benefits of, or threats from, AI. AI is being used more and more in workplace decisions: to make remuneration and promotion decisions, allocate work, award bonuses, manage performance and make dismissal decisions. One common concern is the propensity of AI systems to return biased or discriminatory outcomes. By working through a case study on the use of AI in recruitment, we examine the risks of unlawful discrimination and how it might be challenged in the employment tribunal.

Our case study begins with candidates submitting job applications which are to be reviewed and “profiled” by an AI system (the automated processing of personal data to analyse or evaluate people, including to predict their performance at work). We follow this through to the disposal of the resulting employment tribunal claims from the unsuccessful candidates, and examine the risks of unlawful discrimination in using these systems. What emerges are the practical and procedural challenges for claimants and respondents (defendants) alike: litigation procedures are ill-equipped for an automated world.

Bias and discrimination

Before looking at the facts, we consider the concepts of bias and discrimination in automated decision-making.

The Discussion Paper published for the AI Safety Summit, organised by the UK government and held at Bletchley Park on 1 and 2 November 2023, highlighted the risks of bias and discrimination and commented:

Frontier AI models can contain and magnify biases ingrained in the data they are trained on, reflecting societal and historical inequalities and stereotypes. These biases, often subtle and deeply embedded, compromise the equitable and ethical use of AI systems, making it difficult for AI to improve fairness in decisions. Removing attributes like race and gender from training data has generally proven ineffective as a remedy for algorithmic bias, as models can infer these attributes from other information such as names, locations, and other seemingly unrelated factors.

What is bias and what is discrimination?

Much attention has been paid to the potential for bias and discrimination in automated decision-making. Bias and discrimination are not synonymous but often overlap. Not all bias amounts to discrimination and not all discrimination reflects bias.

A solution can be biased if it leads to inaccurate or unfair outcomes. A solution can be discriminatory if it disadvantages certain groups. A solution is unlawfully discriminatory if it disadvantages protected groups in breach of equality law.

How can bias and discrimination taint automated decision-making?

Bias can creep into an AI selection tool in a number of ways. For example, there can be: historical bias; sampling bias; measurement bias; evaluation bias; aggregation bias; and deployment bias.

To give a recent example, the shortlist of six titles for the 2023 Booker Prize included three titles by authors with the first name “Paul”. An AI programme asked to predict works to be shortlisted for this prize is likely to identify being called “Paul” as a key factor. Of course, being called Paul will not have contributed to their shortlisting, and identifying this as a determining factor amounts to bias. The AI tool would be picking up a correlating factor which had played no actual part in the shortlisting; its prediction would therefore be biased because it would be inaccurate and unfair. Here the bias is also potentially discriminatory, as Paul is generally a male name, and possibly discriminatory on grounds of ethnicity and religion too.

An algorithm can be tainted by historical bias or discrimination. AI algorithms are trained using past data. A recruitment algorithm takes data from past candidates, and there will always be a risk of under-representation of particular groups in that training data. Bias and discrimination are even more likely to arise from the definition of success which the algorithm seeks to replicate, based on who was successfully recruited in the past. There is an obvious risk of past discrimination being embedded in any algorithm.

This process presents the risk of random correlations being identified by the AI algorithm, and there are several reported examples of this happening. One example from several years ago is an algorithm which identified being called Jared as one of the strongest correlators of success in a job. Correlation is not always causation.
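
To illustrate the point, here is a short, purely hypothetical Python sketch (the data and feature names are invented, and this is not the GetBestTalent system). It shows how a causally irrelevant feature, such as a particular first name, can correlate strongly with past “successful” outcomes and so be treated as a strong predictor by a tool trained naively on historical shortlisting data.

    # Hypothetical historical records: (first_name, years_experience, shortlisted)
    history = [
        ("Jared", 6, True), ("Jared", 4, True), ("Jared", 7, True),
        ("Aisha", 6, False), ("Mary", 7, False), ("Chen", 5, True),
        ("Paul", 8, True), ("Paul", 5, True), ("Fatima", 8, False),
        ("Jared", 3, True), ("Grace", 6, False), ("Tom", 4, False),
    ]

    def shortlist_rate(records, predicate):
        """Proportion shortlisted among the records matching the predicate."""
        matching = [r for r in records if predicate(r)]
        return sum(1 for _, _, shortlisted in matching if shortlisted) / len(matching)

    overall = shortlist_rate(history, lambda r: True)
    named_jared = shortlist_rate(history, lambda r: r[0] == "Jared")
    experienced = shortlist_rate(history, lambda r: r[1] >= 6)

    print(f"Overall shortlist rate:           {overall:.0%}")      # ~58%
    print(f"Rate for candidates named Jared:  {named_jared:.0%}")  # 100% - a spurious correlate
    print(f"Rate for 6+ years' experience:    {experienced:.0%}")  # ~43%
    # A tool trained on this history would treat "named Jared" as a stronger
    # predictor of success than experience, even though the name played no
    # causal role in the past decisions.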

An outcome may potentially be discriminatory but not be unfair or inaccurate, and so not biased. If, say, a recruitment application concluded that a factor in selecting the best candidates was having at least ten years’ relevant experience, this would disadvantage younger candidates, and a younger candidate might be excluded even if, in all other respects, they would be a strong candidate. This would be unlawful unless it could be justified on the facts. It would not, however, necessarily be a biased outcome.

There has been much academic debate on the effectiveness of AI in eliminating the sub-conscious bias of human subjectivity. Supporters argue that any conscious or sub-conscious bias is much reduced by AI. Critics argue that AI merely embeds and exaggerates historic bias.

The law

Currently there are no AI-specific laws in the UK regulating the use of AI in employment. The key relevant provisions at present are equality laws and data privacy laws. We have written about these in detail here. This case study focuses on discrimination claims under the Equality Act 2010.

The case study

Acquiring shortlisting tool

Money Bank gets many hundreds of applicants every year for its annual recruitment of 20 financial analysts to be based in its offices in the City of London. Shortlisting takes time and costly HR resources. Further, Money Bank is not satisfied with the suitability of the candidates shortlisted each year.

Money Bank, therefore, acquires an AI shortlisting tool, GetBestTalent, from a leading provider, CaliforniaAI, to incorporate into its shortlisting process.

CaliforniaAI is based in Silicon Valley in California and has no business presence in the UK. Money Bank is attracted by CaliforniaAI’s promises that GetBestTalent will identify better candidates, more quickly and more cheaply than relying on human decision-makers. Money Bank is also reassured that CaliforniaAI’s publicity material states that GetBestTalent has been audited to ensure that it is free from bias and discrimination.

Money Bank was sued recently by an unsuccessful job applicant claiming that they were unlawfully discriminated against when rejected for a post. This case was settled but proved costly and time-consuming to defend. Money Bank wants, at all costs, to avoid further claims.

Data protection impact assessment

Money Bank’s Data Protection Officer (DPO) conducts a data protection impact assessment (DPIA) into the proposed use by Money Bank of GetBestTalent given the presence of various high-risk indicators, including the innovative nature of the technology and profiling. Proposed mitigations following this assessment include bolstering transparency around the use of automation by explaining clearly that it will form part of the shortlisting process; ensuring that an HR professional will review all successful applications; and confirming with CaliforniaAI that the system is audited for bias and discrimination. On that basis, the DPO considers that the shortlisting decisions are not “solely automated” and is satisfied that Money Bank’s proposed use of the system complies with UK data protection laws (this case study does not consider the extent to which the DPO is correct in considering Money Bank’s GDPR obligations to have been satisfied).

Money Bank enters into a data processing agreement with CaliforniaAI that complies with UK GDPR requirements. Money Bank also notes that CaliforniaAI is self-certified as compliant with the UK extension to the EU-US Data Privacy Framework.

AI and recruitment

GetBestTalent is an off-the-shelf product and CaliforniaAI’s best seller. It has been developed for markets globally and has been in use for many years, though it is updated by the developers periodically. The use of algorithms, and of AI in HR systems specifically, is not new but has been growing rapidly in recent years. AI is being used at different stages of the recruitment process, but one of the most common applications by HR is to shortlist vast numbers of candidates down to a manageable number.

AI shortlisting tools can be bespoke (developed specifically for the client); off-the-shelf; or based on an off-the-shelf system but adapted for the client. The GetBestTalent algorithm is based on “supervised learning”, where the input data and desired output are known and the machine learning method identifies the best way of achieving the output from the inputted data. The application is “static” in that it only changes when CaliforniaAI’s developers make changes to the algorithm. Other systems, known as dynamic systems, can be more sophisticated and continuously learn how to make the algorithm more effective at achieving its purpose.
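
As a rough illustration of that supervised learning set-up (hypothetical features and data only, not CaliforniaAI’s actual code), a minimal Python sketch using scikit-learn might look like this: the inputs are candidate features, the desired outputs are past shortlisting decisions, and the model is trained once and then remains static until its developers retrain it.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: [years_experience, relevant_degree, career_breaks]
    X_train = [
        [8, 1, 0], [5, 1, 1], [3, 0, 0], [10, 1, 0],
        [2, 0, 2], [6, 1, 0], [4, 0, 1], [7, 1, 2],
    ]
    # Labels: 1 = shortlisted in past recruitment rounds, 0 = rejected
    y_train = [1, 1, 0, 1, 0, 1, 0, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)   # training happens once; the model is then "static"

    # Scoring a new applicant: the output probability is used to rank candidates
    new_applicant = [[6, 1, 1]]
    print(model.predict_proba(new_applicant)[0][1])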

Sifting applicants

This year 800 candidates apply for the 20 financial analyst positions at Money Bank. Candidates are all advised that Money Bank will be using automated profiling as part of the recruitment process.

Alice, Frank and James are unsuccessful; all considered themselves strong candidates with the qualifications and experience advertised for the role. Alice is female, Frank is black, and James is 61 years old. Each is perplexed at their rejection and concerned that it was unlawfully discriminatory. All three are suspicious of automated decision-making and have read or heard about concerns with these systems.

Discrimination claims in the employment tribunal

Alice, Frank and James each contact Money Bank challenging their rejection. Money Bank asks one of its HR professionals, Nadine, to look at each of the applications. There is little obvious to differentiate these applications from those of the shortlisted candidates - and Nadine cannot see that they are obviously stronger - so she confirms the results of the shortlisting process.

The Bank responds to Alice, Frank and James saying that it has reviewed the rejections and that it uses a reputable AI system which, it has been reassured, does not discriminate unlawfully, but that it has no further information as the criteria used are developed by the algorithm and are not visible to Money Bank. The data processing agreement between Money Bank and CaliforniaAI requires CaliforniaAI (as processor) to assist Money Bank (as controller) to fulfil its obligation to respond to rights requests, but does not specifically require CaliforniaAI to provide detailed information on the logic behind the profiling or its application to individual candidates.

Alice, Frank and James all start employment tribunal proceedings in the UK claiming, respectively, sex, race and age discrimination in breach of the UK’s Equality Act. They:

  • claim direct and indirect discrimination against Money Bank; and
  • sue CaliforniaAI for inducing and/or causing Money Bank to discriminate against them.

Despite CaliforniaAI having no business presence in the UK and despite the process being more complicated, the claimants can bring proceedings against an overseas party in the UK employment tribunal.

Unless the claimants are aware of each other’s cases, in reality, these cases are likely to proceed independently. However, for the purposes of this case study, all three approach the same lawyer who successfully applies for the cases to be joined and heard together.

Disclosure

Alice, Frank and James recognise that, despite their suspicions, they will need more evidence to back up their claims. They, therefore, contact Money Bank and CaliforniaAI asking for disclosure of documents with the data and information relevant to their rejections.

They also write to Money Bank and CaliforniaAI with data subject access requests (DSARs) making similar requests for data. These requests are made under their rights under UK data protection law, over which the employment tribunal has no jurisdiction, so they are independent of their employment tribunal claims.

Seeking disparate impact data

In order to seek to establish discrimination, each candidate requests data:

  • Alice asks Money Bank for documents showing the data on both the total proportion of candidates, and the proportion of successful candidates, who were women. This is needed to establish her claim of indirect sex discrimination.
  • Frank asks for the same in respect of the Black, Black British, Caribbean or African ethnic group.
  • James asks for the data for both over 60-year-olds and over 50-year-olds.

They also ask CaliforniaAI for the same data from all exercises in which GetBestTalent has been used globally.

Would a tribunal order disclosure of this nature? In considering applications for the provision of information or the disclosure of documents or data, an employment tribunal must consider the proportionality of the request. It is more likely to grant applications which require extensive disclosure, or significant time or cost to provide the requested information, where the sums claimed are significant.

In this case, Money Bank has the information sought about the sex, ethnicity and age of both all candidates and of those who were successful which it records as part of its equality monitoring procedures. Providing it, therefore, would not be burdensome. In other cases, the employer may not have this data. CaliforniaAI has the means to extract the data sought, at least from many of the uses of GetBestTalent. However, it would be a time-consuming and costly exercise to do this.

Both respondents refuse to provide any of the data sought. Money Bank argues that this is merely a fishing exercise as none of the claimants has any evidence to support a discrimination claim. It also argues that the system has been audited for discrimination and that the claims are therefore vexatious. CaliforniaAI also regards the information sought as a trade secret (of both itself and its clients) and relies on the time and cost involved in gathering it.

In response the claimants apply to the employment tribunal for an order requiring the respondents to provide the data and information requested.

The tribunal orders Money Bank to provide the claimants with the requested documents.
It declines, however, to make the order requested against CaliforniaAI.

In theory, the tribunal has the power to make the requested order against CaliforniaAI. Although it cannot make an order against an overseas person which is not a party to the litigation, in this case CaliforniaAI is a party. However, the tribunal regards the request as manifestly disproportionate and gives it short shrift.

The disparate impact data does not amount to the individuals’ personal data so is not relevant to their DSARs.

Seeking equality data

The claimants also request from Money Bank documents showing details of: a) the gender, ethnic and age breakdown (as the case may be) of the Bank’s workforce in the UK; b) the equality training of the managers connected with the decision to use the GetBestTalent solution; and c) any discrimination complaints made against Money Bank in the last five years and their outcomes.

Money Bank refuses all the requests, arguing that the claim relates to the discriminatory impact of CaliforniaAI’s recruitment solution and that all these other issues are therefore irrelevant. It could provide the information relatively easily but is mindful that it has faced many discrimination claims in recent years, and has settled or lost a number of them, so does not want to highlight this.

The tribunal refuses to grant the requests for the equality data as it considers this data unnecessary for the claimants to prove their case. The claimants will, however, still be able to point to Money Bank’s failure to provide the information when seeking to draw inferences. The tribunal also refuses the request for details of past complaints (though details of tribunal claims which proceeded to a hearing are available from a public register).

The tribunal does, however, order Money Bank to provide details of the equality training given to the relevant managers, as it is persuaded that this is relevant to the issues to be decided.

This information does not amount to the individuals’ personal data so is not relevant to their DSARs.

Disclosing the algorithm and audit

The claimants also ask CaliforniaAI to provide them with:

  • a copy of the algorithm used in the shortlisting programme;
  • the logic and factors used by the algorithm in achieving its output (i.e. explainability information relating to their individual decisions); and
  • the results of the discrimination audit.

In this case, CaliforniaAI has the information needed to explain the decisions, but this is not auto-generated (as it can be with some systems) or provided to Money Bank. Money Bank’s contract with CaliforniaAI does not explicitly require CaliforniaAI to provide this information.

CaliforniaAI refuses to provide any of the requested information on the basis that it amounts to trade secrets and that, in any event, the code would be meaningless to the claimants. The claimants counter that expert witnesses should be able to consider the code, just as medical experts do where complex medical evidence is relevant to tribunal proceedings.

The tribunal judge is not persuaded by the trade secret argument. If disclosed, the code would be in the bundle of documents to which observers from the general public would have access (though they could not copy or remove it). The tribunal has wide powers to regulate its own procedure and, in theory, could take steps in exceptional cases to limit public access to trade secrets.

However, the tribunal decides not to order disclosure of the code on the grounds of proportionality. It spends more time deliberating over the “explainability” information and the details of the auditing of the system.

Ultimately, it decides not to require disclosure of either. It considers that, so far as the direct discrimination claims are concerned, more than a bare assertion by the claimants that they have been directly discriminated against is required to make the requested order proportionate. If the sums likely to be awarded had been greater, it might well have reached a different decision. So far as Alice’s indirect claim is concerned, the explainability information and the audit are more likely to be relevant to Money Bank’s defence than to Alice’s claim, so the tribunal leaves it to Money Bank to decide whether or not to disclose them.

Arguably, UK GDPR requires Money Bank to provide the explainability information in response to the data subject access request, and requires Money Bank’s data processing agreement with CaliforniaAI to oblige the American company to provide it. However, both respond to the DSARs refusing to provide this information (this case study does not consider the extent to which they might be justified in doing so under UK GDPR).

What did the data show?

The data provided by Money Bank shows that of the 800 job applicants: 320 were women (40%) and 480 were men (60%); 80 described their ethnicity as Black, Black British, Caribbean or African (10%); and James was the only applicant over the age of 50.

Of the 320 women, only four were successful (20% of those shortlisted) whereas 16 men were shortlisted (80% of those shortlisted). Of the 80 applicants from Frank’s ethnic group, three were shortlisted (15% of successful applicants). The data therefore shows that the system had a disparate impact on women but not on Black, Black British, Caribbean or African candidates. There was no data to help James with an indirect discrimination claim.

Group                                                    Candidates (of 800)   % of candidates   Successful (of 20)   % of successful
Female candidates                                        320                   40%               4                    20%
Black, Black British, Caribbean or African candidates    80                    10%               3                    15%
Candidates over 50 years old                             1                     <1%               0                    -
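
The arithmetic behind this table can be made explicit by comparing the shortlisting (selection) rate of each group with that of the comparator group, as in the short Python sketch below. The 0.8 (“four-fifths”) threshold used here is a common US benchmark included only as a rough illustration; it is not the UK legal test for disparate impact.

    def selection_rate(shortlisted, applicants):
        return shortlisted / applicants

    # Figures from the case study
    female = selection_rate(4, 320)          # 1.25%
    male = selection_rate(16, 480)           # 3.33%
    franks_group = selection_rate(3, 80)     # 3.75%
    other_groups = selection_rate(17, 720)   # 2.36%

    print(f"Female vs male rate ratio:      {female / male:.2f}")                 # ~0.38
    print(f"Frank's group vs other groups:  {franks_group / other_groups:.2f}")   # ~1.59

    for label, ratio in [("sex", female / male), ("ethnicity", franks_group / other_groups)]:
        flag = "potential disparate impact" if ratio < 0.8 else "no disparate impact indicated"
        print(f"{label}: {flag}")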

After consideration of the data, Frank and James abandon their indirect discrimination claims.

Establishing discrimination: who needs to prove what?

Indirect discrimination

Alice needs to establish:

  • a provision, criterion or practice (PCP);
  • that the PCP has a disparate impact on women;
  • that she is disadvantaged by the application of the PCP; and
  • that the PCP is not objectively justifiable.

1. PCP

Alice relies on the AI application used by Money Bank as her PCP.

If the decision to reject her had been “explainable” then, as is the case with most human decisions, the PCP could also be the actual factor which disadvantaged her.

Putting this into practice, let’s say it could have been established from the explainability information that the algorithm had identified career breaks as a negative factor. Alice has had two such breaks and might, in such circumstances, allege that this was unlawfully indirectly discriminatory. A tribunal may well accept that such a factor disadvantages women without needing data to substantiate this. Money Bank would then need to show either that this had not disadvantaged Alice or that such a factor was objectively justifiable.

Neither defence would be easy in this case. It is possible that the respondents could run a counterfactual to show that Alice had not been disadvantaged by her career breaks. This would mean applying the tool to an alternative set of facts - here, running it against Alice’s application but without the career breaks, to show that she would not have been shortlisted in any event.
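
A simple sketch of that counterfactual exercise is set out below, assuming (purely hypothetically) that the shortlisting model’s scoring function and weights were available. The same candidate is scored twice, once as submitted and once with the career-break feature removed, to test whether that feature changed the outcome. The weights and threshold are invented for illustration and are not the real GetBestTalent parameters.

    def score_candidate(features, weights, threshold):
        """Hypothetical linear scoring: shortlist if the weighted sum clears the threshold."""
        total = sum(weights[name] * value for name, value in features.items())
        return total, total >= threshold

    weights = {"years_experience": 0.5, "relevant_degree": 1.0, "career_breaks": -1.8}

    alice_actual = {"years_experience": 7, "relevant_degree": 1, "career_breaks": 2}
    alice_counterfactual = {**alice_actual, "career_breaks": 0}

    actual_score, actual_ok = score_candidate(alice_actual, weights, threshold=4.0)
    cf_score, cf_ok = score_candidate(alice_counterfactual, weights, threshold=4.0)

    print(f"As submitted:      score {actual_score:.1f}, shortlisted: {actual_ok}")
    print(f"No career breaks:  score {cf_score:.1f}, shortlisted: {cf_ok}")
    # If the counterfactual application is also rejected, the respondents can argue
    # the career breaks did not disadvantage Alice; if the outcome flips, they cannot.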

In our case, however, Money Bank does not have an explanation for Alice’s failure to be shortlisted.

2. Disparate impact

Alice relies on the data to show a disparate impact.

The respondents could seek to argue that there is no disparate impact, relying on the auditing undertaken by CaliforniaAI across larger aggregate datasets, and that Money Bank’s own data reflects a random outcome. A tribunal will not accept this argument at face value. Further, the legal tests in the UK and the US are not the same, so any auditing carried out in the US will be of reduced value.

In such a case, the respondents could seek to introduce data from CaliforniaAI’s verification testing or from the use of the platform by other employers. This may then require expert evidence on the conclusions to be drawn from the audit data.

In our case, neither the audit nor evidence of the impact of GetBestTalent on larger numbers is before the tribunal. Indeed, here, CaliforniaAI refused to disclose it.

3. Disadvantages protected group and claimant

Alice does not have to prove why a PCP disadvantages a particular group. The Supreme Court in Essop v Home Office (2017) considered a case where black candidates had a lower pass rate than other candidates under a skills assessment test. The claimants were unable to explain why the test disadvantaged that group, but this was not a bar to establishing indirect discrimination.

The PCP (the GetBestTalent solution) clearly disadvantages Alice personally as her score was the reason she was not shortlisted.

4. Justification

Alice satisfies the first three steps in proving her case.

The burden will then pass to Money Bank to show that the use of this particular application was justified – that it was a proportionate means of achieving a legitimate aim.

What aims could Money Bank rely on? Money Bank argues that its legitimate aim is decision-making which is quicker, cheaper, results in better candidates and discriminates less than with human-made decisions.

Saving money is tricky: it cannot be a justification to discriminate simply to save money, but cost can be relevant alongside other aims. Nonetheless, Money Bank is likely to establish a legitimate aim for the introduction of automation in its recruitment process based on the need to make better and quicker decisions and avoid sub-conscious bias. The greater challenge will be showing that the use of this particular solution was a proportionate means of achieving those aims.

In terms of the objective of recruiting better candidates, Money Bank would have to do more than merely assert that the use of GetBestTalent meant higher quality short-listed candidates. It might, for example, point to historic problems with the quality of successful candidates. This would help justify automation, but Money Bank would still have to justify the use of this particular system.

Money Bank seeks to justify its use of GetBestTalent, and to satisfy the proportionality requirement, by relying on its due diligence. However, it did no more than ask the question of CaliforniaAI, which reassured Money Bank that the system had been audited.

It also points to the human oversight under which an HR professional reviews all candidates whom the system proposes to shortlist, in order to verify the decision. The tribunal is unimpressed with this oversight, as it did not extend to all the unsuccessful applications.

Pulling this together, would a tribunal accept that the use of this platform satisfied the objective justification test? This is unlikely. In all likelihood, Alice would succeed, and the matter would proceed to a remedies hearing to determine her compensation.

Direct discrimination

Alice is also pursuing a direct sex discrimination claim, and Frank and James, not deterred by the failure to get their indirect discrimination claims off the ground, have also continued their direct race and age discrimination claims respectively. The advantage for Alice in pursuing a direct discrimination claim is that this type of discrimination (unlike indirect discrimination) cannot be justified, so the fact of direct discrimination is enough to win her case.

Each applicant has to show that they were treated less favourably (i.e., not shortlisted) because of their protected characteristic (sex, race, age respectively). To do this, the reason for the decision not to shortlist must be established.

They have no evidence of the reason, but this does not necessarily defeat their claims. Under UK equality law, the burden of proof can, in some circumstances, transfer so that it is for the employer to prove that it did not discriminate. To prove this, the employer would then have to establish the reason and show that it was not the protected characteristic of the claimant in question. In this case, this would be very difficult for Money Bank as it does not know why the candidates were not shortlisted.

What is required for the burden of proof to transfer? The burden of proof will transfer if there are facts from which the court could decide that discrimination occurred. This is generally paraphrased as the drawing of inferences of discrimination from the facts. If inferences can be drawn, the employer will need to show that there was not discrimination.

Prospects of success

Looking at each claimant in turn:

  • Frank: he will struggle to establish any inferences as there is no disparate impact from which to infer less favourable treatment. The absence of any disparate impact does not mean that Frank could not have been directly discriminated against, but without more his claim is unlikely to get anywhere. He does not have an explanation for the basis of the decision or the ethnic breakdown of Money Bank’s current workforce, and has only limited information about Money Bank’s approach to equality. He cannot prove facts which, in the absence of an explanation, show prima facie discrimination, so his claim fails.
  • James: his claim is unlikely to be rejected as quickly as Frank’s, as the data neither helps prove nor disprove it. James could try to rely on the absence of older workers in the workforce, any lack of training or monitoring, and past claims (if he had the information), as well as the absence of an explanation for his rejection, but in reality this claim becomes pretty hopeless.
  • Alice: she may be on stronger ground. She can point to the disparate impact data as a ground for inferences, but this will not normally be enough on its own to shift the burden of proof. Alice can also point to the opaque decision-making. Money Bank could rebut this if the decision were sufficiently “explainable” that the reason for Alice’s rejection could be identified. However, it cannot do so here. The dangers of inexplicable decisions are obvious.

Would the disparate impact and the opaqueness be enough to draw inferences? Probably not - particularly if Alice does not have any of the equality data or information about past discrimination claims referred to above, and the equality training information does not show a total disregard for equality. She could try to obtain information about these in cross-examination of witnesses, and could point to Money Bank’s failure to provide the equality data as grounds for drawing inferences and reversing the burden of proof. However, after carefully balancing the arguments, the tribunal decides in our case that Alice cannot prove facts which, in the absence of an explanation, show prima facie discrimination. This means that her direct discrimination claim fails.

If inferences had been drawn and Money Bank had been required to demonstrate that the protected characteristic in question was not the reason for its decision, Money Bank would have argued that it anonymises the candidate data and ensures that the age, sex and ethnicity of candidates are omitted, and that, therefore, the protected characteristic could not have informed the decision. However, studies have shown how difficult it is to suppress this information, and the tribunal would give this argument short shrift. If inferences had been drawn, Alice would in all likelihood have succeeded with her direct discrimination claim as well as her indirect discrimination claim.
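
A brief sketch of why simply removing the protected characteristic is so difficult to make effective is set out below (hypothetical data only). Even after the sex field is dropped from the inputs, remaining fields can act as proxies from which it can be reconstructed, so the characteristic can still influence the outcome indirectly.

    from sklearn.linear_model import LogisticRegression

    # Features left after "anonymisation": [career_breaks, years_experience]
    X = [[2, 6], [1, 7], [3, 5], [0, 8], [0, 9], [1, 10], [0, 7], [2, 4]]
    # Sex recorded in the historical files (1 = female, 0 = male), removed from X
    sex = [1, 1, 1, 0, 0, 0, 0, 1]

    proxy_model = LogisticRegression().fit(X, sex)
    print("Accuracy reconstructing 'sex' from the remaining fields:",
          proxy_model.score(X, sex))
    # A high score indicates that the protected attribute leaks through proxy features.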

Causing or inducing discrimination

If Money Bank is liable then CaliforniaAI is likely to also be liable for causing/inducing this unlawful discrimination by supplying the system on which Money Bank based its decision. CaliforniaAI cannot be liable if Money Bank is not liable.

Conclusion

The case of Alice, Frank and James highlights the real challenges for claimants in winning discrimination claims where AI solutions have been used in employment decision-making. It also illustrates the risks and pitfalls for employers using such solutions, and how existing data protection and equality laws are ill-suited to regulating automated employment decisions.

Looking forward, the UK and other countries are debating the appropriate level of regulation of AI in areas such as employment. It is to be hoped that any regulation recognises and embraces the inevitability of increased automation but, at the same time, ensures that individuals’ rights are protected effectively.

