We argue that in the employment sphere, explainability at an individualised level is key. Our comparison of the different categories of AI safeguards, and a case study considering how local explainability works in practice (which can be seen here), demonstrates that this kind of safeguard has the potential both to engender trust in AI and to exploit its ability to identify and reduce bias and discrimination.
The evolution of AI regulation
The advance of AI into employment decision-making has been tempered by concerns about AI safety and, in particular, the potential for bias and discrimination. AI has the potential to contribute to quicker, more efficient and even better-quality decisions. However, it also has the potential to embed bias and discrimination in decision-making.
Governments and policymakers across the world are looking at various safeguards to improve AI safety. Current and proposed rules favour a variety of measures to protect individuals. Striking the balance between, on the one hand, encouraging innovation and being a competitive home for business and, on the other, engendering trust in automated decision-making and providing individuals with appropriate levels of protection, is not easy.
What about existing regulation? As it stands, in many jurisdictions, existing safeguards arise from data protection laws, such as GDPR in the UK and the EU. These laws were introduced in a largely pre-AI era and, in some respects, are ill-suited to a world in which AI threatens to dominate so many activities. It is therefore a priority for legislatures across the world to put in place regulation that is tailor-made for AI.
Current and proposed laws around the world
To date, in so far as employment is concerned, AI-specific laws already in force are very limited, with New York City being a notable example. However, in many jurisdictions, proposed laws are being actively debated.
UK
As mentioned above, UK GDPR, as well as equality and employment protection laws, already regulates the use of AI in employment and we have written about this in detail here. But data laws, which oblige lawful, fair and transparent processing of personal data, were not designed with AI in mind. They set up a complicated regime whose applicability to AI employment decisions is not clear.
In looking at AI-specific laws, the UK represents a microcosm of the global debate about how best to regulate AI. Rishi Sunak’s government advocates a “pro-innovation” (in other words, lightly regulated) approach. Its white paper proposals published in August set out principles to guide regulators in enforcing existing laws but avoid going as far as AI-specific legislation.
Going a step further, however, and arguably reflecting how the debate around AI regulation has evolved even in the last four months, Lord Holmes of Richmond has now introduced the Artificial Intelligence (Regulation) Bill as a private member’s bill in the House of Lords. This bill proposes the creation of an AI Authority which would have powers to ensure that regulators (such as the Information Commissioner’s Office, which oversees data protection law in the UK) have regard to the principles set out in the white paper. These principles include: 1) safety, security and robustness; 2) appropriate transparency and explainability; 3) fairness; 4) accountability and governance; and 5) contestability and redress.
Keir Starmer’s opposition Labour Party, which current polling suggests will be in power before the end of the year, supports a more tightly regulated approach. Announcements to date suggest that Labour will focus on safeguards through independent auditing and monitoring of AI processes.
The TUC, the UK federation of trade unions, many of whose member unions are affiliated to the Labour Party, published its own 2021 manifesto, ‘Dignity at Work and the AI Revolution’, on safeguarding workers as AI usage becomes more common in the workplace. It remains to be seen to what extent the TUC proposals will inform Labour’s approach in the event that it does form the next government.
EU
In contrast, the EU is proposing an AI Act aimed specifically at regulating AI. This has been hailed by the EU Council as a “flagship” legislative initiative. On 8 December this passed a key stage in the legislative process after a deal was reached between the EU Parliament and Council.
The preamble to this proposal sets out that the Act is intended to complement EU GDPR rules, which are mirrored in UK GDPR. In the current draft, AI systems used in employment will be regarded as high-risk and will be subject to detailed safeguards regarding monitoring and risk management.
North America
Regulation in the US looks likely to be more decentralised, with states and cities considering their own laws as well as the possibility of federal legislation. As mentioned, New York City stands apart from other jurisdictions, whether in the US or elsewhere, having already introduced an AI-specific law safeguarding individuals in employment. While New York City leads the way, various states have published their own proposals for AI-specific laws.
Perhaps the most interesting example is the state of California, home to many AI system developers. Proposed Bill AB 331 would impose detailed obligations on developers and deployers (such as employers) of automated decision tools. The proposed bill survived two committee votes before being killed off for 2023 by the State’s Assembly Appropriations Committee. It remains to be seen whether it will be revived in 2024, but it does give an idea of the likely shape of legislation in California and probably elsewhere.
At a federal level in the US, Joe Biden’s government has published a Blueprint for an AI Bill of Rights and an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence. And across the border, Canada has a proposed Artificial Intelligence and Data Act (AIDA) which is not expected to be in force until 2025.
Safeguarding individuals’ rights
For developers and users, it would of course be helpful to have one global standard when it comes to AI regulation. But this is not the world we live in. That said, the existing and proposed safeguards intended to protect individuals where AI systems are introduced into the workplace can at least be categorised.
Categories of safeguards include:
- impact assessments, auditing and monitoring: essentially, this safeguard obliges developers or users of AI systems to check that the system is “safe”;
- human oversight and intervention: this obliges users to interpose a human between the preferences revealed by the AI system and the final decision, which, in the case of employment decisions, is made by the employer;
- contestability: in other words, a right for any individual affected by an AI-influenced decision to challenge it effectively; and
- transparency and explainability: all of the above safeguards rely on this to be effective. As I set out below, the duty here can vary significantly but requires some level of openness about the use of AI.
These protections will, of course, supplement existing laws and, where detailed data protection laws already apply, considerable safeguards will already exist.
Yet it is important to recognise that within each category is a spectrum of interventions. As explained below, exactly how the safeguard is required to operate will result in significantly different levels of protection for individuals. I explore below how different jurisdictions are proposing to implement these safeguards, and whether assumptions about their relative effectiveness are correct.
Impact assessments, auditing and monitoring
Impact assessments
A valuable way of mitigating the safety risks of AI systems is through impact assessments, monitoring and auditing. Impact assessments will be familiar to many as a feature of data protection laws and, as a result, will already apply in many cases.
An impact assessment would normally:
- assess necessity, proportionality and compliance measures;
- identify and assess risks to individuals including risks of bias and discrimination; and
- identify any additional measures to mitigate those risks before the introduction of any AI.
This could be required of the system developer, as well as of any user of the AI system.
The TUC manifesto, mentioned above, proposes that with the introduction of any AI system, “Equality Impact Audits in the workplace should be made mandatory as part of the Data Protection Impact Assessment process”. Similarly, the US federal Blueprint refers to “proactive equity assessments” and “reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information.”
Amongst the obligations in California’s proposed AB 331 bill is a duty on both deployer and developer to conduct an initial impact assessment and then further assessments annually. This assessment would have to be provided to the State’s Civil Rights Department within 60 days.
Auditing and monitoring
Moving on from an initial impact assessment, many of the current and proposed laws rely heavily on auditing and monitoring to police the safety of an AI system in the workplace, including minimising bias and discrimination. However, what is proposed in terms of monitoring requirements varies significantly:
- New York City Law 144: this is one of the few AI-specific laws already in force. It includes detailed rules for an annual independent audit of the AI process (the Automated Employment Decision Tool (AEDT)) for discrimination, the results of which must be published.
- Proposed California bill AB 331: the proposed bill would require an analysis of the potential adverse impacts on various protected characteristics.
- EU AI Act: the draft text sets out in article 9 detailed requirements relating to the monitoring and testing of systems.
- Canadian AIDA: this proposes monitoring obligations amongst its regulatory requirements.
- UK proposals: though the principles set out in the UK government’s white paper and Lord Holmes’s private member’s bill (above) do not expressly include monitoring and auditing, the explanatory notes in the white paper make it clear that “risks should be continually identified, assessed and managed” to satisfy the safety, security and robustness principle and that “conducting impact assessments or allowing audits where appropriate” will be important in satisfying the accountability and governance principle.
How will this work in practice?
In terms of where responsibility for the process will sit, as with any impact assessment, the duty to monitor/audit can be the developer’s responsibility or the user’s or both. AI systems used in informing employment decisions can be bespoke (i.e. they are developed for a specific client) or can be purchased off the shelf. This variety means that the appropriate monitoring and auditing process will vary from system to system. However, in most of the laws/proposed laws, this auditing/monitoring requirement also includes a duty to publish the results.
What data would be used for testing purposes? Before a system is put into use, it may be possible to use real testing and verification data to test for bias and discrimination. For example, with a system that profiles job candidates, the testing and verification data used for the audit could be based on, say, past candidates. In a bespoke system, this is likely to be past candidates of the user; with an off-the-shelf system, the data may come from other organisations. In some cases, real data will not be available and “synthetic” or fictional data will be used instead.
After a system has been in use, the user is likely to have available real data against which to monitor outcomes and identify disparate impact against designated groups.
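By way of illustration only, the Python sketch below shows what this kind of outcome monitoring might involve: computing the selection rate for each group from a set of past decisions and comparing each group’s rate with that of the most favoured group. The column names and figures are hypothetical, and nothing in the sketch reflects thresholds prescribed by any of the regimes discussed here.

```python
# A minimal sketch of outcome monitoring for disparate impact.
# The column names ("gender", "shortlisted") and the data are invented;
# real monitoring would use the employer's actual decision records.
import pandas as pd

decisions = pd.DataFrame({
    "candidate_id": range(1, 11),
    "gender":      ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "shortlisted": [0,   1,   1,   1,   0,   1,   0,   1,   1,   1],
})

# Selection rate per group: the proportion of each group that was shortlisted.
rates = decisions.groupby("gender")["shortlisted"].mean()

# Impact ratio: each group's selection rate relative to the most favoured group.
impact_ratios = rates / rates.max()

print(rates)          # F: 0.40, M: 1.00
print(impact_ratios)  # F: 0.40, M: 1.00 -> a large disparity worth investigating
```

Whether a disparity of this kind is unlawful, and what should be done about it, are legal questions the monitoring itself cannot answer; the sketch only shows how the raw outcomes might be surfaced.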
How effective will this safeguard be?
No doubt, impact assessments and the monitoring and auditing of AI systems will be an important element in mitigating risks. Making outcomes publicly available will also help to engender trust and overcome algorithm aversion. There will, however, always be limitations to these safeguards alone:
- Has the algorithm evolved? Some AI algorithms are continually evolving as the algorithm learns how better to achieve the desired results. Whilst this may ensure the technology is effective in achieving the desired outcome, it also means that any historic auditing will be less valuable if the algorithm has since changed.
- Is the audit independent? Auditing AI systems, not least for bias and discrimination, is becoming a big business which is only likely to grow as AI usage becomes more commonplace. But if the company which developed the system also audits it, how independent will that audit be? Would an employment court accept the results of such an audit?
- Is the system jurisdiction-appropriate? A further reservation is that equality laws in the US (where most systems are developed and audited) are not the same as those in, say, the UK or the EU. In the US, blunt application of a four-fifths disparate impact rule of thumb, while ignoring groups which constitute less than 2% of the total, would not necessarily satisfy UK/EU equality laws (see the sketch after this list).
- Why is there disparate impact? Further, an audit may reveal disparate impact on, say, the grounds of gender. However, it will not necessarily identify the cause of the disparate impact. Not all disparate impact on protected groups is unlawful. Without knowing the cause of the disparate impact, it is not possible to assess whether the relevant factor might nonetheless be objectively justified. It is also very difficult to say whether a duty to make reasonable adjustments/accommodations to aid an individual with a disability might be satisfied if the factors which were taken into account are not known.
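To make the four-fifths point concrete, the sketch below applies that rule of thumb, with groups under 2% of applicants excluded, to invented shortlisting figures. It illustrates the mechanics only: the 80% threshold and 2% cut-off are US conventions, and UK/EU indirect discrimination analysis does not turn on these fixed cut-offs.

```python
# A sketch of the US four-fifths rule of thumb, with small groups excluded.
# The figures are invented; the 80% threshold and 2% cut-off are US
# rule-of-thumb conventions, not UK/EU legal tests.
applicants  = {"Group A": 500, "Group B": 480, "Group C": 8}
shortlisted = {"Group A": 100, "Group B": 78,  "Group C": 0}

total = sum(applicants.values())

# Ignore groups making up less than 2% of all applicants.
considered = {g: n for g, n in applicants.items() if n / total >= 0.02}

rates = {g: shortlisted[g] / applicants[g] for g in considered}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "potential adverse impact" if ratio < 0.8 else "passes four-fifths test"
    print(f"{group}: rate {rate:.2%}, ratio {ratio:.2f} -> {flag}")

# Group B narrowly "passes" at a ratio of about 0.81, and Group C (under 2% of
# applicants, with no one selected) is excluded from the test altogether, even
# though its outcome might still matter under UK/EU law.
```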
Human oversight, intervention or review
Human oversight or intervention is often promoted as a key safeguard. It is not, however, a settled legal or practical concept and can mean different things in different jurisdictions. A closer examination of what it really entails raises the question of how effective it is likely to be.
Under the draft EU AI Act, for example, the requirement is for human oversight of the implementation and use of the system (art 14). This is effectively “human monitoring” of the system. It is very different – much further removed from individualised decisions – from a right to have a human review any automated decision. The draft Canadian AIDA goes one step further than the EU by requiring meaningful human oversight, including “a level of interpretability appropriate to the context”.
Moving away from this kind of ‘bird’s eye’ oversight, the duty could extend far beyond oversight of the operation of the system to potentially having a human review any decision made using an automated process. For example, the TUC Manifesto includes a right to human review and a right to human contact or “in-person engagement where important high-risk decisions are made about people at work”.
Interaction with automated decision-making
UK and EU GDPR already provide specific rules on automated decision-making and there is no indication of any AI law going further than this. Under GDPR, solely automated decision-making, including profiling, which has a legal or similarly significant effect, is generally prohibited. This will include employment decisions.
Whether a decision is ‘solely automated’ comes down to the level of human involvement – if someone considers the result of an automated decision before applying it to an individual, then it will not be ‘solely automated’. If, however, the human involvement is a token gesture simply rubber-stamping the automated decision, then it could well be.
However, human monitoring of the process (as opposed to individual decisions) will not, in itself, be enough for a decision not to be solely automated.
Solely automated employment decisions can be permitted under GDPR where certain conditions are satisfied including the right for the data subject to require human intervention in the decision. We have written about this in more detail here.
A red herring?
As a safeguard, in my view, human oversight of AI-influenced decisions should be seen as a “red herring”. At an operational level, human oversight of the AI system is merely a feature of effective monitoring and auditing (above). Yet at an individualised level, human oversight of each decision is likely either to be ineffective or to render the efficiencies and benefits from the AI system redundant.
Human oversight, in practice, will often amount merely to rubber-stamping the AI recommendation or, at the very least, being heavily influenced by it. If a human reviews an unsuccessful application, how can they possibly evaluate the fairness of a decision which has been influenced by AI system scores without understanding the factors which have been taken into account in respect of each individual in arriving at those scores?
In shortlisting exercises where the efficiency savings from AI can be considerable, the number of candidates to be considered can run into the thousands. No employer is going to invest in an AI system to profile these and then have a human review individually every one of the thousands of rejected candidates. Otherwise, why bother with the AI? For these rejected candidates, automated decisions to reject are being made without any human intervention.
And in terms of liability, an AI developer will not be able to absolve itself by arguing that the profiling undertaken by AI merely amounted to a recommendation to a human or a factor in a human’s decision. Where the human decision has been influenced by the AI-generated “predictions” then both the employer and the software business are likely to be liable where, even in part, unlawfully discriminatory factors played a part in the predictions from which the human made a decision. We have explored this complex potential for liability here.
Contestability
Effective safeguards should mean that anyone wronged by an AI-influenced decision has the effective means to challenge the decision. Opaque “black box” decisions, where an individual does not know and cannot establish why a decision was made, not only undermine trust but also pose major obstacles to proper contestability.
The UK white paper, for example, states that: “Regulators will be expected to clarify existing routes to contestability and redress and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate.”
California’s proposed AB 331 bill would confer a private right of action on individuals who consider their rights infringed, a provision which, it was reported, was met with opposition from business and tech groups.
Effective contestability will depend on explainability (see below). If an individual does not understand the factors which have led to a particular determination, they cannot effectively challenge the outcome.
Transparency and explainability
Many of the new and proposed laws regulating AI require some transparency. The term is used in different cases to mean different things, but generally covers openness about how the AI system is used. Explainability is a related but distinct concept, and is best seen as a subset of transparency.
Transparency
Looking again at the global approaches, we can see differences in how the concept is being applied by legislatures across the world as well as in existing legislation:
- UK: The AI White Paper states, “Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used).” “Appropriate transparency and explainability” is one of the core principles set out in the UK white paper.
- Canada: AIDA defines transparency as, “providing the public with appropriate information about how high-impact AI systems are being used. The information provided should be sufficient to allow the public to understand the capabilities, limitations, and potential impacts of the systems.” Transparency is amongst the principles set out for high-impact AI systems.
- EU: The draft EU AI Act provides that, “Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use” (art 52(1)).
- US: The New York City law requires an element of transparency in that users must disclose their use of AI. California’s AB 331 bill is more detailed, listing both the information that would have to be made available as part of the impact assessment and the information about the automated decision tool that would have to be provided to any person subject to a “consequential” decision (which would include an employment decision) based on an automated decision tool.
- GDPR: Existing UK and EU data privacy laws include an obligation to ensure personal data is processed lawfully, fairly and transparently. In this context, transparency is explained by the UK’s regulator, the ICO, as “being clear, open and honest with people from the start about who you are, and how and why you use their personal data.” Under GDPR, in addition to satisfying the transparency obligations, meaningful information must be provided about the logic involved in, and the consequences arising from, solely automated decisions (see above), but this does not go as far as requiring local explainability.
Explainability
Turning to the final category of safeguards, the UK AI White Paper defines explainability as “the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system”.
However, explainability comes in different forms:
- Global explainability: This relates to the factors a system uses to achieve its predictions across the entire model.
- Cohort explainability: This sets out how predictions are made for a defined subset of the individuals covered.
- Local explainability: This relates to the application of the model to an individual – how the system made its prediction about this person (a simple illustration follows this list).
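The distinction can be illustrated with a deliberately simple, hypothetical scoring model: a logistic regression over invented candidate features. Real screening tools are far more complex, and local explanations for non-linear models typically require dedicated techniques such as SHAP or LIME; the sketch below is a rough illustration only. The first output is a global view (the weight the model attaches to each feature across all candidates); the second is a local view (how each feature pushed one individual candidate’s score up or down relative to an average candidate).

```python
# A toy contrast between global and local explainability.
# The model, features and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["years_experience", "test_score", "gap_in_cv_months"]

# Invented training data: 200 past candidates and whether they were shortlisted.
n = 200
X = np.column_stack([
    rng.uniform(0, 20, n),    # years_experience
    rng.normal(60, 10, n),    # test_score
    rng.integers(0, 24, n),   # gap_in_cv_months
])
y = (0.2 * X[:, 0] + 0.1 * X[:, 1] - 0.1 * X[:, 2]
     + rng.normal(scale=1.0, size=n) > 8).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Global explainability: the weight each feature carries across the whole model.
print({f: round(float(w), 2) for f, w in zip(features, model.coef_[0])})

# Local explainability: how each feature contributed to one candidate's score,
# relative to an average candidate (features are standardised to mean zero).
candidate = scaler.transform([[2.0, 65.0, 6.0]])[0]   # one individual's (invented) data
contributions = model.coef_[0] * candidate
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.2f}")
```

The global view tells an auditor what the system rewards in general; only the local view tells this candidate which of their own attributes counted for or against them.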
Perhaps ironically, the pro-innovation UK white paper, which promotes a light touch to regulation, seems to go furthest amongst proposals to regulate in highlighting the importance of explainability. One of the core principles it advances is “appropriate transparency and explainability”. It states, “Parties directly affected by the use of an AI system should also be able to access sufficient information about AI systems to be able to enforce their rights”. This connects explainability and contestability (see above).
Whilst the UK white paper does talk about “system” explainability as opposed to local explainability, it is difficult to see how the objectives it sets out can be reached without local explainability. This highest level of explainability is therefore implicit in its approach.
Few other laws or proposed laws currently require this. The EU AI Act does not, relying instead on the role of “market surveillance authorities” to address risks and issues. The draft EU legislation (Article 64) goes as far as entitling these authorities to access training, testing and verification data and even, where necessary, the source code itself.
California’s proposed AB 331 bill includes comprehensive transparency obligations, but these do not extend to requiring individual explainability.
The silver bullet?
Monitoring and auditing, contestability and transparency are all important elements of a framework of safeguards and, as we have seen, consistently feature as part of evolving AI regulatory systems around the world. However, not all safeguards are made equal: it is my view that explainability and, in particular, local explainability is key to addressing concerns about the increased use of AI in the workplace.
Again, even the explainability safeguard operates on a spectrum: advising a job candidate that they have not been shortlisted, that the decision was informed by profiling through an AI system and even setting out the logic underpinning the system’s predictions will not, on its own, generate trust in the decision-making or enable an effective challenge. Local explainability goes a step further.
Explainability needs to function in such a way that it addresses the mistrust which would otherwise result from “black box” decisions. This would enable those who consider themselves unlawfully discriminated against to understand why an outcome was reached and potentially challenge this outcome. It would also enable an employer or the software developer to defend any discrimination claim without risking inferences being drawn from the inability to explain and without the consequential need to demonstrate any outcome was not discriminatory.
The benefits of local explainability, however, go further and would serve to harness the benefits of AI while overcoming some of the key dangers of its use:
- Better decisions: It should result in better decisions and overcome the major concerns about AI and discrimination. Crucially, those decisions should be far less tainted by discrimination than those made by humans.
- Clear reasons: It also opens up the possibility of being able to analyse each of the material factors which the algorithm has identified as correlating to the desired outcome. By looking at the disparate impact of each material factor against protected characteristics (or even other characteristics), adjustments can be made or consideration given to justifying any potentially discriminatory factors. How this would work in practice is illustrated in our worked example, which can be seen here. This example shows the information that would be generated by a locally explainable system in respect of an individual candidate, demonstrating how the basis on which a decision has been made can be explained accurately and precisely.
- Discoverable explanations: Explainability can also distinguish automated decisions from human decisions. With automated decisions, there is a true and potentially discoverable explanation. With human decisions, reliance is placed on the records of the decision-maker or their witness evidence. There is obviously more scope for a human decision to be disguised in terms less vulnerable to challenge, whatever the individual’s true reasoning. AI applications can “auto-generate” an explanation as to how decisions have been made (a simple sketch follows this list). Any requirement of local explainability in employment decisions would need to be coupled with an obligation to make this information available in an understandable format on request.
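As a final, purely illustrative sketch, the snippet below shows how per-factor contributions produced by a locally explainable system might be turned into a plain-language explanation on request. The contribution figures, factor names and candidate reference are invented placeholders.

```python
# A sketch of auto-generating a plain-language explanation from local
# feature contributions. The contributions are invented placeholders; in
# practice they would come from the locally explainable system itself.
contributions = {
    "years of relevant experience": +1.2,
    "technical test score":         +0.4,
    "months since last employment": -0.9,
}

def explain(candidate_ref: str, contributions: dict, outcome: str) -> str:
    # Rank factors by the size of their influence, largest first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision for {candidate_ref}: {outcome}.", "Main factors:"]
    for factor, weight in ranked:
        direction = "counted in favour" if weight > 0 else "counted against"
        lines.append(f"  - {factor} {direction} (weight {weight:+.1f})")
    return "\n".join(lines)

print(explain("Candidate 1042", contributions, "not shortlisted"))
```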
Conclusion
As AI systems become more commonplace in employment decision-making and as AI regulations are developed, it is to be hoped that the undoubted potential for better, more efficient and less discriminatory decision-making can be harnessed with controls that mitigate the obvious dangers.
In order to confront the potential for bias and discrimination and to engender trust, these controls, it is argued, need to include effective local explainability.