This article was co-authored by James Davies and Airlie Hilliard (Senior Researcher, Holistic AI).
How is AI being used?
To date, by far the most common use of AI in employment-related decisions is in recruitment. This is unsurprising given the potential cost and time savings that come from using algorithms to whittle down thousands or even tens of thousands of candidates to a handful that receive job offers.
The recruitment lifecycle
AI is used throughout the recruitment lifecycle:
- Advertising: Algorithms can be used from the very start of the recruitment funnel to target job advertisements. LinkedIn, for example, uses an algorithm to match applicants to roles based on skills and other job requirements drawn from the information available on their profiles. This matching is driven by keywords and can be used to determine to whom to show adverts and to filter out unqualified applicants. AI tools have also been developed to help combat gender-coded language in job advertisements.
- Screening: Algorithms can then be used for the initial screening of CVs to shortlist candidates. Textkernel, for example, uses algorithmic reasoning to interpret information from CVs and job descriptions, filtering, ranking and matching candidates based on how similar their previous experience is to the requirements in the job description (a simplified sketch of this kind of similarity-based ranking follows this list).
- Assessment: At the next stage, AI can be used to score assessments completed by candidates to measure job-relevant traits, skills, and competencies. This can be through chatbot interviews, video interviews and even gaming scenarios. Bryq, for example, uses a chatbot-driven assessment that measures candidate personality, cognitive ability, and other competencies and evaluates the match between a candidate’s profile and the competencies required for the job. These competencies are identified using a profile predictor tool that analyses the job description or the psychometrics of incumbents. Game-based assessments, on the other hand, immerse candidates in game scenarios that are scored algorithmically based on performance as well as gameplay data such as clicks and timings. For example, Harver offers gamified behavioural assessments that use AI to infer candidates’ cognitive and emotional attributes from thousands of data points measuring non-verbal behaviour.
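To make the screening stage concrete, the sketch below shows one minimal way similarity-based CV ranking can work, using TF-IDF vectors and cosine similarity. It is a hypothetical illustration with invented job and CV text, not a description of Textkernel’s or any other vendor’s actual engine.

```python
# Hypothetical similarity-based CV screening: rank candidates by how
# similar their CV text is to the job description. Invented data; real
# vendor engines are far more sophisticated than plain TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with SQL, Python and dashboarding experience"
cvs = {
    "candidate_a": "Five years as a data analyst using SQL and Python",
    "candidate_b": "Retail manager experienced in rostering and sales",
    "candidate_c": "BI developer building dashboards in Python and SQL",
}

# One shared vocabulary over the job description and all CVs; each CV is
# then scored by cosine similarity to the job description vector.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *cvs.values()])
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Shortlist: highest similarity first.
for name, score in sorted(zip(cvs, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Real systems layer structured CV parsing, skills taxonomies and learned ranking models on top, but the underlying idea of scoring candidates against a job profile and cutting below a threshold is the same.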
In terms of who is using this technology, research indicates that AI is currently used mainly by very large organisations. The platforms often (though not always, e.g., Greece-based Bryq) come from US technology companies, and US-owned corporations are leading the way in adoption. Use is also on the rise. In April 2024, SHRM, the US human resources body, surveyed its members and one in four reported using AI to support HR-related activities including hiring. Similarly, a 2024 study by Harvard Business Review Analytic Services found that 52% of respondents said they had some automation in the recruitment process. And a recent study showed that nearly all Fortune 500 companies in the US use Applicant Tracking Systems, which employ AI and machine learning to make recruitment processes more efficient.
Beyond recruitment
Although, to date, AI has been used most in recruitment, this is beginning to change. Algorithms are being used for internal mobility to identify the employees best placed for opportunities that arise. An example is Fuel50, one of the few non-US players in this market, based in Auckland, New Zealand. Fuel50 creates a map of skills across the organisation and automatically identifies how good a match individuals are to open roles based on their skills, rather than their current role. Skills are inferred from a number of data sources across the organisation.
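As a rough illustration of the matching step (the skill sets and roles here are invented, and this is not Fuel50’s actual method), a candidate’s fit to a role can be scored as the share of the role’s required skills they already hold:

```python
# Hypothetical skills-based internal mobility matching. Skill sets and
# roles are invented; fit is the share of required skills an employee holds.
employee_skills = {
    "amira": {"python", "sql", "stakeholder management"},
    "ben": {"copywriting", "seo", "web analytics"},
}
role_requirements = {"product analyst": {"sql", "python", "web analytics"}}

for role, required in role_requirements.items():
    # Rank employees by coverage of the role's skill profile.
    ranked = sorted(
        employee_skills,
        key=lambda name: len(employee_skills[name] & required),
        reverse=True,
    )
    for name in ranked:
        coverage = len(employee_skills[name] & required) / len(required)
        print(f"{role} <- {name}: {coverage:.0%} of required skills")
```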
Algorithms are also used to allocate work, most notably in the platform economy by companies such as Uber, and in setting pay and assessing performance, including for promotions, redundancy selection or bonuses. Beqom’s talent intelligence, for example, applies machine learning to setting pay, with algorithms used to optimise budgets, objectives, and pay equity, while Zavvy AI uses AI to provide data-driven performance management.
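To illustrate the kind of analysis that sits behind algorithmic pay-setting, and without claiming to reproduce any vendor’s method, the sketch below runs a simple pay equity regression on invented data: salary is regressed on legitimate factors plus gender, and a material, statistically significant gender coefficient would flag a potential equity issue.

```python
# Hypothetical pay equity check via ordinary least squares. All figures
# are invented; commercial tools use richer models and far more data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "salary": [52000, 61000, 58000, 70000, 49000, 66000, 54000, 73000],
    "tenure": [2, 5, 4, 8, 1, 7, 3, 9],
    "level":  [1, 2, 2, 3, 1, 3, 1, 3],
    "gender": ["f", "m", "f", "m", "f", "m", "m", "f"],
})

# Regress salary on tenure and level, plus a gender dummy. A near-zero,
# non-significant gender coefficient suggests pay differences are
# explained by the legitimate factors rather than gender.
model = smf.ols("salary ~ tenure + level + C(gender)", data=df).fit()
print(model.params["C(gender)[T.m]"], model.pvalues["C(gender)[T.m]"])
```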
Extending the use of AI from the hiring process to career and compensation decisions is likely to result in an increase in legal claims. Pressure to move from laws ill-equipped for automated decision-making towards AI-specific rules, balancing the undoubted potential benefits of AI in supporting employment decisions against adequate protection for the subjects of those decisions, is sure to rise up the political agenda. This is already occurring in the US in particular: New York City Local Law 144, which requires independent bias audits of automated employment decision tools used for hiring or promotion, has been enforced since 5 July 2023 and is already influencing similar legislative proposals in other states.
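At its core, a Local Law 144-style bias audit compares selection rates across demographic groups. The sketch below computes impact ratios on invented figures; the 0.8 threshold reflects the familiar four-fifths rule of thumb, though the law itself mandates disclosure of the ratios rather than a pass/fail cut-off.

```python
# Illustrative impact-ratio calculation of the kind reported in a bias
# audit: each group's selection rate divided by the highest group's rate.
# All counts are invented.
selected = {"group_a": 50, "group_b": 30}
applied = {"group_a": 200, "group_b": 180}

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    # The four-fifths rule of thumb flags ratios below 0.8 for review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```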
Benefits and challenges of AI
Benefits
The potential benefits of AI-driven workplace decisions are obvious. There is some evidence that AI not only results in faster and more efficient decisions, saving employers considerable sums, but that those decisions can also be better and less discriminatory. Indeed, human biases are notoriously difficult to overcome, even with targeted training, whereas bias in algorithms can be audited and mitigated. Moreover, unlike human decisions, AI profiling can be “locally explainable”: the factors and weightings applied in scoring candidates are potentially available, identifying the true basis on which decisions have been made and where unfair discrimination might have crept into the output.
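To show what local explainability can mean in the simplest case, the sketch below decomposes one candidate’s score under a hypothetical linear scoring model into per-feature contributions; the weights and feature values are invented, and real models are rarely this transparent without dedicated explanation tooling.

```python
# Hypothetical linear candidate-scoring model: a candidate's score is a
# weighted sum of features, so each feature's contribution (weight times
# value) can be read off directly. Weights and values are invented.
weights = {"years_experience": 0.4, "skills_match": 0.5, "assessment_score": 0.3}
candidate = {"years_experience": 3.0, "skills_match": 0.8, "assessment_score": 0.9}

contributions = {f: weights[f] * candidate[f] for f in weights}
score = sum(contributions.values())

print(f"total score: {score:.2f}")
for feature, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

For non-linear models, techniques such as SHAP or LIME approximate the same kind of per-feature attribution.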
Challenges
Harnessing these potential benefits does mean recognising and addressing the challenges.
- Use of AI by candidates – what’s sauce for the goose can be sauce for the gander. It is widely recognised that applicants are using generative AI tools to help them pull together a CV, write a covering letter, answer written questions and even generate headshots (Headshotpro is an example of the latter). The risk is that this results in generic applications that fail to stand out, and that employers fail to get a true picture of a candidate who may be gaming the system. There is also a risk that over-sceptical employers incorrectly accuse applicants of using AI in their applications and screen them out, potentially missing out on top talent. While employers could track candidates’ eye or mouse movements to detect cheating, these methods are typically not favoured because they intrude on candidates’ privacy. Non-text-based assessment methods, such as game-based assessments or work sample tests, are, however, much more resistant to faking.
- AI safety – key among concerns is the risk of bias and discrimination permeating decisions and of AI usage breaching data privacy laws. Despite the range of jurisdictions across which regulation is being developed, common mechanisms to address this risk have emerged. These include mandatory auditing and monitoring processes, the ability to contest decisions, and the need to undertake impact assessments. Our summary of global AI safety measures can be seen here. The risk of legal claims will increase as AI use extends further beyond recruitment, something we have explored in our article here and case study here. AI safety in recruitment has also been the focus of some of the most detailed government guidance: the UK government’s Responsible AI in Recruitment guide and the ICO’s AI in Recruitment Outcomes Report.
- Trust – attracting and retaining the best people requires the maintenance of trust. Employers using AI to drive employment decisions, whether in recruitment or, particularly, in decisions about compensation and careers, will need to bring the people subject to those decisions with them, and this will require transparency and consultation. AI laws increasingly impose transparency and notification requirements to inform candidates and employees that automated tools are being used to assess and monitor them, but some vendors, such as HireVue and Beamery, have gone a step further and published AI explainability statements detailing how their tools work and the technologies they rely on.
Looking to the future
The integration of AI into the recruitment lifecycle and broader employment decisions presents both significant opportunities and challenges. As the technology continues to evolve and its use in employment decisions expands, the legal landscape is likely to adapt accordingly. The potential for future regulation under the new Labour government is high, particularly given the increasing pressure to ensure that AI-driven decisions are fair, transparent, and accountable. This regulatory evolution will be crucial in balancing the benefits of AI with the protection of individual rights, ensuring that the technology is used responsibly and ethically in the workplace.