AI in recruitment: ICO’s recommendations and key questions
27 November 2024
On 6 November 2024, the Information Commissioner’s Office (ICO) issued an audit outcomes report (Report) on AI tools in recruitment.
The Report sets out recommendations for providers and developers of AI-powered sourcing, screening and selection tools used in the recruitment process. The recommendations follow a series of consensual audits conducted by the ICO between August 2023 and May 2024, forming part of the ICO’s monitoring of the “wider AI ecosystem”, ensuring AI recruitment tools comply with data protection laws and enhance protection for “job seekers’ information rights”.
AI Technology and Recruitment
With AI-powered platforms being increasingly used in recruitment, the ICO has voiced its concerns about the “new risks that may cause harm to jobseekers” if AI technology is not used “lawfully and fairly”.
The ICO audits found “some considerable areas for improvement in data protection compliance and management of privacy risks in AI”, and revealed that certain features in some of the tools could lead to discrimination or bias, e.g. by filtering out candidates with protected characteristics or inferring gender identity based on a candidate's name. The ICO also uncovered additional risks associated with such tools such as the excessive collection of personal data, and in some cases “scraping job networking sites and social media” and building large databases based on candidate profiles without their knowledge.
The Recommendations
The ‘output’ of the ICO’s Report was seven recommendations that focus on the core principles of data protection compliance, e.g. collecting and processing data “fairly” by monitoring the performance of the AI tool to overcome issues pertinent to its accuracy, bias and potential to discriminate.
The seven recommendations for providers and recruiters are as follows:
1. Fairness
As it says on the tin, both providers and recruiters must ensure they process personal data fairly. The ICO expects to see monitoring for potential or actual fairness, accuracy or bias issues - both in the AI itself and in its outputs - as well as any mitigations put in place to address any issues identified.
Special category data must be both adequate and accurate if it is used to monitor for bias or discrimination in outputs. The ICO is clear that inferred or estimated data “will not be adequate and accurate enough” to comply with data protection law. It will be important to ensure any providers you are working with are using adequate, accurate and up-to-date data sets when monitoring for bias or discrimination in outputs.
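To make the monitoring point concrete, here is a minimal, purely illustrative sketch of the kind of output check a recruiter might ask a provider to run: comparing selection rates across candidate groups and flagging any group whose rate falls below four-fifths of the highest (the well-known “four-fifths rule” for adverse impact). The data, field names and 0.8 threshold are assumptions for illustration, not requirements taken from the ICO’s Report.

```python
# Illustrative only: a simple adverse-impact ("four-fifths rule") check on an
# AI screening tool's outcomes. Group labels and threshold are assumed, not
# drawn from the ICO Report.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, passed) pairs -> selection rate per group."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are conventionally flagged for further review."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes for two candidate groups, A and B.
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)       # A: 0.75, B: 0.25
ratios = adverse_impact_ratios(rates)   # B: 0.25 / 0.75 = 0.33 -> flagged
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # -> ['B']
```

A check like this is only a starting point - it says nothing about *why* a disparity arises - but it is the sort of documented, repeatable monitoring the ICO expects to see evidenced.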
2. Transparency and explainability
Applicants have the right to be informed that AI is being used, how it is being used and of their right to challenge any decision made. It is good practice for businesses to produce a privacy policy exclusively for candidates that describes the AI platform - setting out the logic involved in the algorithms and how personal data is used in the training, testing and development of the AI. This is in addition to how the requirements of the GDPR are met - what personal data is processed, for what purpose, how it is processed, and so on.
Any privacy policy should be reviewed on a regular basis to ensure accuracy, particularly before implementing any changes to the AI data processing. The ICO expects the provider to be proactive in supplying relevant technical information or details about the AI logic to the recruiter, and the recruiter and the provider must agree and document which party is responsible for providing the privacy information to the candidates.
Transparency has been a factor in many of the large regulatory fines recently, whether the Irish Data Protection Commission’s €390 million Meta fine or €345 million TikTok fine, the CNIL’s €32 million Amazon fine, or the multiple fines from various data protection authorities for Clearview AI totalling ~€98 million - the stakes for breaches of transparency are high!
3. Data minimisation and purpose limitation
Again, nothing unexpected or unusual here - more a reminder that the data minimisation principle must be adhered to: for providers, only the minimum personal data required "to develop, train, test, and operate each element of the AI" should be collected, while for recruiters, only the minimum personal data “necessary to achieve the AI's purpose” should be collected.
As for purpose limitation, the ICO provides a reminder about sticking to the original purpose, and not sharing, storing or reprocessing personal data “for an alternative incompatible purpose” - so no scope creep allowed!
Retention is also a thorny issue, with the ICO noting that the audits showed not only did some AI tools collect far more personal data than necessary, but they then went on to retain it “indefinitely” and in some cases use it to “build large databases of potential candidates without their knowledge” – data minimisation, purpose limitation, transparency, fairness, storage limitation, accuracy, accountability…so many issues, so little compliance!
This recommendation from the ICO makes it clear providers and recruiters need to get this right – so if you haven’t already asked the questions on these issues, now is a good time to do so.
4. Data protection impact assessments (DPIAs)
The ICO restates its advice to businesses recommending they complete a DPIA “early in AI development and prior to processing”, ideally at the procurement stage, where the processing is likely to result in a “high risk to people”.
The ICO has already included AI as an example of an innovative technology likely to result in high risk, and in the same guidance states “The ICO also considers it best practice to do a DPIA, whether or not the processing is likely to result in a high risk”. On a practical level, then, a DPIA for an AI tool seems prudent, and certainly something the ICO will look for should there be any questions or issues raised. We need only think of the Irish Data Protection Commission’s cross-border statutory inquiry into Google’s PaLM 2 AI model, which is investigating whether a DPIA was undertaken prior to any processing of personal data, to recognise this is an issue firmly in the regulatory spotlight.
It is also important to remember a DPIA should also be updated as the “AI develops and when processing changes”, to address and mitigate potential privacy risks or harms. This approach also helps to meet accountability obligations, evidencing technical and organisational controls as well as assessing and mitigating risks.
5. Data controller and processor roles
While the relationship between the parties will likely be the recruiter as controller and the AI provider as processor, there may be instances where there is joint controllership. The ICO is clear that the roles must be defined for each specific processing activity and that a contract detailing the relationship must be in place.
6. Explicit processing instructions
The ICO also specifies that if the relationship is controller-processor, with the AI provider as the processor, the recruiter (as the controller) must set “explicit and comprehensive written instructions for them to follow”: as well as the usual means and purposes of the processing, the controller also needs to identify the specific data fields required and the output they require, and set minimum safeguards to protect the personal data processed.
The controller must also set out how they will ensure that the provider is complying with the instructions, as well as considering setting out performance measures, e.g. “statistical accuracy and bias targets”. The ICO again reminds both providers and recruiters there should be no scope creep.
7. Lawful basis and additional condition
As we know, businesses are required to identify an appropriate lawful basis to rely on for processing personal data, with special category data requiring an additional condition. The ICO reminds businesses of the need to document and record the lawful basis and condition (if relevant), and to share them in the privacy notice.
Six key questions
The ICO also published a blog post providing six key questions “organisations should ask when procuring AI tools to help with their employee recruitment.” As you would expect, they follow the recommendations from the Report, providing practical, easy (to ask, if not always to answer!) questions that businesses looking to integrate AI recruitment tools should familiarise themselves with. If you haven't already done so, asking these questions will help you evaluate your level of compliance where data protection and AI technology intersect.
The questions are:
1) Have you completed a DPIA?
2) What is your lawful basis for processing personal information?
3) Have you documented responsibilities and set clear processing instructions?
4) Have you checked the provider has mitigated bias?
5) Is the AI tool being used transparently?
6) How will you limit unnecessary processing?
Conclusion
As the previous government noted in its Responsible AI in Recruitment Guide, “there is no one size fits all approach to AI assurance”; however, as a starting point, businesses should ensure they have embedded the ICO’s recommendations into their recruitment strategy.
The ICO has provided plenty of practical guidance, in the form of its recommendations, as well as examples and case studies for businesses looking to use, or already using, AI tools in their recruitment processes. With such a clear indication of what the ICO expects businesses to be doing, it would be prudent to review the ICO's recommendations and revisit your current recruitment processes and procedures, especially where AI tools are used, in order to ensure compliance. For those contemplating using AI tools in this way, getting ahead of the game by incorporating the recommendations from the outset seems a sensible approach.
In the Report the ICO stated that “AI providers and recruiters should follow our recommendations, to ensure AI recruitment tools comply with UK data protection law.” If the regulator comes knocking there will be little wiggle room for those who have not implemented the recommendations!
If you have any questions about how to ensure compliance when deploying AI tools in your recruitment process, or the wider workplace context, please get in touch with your usual LS contact. You can also read our latest article on Navigating the job market with AI, as well as more from our experts on our AI hub.