ICO publishes new guidance on content moderation obligations
06 March 2024
On 16 February 2024, the ICO published new guidance on the interplay between content moderation obligations and data protection (“the Guidance”). This is the ICO’s first guidance on content moderation and outlines how data protection law applies to content moderation processes under the Online Safety Act 2023 (OSA), as well as the possible impacts on people’s information rights.
The Guidance forms part of the ICO’s ongoing collaboration with Ofcom on data protection and online safety technologies. The ICO will be keeping the guidance under review and will update it where appropriate to reflect technological developments and Ofcom’s online safety codes of practice as these are finalised.
Why has the ICO produced this guidance?
Content moderation is commonly used by organisations to analyse user-generated content and check whether it is appropriate for publication on their platforms. This process involves using people’s personal information and can cause harm if incorrect decisions are made. Decisions based on the wrong information could, for example, lead to a user’s content being wrongly identified as illegal or to people being removed from online platforms without reason.
The Guidance is aimed at organisations carrying out content moderation to meet their obligations under the OSA. However, it also applies to organisations carrying out content moderation for other reasons, such as monitoring compliance with their terms and conditions.
How does the guidance relate to the OSA?
The Guidance is intended to help organisations within the scope of the OSA to comply with data protection law as they carry out content moderation to meet their online safety duties.
In summary, the OSA has a broad scope and applies to providers of:
- internet services which allow users to encounter content generated, uploaded or shared by other users;
- search engines which enable users to search multiple websites and databases; and
- internet services which publish or display pornographic content (meaning pornographic content published or displayed by the provider itself).
For more information on the applicability of the OSA, please see our article Lewis Silkin - The Online Safety Act.
The OSA sits alongside data protection law. If an organisation is carrying out content moderation that involves personal information, it must comply with data protection law.
What is content moderation?
The Guidance uses the term ‘content moderation’ to describe:
- the analysis of user-generated content to assess whether it meets certain standards; and
- any action a service takes as a result of this analysis, for example removing the content or banning a user from accessing the service.
The Guidance focuses on the moderation of user-generated content on user-to-user services (as defined under the OSA) and applies to content moderation that is manual, partly automated and solely automated.
Moderation action could involve content removal, service bans, feature blocking or visibility reduction.
What does the new guidance say?
The Guidance makes very clear that content moderation systems will involve the processing of personal information, whether because the user-generated content is about someone or because it is connected to the account profile of the user who uploaded it (e.g. online username, IP address, age, previous activity on the service, profile of interests and interactions).
In order to comply with data protection legislation, organisations must be able to demonstrate that using personal information in content moderation:
- is necessary and proportionate; and
- complies with the data minimisation principle (i.e. not processing more personal information than is necessary for content moderation purposes).
The Guidance outlines a number of considerations that organisations will need to address from a data protection perspective when engaging in online content moderation which involves processing personal data. These will typically include:
- DPIAs: The Guidance states that content moderation is likely to result in a high risk to the rights and freedoms of individuals, meaning that a DPIA will generally be required prior to the processing. As part of any DPIA, it will be important to consider the specific harms that might arise through the use of content moderation, such as adverse effects on rights and freedoms (e.g. the right to freedom of expression), financial harm (e.g. through loss of income or employment), or discrimination based on a moderation system’s outputs.
- Controller/processor roles: Third party service providers may be used for content moderation, in which case the respective roles of the various parties as controller or processor must be clearly defined. For example, the allocation of roles may differ depending on whether the third party moderation provider acts only on the organisation’s instructions (which include specific content policies owned by the organisation that the moderation provider has to classify content against) or alternatively if the content moderation is more complex (for example if the content moderation system involves the use of AI or if multiple entities are involved).
- Lawful basis for processing: Organisations must identify a lawful basis to process personal data for content moderation purposes. In this context, the most appropriate lawful basis for processing is likely to be either (i) legal obligation (for example compliance with the OSA); or (ii) legitimate interests (for example to enforce terms of service), subject to carrying out a legitimate interest assessment. Where an organisation is processing personal data in order to apply the measures recommended in Ofcom’s codes of practice under the OSA, the Guidance clarifies that it will likely be possible to rely on the legal obligation lawful basis in this context as well.
- Fairness: Personal data must be processed in accordance with the fairness principle. This includes (i) ensuring content moderation systems perform accurately and produce unbiased, consistent outputs; (ii) regularly reviewing how personal information is used in content moderation processes to minimise the risk of unfair outcomes for users; and (iii) ensuring that any technologies used to process personal information are sufficiently statistically accurate and avoid discrimination.
- Transparency: Organisations must comply with the transparency principle and ensure that users are provided with clear and accessible information about how their data is processed for content moderation purposes, including details of any solely automated decision-making (see below). This information must be easy to understand, particularly in the context of online services likely to be accessed by children. In addition to the transparency requirements under data protection law, providers of regulated user-to-user services under the OSA must also include provisions in their terms of service which inform users about any proactive technology (such as content identification technology) that they use to comply with safety duties for illegal content or for protection of children under the OSA.
- Data minimisation: The Guidance notes that content moderation technologies and methods are capable of gathering and using more information than is necessary, which risks unnecessary intrusions into users’ privacy. In many cases, it will likely be possible to make accurate content moderation decisions based solely on the content, in which case organisations should avoid using personal information associated with the content or user’s account to make those decisions. However, the ICO also recognises that the moderation of content can be highly contextual, and that in some cases it will be necessary to use personal information (beyond just the content) to decide whether moderation action is needed (such as previous posts on the service, records of previous content policy violations, and interactions on the service such as likes and shares). The ICO acknowledges that it may be possible to justify the use of users’ personal data in this way and demonstrate compliance with the data minimisation principle, provided that the organisation can demonstrate that the processing is necessary (e.g. to ensure that content moderation decisions are accurate and fair) and that no less intrusive option is available to achieve its purpose.
- Data subject rights: Organisations must ensure that they have processes in place to enable individuals to exercise their rights in relation to personal data processed for content moderation purposes (such as the right of access or the right to have inaccurate personal data corrected). For example, in the context of data subject access requests (DSARs), it may be challenging to respond to these where content moderation systems use large amounts of information or if the information contains details of other users; organisations should therefore store information processed by content moderation systems in a way that makes it easy to locate when responding to a DSAR. Similarly, where third party content moderation providers are used, organisations should ensure that any personal information that they use or generate is readily retrievable.
- Automated decision-making: Content moderation systems may make extensive use of automation to support content analysis and moderation actions, particularly if the system is AI-based and is used to classify and take action on content without a human being involved in those decisions. Where organisations deploy content moderation systems involving solely automated decisions with legal or similarly significant effects, it will be necessary to consider whether it is possible to rely on an exception under Article 22 UK GDPR (which permits such processing only where authorised by domestic law, necessary for performance of a contract, or based on the individual’s explicit consent). Content moderation systems will not always involve solely automated decision-making, for example where material is matched against a known database of prohibited material that has been determined by humans, or where decisions are made within pre-defined parameters set by humans. However, content moderation systems can be very sophisticated (e.g. making their own predictions based on context and circumstances), so the capabilities of the system should be carefully reviewed before ruling out Article 22 automated decision-making. Where the system does involve Article 22 automated decision-making, users must be provided with information about the decisions made, including meaningful information about the logic involved and the anticipated consequences for individuals.
What do organisations need to do?
Organisations should adhere to the Guidance if they (a) come within the scope of the OSA; or (b) carry out content moderation for other purposes. In particular, organisations processing personal data for content moderation purposes should:
- carry out a DPIA to assess and mitigate the risks of the processing, which can then be used to inform a data protection by design and default approach to implementing the content moderation system;
- ensure that each party has a clear understanding of their role as controller, joint controller or processor where third party content moderation providers are used, and that this is reflected in any relevant contracts;
- identify and document an appropriate lawful basis for processing personal data in this context, as well as a condition for processing special category personal data and/or criminal offence data where relevant;
- regularly review how personal data is used in the content moderation process to minimise the risk of unfair outcomes for users. This can be achieved through auditing moderator decisions and regularly checking systems for accuracy and bias;
- ensure privacy notices contain appropriate information regarding the processing of personal data for content moderation purposes, including any automated decision-making processes, the information used, the purposes that the information is used for and what the effect on users might be;
- determine whether any content moderation systems used involve solely automated decisions with legal or similarly significant effects, in which case the use of those systems will be subject to restrictions under Article 22 UK GDPR; and
- ensure compliance with the other data protection principles, including:
- data minimisation (processing adequate, relevant and limited information – taking particular care with children’s information),
- data accuracy (keeping data and records of violations up to date and considering any challenges from users about the accuracy of their information),
- data retention (not keeping data ‘just in case’ and having a clear policy for how long data is stored based on its purpose),
- data security (ensuring appropriate technical and organisational measures are in place to protect personal data),
- ensuring data subject rights can be complied with,
- ensuring there is a lawful basis to share personal data where relevant (for example if sharing personal data with other organisations or with the authorities) and
- putting in place appropriate measures for international data transfers where applicable (e.g. if using third party content moderation providers located in third countries).
The Guidance is the first in a series of products the ICO has planned on online safety technologies, and forms part of the ICO’s ongoing commitment to publishing guidance on such technologies, alongside its work to ensure regulatory coherence between the data protection and online safety regimes, as announced in the ICO’s 2022 joint statement with Ofcom on online safety and data protection. With further developments, products and guidance expected, this will be an area to watch in 2024!