Imposing a duty of due impartiality on social media platforms is a lofty ideal. Decisions about whether, and where, to allow or block content on major social media platforms are extremely important. The flow of information, misinformation and disinformation clearly has the power to influence societies and the outcome of elections.

Regulating this area is complicated further by the fact that what each individual may see, or not see, is highly personalised using algorithms which are not wholly transparent.

Governments in Europe, the US and elsewhere are still working out ways in which to tackle these important and increasingly pressing issues. The UK government recently stated its intention to put the UK at the forefront of effective online regulation, and last year it put forward the Online Harms White Paper for consultation. However, these proposals to tackle online harms seem to be making very slow progress for a number of reasons, not least because the White Paper aims to deal with a wide range of issues at the same time, from terrorism, child exploitation and other illegal content, through to content which is harmful but not necessarily ‘illegal’, such as content which might undermine civil discourse or be used to abuse or bully other people. If and when they are finally implemented, these new rules will need to be enforced by a regulator. The government has stated that it is ‘minded’ to appoint Ofcom as the online harms regulator, but this is by no means certain, and we could yet see a brand new regulator being set up to fulfil this role.

Faced with these myriad challenges, it is very surprising that the government is now contemplating an additional complicating factor: introducing a ‘duty of due impartiality’ on social media platforms.

Details of the proposal have not yet been made public, so at this stage it is not clear whether this duty is aimed solely at tackling so-called ‘algorithmic bias’ (either by making algorithms more transparent to individual users or even by imposing restrictions on the way they work), or whether the government will attempt to go further by imposing a duty of due impartiality on social media platforms in relation to certain types of content (or even all content) which is visible to UK users.

While it is sensible to consider how to make ‘algorithmic bias’ more transparent, going further is another matter: for the government, or a government-appointed regulator, to attempt to control the free exchange of opinions and ideas by imposing a blanket duty of ‘due impartiality’ on social media platforms should be anathema to anyone in a democratic society that values free speech.

To provide some context, ‘due impartiality’ is not a new concept for Ofcom: the duty can already be found in the Ofcom Broadcasting Code. For example, news programmes broadcast on television or radio by Ofcom-licensed services are currently subject to a duty of ‘due impartiality’ (and even a duty of ‘due accuracy’, which we won’t go into here). Furthermore, in relation to “matters of major political or industrial controversy and major matters relating to current public policy”, the duty of ‘due impartiality’ extends beyond news programming – it applies to most other types of programmes on Ofcom-licensed television and radio services, including comedy programmes, televised dramas and even cinema films shown on television.

Ofcom states that how due impartiality is preserved is an editorial matter for the broadcaster, as long as it is preserved. When there are apparent transgressions, however, Ofcom can investigate.

Ofcom deals with alleged breaches of this duty under its current Broadcasting Code by investigating them and publishing its findings online. Ofcom does not censor (i.e. block content in advance), but instead censures broadcasters for breaches of the Code by requiring corrective statements to be issued after the fact. In extreme cases it may also impose financial sanctions.

It isn’t clear whether the same approach would be taken in relation to social media platforms, or how effective such corrective statements or fines might be. 

The fact that Ofcom has experience of imposing a ‘due impartiality’ standard on television and radio programming in the UK does not mean that it would be desirable, advisable or practical to do so in relation to international social media platforms, which are designed to facilitate a free flow of ideas, opinions, information, videos of people falling over and cats doing funny things.

On a serious note, ‘due impartiality’ might appear to be an objective, balanced and universal concept, but in practice it can be interpreted in a range of ways, depending on the topic under discussion and on a variety of geopolitical factors. If this duty is introduced, social media platforms may have to anticipate how the relevant national regulator would expect them to apply that duty of due impartiality in relation to different topics at different times, and they might ultimately need to take a different approach in different countries.

In the UK, the regulator might decide that certain matters of ‘major political or industrial controversy’ have already been settled one way or the other and should therefore be discussed or treated accordingly on all social media platforms. A regulator in another country might take a different view, or hold the opposite position. For example, issues around abortion or the rights of LGBTQ+ individuals to marry, serve in the military or adopt children might be considered settled in the UK today, and can be discussed without the need to reference a counterpoint explaining that some people disagree; but in the US (or at least some parts of the US) and some other countries, the situation might be different. The need for a social media platform to comply with its duty of due impartiality might mean it cannot be seen to ‘take a side’ on these or other issues. And while it might be sensible to require social media platforms to be transparent about ‘taking a side’, transparency and impartiality are not the same thing.

It is difficult to argue for introducing a ‘due impartiality’ approach in the UK while condemning the approach taken by countries where certain platforms and search engines are blocked, and where authorities monitor social media platforms and screen messages which are deemed ‘politically sensitive’.

Of course, in the UK and elsewhere, the social media platform and the regulator must have due regard to the human rights of users, including freedom of expression, so it may seem fanciful to imagine the UK reaching that stage. However, that can change quickly if a government or regulator decides that certain topics should be treated differently, or should be ‘off limits’ entirely, because they are too ‘politically sensitive’.

If you don’t believe that could happen in the UK, consider Labour’s proposal in the past few days to ban ‘anti-vax’ content on social media platforms, which Labour suggests should be enforced using fines and criminal prosecutions. Many of us will agree that the anti-vax movement is a bit… ‘fringe’. Despite the emotional appeal of the idea of banning anti-vax posts during a pandemic, and despite the argument that freedom of expression on this topic could be outweighed by the need to protect public safety, it is an unattractive path for a government in a free society to follow. Would such a ban also extend to those who object to the idea of compulsory vaccinations, for example? If we go down this route, what other topics might be treated in the same way when the government of the day considers it expedient?

Leaving aside the principled and conceptual concerns around a duty of ‘due impartiality’ curtailing freedom of expression by the back door, the practical considerations around enforcing such a duty are also enough to make one’s head spin.

If the UK government were able to impose this duty of ‘due impartiality’ on social media platforms that are accessible to UK residents, the sheer volume of content on those platforms would appear to make it impossible for the platforms to interpret and apply the duty fairly and consistently in relation to all content. And how would the platforms achieve compliance? By deleting one-sided content, or by matching it with an alternative opinion or ‘fact checker’ on every topic? Would there be a right of appeal or redress if the duty were applied unfairly or incorrectly? How long would such a process take?

The press will not be subject to this ‘duty of due impartiality’, so is it right that some views and opinions which could be freely expressed in the press might not be allowed on social media? And if newspaper content is shared on social media platforms, could this result in a breach of the duty of due impartiality?

How would Ofcom or the relevant regulator investigate each and every alleged failure by social media platforms to apply this duty correctly? Ofcom currently publishes investigations into a handful of alleged breaches of the duty of due impartiality in a broadcast context each year, and those decisions pass largely unnoticed by the public. If Ofcom had to investigate hundreds or thousands of alleged breaches by social media platforms each year, each month or each week, how would it deal with the volume? Would the individuals or companies whose content is involved also be allowed to make representations in defence of their views and posts?

How much will this system cost, how will it be funded and by whom?

If other countries imposed a similar rule, would global platforms have to take a different approach in different countries, depending on national and cultural sensitivities and the political mood? Would content have to be geo-gated, and is any of this conducive to a global exchange of information and ideas?

Perhaps most importantly, will we even be allowed to ask these questions in years to come?

A duty of due impartiality might sound like a worthy ideal, but it remains to be seen how the government proposes this would be applied in practice and whether sufficient safeguards could be introduced to ensure that it doesn’t give rise to bigger problems than it solves.
