Decisions about whether and where to allow or block content on social media sites are extremely important. Such decisions can affect society in many ways, from influencing the outcome of elections to discouraging people from following official advice and recommendations during a pandemic.

Over the weekend, the Labour party suggested that 'fake news' about vaccinations should be banned from social media platforms altogether. This might be an emotionally appealing move, but it would set a potentially dangerous precedent if politicians were to assume control over what social media users can and cannot discuss on those platforms. For example, could such a ban be extended to block any criticism of mandatory vaccination programmes, or of the allocation of PPE contracts?

Regardless of the merits of such a move, the idea of the government allowing only one side of an argument to be heard is a clear threat to society, and one which would (and should) keep many of us awake at night.

The UK government is keen to get to grips with these issues, along with a wide range of other 'online harms' (from bullying to terrorism).

The UK government recently stated its intention to put the UK at the forefront of effective online regulation, and last year it put forward the Online Harms White Paper for consultation. Those proposals are making slow progress, and given the many challenges they already face, it is very surprising that the government is now contemplating adding a further complication: a 'duty of due impartiality' on social media platforms.

The details have not yet been made public. While this move could be limited to bringing transparency to algorithms (to tackle 'algorithmic bias'), there is a risk that it goes further.

In my brief article for The Times, I explore what a 'duty of impartiality' might look like, how it might be applied, and whether it would be realistic in practice.

Alternatively, you can find an in-depth analysis on our website, here.