On 23 July, the House of Lords and the House of Commons established a new Joint Committee to consider the Draft Online Safety Bill. The Committee comprises six MPs and six Peers, and it must report its findings by 10 December 2021; it has a lot of work to do before then.

But no sooner was this Joint Committee announced than MPs set up their own separate inquiry into the same draft Bill, which will be conducted by a House of Commons DCMS Sub-Committee.

The latest Committee (let's call it Committee #2) has made clear that its work will be distinct from that of the Joint Committee on the Draft Online Safety Bill (Committee #1). Committee #2 will take a broad approach to scrutinising the Draft Bill, and may cover issues such as how it will interlock with other areas of government policy. It will investigate how the focus has shifted since the Online Safety Strategy Green Paper was introduced in 2017, including concerns that the definition of 'harm' is now too narrow and may fail to address issues such as non-state intervention in elections, racist abuse and content that contributes to self-harm and negative body image.

It will also explore key omissions from the draft Bill (such as a general duty for tech companies to deal with reasonably foreseeable harms, a focus on transparency and due process mechanisms, and regulatory powers to deal with urgent security threats) and how any gaps can be filled before the Bill is finalised. Another focus will be on where lessons can be learnt from international efforts to regulate big tech, such as those in France, Germany and Australia.

In my view, Committee #1 will need to take an even broader approach if it is to scrutinise this Bill effectively.

The Online Harms Bill is arguably one of the most significant pieces of draft legislation since Brexit, though it is hard to be certain because this government has been churning out shiny new ideas (and less shiny new ideas) and strengthening and harrying regulators, left, right and centre for the past couple of years.

The draft Bill would compel social media sites and search engines to remove harmful content such as terrorist content, child sexual exploitation and abuse, and disinformation that causes individual harm.

Julian Knight MP, chair of Committee #2, said: “The Online Safety Bill has been long overdue, and it’s crucial that the Government now gets it right. As a Sub-Committee we look forward to conducting scrutiny work prior to legislation being introduced. We’re seeking evidence on what the Bill doesn’t currently address and how improvements can be made to better serve users now and in the future. We’re concerned about how the regime will respond to new dangers, which must be a priority in a fast-changing digital environment, and that critical issues such as online racist abuse could fall out of scope.”

However, we should be under no illusions about what is at stake with the online harms regime (now rebranded the 'online safety regime'), as politicians grapple with enormously important themes and decide just how British people's freedom of expression will be curtailed and balanced against as-yet vaguely defined 'online harms'. 

We can all agree that online harms exist, of course (including those mentioned above), and that much more must be done to protect against the obvious ones, many of which are (or ought to be) already illegal yet don't seem to be prosecuted adequately at present, such as appalling racist abuse. But 'online harms' is a much more amorphous concept - it covers not only illegal activity, but also activity that falls short of being illegal yet is 'harmful' in some other way. Someone or some entity, be it platform, regulator or Secretary of State (or a combination of the three), will need to decide what must be banned or removed for being 'harmful' but not unlawful for these purposes - sidestepping Parliament and possibly the courts - and therein lies a very significant challenge.

Both committees have important work to do, and we hope that they will provide valuable perspectives and input as the Bill takes shape, and that they will take a long-term view, protecting us all without eroding our fundamental rights more than is necessary and proportionate.

Quis custodiet ipsos custodes?

On a lighter note, I wonder what the collective noun might be for 'committees'? A committee of committees? An embarrassment of committees? A murder of committees? After all, as Barnett Cocks sardonically remarked: “A committee is a cul-de-sac down which ideas are lured and then quietly strangled.”

In any event, it is not surprising that we have a couple of committees 'inquiring' into these vital issues, which are at the heart of our democratic system. This also hints at one of the central questions raised by the proposed online safety regime itself: who will come out of this with ultimate control over what we can and can't say on social media?

The government of the day? The Secretary of State? Social media companies? Ofcom (the designated regulator)? Or will this leave us with an Escher-style staircase, where there is only ever an illusion of one of these powers standing above the others?

And if one of them wins out - who will keep them in check?!  

We will have to wait and see.  

In the meantime, it might be time for the UK to finally have a written constitution to guard against rogue governments of the future.

The only snag with that, of course, is that it involves trying to codify an entire constitution, spanning centuries of history and four home nations (along with various territories). The mind boggles as to how many committees that particular exercise would spawn.