THE Online Safety Bill was finally published this month after the longest of gestations. Three years ago the British government published a white paper on “how to make the UK the safest place in the world to be online”. There followed a public consultation, then a draft Bill, then a long round of parliamentary inquiries and committee reports and now, finally, we have an actual Bill.
Its proponents claim that legislation is needed because, as one parliamentary committee put it, “self-regulation of online services has failed”. That is a contestable proposition in itself; worse, the government’s proposed solutions raise a host of thorny issues which, even now, ministers do not appear to have fully worked through.
The core of the problem is this: why should online service providers – in particular, social media sites – be placed under legal duties not to allow content on their platforms which is not itself illegal but which somebody, somewhere says is none the less “harmful”?
Equivalent duties do not exist in the offline world. If this or any other newspaper in Britain publishes a story that is defamatory, in contempt of court, or contrary to the Official Secrets Act, that newspaper will be liable in law. Freedom of speech does not extend to publishing (or broadcasting) material whose dissemination is unlawful.
But, short of that, publishers of books, magazines and newspapers are not required by statute to monitor their publications to ensure that material which is not illegal but which is somehow otherwise harmful does not appear in print. So why has the government introduced proposals to do just that for social media websites in its Online Safety Bill?
The main impetus seems to have come from a fear that children and other vulnerable people are exposed to too much material online which they should not be able to see (or watch) and which may either damage them or encourage them to damage themselves. There is a concern that the algorithms used by sites such as Facebook and Instagram push people down rabbit holes – if a stressed-out teenager starts to click on posts that discuss aspects of self-harm or suicide, algorithms ensure that that teenager is shown more and more such posts, creating a vicious spiral. Such was the thrust of an influential NSPCC report published in 2019.
In addition, there are concerns about disinformation (commonly referred to as “fake news”) circulating widely on social media, some of which is manifestly contrary to the public interest. Anti-vax fake news about Covid vaccines, for example, has led to large numbers of unvaccinated people requiring hospital treatment, and sometimes intensive care, at great public expense – on top of the even greater public expense already poured into the vaccination programme itself.
The insistence that “something must be done” about such matters has, however, all the hallmarks of a moral panic rather than of reasoned policy. Hence the rising tide of concern that, even if it is well-intentioned, the Online Safety Bill may do more harm than good. Among the critics of the Bill stand not only libertarian groups such as the Institute of Economic Affairs, but also free speech organisations like Big Brother Watch and Index on Censorship. To these, we can now add the Spectator magazine and Toby Young’s Free Speech Union.
The Bill’s critics have a point. Since John Stuart Mill wrote about it in the middle of the 19th century, we in Britain have understood that free speech is fundamental, yet not absolute. Freedom of speech is a basic right and, ordinarily, our speech should not be censored by the state. The exception, of course, is when our speech causes identifiable harm – and this “harm principle” was JS Mill’s great insight.
The classic example is falsely shouting “Fire!” in a crowded theatre, causing a stampede in which members of the public are crushed and injured. Of course your right to free speech does not extend so far. But the harm of shouting “Fire!” in a crowded theatre is both obvious and easy to define: you may not use words in such a way as to cause imminent injury (nor, for that matter, fear of imminent injury, which is why threatening or abusive speech may rightly be subject to criminal sanction).
But when statutory regulators are given powers to police speech that is not itself unlawful, or criminal, or threatening, but merely “otherwise harmful”, we have moved decisively away from a legal framework based on free speech, towards a law based on the licensing and censoring of speech. And that is what the Online Safety Bill does.
No one has any problem with the proposition that criminal speech should not appear on social media platforms. I cannot use Twitter to threaten you any more than you can use Facebook to send a menacing message to my family. Such posts would be criminal offences – and quite right too. Sending a threatening or menacing message in the post via the Royal Mail would likewise be an offence – there is no special treatment for the online world here.
And if that is where the Bill stopped, it would surely be uncontroversial. But it goes a great deal further, subjecting social media websites to new regulation, which will be undertaken by Ofcom, on the basis of as yet unspecified “harms” which are not themselves unlawful. Not only is this impermissibly vague: it also lacks any sort of transparency. To whom will Ofcom be accountable in the exercise of these novel regulatory powers?
There are those who consider that the social media giants are already too censorious – or, at least, that they are incoherent and inconsistent in their censorship (Donald Trump is banned from Twitter, for example, but not Ayatollah Khamenei, who has more than once used the platform to call for political violence altogether more dangerous than Trump’s idiotic incitement of the Capitol Hill riots). Parliament should think long and hard before enacting into law a new regulatory regime for online speech that will make this even worse.