Mob violence has many fathers. It is a complex and often confusing phenomenon, both organised and disorganised, both intentional and unthinking. The causes of the far-right rioting and - let us be unequivocal about the nature of these acts - the political violence verging on terrorism that we have seen over the past fortnight will take time and careful analysis of the evidence to disentangle.

But what is clear already is the central role played by social media. These riots, which have afflicted communities across England and Northern Ireland, were organised by networks of far-right agitators through social media platforms like Telegram and messaging apps like WhatsApp. Wider groups were mobilised by advertising "protests" via more mainstream platforms like Facebook. Other platforms like X (Twitter) have hosted content supporting and encouraging the unrest and have repeatedly refused to take down racist abuse and incitement to violence.

The largely unchecked power of social media platforms and oligarchic owners like Elon Musk, himself responsible for amplifying much of the hateful, divisive, and outright false content he should be moderating, has become a critical issue. These platforms have become breeding grounds for hate speech and political violence, with algorithms that amplify divisive content in the name of clicks and advertising dollars.

This isn’t just a UK problem; it is a long-standing global crisis. In 2022, researchers at the Center for Countering Digital Hate used the reporting tools provided by Facebook, Instagram, TikTok, Twitter, and YouTube to report 530 posts containing dehumanising content targeting Muslims with racist caricatures, conspiracy theories, and false claims. The platforms failed to act on 89% of these reports, including many posts using hashtags such as #deathtoislam, #islamiscancer and #raghead.


Similar prior exercises in reporting antisemitic content found that no action was taken on 84% of such posts. Indeed, the CCDH found that no action was taken on 89% of reported posts spreading the "Great Replacement" conspiracy theory, which accuses Jewish people of purposefully "importing" non-white immigrants into majority-white countries to displace the existing population. A separate analysis of misogynistic abuse sent by direct message found that no action was taken against 90% of such abuse when it was reported.

Most disgracefully of all, the CCDH found that social media companies were continuing to allow content glorifying the 2019 Christchurch massacre, in which a white supremacist gunman attacked two mosques in Christchurch, New Zealand, murdering 51 people.

In the years since, the situation has only got worse, particularly on X following Elon Musk’s takeover and the gutting of its content moderation teams amid his firing of 80% of its workforce. In recent days, Sunder Katwala, Director of the think tank British Future, has reported a slew of examples of racist abuse and incitement to violence that X deemed to be in line with its user guidelines. One such post called for civil war using the hashtag #fightforwhite. Another accused former First Minister Humza Yousaf of being a “spokesman for an invading force” who should be “exiled”. Mr Yousaf himself has recently spoken of being unsure whether the United Kingdom is still a safe place to raise his children.

Enough is enough. Misinformation, disinformation, hateful content and incitement to violence have become a clear and present threat to the security of our communities, and ethnic minority communities in particular. We cannot continue to let such content pollute our public square unchecked.

It is time for the UK Government to step in with robust regulations. One possible solution is to impose mandatory content moderation standards. Platforms should be legally required to remove hate speech and incitement to violence within a specified timeframe. The German Network Enforcement Act offers a precedent, requiring social media companies to remove "clearly illegal" content within 24 hours of notification or face fines of up to €50 million. We should consider adapting similar rules.

Moreover, transparency and accountability must be at the heart of any new regulations. Social media companies should be required to publish regular reports detailing their content moderation practices and compliance with legal standards. This would not only hold them accountable but also begin the hard task of rebuilding public trust. The European Union’s Digital Services Act includes provisions for transparency reports, which could serve as a blueprint for us.

Financial penalties alone, however, are not enough. There must be personal liability for senior executives who fail to enforce these regulations. The UK’s Online Safety Act includes provisions that could make tech CEOs criminally liable for failing to protect users from harmful content. This is a step in the right direction, but it needs to be strengthened and rigorously enforced.

Additionally, new regulations should address the role of algorithms. Platforms should be required to disclose how their algorithms work and take steps to prevent the amplification of harmful content. This could include algorithmic audits by independent bodies to ensure compliance with ethical standards. The UK’s Centre for Data Ethics and Innovation has suggested similar measures, which should be incorporated into any new regulatory framework.

Can we rely on the Government to properly tackle harmful online content? The key reason it has not been done before is that it is a devilishly difficult task. The implications for freedom of speech must be considered, though we must remember that hate speech is not free speech, as the European Court of Human Rights has consistently upheld. Striking a balance between protecting individual rights and protecting public safety is essential, and it is not easy to do. The weakness of the content-related provisions in the Online Safety Act is a testament to that fact.

In the absence of government action, the responsibility falls to each of us as users of these platforms to act according to our conscience, and to vote with our feet (or smartphones, as the case may be). Our presence on platforms that offer safe haven to hateful and harmful content legitimises them and generates their revenue, and we should deprive them of both.

We owe it to ourselves and to future generations to ensure that social media platforms are held to account and that the internet remains a space where we can continue to connect and exchange ideas without that same space incubating hate and unleashing violence. We cannot tolerate tech oligarchs threatening our security, and we cannot wait for another spate of hate-fuelled rioting and terror. The time for action is now.