Elon Musk recently expressed a desire to remove the ability to block people on the site formerly known as Twitter, saying of the block feature “it makes no sense.”
As someone perpetually plugged into the matrix, I can unfortunately see a lot of sense in having the ability to prevent abusive, annoying and intolerant people from accessing you, but then again I’m not a billionaire who owns the entire site.
There is an interesting discussion to be had about the effectiveness and limitations of safety features online. Interestingly enough, the proposed removal would stand in direct contrast to the terms of service apps must follow in order to be hosted on both the Google Play Store and the Apple App Store, which require apps to contain “the ability to block abusive users from the service”. This policy exists for a reason: any time a space is made in the online sphere, that space will inevitably be home to people we do not want to interact with, for the sake of our safety and sanity.
Though other safety measures would still be available in the event blocking is removed, using the report function is a bit like playing the lottery. I’ve reported death and rape threats, racism, misogyny, antisemitism, anti-LGBTQ+ hate, ableism, and a whole host of general nastiness that violates the terms of service of each app I’m on (and might even be considered a criminal offence outwith the virtual sphere). Nine times out of ten the report comes back as not upheld, so, sadly, despite the constant advice to report from well-meaning onlookers, reporting is overwhelmingly a waste of time.
Many apps have transitioned to using automated systems to check content which is reported to them, passing off the responsibility of checking reports onto AI or a system without human supervision. As someone who has made use of this kind of reporting across various platforms, I can tell you it yields results which are mediocre at best and completely useless at worst. It is incredibly easy to f00l an automatic system: by altering the spelling even slightly, even the most offensive slurs are rendered benign in the eyes of the automated system, completely bypassing filters placed on individual w0rdz and phr@sez.
There are those who prioritise the right to offend above all else, and set out to be as cruel, offensive and edgy as possible with absolutely no consequences. When people inevitably block them as a result of their insults, denial of humanity and general nastiness, they retreat behind the old sticks-and-stones mentality and see the use of safety features like blocking as a weakness.
Maybe words are just words and can’t really hurt us on a corporeal level, but unfortunately the human brain is wired to feel pain in many forms, even when the source is words, even when those words are written on a screen. People live and die over words; they start wars and break hearts. If you’d rather not be subjected to hatred or annoyance in your free time on sites which are ostensibly for recreation and enjoyment, that should be your prerogative. People have a right to speak, but not to be listened to; you have a right to express yourself, but not to an audience.
There is an argument to be made that removing the blocking feature is about avoiding criticism, but, on a very fundamental level, the average user of social media is under no obligation to respond to criticism, hate, or any comment for that matter. For most people, social media is meant to be fun. We invented it to connect with each other, but in the interest of safety and sanity, this connection must not be without limits.
It’s important to keep in mind that someone can create an account on Twitter, Instagram, Facebook or TikTok at the age of 13, but due to the way accounts are set up there is nothing to stop children even younger than that from lying about their age to gain access; 6.6% of Twitter’s users are between the ages of 13 and 17, and considering the site has a predicted user base of over 497.48 million by 2025, that’s a lot of young people using a site with very few restrictions.
Having the ability to block people not only allows young people to develop and enforce boundaries by giving them the agency to decide who they interact with, but, on a very real and practical level, it helps keep them safe by preventing people from engaging with their content.
The block feature is not failsafe, however. Anyone who’s ever had cause to block someone knows the blocked person only needs to log out of the app to see your posts, or create another account to engage with you. Due to the relatively simple set-up, it is incredibly easy to make multiple accounts on sites like Twitter, and these “burner” accounts are often used to circumvent a block.
Instagram currently has a feature through which users can block every account associated with one user, which is a much more effective safety measure and one which is sorely needed elsewhere. Despite Musk’s preference that people mute instead of block, practically speaking, muting someone just doesn’t have the same efficacy. It’s a little like putting earplugs in while someone shouts abuse at you: you might not be able to hear them, but everyone else can, and the ensuing dialogues that occur as a result of their presence cannot be effectively moderated.
I’ve muted people only to find out they were using the visibility of my posts to platform their racist beliefs in the comment section; now I simply have a zero-tolerance blocking policy. My online experience is much healthier for it. It’s important to acknowledge that it goes both ways: people block me all the time, but it’s really not an issue or something I’d take personally. I respect and understand people curating their online space in the way that’s healthiest for them, just as I’d appreciate their understanding when I do the same.
The internet is amazing, but one of its most beautiful benefits is also one of its most annoying drawbacks: we have an unprecedented level of access to each other. The online space is just like any other: boundaries are incredibly important to set and enforce. You have no right to access someone else or to talk to them, and if they do not want you to engage with them they can and should be able to put a boundary, or block, in place.