“And that’s when he unalived himself, as well as his wife and children,” says one true crime TikTok user as she demonstrates her new Wonderskin eyebrow pencil.

“Told my therapist I tried to unalive myself and they told my parents lol,” captions another TikTok user over a sombre shot of a teenager, who looks no older than 14, gazing longingly into the selfie camera.

Welcome to the world of algospeak, where the most serious of topics are watered down to slip past the algorithmic censors of social media.

Platforms such as TikTok and YouTube insist on such language if users want to avoid being de-prioritised in their algorithms, or in the case of bigger accounts, cut off from monetising videos. Users have had to adapt to these rules, and the terms and workarounds such rules create have now entered the popular lexicon.

After all, serious topics such as death (‘unalived’), suicide (‘sewerslide’), paedophiles (‘PDF file’), and sexual assault (‘mascara’) are not appealing to advertisers. Advertisers need to know their ad spend is on safe ground, and in safe hands, and social media companies are more than happy to close off the dark alleys of their platforms to accommodate them.


In 2017, YouTube experienced a mass exodus of advertisers which came to be known as the ‘adpocalypse’. After large-scale media coverage of questionable content in videos geared towards children, many big ad spenders got cold feet and YouTube was left watching its business model wobble. The consequence was a severe tightening of what could be deemed ‘advertiser-friendly’, restricting content dealing with an array of subjects, from crime to mental health to politics.

It was pandemonium for YouTube but a lesson learned for its competition. ‘Unalive’ and its relations spread furthest on TikTok, home to a younger generation more receptive to colloquial language. Stories abound online of teachers hearing ‘PDF file’ used casually in their classrooms, of foreign students who don’t understand the word ‘suicide’ but comprehend ‘unalive’ perfectly. As silly as they sound, these words have become very real: a strange linguistic mutation, a quirk, bred out of the cultural boundaries of a heavy social media age.


But it’s hard to shake just how trivialising or infantilising these code words are when discussing matters that deserve respect and thoughtfulness. The space to freely and openly discuss serious topics and the ability to use language that conveys the reality and gravity of what’s being talked about is important. Some things should be exempt from sanitising corporate practices.

Social media companies now hold the power to change language itself based on the whims of their advertisers. Language shifting, evolving, and expanding is an organic part of how we communicate with each other, and while commercial culture certainly plays its part, algospeak hinges on a very peculiar, modern arrangement.

However, language experts see no cause for concern for those participating. Linguistics researcher Andrea Beltrama believes its use doesn’t alter how seriously the matter is perceived, filing it under the far more positive heading of “lexical innovation”.

“Whoever says ‘unalive’ intends to communicate something about suicide, and knows that, and assumes that whoever is on the other end will be able to retrieve that intention,” Beltrama said.

The notion that young people who use these words aren’t taking such matters seriously enough is arguably misguided; the way young people communicate and use language has always been an easy scapegoat. The real issue remains social media companies limiting speech on important topics to overprotect their commercial viability.


“The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” UCLA School of Law lecturer Angel Diaz told the Washington Post.

As complex as algorithmic moderation has become, it remains incapable of reading context or making individual judgements. It cannot ascertain whether something is ‘important’ or ‘serious’ in the way a human mind can.
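The shortfall Diaz describes is easy to illustrate. The sketch below is a hypothetical, naive banned-word filter — not any platform’s actual system, whose details are proprietary — showing how pure string matching suppresses a post seeking help while letting an algospeak substitution straight through:

```python
# Hypothetical sketch of a naive banned-word moderation filter.
# Real platforms' systems are far more elaborate; this only illustrates
# why matching strings cannot judge context or intent.

BANNED_WORDS = {"suicide", "sewerslide"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any banned word, regardless of intent."""
    words = post.lower().split()
    # Strip trailing punctuation so "suicide." still matches "suicide".
    return any(word.strip(".,!?") in BANNED_WORDS for word in words)

# A post seeking support is flagged; the algospeak version slips through.
help_post = "I have been struggling with thoughts of suicide and need support."
evasive_post = "That's when he unalived himself."

print(is_flagged(help_post))     # True  - the supportive post is suppressed
print(is_flagged(evasive_post))  # False - the substitution evades the list
```

Both posts are about the same subject; only the filter, blind to meaning, treats them differently — which is precisely the gap that drives users towards code words.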

The moderation shortcut of a banned-words list may filter out plenty of content that rightfully should not be on a public platform, but it also places barriers in the way of people expressing their thoughts, feelings, and experiences on important matters. Those versed in this word-substitution game may communicate effectively through it, but plenty won’t.

It is hard to predict how a heavy social media age will continue to alter our cultural language. It’s uncharted territory. Bending language to the ebb and flow of the digital economy will likely become ever more necessary, so expect peculiar aberrations such as ‘unalive’ to evolve and become more commonplace.