This week Formula One legend Michael Schumacher spoke for the first time since suffering a catastrophic brain injury in a skiing accident nearly 10 years ago.
In an interview with Die Aktuelle, the seven-time world champion said: “My life has completely changed since [the accident]. That was a horrible time for my wife, my children and the whole family.
“I was so badly injured that I lay for months in a kind of artificial coma, because otherwise my body couldn’t have dealt with it all.”
Except Schumacher never said any of this. With his condition a closely guarded secret kept by his family, only a select few know whether the driver can speak at all. The words, as Die Aktuelle revealed at the end of the article, were generated by artificial intelligence.
The backlash was fierce and immediate, with Schumacher’s family declaring their intention to take legal action against the publication, but the ‘interview’ may be the thin end of the wedge when it comes to artificial intelligence.
This week also saw the debut of AIsis, marketed as a ‘lost’ Oasis album between 1997’s Be Here Now and its follow-up Standing on the Shoulder of Giants. The music was produced by a real band called Breezer, who released several of the tracks to not much fanfare during the pandemic.
That would probably have been that, until Breezer singer Bobby Geraghty came up with the idea of having Liam Gallagher perform the songs. Burnage’s foremost parka enthusiast wasn’t available, but Geraghty and guitarist Chris Woodgates took a cappella vocal tracks from the first three Oasis albums and fed them into an artificial intelligence.
They taught the computer to mimic Gallagher’s vocal style and tics, until they reached a point where they could simply type in the lyrics to their own songs and produce ‘new’ Oasis music.
The results are frankly uncanny. Gallagher’s vocal style is a distinct one, his voice one of the most recognisable in the past three decades of British music. Yet the AI has managed to replicate it nigh-on perfectly, from the nasal intonation to the elongation of vowels which famously turned the word “sunshine” into “soon-shee-iiiine” on ‘Cigarettes and Alcohol’.
On opener ‘Out Of My Mind’ a Noel Gallagher-esque guitar riff underpins ‘Liam’ sneering about things being “cleee-aarrr” and the “sen-saaa-tiooon” he’s feeling. Play it to anyone without prior knowledge of its provenance, as its creators have done repeatedly, and you’d be sure it was, indeed, a lost Oasis track.
Even the man himself was impressed. Asked on Twitter for his thoughts on AIsis, Gallagher responded: “mad as f***, I sound mega”. Some fans hopeful of a reunion posited that the Breezer boys may finally have come up with a Liam his brother, Noel, can work with.
Similar AI-generated songs ‘performed’ by Drake and The Weeknd were removed from Spotify, while French DJ David Guetta unveiled Emin-AI-em in February, a soundalike of Detroit rapper Eminem declaring “This is the future rave sound / I’m getting awesome and underground”.
There’s more. German artist Boris Eldagsen was awarded a prestigious prize at the Sony World Photography Awards for his picture, ‘Pseudomnesia: The Electrician’, showing two women of different generations in black and white. He refused the prize, stating he’d been a “cheeky monkey” and entered an AI-generated image to see if the judges could tell the difference. Clearly, they could not.
The implications of the rise of AI are obvious. In an era of fake news, how do you trust that someone is saying what they appear to be saying, or that a video is showing what it purports to show? In 2017 researchers at the University of Washington created an artificial Barack Obama, using a neural network to map the shape of the former president’s mouth when speaking. Using such technology, a person with the right skills could make virtually anyone with enough available vocal and visual data say virtually anything they wanted. Of course, technology to detect fakes will improve along with the technology to create them but, as the saying often attributed to Mark Twain goes, a lie can travel halfway around the world while the truth is putting its shoes on.
One can only imagine what kompromat the KGB could have produced in the Cold War with that kind of technology, or how the CIA could have used it to aid its various coups in Latin America. The thought of it now being harnessed by QAnon, Vladimir Putin and others is not a reassuring one.
It’s not just in the political sphere that the rise of artificial intelligence has troubling implications. Producing a fake interview with Michael Schumacher is, in the end, a gimmick in poor taste, but it puts words in the mouth of someone who is unable – or unwilling – to speak for themselves.
Courtney Love faced a backlash in 2009 after seemingly signing off on the use of an avatar of her late husband, Kurt Cobain, in the Guitar Hero series. Fans were understandably perturbed by the image of their counter-culture hero singing songs by Bon Jovi in a music simulator, but at least that was clearly only a representation of the singer.
What, we may well ask, is the future of advertising once AI gets its claws into it? Jimi Hendrix hawking iPhones? Che Guevara driving a Ford Focus? John F Kennedy advertising hard hats?
That's without getting into the murky world of deepfakes and sexual consent.
Several countries around the world are already developing artificial intelligence for use in a military context, with the US and China leading the way. In 2015 the UK government opposed a ban on so-called Lethal Autonomous Weapons – clearly no-one at Number 10 has seen The Terminator.
James Cameron’s 1984 film and the series it spawned tell of an apocalyptic future in which Skynet, an artificial intelligence system designed to identify enemy threats, comes to view the human race itself as a threat and, when its operators try to disable it, launches a nuclear war which wipes out most of humanity. The US National Security Agency currently runs a programme which uses machine learning to find information on potential terror suspects. Its name? SKYNET.
Apocalypse aside, AI looks to be bad news for those in the creative sector. Karl Marx wrote of the alienation felt by workers when their individuality is removed from the process of production, turning them from a person into a tool, a thing, a cog in the machine.
The increasing use of programmes like ChatGPT in the creation of written content has several implications for writers and journalists. Firstly, it can lead to job displacement and a reduction in the need for human labour in the industry. This can result in increased competition for a decreasing number of jobs, leading to lower wages and poorer working conditions.
At least one major publisher has already produced AI-generated content, though it has denied any intention of using it to replace staff.
Secondly, the use of ChatGPT and other such programmes can also contribute to a sense of alienation from labour among writers and journalists. Alienation occurs when workers feel disconnected from the products of their labour, leading to a loss of meaning and purpose in their work. When a machine learning model is used to generate written content, the role of the writer or journalist is reduced to that of a supervisor or editor, rather than an active creator of original content.
In addition, the use of such programmes can also lead to a homogenisation of written content, as the machine learning model is trained on a large corpus of existing texts, leading to a replication of established patterns and styles.
This can result in a reduction of diversity and originality in written content, further contributing to a sense of alienation among writers and journalists.
Four of the last five paragraphs were written by an AI chatbot.