Rishi Sunak has warned that Artificial Intelligence could soon become “superintelligent” and shrug off humanity’s yoke.
In a speech on the emerging technology ahead of next week’s global summit at Bletchley Park, the Prime Minister shared assessments by the UK's intelligence services which said AI could “make it easier to build chemical or biological weapons” and become a devastating tool for terrorist groups.
The Tory leader said he did not want to be “alarmist” and that it was “not a risk that people need to be losing sleep over right now” but that he needed to be “honest” with the public.
The paper from the spooks said that within two years, the technology could “increase sharply the speed and scale of some threats.”
“The rapid proliferation and increasing accessibility of these technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.
“Risks in the digital sphere (e.g. cyber-attacks, fraud, scams, impersonation, child sexual abuse images) are most likely to manifest and to have the highest impact to 2025.
“Risks to political systems and societies will increase in likelihood as the technology develops and adoption widens. Proliferation of synthetic media risks eroding democratic engagement and public trust in the institutions of government.”
The spies admitted that the “difficulty of predicting technological advances” meant there was “significant potential for technological surprise.”
In his speech, Mr Sunak said: “Get this wrong and it could make it easier to build chemical or biological weapons.
“Terrorist groups could use AI to spread fear and disruption on an even greater scale,” he said.
“Criminals could exploit AI for cyber attacks, disinformation, fraud or even child sexual abuse.
“And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’.
“Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts, mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.”
Ahead of next week’s summit, Mr Sunak announced the Government would establish the “world’s first” AI safety institute, which he said would “carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of”, “exploring all the risks”.
He said tech firms had already trusted the UK with privileged access to their models, making Britain “so well placed” to create the world’s first AI safety institute.
The Prime Minister said the Government would use next week’s summit to push for a first international statement about the nature of AI risks, and said leaders should follow the example of global collaboration around climate change and establish a global expert panel on the issue.
But Mr Sunak said the Government would not “rush to regulate” AI, although he added that countries should not rely on private firms “marking their own homework”.
“Only governments can properly assess the risks to national security,” he said.
He also defended the decision to invite China to the AI summit.
Responding to questions from journalists, the Prime Minister said: “I can’t say with 100% certainty that China will be there.
“But I do believe that it is absolutely the right thing to have invited them.
“China is unquestionably the world’s second AI power behind the US. That is just a fact when you look at the amount of research investment and activity that is happening there.”
He added: “That doesn’t mean that it is going to be successful, it doesn’t mean you’re going to agree on everything.
“But you should certainly try and engage with them because for a proper solution to AI over time, it is going to require an international solution. Whether China attends is obviously up to them.”