The UK’s national security and law enforcement agencies help to keep our people and institutions safe from harm.
It’s always been a difficult job, but in a world where technology is changing rapidly, it is a more challenging task than ever. To keep pace, national security and law enforcement agencies are increasingly harnessing the power of artificial intelligence to create automated analytics, which analyse data and provide insight to help reduce threats to public safety.
Those automated analytics rely partially on data gathered from electronic monitoring and surveillance, which involves a degree of intrusion into some people’s private lives.
Balancing the goals of law enforcement and national security with individuals’ right to privacy is foundational to a liberal, democratic society. In the UK, that balance is regulated by the Investigatory Powers Act 2016, which requires agencies to consider two criteria: necessity and proportionality.
The principle of proportionality is a familiar one: we use it every day to guide our behaviour, balancing the risk and reward of our actions. It’s an instinct connected to what we judge to be fair and just.
However, the stakes in the national security context are high – literally life and death. A question that national security and law enforcement agencies face ever more regularly is the extent to which automated analytics and AI change the nature of proportionality and how we assess it.
On one hand, national security and law enforcement agencies have an obligation to keep citizens safe in a challenging operational environment.
On the other hand, as digital information on individuals becomes more widely available, both from what they choose to share online and from the personal data collected by services, institutions and surveillance, the capacity to intrude on individuals’ lives is growing.
These high stakes call for correspondingly high standards of clarity and accountability.
I am the co-author of a new report from The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, which examines how to balance the needs of national security with individuals’ human rights.
The report offers a new structured framework to help better understand and assess the level of privacy intrusion when AI analytics are used.
To build that framework, we held focus groups and conducted interviews to gather opinions and feedback from stakeholders across the UK government, national security and law enforcement, and legal experts outside government.
The framework focuses on six key factors that will help individuals and organisations assess the degree of privacy intrusion posed by automated analytics. These are: the datasets; the results; the role of human inspection and decision-making; tool design; data management; and urgency, timeliness and resources.
Our hope is that the framework could be integrated into existing authorisation and compliance processes, offering a further safeguard against overreach into our private lives. Careful consideration of our six factors will help to provide assurance that automated analytics are performing in accordance with the letter and the spirit of existing regulation.
Artificial intelligence is set to become an increasingly integral part of our lives, our jobs and our institutions. It’s vital that reports like these are part of the ongoing dialogue about how to use AI as a tool that works for us, that helps keep us safe, and also allows us control over our privacy.
Professor Dame Muffy Calder is head of Glasgow University’s College of Science and Engineering and professor of formal methods at the School of Computing Science.