The fact that the police could do something in the name of solving crime doesn’t mean they should. The police could, if we allowed them to, fingerprint every single one of us after a murder. They could check all our bank accounts for stolen money. They could look at all our internet histories for illegal content. Doing these things would help catch criminals and reduce crime. But the right to prevent the police from going too far matters just as much as the duty of the police to investigate crime. More. It matters more.

I accept, obviously, that balancing rights and duties isn’t easy, but what we shouldn’t accept is leaving it up to the police. You may remember the case earlier this year of the preacher Angus Cameron, arrested in Glasgow for breach of the peace. He also had a “non-crime hate incident” logged against him, but he sued and won, and quite right too: it’s not, and shouldn’t be, illegal to publicly quote from the Bible. And in case you think cases like that don’t matter so much now that the hate crime legislation has been such a flop and a non-event, the law is still open to confusion and abuse, like all bad, vague laws. The danger is still there.

Something similar, I think, applies to the use of facial recognition technology, which Police Scotland has taken to with the enthusiasm of a 13-year-old boy given an Xbox for Christmas. What the recent figures show is that searches of the facial matching function on the police database rose from under 1,300 in 2018 to nearly 4,000 in 2022, and the number is still rising: more than 2,000 searches were carried out in the first four months of 2023. In fact, Police Scotland isn’t far behind the Met, so you can see the direction of travel here.

Scotland’s Chief Constable, Jo Farrell, is also pretty clear about her enthusiasm for the technology. It can help take violent criminals off the street quicker, she says, and keep children safe. She’s also compared the use of facial recognition to the treatment of cancer. “If within the NHS we get told that AI will help detect cancer quicker,” she said, “we’ll probably say to ourselves ‘that sounds like a really good thing’.”

But is that a sound argument? When a new drug or medical technique is discovered, it’s subject to extensive and serious testing to ensure it’s safe and we’re aware of all the side-effects. But the introduction of facial recognition technology has been subject to no such testing. The Chief Constable may well be right that the tech will help detect crime in some cases but that, in itself, is not good enough: as with medical breakthroughs, we need to consider the dangers, the downsides, the side-effects.

This is effectively the argument which a group of politicians, including unlikely allies such as David Davis and Joanna Cherry, have been making about facial recognition in the last few days, but before we get into that a bit more, let’s look at how the technology works. It effectively divides into two types: retrospective and live. Retrospective is where people caught on camera at a crime scene are compared to images of people who’ve been in custody. Live facial recognition on the other hand scans people passing through an area in real time allowing the police to move in quickly to make an arrest if there’s a match with someone who the police think needs to be arrested or controlled.


The figures I quoted earlier on Police Scotland are for the retrospective type, but Jo Farrell has made it clear she’s keen on using the live type too, which is where it gets particularly troubling. Liam McArthur of the Lib Dems has been especially good on this. He’s asked to see the evidence, if any, for the claims that the tech will get criminals off the street quicker; he’s also expressed his concern that decisions that dramatically reframe the relationship between the police and the public are being taken without debate and treated as an inevitable consequence of the march of technology.

That last point, about the “march of technology”, is particularly important because we’ve been here before, with the internet and social media. For a long time we accepted their development as inevitable, or positive, or both, and it was only long afterwards, once the negative effects on young people, on mental health, on the spread of misinformation and all the rest of it had become clear, that we started to debate what we should do. The same applies to tech used by the police: we’re told it’s inevitable or positive or both, there’s no public debate, and by the time the negative effects become obvious, it’s too late.

One of the potential negative effects – inaccuracy – was raised by Joanna Cherry, David Davis and the others. The use of the technology by South Wales Police, for example, has resulted in 72 arrests but 2,833 false alerts. Some of the research done so far also indicates that the algorithms used in facial recognition are much more likely to misidentify black men and women than they are white men and women, and the last thing the police in this country need is technology with an apparent built-in racial bias.
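To see what those South Wales figures imply, it is worth doing the arithmetic. A back-of-envelope sketch (assuming, which the published numbers don’t strictly confirm, that each arrest and each false alert corresponds to one alert from the system):

```python
# South Wales Police figures quoted above: 72 arrests, 2,833 false alerts.
# Assumption for illustration only: every arrest and every false alert
# counts as exactly one alert from the system.

arrests = 72
false_alerts = 2833

total_alerts = arrests + false_alerts      # 2,905 alerts in all
hit_rate = arrests / total_alerts          # share of alerts leading to an arrest
false_per_arrest = false_alerts / arrests  # false alerts for every arrest

print(f"Alerts leading to an arrest: {hit_rate:.1%}")
print(f"False alerts per arrest: {false_per_arrest:.0f}")
```

On those assumptions, only around 2.5 per cent of alerts led to an arrest – roughly 39 false alerts for every person actually detained.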

No doubt, in time, the technology will get better, but will it be in time for people accused of crimes on the basis of the cameras? One of the inherent problems is that the justice system has a bias in favour of technological and scientific evidence, based on the premise that it’s always more dependable and accurate than unreliable and untrustworthy humans. To some extent that’s understandable: you can’t bribe a computer (yet), and techniques like fingerprinting and DNA conform to international scientific standards and are known to be reliable. But technology is not infallible.

(Image: Surveillance cameras)

The use of facial recognition also raises the broader issue of how we do policing in the UK. As you know, it’s supposed to be based on the idea of a licence from the public, permission or consent, to act in certain ways – arrest people, search them, etc – to prevent crime, injury or violence. However, true consent is based on knowledge and debate, which then forms the legal basis for the limitation, in some circumstances, of our civil liberties. The problem with facial recognition is that the knowledge is low and the public debate has been almost non-existent.

Instead, in its place, we have the remarks of the Chief Constable, which are a classic of the genre. To be fair to her, she’s doing her job, but incursions into civil liberties are often based on a little gentle stirring of our fears and so it is with the Chief Constable. The use of facial recognition, she says, will get dangerous criminals off the street and help keep our children safe, and so we feel the fear and allow the police, and the government, to do what they say they must do to make the fear go away.

The Chief Constable is clearly aware of these concerns – the goal, she says, is for people to have confidence that the technology can be used appropriately, without bias and for the greater good of keeping people safe. And that’s fine. But what the police must do is look at the evidence so far, and work out what it’s telling us. And it’s this: we do not know enough about this technology yet, how biased it could be, how likely it is to make a mistake, and whether it’s proportionate. So the message for the police from the public is clear: do not push ahead regardless. Because you have no right to. You do not have our consent.