Fourteen campaign groups have published an open letter calling on the Metropolitan Police Commissioner to stop using “useless and highly invasive” facial recognition technologies.
On Sir Mark Rowley’s first day as Metropolitan Police Commissioner, 14 campaign groups wrote to him requesting an end to the use of facial recognition technologies by police forces. They called the technology “privacy-eroding, inaccurate and wasteful”.
Big Brother Watch, Liberty and Black Lives Matter UK were among the organisations that signed the letter.
Facial recognition technologies have increasingly been used by police authorities around the world to fight crime. The Metropolitan Police and South Wales Police are among the forces known to use them. However, this use of facial-recognition technologies has prompted civil rights challenges and condemnation from human rights groups, who argue that the technology is frequently inaccurate and biased.
“During observations at deployments, Big Brother Watch has witnessed multiple false positive matches, which have led to innocent individuals being forced to prove their identity to police officers,” the campaigners’ letter said. “If the use of this technology becomes more widespread, these incidents will become commonplace, resulting in further injustices and increased public mistrust of the Met.”
The groups claimed that 87 per cent of the alerts generated by current facial recognition technologies result in misidentifications. Cases cited include the misidentification of a 14-year-old black schoolboy in uniform and of a French exchange student who had been in the country for only a few days.
The technology is less accurate for women and people of colour, the groups said, despite being deployed in areas with a higher density of ethnic minorities.
“Public trust in the police has collapsed in the capital and is being further damaged by the Met’s repeated use of Orwellian facial recognition technology which is both useless and highly invasive,” said Silkie Carlo, director of Big Brother Watch.
“These Minority Report-style cameras have done absolutely nothing to reduce high rates of violent crime but risk putting our police on a par with those in surveillance states like China and Russia. They have no place in a democracy.”
Beyond the technology’s failures, campaigners have also criticised the breach of citizens’ privacy that comes with the use of such technologies.
Earlier this year, an independent review of UK legislation, commissioned by the Ada Lovelace Institute, called for the government to pass laws governing biometric technologies and ensuring their ethical use.
Last year, the UK’s Information Commissioner’s Office (ICO) expressed similar concerns regarding the reckless and inappropriate use of facial recognition in public spaces, ordering facial-recognition company Clearview AI to stop processing UK residents’ data and demanding that the company delete all the data it held relating to UK citizens. Clearview AI’s business model was based on scraping billions of publicly available images from social media to train its facial-recognition software, which was then sold to law enforcement agencies to help identify people from closed-circuit television footage.
“We all have the right to go about our lives without being surveilled by the police,” said Martha Spurrier, director of Liberty. “But the Metropolitan Police’s use of live facial recognition is violating our rights and threatening our liberties.”
A Met spokeswoman responded by saying: “Live Facial Recognition (LFR) is a technology that has been helping the Met to locate dangerous individuals and those who pose a serious risk to our communities.
“The Met has primarily focused the use of LFR on the most serious crimes; locating people wanted for violent offences, including knife and gun crime, or those with outstanding warrants who are proving hard to find.
“Operational deployments of LFR technology have been in support of longer-term violence reduction initiatives and have resulted in a number of arrests for serious offences including conspiracy to supply Class A drugs, assault on emergency service workers, possession with intent to supply Class A & B drugs, grievous bodily harm and being unlawfully at large having escaped from prison.
“False alert rates across our operational deployments are between 0% and 0.08%.”
In 2021, the Council of Europe, a 47-country human rights and democracy organisation, published a set of guidelines [PDF] for governments, lawmakers, providers and businesses, laying out its proposals for the use of facial recognition technologies.