Face the facts

Live facial recognition technology could compound racism in society, writes David Smith.

If you have one of the latest smartphones, you have probably used facial recognition to unlock your device. If you have flown recently, you have probably experienced it at airport security, where machines check your face against your passport.

But what you might not have noticed so readily is that your face may well have been scanned in public by live facial recognition cameras, which are increasingly being used by the police to identify ‘wanted’ people. This growth is happening without proper public or parliamentary approval. Not only is this detrimental to the civil liberties and privacy rights of the population as a whole, but it opens the door to discrimination and institutional racism. Research shows that this may be particularly dangerous for young black men.

Live facial recognition cameras work by continuously scanning for faces. Once a face has been detected, its features are measured to create a unique ‘numeric representation’ or ‘facial map’, which is then compared against a watchlist to identify a match.
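To make the matching step concrete, here is a minimal, hypothetical sketch in Python of how a ‘facial map’ might be checked against a watchlist. It assumes each face has already been reduced to a fixed-length numeric vector; the vector size, similarity measure and threshold below are illustrative assumptions, not details of any police system.

```python
import numpy as np

# Illustrative assumption: a 'facial map' is a 128-number vector.
EMBEDDING_SIZE = 128
MATCH_THRESHOLD = 0.6  # hypothetical; real systems tune this carefully

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two facial maps (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face, watchlist):
    """Return the watchlist identity most similar to the scanned face,
    but only if the similarity clears the match threshold."""
    best_id, best_score = None, MATCH_THRESHOLD
    for identity, stored_map in watchlist.items():
        score = cosine_similarity(face, stored_map)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Demo with random vectors standing in for real facial maps.
rng = np.random.default_rng(0)
watchlist = {
    "person_a": rng.normal(size=EMBEDDING_SIZE),
    "person_b": rng.normal(size=EMBEDDING_SIZE),
}
scanned = watchlist["person_a"] + 0.1 * rng.normal(size=EMBEDDING_SIZE)
print(check_against_watchlist(scanned, watchlist))  # 'person_a'
```

Note that nothing in this logic decides who belongs on the watchlist or where the threshold is set; those choices are made upstream of the technology.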

South Wales Police and the Metropolitan Police have been trialling live facial recognition since 2016 in shopping centres, at music events and on high streets, leading to several arrests. Private companies are also using the technology in public places, such as Sheffield’s Meadowhall shopping centre and the Trafford Centre in Manchester.

But the use of this technology has been widely criticised by academics, lawyers, campaigners and politicians. Last autumn, a cross-party group of MPs, including then shadow home secretary Diane Abbott and former Brexit secretary David Davis, said the use of facial recognition surveillance was incompatible with human rights and should be stopped immediately. And it is not just a UK issue: in February it was reported that the EU was considering a five-year ban on facial surveillance before backing away from the idea.

Yet despite all of the concern, earlier this year, the Met announced it would be increasing its use of live facial recognition to help ‘tackle serious crimes’ including ‘violence, gun and knife crime’ and ‘child sexual exploitation’.

The announcement has provoked widespread criticism from civil liberties groups such as Big Brother Watch, StopWatch and Liberty, which argue that the technology may have an inbuilt bias against people of colour. In response, Met Police commissioner Cressida Dick claimed that, unlike some other artificial intelligence systems, the advanced facial recognition software being deployed by the police had no inbuilt ‘ethnic bias’ and did not discriminate against people of colour.

However, it is important to remember that a technological error is not the only form that discrimination can take.

To understand the dangers of live facial recognition technology we have to ask the critical questions: when is data being stored, what does it mean to be ‘wanted’ and, crucially, who is being targeted?

One controversial database used by the Met for facial recognition technology is a risk-assessment tool known as the gangs matrix: a system used to monitor suspected gang members, and young people considered to be affiliated to gangs or ‘at risk’ of gang involvement. Amnesty International revealed in 2018 that 78 per cent of people listed on the gangs matrix were black – despite the fact that only 27 per cent of serious youth violence is committed by black people. This disparity is not surprising given inaccurate media and public portrayals of black youth culture as dark and criminal, and the Met’s loose and racially loaded use of the ‘gang’ label.

What we see, therefore, is live facial recognition technology using flawed datasets that target a particular minority, leading to the continued criminalisation of people of colour.

As a police monitoring project, we work with many well-meaning, thoughtful police officers. Many of them pride themselves on being ‘colour blind’: driven by facts, not emotions; by actions, not complexion; by data, not bias. But as the case of the gangs matrix shows, simply sticking to the ‘data’ does not save you from racism.

Anti-racism activist Joseph Barndt popularised the theory that racism = prejudice + power. With facial recognition, the danger of discrimination comes from the prejudice in police data combining with the power of this new technology. As Barndt argues, this is all it takes for institutional racism to take hold.

Live facial recognition technology is therefore a real risk for young people in our communities. The time for scrutiny, action and accountability is now.

David Smith is head of research at Hackney Account, a youth-led police monitoring group.
