Facial Recognition Technology: Nothing to Hide, Nothing to Fear?

Shutterstock

Dr Alexeis Garcia-Perez, Professor Sally Dibb, and Professor Maureen Meadows, Centre for Business in Society

A watchdog has warned that the Police National Database (PND), containing facial recognition photographs of at least 19 million individuals, risks ‘targeting innocent people’. The PND is a significant step in the Home Office’s efforts to use information technology to move from traditional record keeping to intelligence-based policing. The system has gained relevance in the current climate of rising threats and shrinking budgets for fighting crime.

However, while keeping individuals and cities safe remains the highest priority for the police, doing so while also meeting the standards the public expects of intelligence-based policing brings significant challenges. The PND is a searchable national database that contains not only details of every individual who has been arrested, as the previous Police National Computer (PNC) system did, but also photographs of varying image quality captured by facial imaging technology in public places across the country. Many of these images are gathered by Smart CCTV, a surveillance-based security technology that collects digital images which are then matched against known images held in a database.
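To make that matching step concrete, the sketch below shows one common approach in simplified form: reduce each face image to a numeric embedding, then compare a captured face against the embeddings of known faces and declare a match when similarity exceeds a threshold. The function names, threshold value and data are illustrative assumptions rather than a description of the PND or any particular vendor’s system.

```python
# A minimal sketch (an illustrative assumption, not the PND's or any vendor's
# actual pipeline) of how surveillance-style face matching typically works:
# each face image is reduced to a numeric "embedding", a captured face is
# compared against embeddings of known faces held in a database, and a match
# is declared when similarity exceeds a threshold. All names, values and data
# below are invented for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_database(probe: np.ndarray, database: dict,
                           threshold: float = 0.6) -> list:
    """Return (identity, score) pairs whose similarity to the probe exceeds the threshold."""
    hits = [(person_id, cosine_similarity(probe, emb))
            for person_id, emb in database.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: -h[1])

# Illustrative data: 128-dimensional embeddings, a common size for face models.
rng = np.random.default_rng(0)
known_faces = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# Simulate a CCTV capture that is a noisy view of someone already in the database.
cctv_capture = known_faces["person_42"] + rng.normal(scale=0.1, size=128)

print(match_against_database(cctv_capture, known_faces))  # e.g. [('person_42', 0.99...)]
```

The threshold is the critical design choice: set it too low and innocent people are flagged as matches; set it too high and genuine matches are missed.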

CCTV/Shutterstock

The UK public is aware that photos are taken using the latest facial imaging technology in places such as airports. Generally, people support the use of these technologies, especially following terrorist attacks or other breaches of national security. But this support is premised on the assumption that those who have nothing to hide also have nothing to fear. Concerns about privacy are inevitably heightened when there is reason to believe that the storage and analysis of those photos could have negative consequences for innocent individuals. Facial recognition algorithms are still not mature enough to avoid bias in their interaction with other information systems, such as an automated passport renewal system. Joy Buolamwini has written about how bias in algorithms, like human bias, results in unfairness and can lead to exclusionary experiences and discriminatory practices. Buolamwini, a researcher at MIT, describes how a man of Asian descent was excluded by seemingly neutral machines: the facial recognition software that analysed his photo during an interaction with an automated passport renewal system erroneously registered his eyes as closed. The same kinds of errors could lead to wrongful allegations against innocent people.
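The mechanism behind such errors can be illustrated with synthetic numbers: if a model produces systematically higher similarity scores for non-matching pairs of faces in one demographic group than in another, the same global match threshold will wrongly flag more innocent people in the poorly modelled group. The distributions and threshold in the sketch below are invented assumptions, not measurements of any real system.

```python
# A hedged, synthetic illustration of how algorithmic bias translates into
# unequal false-match rates. Every score here is between *different* people,
# so any score above the threshold is a false match. The numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
threshold = 0.6  # illustrative decision threshold for declaring a "match"

# Non-match similarity scores for two groups; the model handles group B poorly,
# so its scores between different people sit closer to the threshold.
group_a_scores = rng.normal(loc=0.30, scale=0.15, size=100_000)
group_b_scores = rng.normal(loc=0.45, scale=0.15, size=100_000)

false_match_rate_a = np.mean(group_a_scores >= threshold)
false_match_rate_b = np.mean(group_b_scores >= threshold)

print(f"False-match rate, group A: {false_match_rate_a:.2%}")
print(f"False-match rate, group B: {false_match_rate_b:.2%}")
# Group B's rate is many times higher even though the threshold is identical,
# meaning far more innocent people in that group would be wrongly flagged.
```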

Concerns have often been raised about the lack of specific regulation governing the use of these technologies. There is consensus on the need to revise the current legislation to reflect the changing technological landscape and its implications for crime fighting, particularly with regard to the legality of storing the personal data of innocent people in a searchable database. As new legislation comes into force to guarantee the privacy and confidentiality of individuals’ data, the law must be interpreted so that all bodies responsible for using data – including the police – follow strict data protection principles. This would protect not only the data-handling body (for example, the reputation of the police) but also the rights of innocent people.

——————————-

Find out more

Our team of researchers in the Data, Organisations and Society cluster within the Centre for Business in Society explores how every aspect of society – from the workplace to the marketplace, to the way people live, communicate and learn – is being transformed by the acquisition and analysis of data and their transformation into actionable insight. Visit the website to learn more.
