Collection of Biometric Data and Facial Recognition
Module 4: Privacy and Security Online
Collection of biometric data for the National Integrated Identity Management System (NIIMS) in Kenya
Source: Privacy International, ‘Data Protection Impact Assessments and ID systems: the 2021 Kenyan ruling on Huduma Namba’, accessible at https://privacyinternational.org/news-analysis/4778/data-protection-impact-assessments-and-id-systems-2021-kenyan-ruling-huduma
The collection and retention of biometric data present a unique set of privacy concerns. As biometric data can remain relevant for the course of a person’s life, the security of this data is paramount. Biometric data breaches can result in serious harm to people’s rights and interests, including identity theft or fraud, financial loss or other damage.
In January 2020, the High Court of Kenya handed down judgment on the validity of the National Integrated Identity Management System (NIIMS), also known as the Huduma Namba, a national identity registration programme which includes the collection of biometric information. The court ruled that the rollout of NIIMS should not continue without further legislation to guarantee the security of biometric data and to ensure that the system is not exclusionary.
In a subsequent ruling in October 2021, the High Court again halted the NIIMS rollout, albeit temporarily, when it ordered that the programme must be subject to a data protection impact assessment in terms of Kenya’s Data Protection Act.
Facial recognition is a form of biometric system that attracts particular concern for its use in surveillance.(1) Facial recognition technology refers to a wide range of software that can be linked to camera networks; the software analyses live or recorded images and footage of people from a camera network and matches these against images in a pre-existing database in order to identify specific people from the footage.(2) As noted by Privacy International, facial recognition cameras are far more intrusive than regular CCTV: they scan distinct, specific features of your face, such as face shape, to create a detailed map of it – “which means that being captured by these cameras is like being fingerprinted, without your knowledge or consent.”(3)
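The matching step described above – comparing a captured face against a pre-existing database – can be illustrated with a minimal, hypothetical sketch. Real systems derive high-dimensional feature vectors from learned models; the small vectors, names, and threshold below are invented purely for illustration.

```python
import math

# Hypothetical face "embeddings": numeric vectors a system might derive
# from distinct facial features (face shape, landmark distances).
DATABASE = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
    "person_c": [0.4, 0.4, 0.9],
}

def cosine_similarity(u, v):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def identify(probe, database, threshold=0.95):
    """One-to-many search: return the best database match above the
    threshold, or None when no stored face is similar enough."""
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# A probe captured from camera footage, reduced to the same features.
probe = [0.88, 0.12, 0.31]
print(identify(probe, DATABASE))  # prints "person_a"
```

The sketch also shows why the technique is passive: the probe can come from any camera feed, with no participation by the person being scanned.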
Facial recognition in practice in the United Kingdom
Source: Privacy International, ‘Catt v The United Kingdom’, 2016, accessible at https://privacyinternational.org/legal-action/catt-v-united-kingdom
The growing use of facial recognition by police in the United Kingdom has attracted several notable legal challenges.
In 2019, in Catt v the United Kingdom, the European Court of Human Rights found that the UK government had violated the right to privacy in the course of monitoring and profiling a peace activist. In a third-party intervention, Privacy International drew the court’s attention to the potential of digital technology such as facial recognition to increase any such violation of the right to privacy. The Court noted that the potential for such emerging technologies to violate human rights requires examination “where the powers vested in the state are obscure, creating a risk of arbitrariness especially where the technology available is continually becoming more sophisticated”.
In Bridges v CC South Wales & others, British civil liberties organisation Liberty acted in a legal challenge against the use of facial recognition technology by police in South Wales. In 2020, the UK Court of Appeal overturned an earlier ruling, finding that the police’s use of facial recognition technology breached privacy rights, data protection laws, and equality laws, and that there were “fundamental deficiencies” in the legal framework governing its use.(4)
In this regard, unlike many other biometric systems, facial recognition can be used for general surveillance in combination with public video cameras, and it can be used in a passive way that doesn’t require the knowledge, consent, or participation of the subject.(5) As noted by the American Civil Liberties Union, this creates the risk that the technology will be used for general surveillance of a population that is not suspected of any specific wrongdoing. For example, most motor vehicle agencies have high-quality photographs of large numbers of people, which can be a natural source for facial recognition programmes and could easily be combined with public or private surveillance camera networks to create a comprehensive system of identification and tracking. Law enforcement agencies also regularly use photographs scraped from social media sites.
Interpol has described computerised facial recognition as a relatively new technology, introduced by law enforcement agencies around the world in order to identify persons of interest, including criminals, fugitives and missing persons.(6) The Interpol Facial Recognition System contains facial images received from more than 160 countries, and coupled with an automatic biometric software application, the system is capable of identifying or verifying a person by comparing and analysing patterns, shapes and proportions of their facial features and contours.(7) Unlike fingerprints and DNA, which do not change during a person’s life, facial recognition has to take into account different factors, such as ageing, plastic surgery, cosmetics, the effects of drug abuse or smoking, and the pose of the subject.(8)
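Interpol's distinction between identifying and verifying a person can be sketched in code. Verification is a one-to-one check of a live capture against one enrolled template, and the acceptance threshold is where the variability the paragraph mentions (ageing, pose, cosmetics) must be accommodated. All vectors and the threshold below are hypothetical, for illustration only.

```python
import math

def cosine_similarity(u, v):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def verify(live_capture, stored_template, threshold=0.90):
    """One-to-one check: does the live capture match the claimed identity?
    A lower threshold tolerates more variation (ageing, pose, cosmetics)
    at the cost of more false accepts; a higher one does the opposite."""
    return cosine_similarity(live_capture, stored_template) >= threshold

stored = [0.7, 0.3, 0.6]       # enrolled template for a claimed identity
live_ok = [0.68, 0.33, 0.58]   # same person, features slightly changed
live_bad = [0.1, 0.9, 0.2]     # a different person

print(verify(live_ok, stored))   # prints True
print(verify(live_bad, stored))  # prints False
```

Because the threshold is a tunable trade-off, the same system can be made more permissive or more strict, which is one reason accuracy claims about such systems are hard to evaluate in the abstract.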
However, facial recognition technology has also been linked to inaccuracies and biases which raise serious discrimination concerns. A study commissioned by a public agency in the United States found “empirical evidence” that most widely used facial recognition algorithms exhibit “demographic differentials that can worsen their accuracy based on a person’s age, gender, or race”.(9) Some of the specific findings included the following:
- Facial recognition systems misidentified people of colour more often than white people. Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search.
- The faces of African American women were falsely identified more often in the kinds of searches used by police investigators, where an image is compared to thousands or millions of others in hopes of identifying a suspect.
- Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups.
Privacy International notes that the use of facial recognition technology impacts the exercise of at least the following rights:(10)
- Privacy: According to Privacy International, “[t]he use of facial recognition in public spaces makes a mockery of our privacy rights”. It is a disproportionate crime-fighting technique, as it scans the face of every person who passes by the camera, whether or not they are suspected of any wrongdoing. The biometric data that it collects can be as uniquely identifying as DNA or a fingerprint, and is typically done without consent or knowledge of the data subject.
- Freedom of expression: Being watched and identified in public spaces is likely to lead us to change our behaviour, limiting where we go, what we do and with whom we engage. For example, persons might be unwilling to participate in a particular protest action if facial recognition is being used in the area.
- Equality and non-discrimination: It has been found that facial recognition software is more likely to misidentify women and black people. There are also concerns that the police use facial recognition to target particular communities.
The roll-out of facial recognition technology is often done without any empowering legal framework to authorise it and is arguably a disproportionate limitation on the right to privacy and other associated rights. In this regard, potential litigation to challenge the use of facial recognition technology may seek to show that it does not meet the threshold of the three-part test for a justifiable limitation, even when used for security purposes.