
    Collection of Biometric Data and Facial Recognition

    Module 4: Privacy and Security Online

    Collection of biometric data for the National Integrated Identity Management System (NIIMS) in Kenya

    Source: Privacy International, ‘Why the Huduma Namba ruling matters for the future of digital ID, and not just in Kenya’, accessible at https://privacyinternational.org/news-analysis/3350/why-huduma-namba-ruling-matters-future-digital-id-and-not-just-kenya

    The collection and retention of biometric data presents a unique set of concerns.  As biometric data can remain relevant for the course of a person’s life, the security of this data is paramount.  Biometric data breaches can seriously affect individuals in a number of ways, whether through identity theft, fraud, financial loss or other damage.

    On 30 January 2020, Kenya’s high court handed down judgment on the validity of NIIMS, which includes the collection of biometric information.  The court ruled that the implementation of NIIMS should not continue without further legislation to guarantee the security of biometric data and to ensure that the system is not exclusionary.

    As noted by Privacy International, “[i]t is essential that the government meaningfully addresses the issues raised by the Court, and that the solutions presented genuinely address the Court’s concerns.”

    Facial recognition is one form of biometric system that is gaining increased prevalence and being used for general surveillance.(1) Facial recognition technology uses cameras loaded with software to match live footage of people in public with images on a ‘watch list’.(2) As noted by Privacy International, facial recognition cameras are far more intrusive than regular CCTV: they scan distinct, specific features of your face, such as face shape, to create a detailed map of it – “which means that being captured by these cameras is like being fingerprinted, without your knowledge or consent”.(3)

    Facial recognition in practice in the United Kingdom

    Source: Privacy International, ‘Catt v The United Kingdom’, 2016, accessible at https://privacyinternational.org/legal-action/catt-v-united-kingdom

    “Facial recognition technology has been trialled by UK police forces. A trial was conducted by Leicestershire Police at a music festival in 2015.  In August 2016, the Metropolitan Police Service used automated facial recognition technology to monitor and identify people at the Notting Hill Carnival.  This technology, which is classed by police forces as “overt surveillance”, works by scanning the faces of those passing by overt cameras and then comparing the images against a database of images populated by the police force in question.  At the Notting Hill Carnival, the database was populated with images of individuals who were forbidden from attending Carnival, as well as individuals who the police believed may attend Carnival to commit offences.  The combination of image databases and facial recognition technology could be used to track people’s movements by combining widespread CCTV and access to a huge searchable database of facial images.”

    In this regard, unlike many other biometric systems, facial recognition can be used for general surveillance in combination with public video cameras, and it can be used in a passive way that does not require the knowledge, consent, or participation of the subject.(4) As noted by the American Civil Liberties Union, the biggest danger is that this technology will be used for general, suspicion-less surveillance systems. For example, most motor vehicle agencies possess high-quality photographs of most citizens, which could be a natural source for facial recognition programmes and could easily be combined with public surveillance or other cameras in the construction of a comprehensive system of identification and tracking.

    Interpol has noted that computerised facial recognition is a relatively new technology, being introduced by law enforcement agencies around the world in order to identify persons of interest, including criminals, fugitives and missing persons.(5) The Interpol Facial Recognition System contains facial images received from more than 160 countries, and coupled with an automatic biometric software application, the system is capable of identifying or verifying a person by comparing and analysing patterns, shapes and proportions of their facial features and contours.(6) Unlike fingerprints and DNA, which do not change during a person’s life, facial recognition has to take into account different factors, such as ageing, plastic surgery, cosmetics, the effects of drug abuse or smoking, and the pose of the subject.(7)
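    The matching step described above — scanning a face and comparing patterns and proportions against a database — can be illustrated with a highly simplified sketch.  Real systems use neural networks to turn a face image into a numerical “embedding” and compare embeddings; here the embeddings, the watchlist names and the similarity threshold are all invented for illustration.

```python
import math

# Hypothetical face "embeddings": in real systems these vectors are
# produced by a neural network from a face image; these are made up.
watchlist = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

def cosine_similarity(u, v):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def match(live_embedding, threshold=0.95):
    """Return the best watchlist match at or above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, reference in watchlist.items():
        score = cosine_similarity(live_embedding, reference)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

print(match([0.88, 0.12, 0.28]))  # very similar to person_a's embedding
print(match([0.5, 0.5, 0.5]))     # no confident match, so returns None
```

    The threshold is the crucial design choice: set it too low and innocent passers-by are wrongly flagged (false positives); set it too high and genuine matches are missed.  This trade-off underlies the accuracy findings discussed below.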

    However, the use of facial recognition technology raises serious concerns. According to a report published in the Washington Post,(8) a recent study in the US, conducted by the National Institute of Standards and Technology, found “empirical evidence” that most facial recognition algorithms exhibit “demographic differentials that can worsen their accuracy based on a person’s age, gender or race”. Some of the specific findings included the following:(9)

    • Facial-recognition systems misidentified people of colour more often than white people.
    • Middle-aged white men generally benefited from the highest accuracy rates.
    • Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search.
    • Native Americans had the highest false-positive rate of all ethnicities.
    • The faces of African American women were falsely identified more often in the kinds of searches used by police investigators, where an image is compared to thousands or millions of others in hopes of identifying a suspect.
    • Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups.
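    The “false-positive rate” in these findings is the proportion of searches in which the system wrongly flags a person who is not in fact the one sought.  A toy calculation (the counts are entirely invented for illustration) shows how the same system can produce the kind of hundred-fold disparity between groups reported above:

```python
# Invented counts for illustration only: for each group, the number of
# searches involving people NOT on the watchlist, and how many of those
# the system nevertheless wrongly flagged as a match.
searches = {
    "group_x": {"non_match_searches": 10_000, "false_positives": 5},
    "group_y": {"non_match_searches": 10_000, "false_positives": 500},
}

for group, counts in searches.items():
    fpr = counts["false_positives"] / counts["non_match_searches"]
    print(f"{group}: false-positive rate = {fpr:.2%}")

ratio = (searches["group_y"]["false_positives"]
         / searches["group_x"]["false_positives"])
print(f"group_y is wrongly flagged {ratio:.0f}x as often as group_x")
```

    Even a false-positive rate that sounds small in the abstract translates into many wrongful identifications when a system scans thousands of faces a day, and the burden of those errors falls unevenly across demographic groups.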

    Privacy International notes that the use of facial recognition technology impacts on the exercise of at least the following rights:(10)

    • Privacy: According to Privacy International, “[t]he use of facial recognition in public spaces makes a mockery of our privacy rights”.  It is a disproportionate crime-fighting technique, as it scans the face of every person who passes by the camera, whether or not they are suspected of any wrongdoing.  The biometric data that it collects can be as uniquely identifying as DNA or a fingerprint, and is typically done without consent or knowledge of the data subject.
    • Freedom of expression: Being watched and identified in public spaces is likely to lead us to change our behaviour, limiting where we go, what we do and with whom we engage.  For example, persons might be unwilling to participate in a particular protest action if facial recognition is being used in the area.
    • Equality and non-discrimination: It has been found that facial recognition software is more likely to misidentify women and black people.  There are also concerns that the police use facial recognition to target particular communities.

    The roll-out of facial recognition technology is often done without any empowering legal framework to authorise it, and is arguably a disproportionate limitation on the right to privacy and other associated rights.  In this regard, if challenged, there is a strong case to be made that the use of facial recognition technology, even for security purposes, would not meet the threshold of the three-part test for a justifiable limitation.

    Footnotes

    1. American Civil Liberties Union, ‘Face recognition technology’, accessible at https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology.
    2. Id.
    3. Id.
    4. Interpol, ‘Facial recognition’, accessible at https://www.interpol.int/en/How-we-work/Forensics/Facial-Recognition.
    5. Id.
    6. Id.
    7. Washington Post, ‘Federal study confirms racial bias of many facial recognition systems, casts doubts on their expanding use’, 20 December 2019, accessible at https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/.
    8. Id.