
    Privacy & Artificial Intelligence

    Module 4: Data Privacy and Data Protection

    The privacy risks of AI

    As the sophistication and use of artificial intelligence (AI) have increased rapidly in recent years, concerns have grown both about the use of personal information in the development of such tools and about the capacity of such tools to facilitate privacy violations. In particular, the public launch of ChatGPT, alongside similar models, has raised alarm bells on several fronts.

    • First, because such systems rely on vast quantities of information to train their algorithms and continuously improve performance, particularly information scraped from the internet, critics have highlighted that even publicly available information, such as posts on social media, was never shared with the intent, and hence the consent, that it be used by large language models.
    • Second, the collection and storage of such large quantities of information, including personal information, raise concerns about storage security and the implications if the data were accessed by unauthorised parties through hacking or other security breaches. Facial recognition technology, which often relies on sophisticated algorithms to process large quantities of data, is increasingly used across the continent by governments, ostensibly for law enforcement and security purposes, but it also has the potential to enable real-time, intrusive tracking and surveillance that threatens several human rights, including the rights to privacy, freedom of movement, and freedom of association.
    • Third, AI tools such as these can rapidly generate images and content about a person based on their training data that may bear little relation to the truth, raising concerns about mis- and disinformation and the portrayal of personal information in the online ecosystem. AI’s ability to rapidly analyse and make sense of large quantities of data also makes it possible to infer personal information that a person never provided themselves, beyond the scope of the consent requirements set out in data protection laws.

    Developing international standards

    As a result of these risks, AI has recently garnered increased attention from international and regional human rights bodies seeking to provide guidance and standards to protect the affected rights and ensure the responsible development of these new technologies. For example:

    • In 2021, the UN Special Rapporteur on the Right to Privacy published a report on artificial intelligence and privacy, and children’s privacy, which provides guidance on data protection standards for AI at the domestic level and calls on states and companies to develop AI solutions ethically and responsibly within a human rights framework.(1)
    • Also in 2021, the UN High Commissioner for Human Rights released a report on the right to privacy in the digital age that analysed how the widespread use of AI affects the right to privacy and other fundamental rights, and issued a set of recommendations for states and businesses on designing and implementing rights safeguards.(2) The report notes that AI systems “[incentivise] widespread data collection, storage, and processing,” contrary to the principle of data minimisation, and highlights concerns in the sectors of law enforcement, public services, employment, and online information management systems.
    • Building on this, in 2023 the new Special Rapporteur submitted a report to the United Nations General Assembly (UNGA) highlighting the need for transparency and explainability in the use of AI so that data subjects can exercise their rights over the use of their personal information in such systems.(3)

    Notably, the African Commission on Human and Peoples’ Rights (ACHPR) has also taken steps to interrogate the risks of AI by passing Resolution ACHPR/Res. 473 (EXT.OS/XXXI) 2021: on the need to undertake a Study on human and peoples’ rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa.(4) In it, the ACHPR—

    • acknowledges the myriad risks for human rights not limited to privacy;
    • calls on states to put in place mechanisms to ensure the rights-respecting development and use of such technologies in Africa, including by working towards a comprehensive legal and ethical governance framework for AI; and
    • commits to undertake a study to develop guidelines on AI.

    The study officially began in June 2023.(5)



    1. UNSR on Privacy, ‘Artificial intelligence and privacy, and children’s privacy’ (2021).
    2. Report of the UN High Commissioner for Human Rights, ‘The right to privacy in the digital age’ (2021).
    3. UNSR on Privacy, ‘Right to privacy’ (2023).
    4. ACHPR, ‘Resolution on the need to undertake a Study on human and peoples’ rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa’ ACHPR/Res. 473 (EXT.OS/XXXI) (2021).
    5. ACHPR, ‘Press Release: Inception Workshop and Experts’ Consultation on the Study on human and peoples’ rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa, 08–09 June 2023, Nairobi, Kenya’ (2023).