
    Accountability of Private Platforms for Content Moderation

    Module 2: Restricting Access and Content

    Overview of Content Moderation

    As internet and social media companies have become increasingly influential in the digital age, questions have arisen about the accountability mechanisms governing these actors, who hold extraordinary power over the ability of the general public to exercise their rights to freedom of expression and access to information. The content moderation policies of these tech giants effectively block and filter not only the content that individuals can post, but also the content that other users can access. As a result, attention is mounting on how these companies make decisions about removing or deprioritising content, and on the transparency and accountability mechanisms in place to ensure that they comply with human rights law and standards.

    Critics argue that users in African countries, in particular, lack both access to and influence over these large multinational companies, making it difficult to understand how content moderation may be affecting their freedom of expression and to obtain redress when content is removed (or when illegal content remains online).

    Various cases have recently reached the courts in this regard:

    • In Germany, the Federal Court of Justice ruled that Facebook’s terms of service on deleting user posts and blocking accounts for violations of its Community Standards were invalid because they did not provide for informing users of decisions to remove their content, granting them an opportunity to respond, and then issuing a new decision.
    • In France, the Paris Court of Appeal ordered Twitter to provide information on the measures the company was taking to fight online hate speech, in a case brought by organisations whose research had found that Twitter removed fewer than 12% of the tweets reported to it.
    • In another case involving Facebook, the Republic of The Gambia initiated proceedings in the United States seeking the release of public and private communications and documents about content that Facebook had deleted following the genocide in Myanmar. The Gambia had previously initiated proceedings against Myanmar in the International Court of Justice, claiming a breach of its obligations under international law for its alleged crime of genocide against the Rohingyas. The Gambia thus sought information from Facebook on removed content that may have contributed to or exacerbated the violence against the Rohingyas, given Facebook’s dominant position as a near-exclusive news provider in that country at the time. The US District Court held that Facebook must disclose the requested materials.

    These cases show the various ways in which private platforms are being held accountable for the content moderation decisions they make that have very real impacts on users’ rights to freedom of expression, as well as other rights.

    Non-Consensual Dissemination of Intimate Images

    In recent years, the issue of the Non-Consensual Sharing of Intimate Images (NCII) has become increasingly prominent as a result of the unfortunate proliferation of this form of online gender-based violence. In many cases, content is shared in order to blackmail, threaten, or harass internet users, predominantly women and gender minorities. It is vital that the rights of victims/survivors to privacy and reputation are protected by enabling such content to be rapidly and permanently removed. While this is one of the narrow circumstances in which the removal of content is not only justified but absolutely critical to protecting human rights, it remains important to maintain appropriate checks and balances over the tech companies that make decisions on removing or blocking content.

    Case law on NCII

    A body of case law is gradually building up that provides guidance on how courts are interpreting this issue around the world:

    • In a case in India in 2021, the High Court of Delhi upheld an actor’s right to privacy under the Indian Constitution and directed internet intermediaries, as well as YouTube, the host of the content, to take down explicit videos of the actor that had been uploaded to multiple video-sharing platforms without her consent.
    • In another case in India, the High Court of Delhi ordered the police to remove content that was unlawfully published on a pornographic website and went further to order search engines to de-index that content from their search results. In its judgment, the Court stressed the need for “immediate and efficacious” remedies for victims of cases of NCII and set out the type of directions that a court can issue in such cases.
    • The Constitutional Court of Ecuador dealt with a case in 2021 in which intimate pictures of the victim/survivor had been sent to her parents without her consent. The Court found in favour of the right to privacy and held, in the words of the Columbia Global Freedom of Expression database, that “the storage and sharing of sexual photos without the consent of the victim were a violation of her constitutional rights to personal data protection, reputation, and intimacy” and that “intimate images were personal data sent exclusively to the defendant’s partner and required previous consent to be processed by anyone else.”

    Some countries are also incorporating provisions criminalising NCII into domestic law. For example, South Africa’s Cybercrimes Act, passed into law in 2020, criminalises the disclosure of data messages that contain intimate images of a person without that person’s consent. While such provisions are welcomed for the recourse they provide to victims of online gender-based violence, concerns have also been raised about the potential for infringements on the right to freedom of expression if such provisions are vague, broad, or open to abuse. It is therefore crucial that protections for privacy are carefully balanced against potential intrusions into freedom of expression in the online space. Litigation by civil society can play an important role in appropriately defining this balance and ensuring the advancement of digital rights for all.


    The power and opacity of the tech giants raise real questions about the legitimacy of content moderation decisions that are made on a daily basis and how they affect the information environment around the world. Litigation has been shown to be a powerful way to seek greater transparency and accountability from these actors and to achieve a more rights-respecting balance between the various rights implicated by different types of content online.