
    Accountability of Private Platforms for Content Moderation

    Module 2: Restricting Access and Content

    Overview of Content Moderation

    As internet and social media companies have become increasingly influential in the digital age, questions have arisen about the accountability mechanisms that govern these actors, who hold extraordinary power over the ability of the general public to exercise their rights to freedom of expression and access to information. The content moderation policies of these tech giants effectively determine not only what content individuals can post, but also what content other users can access. As a result, scrutiny is mounting of how these companies make decisions about removing or deprioritising content, and of the transparency and accountability mechanisms in place to ensure that they comply with human rights law and standards.

    Critics argue that users in African countries, in particular, lack the influence over and access to these big multinational companies to be able to understand how content moderation may be affecting their freedom of expression and to take action where content is removed (or where illegal content remains up).

    Various cases have recently reached the courts in this regard:

    • In Germany, the Federal Court of Justice ruled in 2021 that Facebook’s terms of service on deleting user posts and blocking accounts for violations of its Community Standards were invalid because they did not provide for informing users about decisions to remove their content, granting them an opportunity to respond, and issuing a new decision thereafter.(1)
    • In France, the Paris Court of Appeal ordered Twitter to provide information on the measures the company was taking to fight online hate speech, in a case brought by organisations whose research had found that Twitter removed fewer than 12% of the tweets reported to it (UEJF v. Twitter (2022), accessible at https://globalfreedomofexpression.columbia.edu/cases/uejf-v-twitter/).
    • In another case involving Facebook, the Republic of The Gambia initiated proceedings in the United States requesting that Facebook release public and private communications and documents relating to content that Facebook had deleted following the genocide in Myanmar.(2) The Gambia had previously initiated proceedings against Myanmar in the International Court of Justice, alleging a breach of Myanmar’s obligations under international law arising from the alleged genocide against the Rohingyas. The Gambia therefore sought information from Facebook on removed content that may have contributed to or exacerbated the violence against the Rohingyas, given Facebook’s position as virtually the sole news provider in the country at the time. The US District Court held that Facebook must disclose the requested materials.

    In an effort to address public concerns over its content moderation practices, Meta has established the Oversight Board, a semi-independent body of experts that reviews content moderation decisions with a view to protecting the right to freedom of expression. While the Board has made some influential decisions, it has been criticised for reviewing only a very small proportion of the decisions made, for being slow to issue recommendations, and for having little real influence over Meta’s policies.(3)

    In a recent example affecting Africa, the Oversight Board upheld Meta’s decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities in Ethiopia’s Amhara region in 2021. The Board held that the post violated Facebook’s prohibition on unverified rumours under its Violence and Incitement Community Standard and therefore had to be removed.(4)

    Meta and Kenyan content moderators’ labour dispute

    Employees of Sama, a Meta subcontractor responsible for removing violent and hateful content from Facebook, filed a complaint against Sama as their employer and Meta as its principal in 2023, claiming that they had been unfairly dismissed.(5) This followed ongoing complaints by the content moderators that they were inadequately compensated and insufficiently protected from the risks of the work and the damage it caused to their mental health.(6)

    In August 2023 the parties agreed to negotiate an amicable solution. However, in October 2023 negotiations broke down amidst accusations that Meta and Sama “were making very little attempt to address the core issues” and “were not being genuine.”(7) As of 2024, the matter is expected to proceed to litigation.

    Non-Consensual Dissemination of Intimate Images

    In recent years, the non-consensual sharing of intimate images (NCII) has become an increasingly prominent issue as this form of online gender-based violence has proliferated. In many cases content is shared in order to blackmail, threaten, or harass internet users, predominantly women and gender minorities. It is vital that the rights of victims/survivors to privacy and reputation are protected by enabling such content to be removed rapidly and permanently. While this is one of the narrow circumstances in which the removal of content is not only justified but critical to protecting human rights, it remains important to maintain appropriate checks and balances over the tech companies that decide whether content is removed or blocked.

    Case law on NCII

    A body of case law is gradually building up that provides guidance on how courts are interpreting this issue around the world:

    • In 2023, the High Court of Delhi in India ordered intermediaries, in cases of NCII, to remove all offending content from their platforms and not just the specific links provided by victims. The Court highlighted the damage caused by NCII and the further trauma inflicted when victims are required to search the internet for new uploads in order to request their removal.(8) In an earlier case, the same Court not only ordered the immediate removal of content that had been published without consent from the website on which it appeared, but also ordered search engines to de-index the content from their search results, stressing the need for “immediate and efficacious” remedies for victims in such cases.(9)
    • The Constitutional Court of Ecuador dealt with a case in 2021 in which intimate pictures of the victim/survivor had been sent to her parents without her consent. The Court found in favour of the right to privacy and held, in the words of the Columbia Global Freedom of Expression database, that “the storage and sharing of sexual photos without the consent of the victim were a violation of her constitutional rights to personal data protection, reputation, and intimacy” and that “intimate images were personal data sent exclusively to the defendant’s partner and required previous consent to be processed by anyone else.”(10)
    • In 2016, the High Court of Kenya determined a case involving the non-consensual distribution of the petitioner’s nude photographs by an ex-boyfriend, which resulted in her dethronement as Miss World Kenya 2015.(11) The Court held that the victim/survivor had a legitimate expectation of privacy, that she had not waived her right to privacy by taking nude photographs, and that she had not consented to their dissemination to third parties; as such, her right to privacy under Article 31 of the Constitution of Kenya had been violated.

    Some countries are also incorporating provisions criminalising NCII into domestic law. For example, South Africa’s Cybercrimes Act, passed into law in 2020, criminalises the disclosure of data messages containing an intimate image of a person without that person’s consent.(12) While such provisions are welcomed for the recourse they provide to victims of online gender-based violence, concerns have also been raised about potential infringements of the right to freedom of expression where such provisions are vague, broad, or open to abuse. It is therefore crucial that protections for privacy are carefully balanced against potential intrusions into freedom of expression in the online space. Litigation by civil society can play an important role in defining this balance appropriately and ensuring the advancement of digital rights for all.

    Conclusion

    The power and opacity of the tech giants raise real questions about the legitimacy of content moderation decisions that are made daily and how they affect the information environment around the world. Litigation is a powerful way to seek greater transparency and accountability from these actors and to achieve a more rights-respecting balance between the various rights implicated by different types of content online.

    Footnotes

    1. Global Freedom of Expression, ‘The Case on Facebook’s Terms of Service’ (2021) (accessible at https://globalfreedomofexpression.columbia.edu/cases/the-case-on-facebooks-terms-of-service/).
    4. Global Freedom of Expression, ‘Oversight Board Case of Alleged Crimes in Raya Kobo’ (2021) (accessible at https://globalfreedomofexpression.columbia.edu/cases/oversight-board-case-of-alleged-crimes-in-raya-kobo/).
    5. The Guardian, ‘Meta’s settlement talks with Kenyan content moderators break down’ (2023) (accessible at https://www.theguardian.com/technology/2023/oct/16/meta-settlement-talks-with-kenyan-content-moderators-break-down-facebook).
    6. Id.
    7. Id.
    10. Global Freedom of Expression, ‘The Case of Nonconsensual Pornography Sent to Victim’s Parents’ (2021) (accessible at https://globalfreedomofexpression.columbia.edu/cases/case-of-nonconsensual-pornography-sent-to-victims-parents/).
    11. Roshanara Ebrahim v Ashleys Kenya Limited & 3 others (2016) (accessible at http://kenyalaw.org/caselaw/cases/view/129282/).
    12. South Africa, ‘Cybercrimes Act 19 of 2020’ (2020) (accessible at https://www.gov.za/sites/default/files/gcis_document/202106/44651gon324.pdf).