
    Intermediary Liability

    Module 5: Trends in Censorship by Private Actors

    Internet intermediaries – an overview

    An ‘internet intermediary’ is a broad, constantly developing term. The Council of Europe suggests the term encompasses “a wide, diverse and rapidly evolving range of service providers that facilitate interactions on the internet between natural and legal persons”. They fulfil a variety of functions, including connecting users to the internet; hosting web-based services; facilitating the processing of data; gathering information and storing data; assisting searching; and enabling the sale of goods and services.(1) Examples of internet intermediaries include:

    • ISPs and web hosting companies that provide the infrastructure;
    • Search engines and social media platforms that provide content and facilitate communication.(2)

    Simply put, “internet intermediaries are the pipes through which internet content is transmitted and the storage spaces in which it is stored, and are therefore essential to the functioning of the internet.”(3) Internet intermediaries occupy a pivotal role in the current digital climate, shaping social, economic and political exchanges. They can influence the dissemination of ideas and have been described as the “custodians of our data and gatekeepers of the world’s knowledge”.(4)

    It is not difficult to draw a link between internet intermediaries and the advancement of an array of human rights. As the gatekeepers to the internet, they occupy a unique position in which they can enable the exercise of freedom of expression, access to information and privacy rights. The 2016 Report of the UNSR noted that:

    “The contemporary exercise of freedom of opinion and expression owes much of its strength to private industry, which wields enormous power over digital space, acting as a gateway for information and an intermediary for expression.”

    Internet intermediary liability

    Given the important roles that intermediaries play in society, particularly in relation to the myriad of implicated rights, it is imperative to understand their legal liability. The Association for Progressive Communications (APC) explains that intermediary liability refers to the extent to which internet intermediaries should be held responsible for illegal or harmful activities performed by users through their services. Where intermediary liability exists, intermediaries have an obligation to prevent the occurrence of unlawful or harmful activity by users of their services, and failure to do so may lead to legal consequences such as compliance orders or criminal sanctions.

    In a Report on the liability of internet intermediaries in Nigeria, Kenya, South Africa, and Uganda, APC captured the following ways in which intermediary liability can arise:

    • Copyright infringement.
    • Digital privacy.
    • Defamation.
    • National and public security.
    • Hate speech.
    • Child protection.
    • Intellectual property disputes.

    While intermediary liability can be associated with a legitimate interest, there are growing concerns, as noted by the UNSR in the 2016 Report, about the “appropriate balance between freedom of expression and other human rights” and the misuse of intermediary liability to curb expression and access.(5) The legal liability of intermediaries has a direct impact on users’ rights. In this regard, there is a direct correlation between restrictive liability laws – the over-regulation of content – and the increased censorship, monitoring and restrictions of legitimate and lawful online expression.

    There are three general approaches to intermediary liability, each with differing considerations and implications: strict liability, the broad immunity model, and the safe-harbour model.

    Strict liability

    Under this approach, intermediaries are liable for third-party content. A 2014 UNESCO report on fostering freedom online states that the only way to avoid liability is to proactively monitor, filter, and remove content in order to comply with the state’s law. Failing to do so places an intermediary at risk of fines, criminal liability, and revocation of business or media licences. The UNESCO report notes that China and Thailand follow the strict liability approach. This approach is largely considered inconsistent with international norms and standards.

    Strict Liability in China

    The Stanford CIS World Intermediary Liability Map documents laws around the world that govern internet intermediaries and shape users’ digital rights. It provides both basic and advanced tools to search for and visualise how legislation, decisions and public policies are evolving globally. It has captured the following in relation to China:

    • In 2000, China’s State Council imposed liability for “producing, assisting in the production of, issuing, or broadcasting” information that contravened an ambiguous list of principles (for example opposing the basic principles as they are confirmed in the Constitution; disrupting national policies on religion, propagating evil cults and feudal superstitions; and spreading rumours, disturbing social order, or disrupting social stability).
    • China has followed through with its strict liability approach and continues to hold internet companies liable if they fail to comply. This has led to wide-scale filtering and monitoring by intermediaries. This level of oversight has resulted in social media companies being the principal censors of their users’ content.

    Broad immunity model

    On the other end of the spectrum is the broad immunity model, which provides exemptions from liability without distinguishing between intermediary function and content. The UNESCO report cites section 230 of the Communications Decency Act in the United States as an example of this model: it broadly shields intermediaries from liability for unlawful content posted by users, and additionally protects them when they remove content in good faith in compliance with private company policy. ARTICLE 19 explains that under this model, intermediaries are not responsible for third-party content they merely carry, although they remain responsible for content they themselves create or edit. The Organisation for Economic Co-operation and Development (OECD), in its Council Recommendation on Principles for Internet Policy Making, refers to this as the preferred model, as it conforms with the best practices discussed below and gives due regard to the promotion and protection of the global free flow of information online.

    Safe-harbour model

    The safe harbour model, otherwise known as conditional liability, adopts a middle-ground approach. This approach gives intermediaries immunity provided they comply with certain requirements. Under this approach, intermediaries are not required to actively monitor and filter content, but are expected to remove or disable content upon receipt of notice that the content includes infringing material. Central to this approach is the idea of ‘notice-and-takedown procedures’, which can be content- or issue‑specific. There are mixed views on this approach: for some, it is a fair middle ground; for others, it is a necessary evil to guard against increased filtering or a complete change in the intermediary landscape.(6) As noted in the UNESCO report, others express concern about this approach because of its susceptibility to abuse, as it may lend itself to self-censorship, giving intermediaries quasi-judicial power to evaluate and determine the legality of content.

    Conditional liability in South Africa

    The Freedom of Expression Institute explains the position in South Africa as follows:

    Chapter 11 of the South African Electronic Communications and Transactions Act 25 of 2002 provides for limited liability of internet intermediaries subject to a takedown notice condition. These provisions apply to members of the Internet Service Providers Association. An immediate response to takedown notices is necessary, failing which the immunity from liability is forfeited.

    Concerns have been noted regarding South Africa’s framework, similar to most concerns around the safe harbour approach: ISPs err on the side of caution and are quick to remove content without providing the content provider with an opportunity to defend the content, and there are no existing appeal mechanisms for content creators or providers. This is concerning given the fact that any individual can submit a take-down notice.(7)

    The potential for these mechanisms to be abused became clear in 2019, when an ISP briefly took the website of the South African news portal the Mail & Guardian offline in response to a fraudulent takedown request seemingly submitted in retaliation for an investigative report about a convicted fraudster at the centre of a controversial South African oil deal.(8)

    At the core of the debate between the various models is the need to understand the difference between lawful and unlawful content. There is a chilling effect on expression when internet intermediaries are left to their own devices to determine what is acceptable or legal, as they are likely to err on the side of over-censorship out of fear of liability.

    Keeping in line with a human rights perspective, this guide advocates that “[t]he right to freedom of expression online can only be sufficiently protected if intermediaries are adequately insulated from liability for content generated by others.”(9) The following section provides some guidance on applicable international human rights frameworks that can be relied on when advocating for rights in relation to intermediary liability.

    Intermediary liability in the courts

    Intermediary liability has been dealt with at some length in the European Court of Human Rights (ECtHR). The seminal case of Delfi AS v Estonia found that an online news portal was liable for offensive comments that it allowed to be posted below one of its news articles.

    In Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, however, the Court found that imposing objective liability for unlawful comments made by readers on a website required “excessive and impracticable forethought capable of undermining freedom of the right to impart information on the Internet.”

    More recently, Media Defence and the Electronic Frontier Foundation (EFF) have intervened in a case at the Grand Chamber of the ECtHR, which concerns online users being held liable for third-party comments. In Sanchez v France a French politician was charged with incitement to hatred on religious grounds following comments posted on the ‘wall’ of his Facebook account by other parties. Because he failed to delete those comments promptly, he was convicted of that offence. The individuals who posted the comments were convicted of the same offence. The Fifth Section of the ECtHR held that his conviction for failing to promptly delete unlawful comments published by third parties on the public wall of his Facebook account did not breach his Article 10 rights despite his apparent lack of knowledge of the comments. The judgment has now been referred to the Grand Chamber of the ECtHR.

    Applicable international human rights standards and current international best practices

    Different interest groups continue to push different agendas in relation to internet intermediaries and their liability. Many countries either have non-existent laws or vague and inconsistent laws that make it difficult to enforce rights. There are, however, applicable international human rights frameworks that guide how laws should be enacted or how restrictions may be imposed. With any rights-based advocacy or litigation, it is necessary to establish the rights invoked. As discussed above, it is clear that internet intermediaries play a vital role in the advancement of an array of rights. Thereafter, the next step is to determine responsibility.

    In relation to internet intermediaries, the triad of information rights is clearly invoked. The UN ‘Protect, Respect and Remedy’ Framework for Business and Human Rights holds that states are primarily responsible for ensuring that internet intermediaries act in a manner that ensures the respect, protection and promotion of the fundamental rights and freedoms of internet users. At the same time, the intermediaries themselves have a responsibility to respect the recognised rights of their users.

    The 2019 Joint Declaration on Challenges to Freedom of Expression in the Next Decade observed that:

    “private companies have responsibilities to respect human rights and remedy violations, and that addressing the challenges outlined above requires multi-stakeholder support and the active engagement of State actors, media outlets, intermediaries, civil society and the general public.”

    Although there might be complexities regarding the cross-jurisdictional scope of intermediaries’ powers and responsibilities, international human rights norms should always be at the fore.

    Given the link between internet intermediaries and the fundamental right to freedom of expression, it is best to engage with this topic and test laws, regulations and policies against prescribed human rights standards and understand the restrictions and limitations that may be applicable. As discussed in previous sections, restrictions on the right to freedom of expression have been formulated as a strict, narrow, three-part test – namely, that the restriction must:

    • Be provided by law.
    • Pursue a legitimate aim.
    • Conform to the strict tests of necessity and proportionality.

    Laws, content restriction orders, and practices must comply with this test. Practically, the need to assess the compliance of legislative frameworks is most likely to arise in jurisdictions that adopt the strict liability model or the safe-harbour model. The strict liability model can be easily tested and will generally be found non-compliant. The safe-harbour model requires a more in-depth engagement in order to determine compliance, with the Kenyan Copyright (Amendment) Bill, 2017, providing a useful example to illustrate the application of the applicable tests.

    Kenyan Copyright (Amendment) Act

    In 2022, Kenya passed into law the Copyright (Amendment) Act. The Act was quite substantially altered during the public participation process and ultimately did not deal substantively with intermediary liability issues. However, in its earlier forms, the Bill provided some interesting proposals regarding intermediary liability in the African context. A key feature of earlier versions of the Bill was the introduction of the safe-harbour approach, providing for “conduit” safe harbours and “caching” safe harbours. The former, per (former) section 35A(1)(a), would have protected intermediaries from liability for copyright infringements if their involvement was limited to “providing access to or transmitting content, routing or storage of content in the ordinary course of business”.

    Under these circumstances, the intermediary would not have been under an obligation to take down or disable content upon receipt of a takedown notice. As per (former) section 35A(1)(b), intermediaries would also have been protected where their role was limited to content storage that is “automatic, intermediate and temporary”. This protection would have been conditional upon the removal of content following a take-down notice.(10)

    Civil society criticised the lack of clarity and poor notice-and-takedown procedures in the Bill, noting that it fell short of international standards on freedom of expression. ARTICLE 19 listed five problems with the Bill in terms of notice-and-takedown procedures:

    • Lack of proportionality: criminal sanctions are imposed on intermediaries who fail to remove content. As discussed above, this causes intermediaries to lean toward censorship and blocking, which infringes freedom of expression.
    • Lack of clarity: the procedures are vague and do not provide clarity on the issue of counter notices.
    • Lack of due process: there is no mention of judicial review or appeal mechanisms. There is also no requirement to notify the content publisher of the alleged infringement. The 48-hour time frame for the removal of content is too short to allow for the submission of a counter notice.
    • Lack of transparency: there is no obligation to maintain records of takedown requests or provide access to such records.
    • Severe sanctions: the harsh sanctions for false takedown notices are disproportionate to the purpose of deterring such notices.

    It is apparent that the necessity and proportionality legs of the test proved to be the sticking points in relation to the Bill. While the safe harbour model might serve a legitimate aim, if the guiding regulations are not clear, necessary, and proportionate, then there is an unjustifiable limitation on freedom of expression.

    Ultimately, these sections of the Bill were removed, and the Act was passed in 2022 without addressing intermediary liability.

    In 2015, a group of civil society organisations drafted a framework of baseline safeguards and best practices to protect human rights when intermediaries are asked to restrict online content. Known as the Manila Principles, these were drafted with the intention of being “considered by policymakers and intermediaries when developing, adopting, and reviewing legislation, policies and practices that govern the liability of intermediaries for third-party content.” Advocates and litigators should similarly rely on these best practice principles, which are based on international human rights instruments and other international legal frameworks, when advancing online rights.

    Manila Principles

    The key tenets of the Manila Principles on Intermediary Liability are:

    • Intermediaries should be shielded from liability for third-party content.
    • Content must not be required to be restricted without an order by a judicial authority.
    • Requests for restrictions of content must be clear, unambiguous, and follow due process.
    • Laws and content restriction orders and practices must comply with the tests of necessity and proportionality.
    • Laws and content restriction policies and practices must respect due process.
    • Transparency and accountability must be built into laws and content restriction policies and practices.

    These principles have been relied on to test state rules and to gauge whether the legal frameworks regarding intermediary liability are adequate. In 2019, the Centre for Internet and Society in India submitted a report to the Indian government comparing the Manila Principles to the draft Information Technology [Intermediary Guidelines (Amendment) Rules], 2018. These submissions provided useful guidance by highlighting provisions that were unaligned with the Manila Principles and which had the potential to infringe upon the right to freedom of expression.(11) The submission further provided recommendations to assist the Indian government in ensuring the regulations are compliant. The submissions are a useful illustration of the significance of these principles, as well as a useful resource for others who seek to test domestic legislation against international best practices.


    Internet intermediaries play a crucial role in the advancement of human rights. Intermediary liability needs to be understood holistically in relation to the prevention of harm, the protection of free speech and access to information, and the encouragement of innovation and creativity.(12) While there is a growing trend of online harms and unlawful content:

    “The law must find a way to flexibly address these changes, with an awareness of the ways in which existing and proposed laws may affect the development of information intermediaries, online speech norms, and global democratic values.”(13)


    1. Media Defence, ‘Training Manual on Digital Rights and Freedom of Expression Online: Litigating digital rights and online freedom of expression in East, West and Southern Africa’ at 6.
    2. ARTICLE 19, ‘Internet Intermediaries: Dilemma of Liability’ (2013) at 3. See further Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’, Yale Law School Information Society Project (2018) at 9.
    3. Id at 6.
    4. Riordan, ‘The Liability of Internet Intermediaries’, DPhil thesis, Oxford University (2013) at 1.
    5. A 2014 UNESCO report on fostering freedom online and the role of internet intermediaries provides a comprehensive overview of the above regulatory objectives pursued by states, which in turn have a direct impact on how, and to what extent, intermediaries are compelled to restrict freedom of expression online.
    6. Koren, Nahmia and Perel, ‘Is It Time to Abolish Safe Harbor? When Rhetoric Clouds Policy Goals’, Stanford Law & Policy Review, Forthcoming (2019) at 47.
    7. See further Comninos, ‘Intermediary liability in South Africa’ (2012). See also Rens, ‘Failure of Due Process in ISP Liability and Takedown Procedures’ in Global Censorship, Shifting Modes, Persisting Paradigms (2015).
    8. Mail & Guardian, ‘The digital breadcrumbs behind the M&G’s censorship attack’ (2019).
    9. Media Defence, ‘Training Manual on Digital Rights and Freedom of Expression Online: Litigating digital rights and online freedom of expression in East, West and Southern Africa’ at 28.
    10. For a more detailed discussion on the Act, see Walubengo and Mutemi, ‘Treatment of Kenya’s Internet Service Providers (ISPs) under the Kenya Copyright (Amendment) Bill, 2017’ (2019), The African Journal of Information and Communication.
    11. Note that the 2018 Draft Rules were subsequently replaced by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
    12. Keller, ‘Build Your Own Intermediary Liability Law: A Kit for Policy Wonks of All Ages’ in Li (ed), ‘New Controversies in Intermediary Liability Law’, Yale Law School Information Society Project (2019) at 20.
    13. Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’, Yale Law School Information Society Project (2018).