
    Intermediary Liability

    Module 5: Trends in Censorship by Private Actors

    Internet intermediaries – an overview

    An internet intermediary is a broad, constantly developing term. The Council of Europe suggests the term encompasses “a wide, diverse and rapidly evolving range of service providers that facilitate interactions on the internet between natural and legal persons”. Intermediaries fulfil a variety of functions, including connecting users to the internet; hosting web-based services; facilitating the processing of data; gathering information and storing data; assisting searching; and enabling the sale of goods and services.(1) Examples of internet intermediaries range from ISPs and web-hosting companies, which provide the infrastructure, to search engines and social media platforms, which host content and facilitate communication.(2) Simply put, “internet intermediaries are the pipes through which internet content is transmitted and the storage spaces in which it is stored, and are therefore essential to the functioning of the internet.”(3) Internet intermediaries occupy a pivotal role in the current digital climate, shaping social, economic and political exchanges. They can influence the dissemination of ideas and have been described as the “custodians of our data and gatekeepers of the world’s knowledge”.(4)

    It is not difficult to draw a link between internet intermediaries and the advancement of an array of human rights.  As the gatekeepers to the internet, they occupy a unique position in which they can enable the exercise of freedom of expression, access to information and privacy rights.  The 2016 Report of the UN Special Rapporteur (UNSR) noted that:

    “The contemporary exercise of freedom of opinion and expression owes much of its strength to private industry, which wields enormous power over digital space, acting as a gateway for information and an intermediary for expression.”

    Internet intermediary liability

    Given the roles intermediaries play in society, particularly in relation to the myriad of implicated rights, it is imperative to understand their legal liability.  The Association for Progressive Communications (APC) explains that internet intermediary liability refers to the legal responsibility of intermediaries for illegal or harmful activities performed by users through their services.  Where intermediaries have an obligation to prevent unlawful or harmful activity by users of their services, failure to do so may lead to legal consequences such as orders compelling action, or criminal sanctions.  In a 2012 report on the liability of internet intermediaries in Nigeria, Kenya, South Africa and Uganda, APC identified the following areas in which intermediary liability can arise:

    • Copyright infringement.
    • Digital privacy.
    • Defamation.
    • National and public security.
    • Hate speech.
    • Child protection.
    • Intellectual property disputes.

    A 2014 UNESCO Report on fostering freedom online and the role of internet intermediaries provides a comprehensive overview of the above regulatory objectives pursued by states, which in turn have a direct impact on how, and to what extent, intermediaries are compelled to restrict freedom of expression online.

    While intermediary liability can be associated with a legitimate interest, there are growing concerns, as noted by the UNSR in the 2016 Report, about the “appropriate balance between freedom of expression and other human rights” and the misuse of intermediary liability to curb expression and access.  The legal liability of intermediaries has a direct impact on users’ rights. In this regard, there is a direct correlation between restrictive liability laws – the over-regulation of content – and increased censorship, monitoring and restriction of legitimate and lawful online expression.

    There are three general approaches to intermediary liability, each with differing considerations and implications: the strict liability model, the broad immunity model, and the safe-harbour model.

    Strict liability

    Under this approach, intermediaries are liable for third-party content.  The abovementioned UNESCO Report states that the only way to avoid liability is to proactively monitor, filter and remove content in order to comply with state law.  Failing to do so places an intermediary at risk of fines, criminal liability, and revocation of business or media licences.  The UNESCO Report notes that China and Thailand apply strict liability.  This approach is largely considered inconsistent with international norms and standards.

    Strict Liability in China

    The Stanford CIS World Intermediary Liability Map documents laws around the world that govern internet intermediaries and shape users’ digital rights. It provides both basic and advanced tools to search for and visualise how legislation, decisions and public policies are evolving globally.  It has captured the following in relation to China:

    • In 2000, China’s State Council imposed liability for “producing, assisting in the production of, issuing, or broadcasting” information that contravened an ambiguous list of principles (for example, opposing the basic principles as they are confirmed in the Constitution; disrupting national policies on religion; propagating evil cults and feudal superstitions; and spreading rumours, disturbing social order or disrupting social stability).
    • China has followed through with its strict liability approach and continues to hold internet companies liable if they fail to comply.  This has led to wide-scale filtering and monitoring by intermediaries, with the result that social media companies have become the principal censors of their users’ content.

    Broad immunity model

    On the other end of the spectrum is the broad immunity model, which provides exemptions from liability without distinguishing between intermediary functions or types of content.  The UNESCO Report cites the Communications Decency Act in the United States as an example of this model: it shields intermediaries from liability for illegal behaviour by their users, including when they remove content in compliance with their own company policies.  ARTICLE 19 explains that under this model, intermediaries are not responsible for the third-party content they merely carry, although they remain responsible for content they themselves produce or disseminate.  The Organisation for Economic Co-operation and Development (OECD), in its Council Recommendation on principles for internet policy, refers to this as the preferred model, as it conforms with the best practices discussed below and gives due regard to the promotion and protection of the global free flow of information online.

    Safe-harbour model

    The safe-harbour model adopts a middle-ground approach. Otherwise known as conditional liability, this approach gives intermediaries immunity provided they comply with certain requirements. Under this approach, as explained in the UNESCO Report, intermediaries are not required to actively monitor and filter content, but are expected to remove or disable content upon receiving notice that it includes infringing material. Central to this approach are ‘notice-and-takedown’ procedures, which can be content- or issue-specific. There are mixed views on this model: for some it is a fair middle ground; for others, a necessary evil that guards against increased filtering or a wholesale change to the intermediary landscape.(5) As noted in the UNESCO Report, still others express concern about this approach because of its susceptibility to abuse: it may lend itself to self-censorship, giving intermediaries quasi-judicial power to evaluate and determine the legality of content.

    Conditional liability in South Africa

    The Freedom of Expression Institute explains the position in South Africa as follows:

    Chapter 11 of the South African Electronic Communications and Transactions Act 25 of 2002 provides for limited liability of internet intermediaries subject to a takedown notice condition.  These provisions apply to members of the Internet Service Providers Association.  An immediate response to takedown notices is required; failing this, the immunity from liability is forfeited.

    Concerns have been noted regarding South Africa’s framework, echoing broader concerns about the safe-harbour approach: ISPs err on the side of caution and are quick to remove content without giving the content provider an opportunity to defend it, and there are no appeal mechanisms for content creators or providers. This is concerning given that any individual can submit a takedown notice.(6)

    At the core of the debate between the various models is the need to understand the difference between lawful and unlawful content. There is a chilling effect on expression where internet intermediaries are left to their own devices to determine what is acceptable or lawful, as they are likely to err on the side of more censorship rather than less, out of fear of liability.

    Keeping in line with a human rights perspective, this guide, along with previous guides, advocates that “[t]he right to freedom of expression online can only be sufficiently protected if intermediaries are adequately insulated from liability for content generated by others.”(7) The following section provides some guidance on applicable international human rights frameworks that can be relied on when advocating for rights in relation to intermediary liability.

    Applicable international human rights standards and current international best practices

    Different interest groups continue to push different agendas in relation to internet intermediaries and their liability.  Many countries have either non-existent laws, or vague and inconsistent laws, that make it difficult to enforce rights.  There are, however, applicable international human rights frameworks that guide how laws should be enacted and how restrictions may be imposed.  With any rights-based advocacy or litigation, it is necessary to establish the rights invoked.  As discussed above, internet intermediaries play a vital role in the advancement of an array of rights.  The next step is then to determine responsibility.

    In relation to internet intermediaries, the triad of information rights is clearly invoked.  The 2010 UN ‘Protect, Respect and Remedy’ Framework for Business and Human Rights finds that states are primarily responsible for ensuring that internet intermediaries act in a manner that ensures the respect, protection and promotion of fundamental rights and freedoms of internet users.  At the same time, the intermediaries themselves have a responsibility to respect the recognised rights of their users.

    In the 2019 Joint Declaration: Challenges to Freedom of Expression in the Next Decade, it was observed that:

    “private companies have responsibilities to respect human rights and remedy violations, and that addressing the challenges outlined above requires multi-stakeholder support and the active engagement of State actors, media outlets, intermediaries, civil society and the general public.”

    Although there might be complexities regarding the cross-jurisdictional scope of intermediaries’ powers and responsibilities, international human rights norms should always be at the fore.

    Given the link between internet intermediaries and the fundamental right to freedom of expression, it is best to engage with this topic by testing laws, regulations and policies against prescribed human rights standards, and by understanding the restrictions and limitations that may be applicable.  As discussed in previous sections, restrictions on the right to freedom of expression must satisfy a strict and narrow test; a restriction must:

    • Be provided by law.
    • Pursue a legitimate aim.
    • Conform to the strict tests of necessity and proportionality.(8)

    Laws, content restriction orders and practices must comply with this test.  Practically, the need to assess the compliance of legislative frameworks arises most often in jurisdictions that adopt the strict liability model or the safe-harbour model.  The strict liability model can be tested easily and is generally found to fall short.  The safe-harbour model requires a more in-depth engagement to determine compliance, and the Kenyan Copyright (Amendment) Bill, 2017, provides a useful example to illustrate the application of the applicable tests.

    Kenyan Copyright (Amendment) Bill

    The Bill proposes substantial amendments to the current Copyright Act, and of particular relevance is the proposal in relation to intermediary liability. A key feature of the Bill is the introduction of the safe-harbour approach. The Bill provides for “conduit” safe harbours and “caching” safe harbours. The former, per section 35A(1)(a), protects intermediaries from liability for copyright infringements if their involvement was limited to “providing access to or transmitting content, routing or storage of content in the ordinary course of business”.

    Under these circumstances, the intermediary is not under an obligation to take down or disable content if a takedown notice is received. The latter, per section 35A(1)(b), protects intermediaries whose role is limited to content storage that is “automatic, intermediate and temporary”. This protection is conditional upon the removal of content following a takedown notice.(9)

    There have been calls for clarity and improved notice-and-takedown procedures and concerns that the Bill falls short of international standards on freedom of expression.  ARTICLE 19 lists five problems with the Bill in terms of notice-and-takedown procedures:

    • Lack of proportionality: criminal sanctions are imposed on intermediaries who fail to remove content. As discussed above, this causes intermediaries to lean toward censorship and blocking, which infringes freedom of expression.
    • Lack of clarity: the procedures are vague and do not provide clarity on the issue of counter notices.
    • Lack of due process: there is no mention of judicial review or appeal mechanisms. There is also no requirement to notify the content publisher of the alleged infringement. The 48-hour time frame for the removal of content is too short to allow for the submission of a counter notice.
    • Lack of transparency: there is no obligation to maintain records of takedown requests or provide access to such records.
    • Severe sanctions: the harsh sanctions for false takedown notices are disproportionate to the aim of deterring them.

    It is apparent that the necessity and proportionality legs of the test are the sticking points in relation to this Bill.  While the safe-harbour model might serve a legitimate aim, if the regulations that guide it are not clear, necessary and proportionate, then they unjustifiably limit the right to freedom of expression.

    In 2015, a group of civil society organisations drafted a framework of baseline safeguards and best practices to protect human rights when intermediaries are asked to restrict online content.  Known as the Manila Principles, these were drafted with the intention of being “considered by policymakers and intermediaries when developing, adopting, and reviewing legislation, policies and practices that govern the liability of intermediaries for third-party content.”  Advocates and litigators should similarly rely on these best practice principles, which are based on international human rights instruments and other international legal frameworks, when advancing online rights.

    Manila Principles

    The key tenets of the Manila Principles on Intermediary Liability:

    • Intermediaries should be shielded from liability for third-party content.
    • Content must not be required to be restricted without an order by a judicial authority.
    • Requests for restrictions of content must be clear, unambiguous, and follow due process.
    • Laws and content restriction orders and practices must comply with the tests of necessity and proportionality.
    • Laws and content restriction policies and practices must respect due process.
    • Transparency and accountability must be built into laws and content restriction policies and practices.

    These principles have been relied on to test state rules and to gauge whether the legal frameworks regarding intermediary liability are adequate.  This was done most recently in 2019 in India by the Centre for Internet and Society, which submitted a report to the Indian government comparing the Manila Principles to the draft Information Technology [Intermediary Guidelines (Amendment) Rules], 2018.  These submissions provided useful guidance on the provisions that were unaligned with the Manila Principles, highlighting the areas with the potential to infringe upon the right to freedom of expression.  The submission further provided recommendations to assist the Indian government in ensuring the regulations are compliant.  The submissions are a useful illustration of the significance of these principles, as well as a useful resource for others seeking to test domestic legislation against international best practices.


    Internet intermediaries play a crucial role in the advancement of human rights. Intermediary liability needs to be understood holistically in relation to the prevention of harm, the protection of free speech and access to information, and the encouragement of innovation and creativity.(10) While there is a growing trend of online harms and unlawful content:

    “The law must find a way to flexibly address these changes, with an awareness of the ways in which existing and proposed laws may affect the development of information intermediaries, online speech norms, and global democratic values.”(11)


    1. Media Defence above n 2 at 6.
    2. ARTICLE 19, ‘Internet Intermediaries: Dilemma of Liability’ (2013) at 3.  See further Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’ Yale Law School Information Society Project (2018) at 9.
    3. Id at 6.
    4. Riordan, ‘The Liability of Internet Intermediaries’ DPhil thesis, Oxford University (2013) at 1.
    5. Koren, Nahmia and Perel, ‘Is It Time to Abolish Safe Harbor? When Rhetoric Clouds Policy Goals’ Stanford Law & Policy Review, forthcoming (2019) at 47.
    6. See further Comninos, ‘Intermediary Liability in South Africa’ (2012).  See also Rens, ‘Failure of Due Process in ISP Liability and Takedown Procedures’ in Global Censorship, Shifting Modes, Persisting Paradigms (2015).
    7. Media Defence above n 2 at 28.
    8. For a more detailed discussion on the Bill see Walubengo and Mutemi, ‘Treatment of Kenya’s Internet Service Providers (ISPs) under the Kenya Copyright (Amendment) Bill, 2017’ (2019) The African Journal of Information and Communication.
    9. Keller, ‘Build Your Own Intermediary Liability Law: A Kit for Policy Wonks of All Ages’ in Li, ‘New Controversies in Intermediary Liability Law’ Essay Collection, Yale Law School Information Society Project (2019) at 20.
    10. Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’ Yale Law School Information Society Project (2018).