Trends in Censorship by Private Actors

This module aims to:

  • Give an overview of ways in which non-state actors facilitate online censorship.

  • Set out the international and regional legal principles that are implicated by online censorship.

  • Unpack the concept of net neutrality.

  • Examine the misuse of intermediary liability to curb expression and access.

  • Explore the right to be forgotten.

  • Explain the monitoring obligations of search engines and platforms.

Introduction

States’ obligations to uphold and respect rights, including digital rights, are a cornerstone of international law.(1) However, there is growing recognition in international law and human rights that much of the digital space, and the technology used to access it, is owned or controlled by multinational companies, giving the private sector unprecedented power to either uphold or infringe on an array of expressive rights. Litigators and activists must now contend not only with state abuses of digital rights but also with violations by private actors.

In 2011, the United Nations Special Rapporteur on Freedom of Expression (UNSR on FreeEx) noted that: “Generally, companies have played an extremely positive role in facilitating the exercise of the right to freedom of opinion and expression,” but that “pressure exerted upon them by States, coupled with the fact that their primary motive is to generate profit rather than to respect human rights” creates risks for the private sector to engage in or enable censorship.(2)

In 2021, a subsequent UNSR on FreeEx highlighted the gendered dimensions of these risks, noting that social media companies’ failure to address the proliferation of online gender-based violence, and gender biases in content moderation and other AI-driven processes, have led to the silencing of women’s voices online.(3)

This module grapples with some of the long-term threats to freedom of expression from non-state actors, as well as emergent threats. Alongside a brief overview of relevant topics, it provides practical guidance on how to ensure that fundamental rights and freedoms are respected, protected, and promoted online.

Net Neutrality

An overview of net neutrality

The principle of net neutrality is that internet service providers (ISPs) should treat all internet traffic equally, without imposing restrictions or preferential treatment based on factors like the source, destination, or type of data being transferred, or any profit motive. For example, an ISP cannot block, slow down or alter access to service A or make it faster and easier to access service B.4 This aims to ensure that users have equal access to all online content and services. It means that ISPs must remain neutral and impartial when providing internet access.5
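The contrast between neutral and discriminatory treatment can be shown with a toy model (purely illustrative; the flow fields, policy functions, and rate figures are invented for this sketch, not drawn from any real ISP configuration):

```python
from dataclasses import dataclass

@dataclass
class Flow:
    source: str  # e.g. "service A"
    kind: str    # e.g. "video", "web"

def neutral_policy(flow: Flow) -> dict:
    # A neutral ISP applies the same treatment to every flow,
    # regardless of source, destination or content type.
    return {"blocked": False, "rate_limit_mbps": None}

def non_neutral_policy(flow: Flow) -> dict:
    # A non-neutral ISP discriminates: it throttles a disfavoured
    # service and fast-lanes a commercial partner.
    if flow.source == "service A":
        return {"blocked": False, "rate_limit_mbps": 1}    # throttled
    if flow.source == "service B":
        return {"blocked": False, "rate_limit_mbps": 100}  # fast lane
    return {"blocked": False, "rate_limit_mbps": 10}

a = Flow("service A", "video")
b = Flow("service B", "video")
assert neutral_policy(a) == neutral_policy(b)          # equal treatment
assert non_neutral_policy(a) != non_neutral_policy(b)  # discrimination
```

Under the neutral policy, the user's experience of services A and B depends only on the services themselves; under the non-neutral policy, it depends on the ISP's commercial choices.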

Net neutrality is now a well-established principle of contemporary human rights and international law.6

  • A 2017 report of the UNSR, for example, found that: “In the digital age, the freedom to choose among information sources is meaningful only when Internet content and applications of all kinds are transmitted without undue discrimination or interference by non-State actors, including providers.”7

  • In 2021, a resolution of the Human Rights Council on the promotion and protection of human rights on the internet included a clarion call for states to ensure net neutrality, and to prohibit ISPs from giving preferential access to particular types of content or services for commercial gain.8

In principle, net neutrality protections are designed to safeguard freedom of expression and access to information online by ensuring that such freedoms are not determined by market forces or curtailed by network providers. Net neutrality aims to promote diversity, pluralism, and innovation, and to ensure that people can freely access information and impart ideas across the information society. The Steering Committee on Media and Information Society of the Council of Europe, in its report on Protecting Human Rights through Network Neutrality, explained that net neutrality allows internet users to freely elect how they use their internet connection. The Center for Democracy and Technology explains that:

Preserving internet neutrality means preserving the power of individuals to make choices about how they use the Internet – what information to seek, receive, and impart, from which sources, and through which services.9

Net neutrality, development and human rights

Given net neutrality’s role in the advancement of freedom of expression, it should be viewed through a human rights lens. Some have gone so far as to suggest that it is an emerging international human rights norm.10 Ensuring network neutrality is seen as central to the protection of fundamental human rights and as an enabler of fair competition and innovation, as it promotes freedom and enhances network access.11

Yet despite the demonstrable link between human rights and net neutrality, and the clearly defined position of the UNSR, the past decade has seen growing threats to net neutrality, which has been the subject of regulatory debates and radical regulatory shifts across the world. While norms and standards have begun to develop, attempts by state and non-state actors to influence net neutrality and individuals’ freedom of expression online remain pervasive, as outlined below.

Current challenges and debates

There are two common approaches that interfere with net neutrality:

  • Blocking or throttling of content, either by state or non-state actors, may include entirely blocking or significantly slowing down access to specific websites, content, or platforms, or restricting access to content in specific geographic regions. This form of restriction contravenes international human rights norms. The Net Neutrality Compendium explains that “blocking certain information resources or restricting what information Internet users can impart over their connection would have serious implications for the right to free expression. For example, blocking access to a particular lawful blog because its content is disfavoured by the access provider would raise obvious concerns.” The 2017 Report of the UNSR notes that “States’ use of blocking or filtering technologies is frequently in violation of their obligation to guarantee the right to freedom of expression.”

  • Zero-rating involves the differential treatment of content by making certain content available with a zero-download cost.12

This method is less drastic than blocking and throttling of content and is often framed in terms of public benefit. The 2017 Report of the UNSR describes zero-rating as “the practice of not charging for the use of internet data associated with a particular application or service; other services or applications, meanwhile, are subject to metered costs.” The impact of zero-rating can depend on who implements it, the purpose of the zero-rating, how decisions are made about what content is zero-rated, and the nature of the content itself. In low-income contexts, it can be an effective way to provide widespread access to information in the public interest.
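The billing mechanics described above can be reduced to a small sketch (the hostnames, figures, and function are hypothetical, chosen only to illustrate how zero-rated traffic is excluded from metered usage):

```python
# Hypothetical set of zero-rated services (e.g. health and education portals).
ZERO_RATED = {"health.example", "edu.example"}

def billable_megabytes(sessions):
    """Sum metered usage from a list of (host, megabytes) tuples.

    Traffic to a zero-rated host incurs no data charge; all other
    traffic is metered as normal.
    """
    return sum(mb for host, mb in sessions if host not in ZERO_RATED)

usage = [("health.example", 500), ("news.example", 200), ("video.example", 300)]
assert billable_megabytes(usage) == 500  # only the non-zero-rated 200 + 300 MB are billed
```

The sketch also makes the policy question concrete: whoever controls the `ZERO_RATED` set effectively decides which content is cheap to reach, which is why the selection criteria matter so much.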

States have responded differently to debates about net neutrality and zero-rating, with some legislating strong protections for the former and others developing policies to promote zero-rating of certain content as a public service.

Certain developed states have shown a trend towards complete bans of zero-rating, perhaps as a reflection of better and more affordable connectivity. Canada, Norway, Slovenia, and the Netherlands are some of the states that have prohibited service providers from differentiating between tariffs for internet access services.13

Among developing countries, zero-rating is more likely to be viewed as a policy approach to address challenges such as limited internet access, high data prices and widespread digital divides. Notably, the global COVID-19 pandemic prompted a range of temporary zero-rating initiatives in both developed14 and developing nations,15 in which online education, health, and other resources were zero-rated. In many instances, ISPs voluntarily provided zero-rated access to certain resources, such as in Tanzania and Kenya,16 while in South Africa the government issued regulations requiring ISPs to zero-rate certain resources.17

While these measures were enacted as once-off exceptions in the unprecedented challenges of a global pandemic, in the long run, zero-rating could be seen to cause complications in relation to net neutrality. Access Now explains:

“Activists in advanced economies are struggling to communicate the importance of Net Neutrality for free expression, innovation, and competition, in some cases to audiences that are increasingly anti-regulation. Many in developing countries are facing down critics who argue that non-neutral internet access somehow functions as an “on-ramp” for the free and open internet.” The following examples illustrate the complexity of this debate.

The fight for net neutrality in India

The net neutrality debate came to the fore in India in 2015 with two zero-rated options being offered to Indian users – Facebook’s ‘Internet.org’ and Bharti Airtel’s ‘Airtel Zero’. Facebook (now Meta) launched Internet.org with the stated intention of providing free basic internet services to people in India, but only to selected online content.18 At around the same time, Airtel launched Airtel Zero, a platform for zero-rated services, offering access to a range of content. Content providers paid Airtel to be included in this service. By April 2015, Airtel was the largest mobile ISP in India with 226 million customers.19

That year, the Telecom Regulatory Authority of India (TRAI) called for public comment on its consultation paper on net neutrality. This sparked a national debate, with many individuals and civil society actors commenting on the importance of net neutrality. While Meta argued that some access is better than no access, digital rights activists lobbied to introduce regulations to safeguard net neutrality. The process led to significant changes to safeguard net neutrality in India’s digital policy:

  • In 2016, TRAI released regulations titled “Prohibition of discriminatory tariffs for data services” which, among other things, prohibited any service provider from offering or charging discriminatory tariffs for data services on the basis of content.20

  • In 2017, TRAI tabled further recommendations for net neutrality with the Department of Telecommunications.21

However, in 2023 TRAI published a policy discussion paper22 which invited public comment on possible policy changes that would mark a shift away from net neutrality, including a framework for authorisation and network usage fees for internet services, and a mechanism for ‘selective banning’ of such services. This drew widespread criticism from Indian civil society organisations and technical experts, who framed the policy discussion as a rollback of the government’s previous support for net neutrality.23 The outcome of this policy process was still pending at the time of this publication.

The fight over net neutrality in the United States

The legislative and policy conflicts over net neutrality in the United States reflect larger ideological contests between successive political administrations over the role of government and business. In 2015, following a Federal Court of Appeals ruling, the Federal Communications Commission (FCC) enacted the historic Open Internet Rules, which prohibited internet providers from engaging in differential pricing for certain content or from giving preferential treatment to certain websites.24

However, during the Trump presidency, the US government’s view on net neutrality changed. In 2017, under new leadership, the FCC voted to repeal the Open Internet Rules.25 This decision was viewed as a setback by many digital rights and free expression activists.26 Net neutrality advocates challenged this decision, but in 2019 the DC Circuit Court ruled in favour of the FCC and upheld its repeal of the 2015 Rules.27 In 2020, the DC Court of Appeals dismissed an appeal seeking to reverse the decision.28

The position was reversed again shortly after President Joe Biden assumed office in 2021, when Biden signed an Executive Order urging the FCC to reinstate net neutrality rules.29 In October 2023, the FCC voted to proceed with a proposal to restore the net neutrality rules that were repealed during the Trump administration. At the time of publication, the FCC was set to begin public consultations on the proposal, with a final decision expected in early 2024.30 Given broader partisan divides in the US political system, however, it seems likely that the net neutrality debate will continue.

Practically engaging with net neutrality

As illustrated above, state and non-state actors often seek to depart from the principles of net neutrality and materially change the conditions of people’s access to the internet, which impacts the rights to freedom of expression and access to information. Overcoming the threats to net neutrality involves two key considerations: the need to ensure adequate safeguards that preserve net neutrality; and the need to understand what limitations are permissible in relation to net neutrality. According to the Net Neutrality Compendium:

To an unprecedented degree, the Internet transcends national borders and reduces barriers to the free flow of information, enabling free expression, democratic participation, and the enjoyment of other rights … Establishing rules to preserve net neutrality – or more precisely, Internet neutrality – is one way to prevent the imposition, by those in a position to control access, of structural inequalities that would distort this environment.31

As discussed above, states should preserve net neutrality in order to promote the widest possible non-discriminatory access to information. Calling on states to enact laws or regulations to protect net neutrality is an important step in holding states accountable and pushing them to fulfil their responsibilities of protecting freedom of expression.32

Tips for good net neutrality protections

The Net Neutrality Compendium provides five principles to guide the substantive development of net neutrality protections that will ensure that states fulfil their obligations in relation to free expression and other human rights online:33

  • There should be a clear expectation that internet access services must be provided in a neutral manner, without discrimination based on the content, applications or services subscribers choose to access.

  • The scope of the neutrality obligation should be clearly defined and should account for the crucial distinction between internet access services and specialised services.

  • The neutrality obligation should apply equally to fixed and mobile internet access services.

  • There should be clear guidelines for evaluating exceptions for reasonable network management practices.

  • The neutrality obligation should not apply to over-the-top services available on the internet.
 

While adequate legislative and regulatory provisions are the goal, it is, as with all rights, imperative to know what limitations are permissible. The 2011 Joint Declaration on Freedom of Expression and the Internet stated:

Freedom of expression applies to the Internet, as it does to all means of communication.  Restrictions on freedom of expression on the Internet are only acceptable if they comply with established international standards.

Minimum standards and safeguards for network neutrality regulatory instruments

The Net Neutrality Compendium, in its Policy Statement on Network Neutrality, further suggests the following safeguards for network neutrality regulatory instruments:

  • Principle of network neutrality: Network neutrality is the principle according to which internet traffic is treated without unreasonable discrimination, restriction or interference, regardless of its sender, recipient, type or content.

  • Reasonable traffic management: ISPs should act in accordance with the principle of network neutrality. Any deviation from this principle may be considered reasonable traffic management as long as it is necessary and proportionate to:
    • Preserve network security and integrity.
    • Mitigate the effects of temporary and exceptional congestion, primarily by means of protocol-agnostic measures or, when these measures do not prove practicable, by means of protocol-specific measures.
    • Prioritise emergency services in the case of unforeseeable circumstances or force majeure.

  • Law enforcement: None of the foregoing should prevent ISPs from giving force to a court order or a legal provision in accordance with human rights norms and international law.

  • Transparent traffic management: ISPs should publish meaningful and transparent information on the characteristics and conditions of the internet access services they offer, the connection speeds that are to be provided, and their traffic management practices, notably with regard to how internet access services may be affected by simultaneous usage of other services provided by the ISP.

  • Privacy: All players in the internet value chain, including governments, shall provide robust and meaningful privacy protections for individuals’ data in accordance with human rights norms and international law. In particular, any techniques to inspect or analyse internet traffic shall be in accordance with privacy and data protection obligations and subject to clear legal protections.

  • Implementation: The competent national authorities should promote independent testing of internet traffic management practices, ensure the availability of internet access and evaluate the compatibility of internet access policies with the principle of network neutrality, as well as with respect for human rights norms and international law. National authorities should publicly report their findings. Complaint procedures to address network neutrality violations should be available, and violations should attract appropriate fines. All individuals and stakeholders should have the possibility to contribute to the detection, reporting and correction of violations of the principle of network neutrality.
 

Simply put, limitations to net neutrality should only be permitted when provided by law and where necessary and proportionate to the achievement of a legitimate aim.34 This three-part test is rooted in article 19(3) of the International Covenant on Civil and Political Rights (ICCPR) and must be satisfied for any restriction of the right to freedom of expression to be legitimate and lawful.

In a 2018 Report, the UNSR made the following notable statements regarding state and company liability that should be kept in mind when litigating issues around net neutrality:

  • In relation to state responsibility: Human rights law imposes duties on states to ensure enabling environments for freedom of expression and to protect its exercise. The duty to ensure freedom of expression obligates states to promote, among other things, media diversity, independence, and access to information. Additionally, international and regional bodies have urged states to promote universal internet access. States also have a duty to ensure that private entities do not interfere with the freedoms of opinion and expression. The UN Guiding Principles on Business and Human Rights (Guiding Principles), adopted by the Human Rights Council in 2011, emphasise state duties to ensure environments that enable business respect for human rights.

  • In relation to company responsibility: The Guiding Principles establish a framework according to which companies should, at a minimum, avoid causing or contributing to adverse human rights impacts, and seek to prevent or mitigate adverse impacts directly linked to their operations, products, or services through their business relationships, even if they have not contributed to those impacts.

Conclusion

Developing countries continue to face challenges in relation to net neutrality and the suggestion that some access is better than no access. While there is a need for a nuanced approach to zero-rating to enable access to public interest information, the international human rights framework is clear on the need to protect equal access, and states should not enable infringements on net neutrality to serve as justification for failing to take steps toward full and meaningful internet access for all. It is necessary for civil society actors and human rights litigators to ensure that net neutrality is protected through lobbying states, sending complaints to regulators, strategic litigation, and public advocacy, in order to achieve the goal of equal opportunity in access.

Intermediary Liability

Internet intermediaries – an overview

‘Internet intermediary’ is a broad, constantly developing term referring to the many services and stakeholders involved in providing access to internet services. The Council of Europe suggests the term encompasses “a wide, diverse and rapidly evolving range of service providers that facilitate interactions on the internet between natural and legal persons.” Their functions include connecting users to the internet; hosting web-based services; facilitating the processing of data; gathering information and storing data; assisting searches; and enabling the sale of goods and services.35

Examples of internet intermediaries include:

  • Internet service providers who offer connectivity;

  • Web hosting companies that provide the infrastructure;

  • Search engines and social media platforms that provide content and facilitate communication.36

Simply put, “internet intermediaries are the pipes through which internet content is transmitted and the storage spaces in which it is stored and are therefore essential to the functioning of the internet.”37 Internet intermediaries occupy a pivotal role in the current digital climate, impacting social, economic and political exchanges. They can influence the dissemination of ideas and have been described as the “custodians of our data and gatekeepers of the world’s knowledge”.38

It is not difficult to see the link between internet intermediaries and the advancement of an array of human rights. As gatekeepers to the internet, they occupy a unique position in which they can enable the exercise of freedom of expression, access to information and privacy rights. The 2016 Report of the UNSR noted that:

The contemporary exercise of freedom of opinion and expression owes much of its strength to private industry, which wields enormous power over digital space, acting as a gateway for information and an intermediary for expression.

Internet intermediary liability

Given the important roles that intermediaries play in society, with influence on either upholding or infringing on a myriad of implicated rights, it is imperative to understand their legal liability. The Association for Progressive Communications (APC) explains that intermediary liability refers to the extent that internet intermediaries should be held responsible for what users do through their services. Where intermediary liability exists, ISPs have an obligation to prevent unlawful or harmful activity by users of their services, and failure to do so may lead to legal consequences such as orders to compel or criminal sanctions.

For example, in 2023 the Malaysian Communications and Multimedia Commission (MCMC) announced that it would take legal action against Meta for what it saw as a failure to promptly remove content deemed harmful.39 This reportedly included matters related to race, royalty, religion, and instances of defamation, impersonation, online gambling, and fraudulent advertisements. Digital rights advocates argued that the MCMC’s threat of legal action against a social media platform for its content moderation decisions poses a potential risk to intermediary liability principles and online freedom of expression.40

In a report on the liability of internet intermediaries in Nigeria, Kenya, South Africa, and Uganda, APC captured the following ways in which intermediary liability can arise:

  • Copyright infringement.

  • Digital privacy.

  • Defamation.

  • National and public security.

  • Hate speech.

  • Child protection.

  • Intellectual property disputes.

While intermediary liability can be associated with a legitimate interest, there are growing concerns, as noted by the UNSR in the 2016 Report, about the “appropriate balance between freedom of expression and other human rights” and the misuse of intermediary liability to curb expression and access.41 The legal liability of intermediaries has a direct impact on users’ rights, as intermediaries are more likely to be pre-emptively restrictive, and even prevent lawful activity, to avoid possible legal consequences. In this regard, there is a direct correlation between restrictive liability laws – the over-regulation of content – and the increased censorship, monitoring and restrictions of legitimate and lawful online expression. There are three general approaches to intermediary liability, each with differing considerations and implications: strict liability, the broad immunity model, and the safe-harbour model.

Strict liability

In terms of this approach, intermediaries are liable for third-party content. A UNESCO report on internet intermediaries states that the only way to avoid liability is to proactively monitor, filter, and remove content in order to comply with the state’s law. Failing to do so places an intermediary at risk of fines, criminal liability, and revocation of business or media licenses. The UNESCO report notes that China and Thailand are governed by strict liability. This approach is largely considered inconsistent with international norms and standards.

Strict Liability in China

The Stanford CIS World Intermediary Liability Map, which documents intermediary laws around the world, has captured the following in relation to China:

  • In 2000, China’s State Council imposed obligations on “producing, assisting in the production of, issuing, or broadcasting” information that contravened an ambiguous list of principles (for example, opposing the basic principles as they are confirmed in the Constitution; disrupting national policies on religion, propagating evil cults and feudal superstitions; and spreading rumours, disturbing social order, or disrupting social stability).

  • China has followed through with its strict liability approach and continues to hold internet companies liable if they fail to comply. This has led to wide-scale filtering and monitoring by intermediaries. This level of oversight has resulted in social media companies being the principal censors of their users’ content.

Broad immunity model

On the other end of the spectrum is the broad immunity model, which exempts intermediaries from liability without distinguishing between intermediary function and content. The UNESCO report cites the Communications Decency Act in the United States as an example of this model: it protects intermediaries from liability for unlawful user content, including when they voluntarily remove content in line with their own policies. ARTICLE19 explains that under this model, intermediaries are not responsible for third-party content they carry, though they remain responsible for content they themselves produce. The Organisation for Economic Co-operation and Development (OECD), in its Council Recommendation on principles for internet policy, refers to this as the preferred model, as it conforms with the best practices discussed below and gives due regard to the promotion and protection of the global free flow of information online.

Safe-harbour model

The safe harbour model, otherwise known as conditional liability, seemingly adopts a middle-ground approach. This approach gives intermediaries immunity provided they comply with certain requirements. Through this approach, intermediaries do not have to actively monitor and filter content but rather are expected to remove or disable content upon receipt of notice that the content includes infringing material. Central to this approach is the idea of ‘notice and takedown procedures’, which can be content- or issue‑specific. There are mixed views on this approach; for some, it is a fair middle-ground; for others, it is a necessary evil to guard against increased filtering or a complete change in the intermediary landscape.42 As noted in the UNESCO report, there are others who express concern about this approach because of its susceptibility to abuse, as it may lend itself to self-censorship, giving the intermediaries quasi-judicial power to evaluate and determine the legality of content.
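The notice-and-takedown mechanics described above can be sketched as a minimal state machine (a deliberate simplification with invented class and method names; real statutory procedures, deadlines, and counter-notice rights vary by jurisdiction):

```python
from dataclasses import dataclass, field

@dataclass
class HostedItem:
    url: str
    status: str = "online"  # "online" -> "disabled" once a notice is acted on
    notices: list = field(default_factory=list)

class SafeHarbourHost:
    """Toy model of conditional liability: the host keeps its immunity
    by acting expeditiously on notices, and has no general duty to
    monitor content before a notice arrives."""

    def __init__(self):
        self.items = {}
        self.immune = True

    def publish(self, url):
        # No pre-screening: hosting precedes any legality assessment.
        self.items[url] = HostedItem(url)

    def receive_notice(self, url, complaint):
        item = self.items.get(url)
        if item is None:
            return  # nothing hosted at this URL; notice has no effect
        item.notices.append(complaint)
        # Acting on the notice preserves the safe harbour; ignoring it
        # would expose the host to liability for that content.
        item.status = "disabled"

host = SafeHarbourHost()
host.publish("example.org/post/1")
host.receive_notice("example.org/post/1", "alleged copyright infringement")
assert host.items["example.org/post/1"].status == "disabled"
assert host.immune
```

The sketch also makes the criticism visible: because disabling content is the only safe response to a notice, the model takes down material on mere allegation, with no step at which the content provider is heard.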

Conditional liability in South Africa

The Freedom of Expression Institute explains the position in South Africa as follows:

  • Chapter 11 of the South African Electronic Communications and Transactions Act 25 of 2002 provides for limited liability of internet intermediaries subject to a takedown notice condition. These provisions apply to members of the Internet Service Providers Association. If an ISP receives a takedown notice to remove harmful content, it must respond immediately, failing which its immunity from liability is forfeited.

  • Criticism of South Africa’s framework matches broader concerns about the safe harbour approach: that ISPs err on the side of caution and are quick to remove content without providing the content provider with an opportunity to defend the content, and there are no existing appeal mechanisms for content creators or providers. This is concerning given that any individual can submit a takedown notice.43

  • The potential for these mechanisms to be abused became clear in 2019 when an ISP briefly took down the South African news portal Mail & Guardian Online in response to a fraudulent takedown request, which appears to have been submitted in retaliation for an investigative report about a convicted fraudster at the centre of a controversial South African oil deal.44

At the core of the debate between the various models is the need to understand the difference between lawful and unlawful content. There is a chilling effect on expression when internet intermediaries are left to their own devices to determine what is good or legal, as they are likely to tend towards more censorship rather than less, out of fear of liability.

Keeping in line with a human rights perspective, this guide advocates that “[t]he right to freedom of expression online can only be sufficiently protected if intermediaries are adequately insulated from liability for content generated by others.”(45) The following section provides some guidance on applicable international human rights frameworks that can be relied on when advocating for rights in relation to intermediary liability.

Intermediary liability in the courts

Intermediary liability has been dealt with at some length by the European Court of Human Rights (ECtHR). The groundbreaking case of Delfi AS v Estonia found that an online news portal was liable for offensive comments it allowed to be posted below one of its news articles.

In Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, however, the Court found that imposing objective liability for unlawful comments made by readers on a website placed “excessive and impracticable forethought capable of undermining freedom of the right to impart information on the Internet.”

More recently, Media Defence and the Electronic Frontier Foundation (EFF) intervened in a case at the Grand Chamber of the ECtHR concerning online users being held liable for third-party comments. In Sanchez v France, a French politician was charged with incitement to hatred on religious grounds following comments posted on the ‘wall’ of his Facebook account by other parties. Because he failed to delete those comments promptly, he was convicted of that offence. The individuals who posted the comments were convicted of the same offence. The Fifth Section of the ECtHR held that his conviction for failing to promptly delete unlawful comments published by third parties on the public wall of his Facebook account did not breach his Article 10 rights, despite his apparent lack of knowledge of the comments. The judgment was referred to the Grand Chamber of the ECtHR, which dismissed the application in 2023.

In 2021, the UN Special Rapporteur warned against the trend of states passing regulations and issuing orders to pressure online platforms to police speech, rather than creating rights-preserving processes that can be adjudicated through the courts, noting:

“The risk with such laws is that intermediaries are likely to err on the side of caution and “over-remove” content for fear of being sanctioned.”46

Different interest groups continue to push different agendas in relation to internet intermediaries and their liability. Many countries either have non-existent laws or vague and inconsistent laws that make it difficult to enforce rights. There are, however, applicable international human rights frameworks that guide how laws should be enacted or how restrictions may be imposed. With any rights-based advocacy or litigation, it is necessary to establish the rights invoked. As discussed above, it is clear that internet intermediaries play a vital role in the advancement of an array of rights. Thereafter, the next step is to determine responsibility.

In relation to internet intermediaries, the triad of information rights is clearly invoked. The 2010 UN Framework for Business and Human Rights finds that states are primarily responsible for ensuring that internet intermediaries act in a manner that ensures the respect, protection and promotion of fundamental rights and freedoms of internet users. But at the same time, the intermediaries themselves have a responsibility to respect the recognised rights of their users.

Although there might be complexities regarding the cross-jurisdictional scope of intermediaries’ powers and responsibilities, international human rights norms should always be at the fore.

Given the link between internet intermediaries and the fundamental right to freedom of expression, it is best to engage with this topic and test laws, regulations and policies against prescribed human rights standards and understand the restrictions and limitations that may be applicable. As discussed in previous sections, restrictions on the right to freedom of expression have been formulated as a strict, narrow, three-part test – namely, that the restriction must:

  • Be provided by law;

  • Pursue a legitimate aim; and

  • Conform to the strict tests of necessity and proportionality.47
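As a purely illustrative aid, the three limbs above can be thought of as cumulative checks: a restriction that fails any one of them is an unjustifiable limitation on expression. The sketch below is a hypothetical structuring device for such an assessment, not a substitute for legal analysis; all names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Restriction:
    """A hypothetical content restriction to be assessed (illustrative only)."""
    provided_by_law: bool   # set out in accessible, precise law?
    legitimate_aim: bool    # e.g. rights of others, national security, public order
    necessary: bool         # least restrictive means available?
    proportionate: bool     # burden on expression proportionate to the aim?

def passes_three_part_test(r: Restriction) -> bool:
    """The limbs are cumulative: failing any one means the
    restriction is an unjustifiable limitation on expression."""
    return all([r.provided_by_law, r.legitimate_aim,
                r.necessary, r.proportionate])

# A vague law still fails, even if it pursues a legitimate aim:
vague_rule = Restriction(provided_by_law=False, legitimate_aim=True,
                         necessary=True, proportionate=True)
print(passes_three_part_test(vague_rule))  # False
```

The value of framing the test this way is that it makes the cumulative structure explicit: a court does not balance the limbs against each other, but requires each to be satisfied in turn.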

Laws, content restriction orders, and practices must comply with this test. Practically, the need to assess the compliance of legislative frameworks is most likely to arise in jurisdictions that adopt the strict liability model or the safe-harbour model. The strict liability model can be tested against this framework with relative ease. The safe-harbour model requires slightly deeper engagement to determine compliance, as the following example – namely Kenya’s Copyright (Amendment) Act of 2022 – shows.

Copyright reform in Kenya

In 2022, Kenya passed into law the Copyright (Amendment) Act. While the final Act did not deal substantively with intermediary liability, this was due to drafting changes during the public participation process: in its earlier forms, the Copyright (Amendment) Bill provided some interesting proposals regarding intermediary liability in the African context. A key feature of earlier versions of the Bill was the introduction of the safe-harbour approach, providing for “conduit” safe harbours and “caching” safe harbours. The former would have protected intermediaries from liability for copyright infringements if their involvement was limited to “providing access to or transmitting content, routing or storage of content in the ordinary course of business”. Under these circumstances, the intermediary would not have been under an obligation to take down or disable content if a takedown notice was received. As per (former) section 35A(1)(b), intermediaries would have been protected if their role was related to content storage that is “automatic, intermediate and temporary”. This protection would have been conditional upon the removal of content following a take-down notice.48

Civil society criticised the lack of clarity and vague notice-and-takedown procedures in the Bill, noting that it fell short of international standards on freedom of expression. ARTICLE 19 listed five problems with the Bill in terms of notice-and-takedown procedures:
  • Lack of proportionality: criminal sanctions would have been imposed on intermediaries who failed to remove content. As discussed above, this would cause intermediaries to lean toward censorship and blocking, which infringes on freedom of expression.

  • Lack of clarity: the procedures were vague and did not provide clarity on the issue of counter-notices.

  • Lack of due process: there was no mention of judicial review or appeal mechanisms. There was also no requirement to notify the content publisher of the alleged infringement. The 48-hour timeframe for content removal would not have allowed for a counter-notice.

  • Lack of transparency: there was no obligation to maintain records of takedown requests or provide access to such records.

  • Severe sanctions: the harsh sanctions for false takedown notices would have been disproportionate to the purpose of deterring such notices.

It is apparent that the necessity and proportionality legs of the test proved to be the sticking points in relation to this Bill. While the safe-harbour method might serve a legitimate aim, if the guiding regulations are not clear, necessary, and proportionate, then there is an unjustifiable limitation on freedom of expression. These sections of the Bill were removed, and the Act was passed in 2022 without addressing intermediary liability.

In 2015, a group of civil society organisations drafted a framework of baseline safeguards and best practices to protect human rights when intermediaries are asked to restrict online content. Known as the Manila Principles, these were drafted with the intention of being “considered by policymakers and intermediaries when developing, adopting, and reviewing legislation, policies and practices that govern the liability of intermediaries for third-party content.” Advocates and litigators should similarly rely on these best practice principles, which are based on international human rights instruments and other international legal frameworks, when advancing online rights.

Manila Principles

The key tenets of the Manila Principles on Intermediary Liability:
  • Intermediaries should be shielded from liability for third-party content.

  • Content must not be required to be restricted without an order by a judicial authority.

  • Requests for restrictions of content must be clear, unambiguous, and follow due process.

  • Laws and content restriction orders and practices must comply with the tests of necessity and proportionality.

  • Laws and content restriction policies and practices must respect due process.

  • Transparency and accountability must be built into laws and content restriction policies and practices.
 

Digital rights advocates have used these principles to test whether states’ legal frameworks and regulations for intermediary liability are adequate. For example, in 2018 India’s ICT ministry published draft regulations that would add new restrictions to the country’s existing intermediary liability framework, including a requirement that internet intermediaries automatically and proactively filter out content that promotes cigarettes and alcohol.49 The Centre for Internet and Society (CIS) made submissions showing that the draft 2018 Rules did not align with the Manila Principles and had the potential to infringe on the right to freedom of expression. At the time of this publication, the provisions in the 2018 Draft Rules had not been put into regulation, and the CIS approach is a useful illustration of how the Manila Principles can be used to test domestic legislation against international best practices.

However, from 2021 to 2023 the Ministry proposed new, more extensive changes to the intermediary liability framework.50 While these do not include the controversial provisions of the 2018 Draft Rules, the changes extend new liabilities to the online gaming industry and include new restrictions on publishing information that is “patently false and untrue or misleading in nature”. In follow-up submissions, the CIS argued that this effectively requires intermediaries to fact-check any content published through their services, which it contends is unconstitutional.

The apparent successes in having the draft 2018 Rules withdrawn illustrate the importance of digital rights advocates bringing international law to bear in their policy engagements. Yet the subsequent developments in India’s intermediary liability framework illustrate the ongoing debates and need for further engagement to ensure emerging policies uphold the principles of freedom of expression online.

Conclusion

Internet intermediaries play a crucial role in the advancement of human rights. Intermediary liability needs to be understood holistically in relation to the prevention of harm, the protection of free speech and access to information, and encouraging innovation and creativity.51

While there is a growing trend of online harms and unlawful content:

“The law must find a way to flexibly address these changes, with an awareness of the ways in which existing and proposed laws may affect the development of information intermediaries, online speech norms, and global democratic values.”52

Right To Be Forgotten

Overview of the right to be forgotten

The right to be forgotten, also described as the right to be delisted or the right to erasure, involves an entitlement to request that search engines remove links to private information, taking into account the right to privacy weighed against public interest considerations.53

In India, the Digital Personal Data Protection Bill of 2022, specifically in section 14, introduces the concept of the “Right to be Forgotten.”54 This right grants individuals, known as data principals, the authority to correct or erase their personal data. If a data fiduciary receives such a request, it is obligated to either correct, complete, or update the data principal’s information, or erase it if it is no longer necessary for the original processing purpose, unless retention is legally required. It is important to note that the Digital Personal Data Protection Bill is yet to be passed, and currently, the Information Technology Act of 2000 provides relevant protections.

Case Note: Google Spain SL v Agencia Española de Protección de Datos

The right to be forgotten was given prominence following the 2014 Court of Justice of the European Union (CJEU) judgment in what has come to be known as the Google Spain case.55 This judgment has altered the online privacy landscape and has far-reaching legal implications. In brief, Mr Gonzalez, a Spanish national, took issue with the fact that when internet users searched his name on Google, the search results revealed a news story from 1998 regarding his debt. He requested that the personal information be removed as the matter had been resolved and was no longer relevant. The findings of the CJEU can briefly be summarised as follows:
  • The CJEU held that it had jurisdiction to adjudicate the matter, that search engines are data controllers, and that the right to be forgotten means that personal information that is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing” must be erased by the search engine.

  • The CJEU, however, ruled that the right to be forgotten should not apply to information that is relevant to the public interest.

This wide discretion for search engines to balance the competing elements of relevance and the public interest left some digital rights activists concerned. The decision also triggered a debate regarding the tension between the right to privacy and the right to freedom of expression and access to information. Some privacy proponents welcomed the legal development for creating space for people to have some level of control over their personal information, arguing that it “restores the balance between free speech and privacy in the digital world.”56 Others were more circumspect, noting that when information is delisted it affects other fundamental rights, including freedom of expression and the right to receive and impart information and ideas.57

Evolution of the right to be forgotten

Following the abovementioned judgment, the right to be forgotten has been recognised in domestic contexts,58 regional legislation and again by the CJEU. For example, the High Court of Orissa, India held in Rout v State of Odisha (2020) that the right to be forgotten is an integral part of the right to privacy. Nevertheless, some countries’ courts continue to push back against such a right. In Curi et al v Globo Comunicação e Participações S/A (2021), the Brazilian Federal Supreme Court held that a general right to be forgotten is incompatible with the Federal Constitution.

As of 2022, Google’s Transparency Report revealed that it had delisted nearly 50% of the URLs requested for removal under these terms, having received over 1.3 million requests from users to be “forgotten” since 2014. The relevance of this new right cannot be disputed; however, its scope, applicability and effects are still being debated.

In May 2018, the European Union (EU) elevated the status of the right through article 17 of the General Data Protection Regulation. Article 17 provides data subjects with the right to the erasure of their personal data by data controllers, including search engines. It further obliges controllers to erase personal data without undue delay on listed grounds. When erasure is required, article 17(2) stipulates that all reasonable steps must be followed – taking into account the available technology and the cost of implementation – to inform all controllers processing the personal information that any links, copies or replication of the personal data should also be erased. Article 17(3) sets out instances when the right to be forgotten does not apply, namely for exercising the right of freedom of expression and information; for compliance with a legal obligation; for reasons of public interest in the area of public health; for archiving purposes in the public interest, scientific or historical research or statistical purposes; or for the establishment, exercise or defence of legal claims.
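The structure of article 17 – erasure on a listed ground, subject to the article 17(3) exemptions – can be sketched as a simple decision flow. The exemption labels below paraphrase the Regulation, and the function and its names are purely illustrative assumptions, not an implementation of any real compliance system.

```python
# Illustrative sketch of the article 17 GDPR decision flow.
# Exemption labels paraphrase art. 17(3); this is not legal advice.

ART_17_3_EXEMPTIONS = {
    "freedom_of_expression_and_information",
    "compliance_with_legal_obligation",
    "public_interest_in_public_health",
    "archiving_research_or_statistics",
    "legal_claims",
}

def must_erase(ground_for_erasure: bool, applicable_exemptions: set) -> bool:
    """Erasure is required when a listed ground applies (e.g. the data
    are no longer necessary, or consent is withdrawn) and no art. 17(3)
    exemption holds."""
    if not ground_for_erasure:
        return False
    # Any overlap with the exemption list defeats the erasure obligation.
    return not (applicable_exemptions & ART_17_3_EXEMPTIONS)

# Data no longer necessary, but retained to defend a pending legal claim:
print(must_erase(True, {"legal_claims"}))  # False
print(must_erase(True, set()))             # True
```

The sketch makes visible the point debated in the surrounding text: the freedom-of-expression exemption sits structurally on equal footing with the erasure grounds, so every request forces a balancing exercise rather than an automatic deletion.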

Further jurisprudence on the right to be forgotten

In September 2019, the CJEU handed down a further ruling in Google LLC v Commission Nationale de l’Informatique et des Libertés (CNIL). The case dealt with whether a de-listing order made in a member state of the EU meant that the search results had to be removed from all the search engine’s domain name extensions globally.

In 2015, the French Data Protection Agency (CNIL) had requested Google to globally remove information concerning a data subject. Google refused and limited its removal only to EU member states, resulting in CNIL fining Google. Google appealed this decision. Many interested parties, including Wikimedia, Microsoft, governments of EU member states, and civil society actors made submissions to the CJEU. The CJEU acknowledged that the right to be forgotten is not globally recognised and that the competing interests between the right to privacy and freedom of expression are balanced differently across the world.

Ultimately, the CJEU found that where a search engine operator has granted a de-listing request of a data subject in an EU member state, there is no obligation under EU law for the search engine operator to implement the de-listing on all versions of its search engine globally. The CJEU further noted that while EU law does not require de-referencing from all versions of a search engine, such a practice is not prohibited. A judicial authority of a member state remains competent to weigh up – in the light of national standards of protection of fundamental rights – a data subject’s right to privacy and the protection of personal data concerning them, on the one hand, and the right to freedom of information, on the other, and, after weighing those rights against each other, to order the operator of that search engine to carry out a de-referencing concerning all versions of that search engine.

Intervening parties such as ARTICLE 19 and the Electronic Frontier Foundation welcomed the ruling of the CJEU:

This ruling is a victory for global freedom of expression.  Courts or data regulators in the UK, France or Germany should not be able to determine the search results that internet users in America, India or Argentina get to see.  The Court is right to state that the balance between privacy and free speech should be taken into account when deciding if websites should be de-listed – and also to recognise that this balance may vary around the world.  It is not right that one country’s data protection authorities can impose their interpretation on Internet users around the world.

Other cases have also recently been added to the body of case law on this issue. In Hurbain v Belgium, the ECtHR held that an order enforcing the right to be forgotten of a person involved in a road accident through anonymisation did not breach the publisher’s freedom of expression. In Biancardi v Italy, it likewise held that holding an online publisher liable for failing to comply with a de-indexing request was a justified restriction on the publisher’s freedom of expression.

The careful navigation of balancing privacy rights against freedom of expression will continue to pose challenges as the digital landscape continues to evolve.59

The extra-territorial scope of the right to be forgotten

In many ways, the CJEU clarified the extra-territorial scope of the right to be forgotten. The CJEU has acknowledged that states are still entitled to develop the content of this right within their respective jurisdictions, and are still at liberty to adopt different approaches when balancing the relevant rights and interests – provided that such an approach is compliant with international human rights norms.

Opportunities and risks

The right to be forgotten can provide important protections for privacy and can fulfil an important role in promoting agency and autonomy. State and non-state actors have far-reaching powers when it comes to the online personal information and identity of individuals. Allowing individuals to have some ownership of their personal information gives them a degree of control over their digital identities. Most online personal information has no bearing on public interest considerations and has far more intrinsic value to the individual than to society at large. The current jurisprudential and legislative developments in this regard have been sensitive to this, recognising the difference between what is of value to an individual, what is interesting to the public, and what is in the public interest.

There were concerns that an “overly expansive right to be forgotten will lead to censorship of the Internet because data subjects can force search engines or websites to erase personal data, which may rewrite history.”60 In some instances, it is permissible for individuals not to be indefinitely defined by their past. The Google Spain judgment provides some direction on this, where it recognised the need for relevant considerations to take place – such as the nature and sensitivity of the information, the public interest and the role played by the data subject in public life – when finding a fair balance between the right of the data subject and the interests of internet users.

Shortly after the Google Spain judgment, Google received an array of requests from people to have articles about their past removed from the search engine. Google’s regular Transparency Reports provide some guidance on how it deals with requests, providing examples of some of the outcomes of requests for erasure. In 2017, for example, the report noted some responses to politicians’ requests stating “[w]e did not delist the URLs given his former status as a public figure”, while another stated “[w]e delisted 13 URLs as he did not appear to be currently engaged in political life and was a minor at the time.” ARTICLE 19 explains that, from a child’s rights perspective, binding children to negative aspects of their past can “impede their development and diminish their sense of self-worth.”

There are legitimate benefits that accompany the right to be forgotten; however, there are also risks associated with the right, in particular around the enforcement of rights and the adverse effect this can have on the right to freedom of expression.61 A lack of cogent regulatory safeguards can result in search engines becoming the “judge, jury, and executioner” of the right to be forgotten.62 There are risks involved in conferring such a decision-making power on a private entity, particularly given the need to balance competing rights, an exercise traditionally reserved for courts.63 The Electronic Frontier Foundation expressed concern that placing such “ambiguous responsibility upon search engines” would lead to censorship of the internet.

Ensuring adequate safeguards in the right to be forgotten

Access Now has provided some guidance on ensuring clear safeguards for the implementation of the right to be forgotten:
  • A right to de-list must be limited to the sole purpose of protecting personal data.

  • Criteria for de-listing must be clearly defined in law to protect human rights.

  • Competent judicial authorities should interpret standards for determining what is de-listed.

  • The right to de-list must be limited in scope and application.

  • Search engines must be transparent about when and how they comply with de-listing requests.

  • Users must have easy access to a remedy.

Conclusion

The right to be forgotten brings to the fore the tensions between the right to privacy and the right to freedom of expression, and given the rapid pace at which the digital space is changing, these tensions will likely persist. Provided public interest overrides are prioritised and adequate safeguards are put in place, there can be some degree of consonance.

Monitoring Obligations of Search Engines and Platforms

Overview of monitoring obligations of search engines and platforms

The internet has been described as “the greatest tool in history for global access to information and expression”.64 But it is also a powerful tool for disinformation and hate speech which have, as captured in the Joint Letter from Special Rapporteurs and experts, “exacerbated societal and racial tensions, inciting attacks with deadly consequences around the world.” The increase in the spread of disinformation and the rise of the internet being used for nefarious purposes has put non-state actors in a somewhat precarious position. The UN Human Rights Office of the High Commissioner notes that along with the many opportunities associated with the internet, there are growing threats of unlawful activities online. The ease with which malicious content can spread online has posed a dilemma for states and intermediaries. On the one hand, there is a need to mitigate online harms, but on the other, in order to do so, content must not be moderated in a manner that leads to censorship and free speech violations.65 Intermediaries are now complying with state laws concerning content regulation and are also, in some instances, acting proactively to monitor content, either of their own volition or in order to escape liability.66

The 2018 Report by the UNSR on FreeEx noted key concerns regarding content regulation:

States regularly require companies to restrict manifestly illegal content such as representations of child sexual abuse, direct and credible threats of harm and incitement to violence, presuming they also meet the conditions of legality and necessity. Some [s]tates go much further and rely on censorship and criminalization to shape the online regulatory environment.

Monitoring obligations for search engines and platforms are loosely understood as general obligations imposed on intermediaries to monitor all content and filter unwanted content.67 Intermediaries faced with these obligations are expected to develop content recognition technologies or other automatic infringement assessment systems and, essentially, to develop and utilise filtering systems.68 In instances where there are strict monitoring obligations, monitoring will likely become the norm, opening intermediaries to automatic and direct liability.69 Monitoring obligations raise concerns with respect to intermediary liability. It has been noted that:

Monitoring obligations drastically tilt the balance of the intermediary liability rules toward more restriction of speech, may hinder innovation and competition by increasing the costs of operating an online platform, and may exacerbate the broadly discussed problem of over-removal of lawful content from the Internet.70

Further to the above, there has been a trend, akin to that of the right to be forgotten, where states demand global removal of content that violates domestic law.71 Notwithstanding the recent findings of the CJEU, these demands might continue, as predicted by the UNSR in the 2018 Report, to have the chilling effect of allowing censorship across borders.

The imposition of monitoring obligations appears to have primarily been in relation to copyright infringements. However, it is growing at an unprecedented rate, causing grave concern for free expression.72 Judgments of the European Court of Human Rights (ECtHR) provide useful insight into the issues regarding online platforms and liability for users’ comments.

Jurisprudential developments

The Delfi v Estonia matter was the first of the prominent cases to address the issue of content moderation and online media liability. An Estonian newspaper, Delfi, published an article that was critical of a ferry company. The article received 185 comments online, some of which were targeting a board member of the company, L, and were considered threatening and/or offensive. L requested that the comments be immediately taken down and claimed approximately €32,000 in compensation for non-pecuniary damages. Delfi agreed to remove the comments but refused to pay the damages. L approached the Harju County Court, bringing a civil claim against Delfi. The County Court found that the company could not be considered the publisher of the comments, and it did not have an obligation to monitor them. L appealed to the Tallinn Court of Appeal, which remitted the matter back to the County Court for reconsideration, concluding that the lower court had erred in its finding in relation to Delfi’s liability. The matter eventually reached the Supreme Court, which found that there was a legal obligation to avoid causing damage to other persons and that Delfi should have prevented the clearly unlawful comments from being published. The Supreme Court noted that after the comments had been published, Delfi failed to remove them on its own initiative, although it must have been aware of their unlawfulness. Delfi’s failure to act was found to be unlawful.

Delfi applied to the First Section of ECtHR, arguing that the imposition of liability for the comments violated its right to freedom of expression. The ECtHR was faced with the question of whether Delfi’s obligation, as established by the domestic judicial authorities, to ensure that comments posted on its internet portal did not infringe the personality rights of third persons was in accordance with the right to freedom of expression. In order to resolve this question, the ECtHR developed a four-stage test:

  • The context of the comments.

  • The measures applied by Delfi in order to prevent or remove defamatory comments.

  • The liability of the actual authors of the comments as an alternative to the applicant company’s liability.

  • The impacts of the restrictions imposed on Delfi in a democratic society.

The ECtHR found that the restriction on Delfi’s right to freedom of expression was justified and proportionate, taking into consideration the following:

  • The insulting and threatening nature of the comments, which were posted in reaction to an article published by Delfi;

  • The insufficiency of the measures taken by Delfi to avoid damage being caused to other parties’ reputations and to ensure a realistic possibility that the authors of the comments would be held liable; and

  • The moderate sanction imposed on Delfi.

Following this decision by the First Section, the matter was then referred to the Grand Chamber of the ECtHR. In 2015, the Grand Chamber affirmed the judgment of the First Section. In this regard, in the 2015 Delfi v Estonia judgment, the Grand Chamber noted:

[W]hile the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained and constitute an effective remedy for violations of personality rights.

The Grand Chamber, in determining whether freedom of expression had been infringed, considered whether the restriction was lawful, pursued a legitimate aim, and was necessary in a democratic society. Ultimately, the Grand Chamber concluded that Delfi was liable for defamation as the publisher of the comments. The Grand Chamber found that “an active intermediary which provides a comments section cannot have absolute liability” and noted that “freedom of expression cannot be turned into an exercise in imposing duties.”

While the Grand Chamber found that the liability imposed on Delfi had been a justified and proportionate restriction on the news portal’s freedom of expression, the dissenting judges cautioned that:

We trust that this is not the beginning (or the reinforcement and speeding up) of another chapter of silencing and that it will not restrict the democracy-enhancing potential of the new media.  New technologies often overcome the most astute and stubborn politically or judicially imposed barriers.  But history offers discouraging examples of censorial regulation of intermediaries with lasting effects.  As a reminder, here we provide a short summary of a censorial attempt that targeted intermediaries.

Shortly after the Grand Chamber’s Delfi judgment, the Fourth Section of the ECtHR considered whether a non-profit, self-regulatory body of intermediaries (MTE) and an internet news portal (Index) were liable for offensive comments posted on their websites in Magyar Tartalomszolgáltatók Egyesülete v Hungary. In 2010, the two parties published an article critical of two real estate agents. The article attracted some comments that the estate agents found to be false and offensive and which, they argued, infringed on their right to a good reputation. MTE and Index were held liable by the Hungarian courts for the comments. MTE and Index approached the ECtHR arguing that their right to freedom of expression had been violated.

The ECtHR noted that interferences with the freedom of expression must be “prescribed by law,” have one or more legitimate aims, and be “necessary in a democratic society.” The ECtHR applied the same four-stage test as it did in Delfi but differed from its finding in Delfi, concluding that there had been a violation of freedom of expression. The ECtHR found that:

  • The comments triggered by the article can be regarded as going to a matter of public interest and, while they were vulgar, they were not necessarily offensive; the Court noted that style constitutes part of the communication as the form of expression and is protected together with the content of the expression.

  • The conduct of MTE and Index in providing a platform for third parties to exercise their freedom of expression by posting comments is a journalistic activity of a particular nature. It was noted that it would be difficult to reconcile MTE and Index’s liability with existing case law that cautions against the punishment of a journalist for assisting in the dissemination of statements made by another person.

  • MTE and Index took certain general measures to prevent defamatory comments on their portals or to remove them.

In finding a violation of freedom of expression, the ECtHR concluded with the following:

[I]n the case of Delfi, the Court found that if accompanied by effective procedures allowing for rapid response, the notice-and-take-down-system could function in many cases as an appropriate tool for balancing the rights and interests of all those involved.  The Court sees no reason to hold that such a system could not have provided a viable avenue to protect the commercial reputation of the plaintiff. It is true that, in cases where third-party user comments take the form of hate speech and direct threats to the physical integrity of individuals, the rights and interests of others and of the society as a whole might entitle Contracting States to impose liability on Internet news portals if they failed to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties.  However, the present case did not involve such utterances.

It has been noted that there are some inconsistencies in the ECtHR’s approach to online liability.73 However, the shift away from the Delfi reasoning does appear to be a step in the right direction.74 Ultimately, these cases have illustrated that even though freedom of expression is paramount, complete immunity is not always attainable, and there may be instances where intermediaries will be responsible for the moderation of content.75

Efforts to address content moderation at the global level

The UN Office of the High Commissioner for Human Rights has noted:

One of the greatest threats to online free speech today is the murkiness of the rules . . .  States circumvent human rights obligations by going directly to the companies, asking them to take down content or accounts without going through legal process, while companies often impose rules they have developed without public input and enforced with little clarity. We need to change these dynamics so that individuals have a clear sense of what rules govern and how they are being applied.

Alongside the considerable rights implications of content moderation by intermediaries, the glaring lack of adequate rules, guidelines, procedures, and remedies governing current moderation practices is cause for concern.76 It is clear that a human rights framework ought to guide the principles for company content moderation.

UNSR guidance on applying human rights standards to online content moderation

These guidelines and recommendations are based on the Guiding Principles on Business and Human Rights as well as established international law, norms, and practices. They can be used when engaging with state and non-state actors to ensure compliance with human rights standards when online content is being moderated. Below is an outline of some of the key recommendations:

  • Human rights by default: Companies should incorporate directly into their terms of service and community standards relevant principles of human rights law that ensure content-related actions will be guided by the same standards of legality, necessity and legitimacy that bind state regulation of expression.

  • Legality: Company rules routinely lack the clarity and specificity that would enable users to predict with reasonable certainty what content places them on the wrong side of the line. Companies should supplement their efforts to explain their rules in more detail with aggregate data illustrating trends in rule enforcement, and examples of actual cases or extensive, detailed hypotheticals that illustrate the nuances of interpretation and application of specific rules.

  • Necessity and proportionality: Companies should not only describe contentious and context-specific rules in more detail; they should also disclose data and examples that provide insight into the factors they assess in determining a violation, its severity and the action taken in response.

  • Non-discrimination: Meaningful guarantees of non-discrimination require companies to transcend formalistic approaches that treat all protected characteristics as equally vulnerable to abuse, harassment and other forms of censorship.

These guidelines and recommendations provide further guidance on the processes for company moderation and related activities:

  • Prevention and mitigation: Companies should adopt and then publicly disclose specific policies that “direct all business units, including local subsidiaries, to resolve any legal ambiguity in favour of respect for freedom of expression, privacy, and other human rights”. Companies should also ensure that requests are in writing, cite specific and valid legal bases for restrictions and are issued by a valid government authority in an appropriate format.

  • Transparency: Best practices on how to provide such transparency should be developed. Companies should also provide specific examples as often as possible and should preserve records of requests made.

  • Due diligence: Companies should develop clear and specific criteria for identifying activities that trigger assessments, and assessments should be ongoing and adaptive to changes in circumstances or operating context.

  • Public input and engagement: Companies should engage adequately with users and civil society, particularly in the global south, to consider the human rights impact of their activities from diverse perspectives.

  • Rule-making transparency: Companies should seek comments on their impact assessments from interested users and experts when introducing products and rule modifications. They should also clearly communicate to the public the rules and processes that produced them.

  • Automation and human evaluation: Company responsibilities to prevent and mitigate human rights impacts should take into account the significant limitations of automation and, at a minimum, technology developed to deal with considerations of scale should be rigorously audited and developed with broad user and civil society input.

  • Notice and appeal: Companies could work with one another and civil society to explore scalable solutions such as company-specific or industry-wide ombudsman programmes and the promotion of remedies for violations.

  • Remedy: Companies should institute robust remediation programmes, which may range from reinstatement and acknowledgement to settlement processes.

  • User autonomy: While content rules in closed groups should be consistent with baseline human rights standards, platforms should encourage such affinity-based groups given their value in protecting opinion, expanding space for vulnerable communities and allowing the testing of controversial or unpopular ideas.


Conclusion

The growing power of private actors within the internet and technology sphere raises new questions with regard to the protection of freedom of expression in the modern age. Private actors have gained the ability to filter and control the flow of information to internet users, raising questions about net neutrality, and complex challenges with regard to enabling access to the internet and to information in developing countries, while maintaining the free and unhindered flow of information.

These powerful actors, along with online news publishers and a host of other internet intermediaries, have also become responsible for hosting huge quantities of information created and posted by regular users, raising questions about how responsibility should be apportioned for illegal or damaging content online. In particular, concerns have been raised that apportioning liability to intermediaries risks creating a digital ecosystem in which freedom of expression is routinely and structurally stymied because of fears of being held liable.

The right to privacy and the protection of personal information has come up against the free flow of information in the issue now known as ‘the right to be forgotten,’ which has begun to be dealt with at length in regional and domestic courts. This issue relates closely to that of the content moderation obligations of private platform providers and search engines, who must make influential decisions every day about what content will be allowed and what will be removed, with significant consequences for the right to freedom of expression in the digital age.

As a result, it is vital that mechanisms and processes for greater transparency and accountability over the decisions of these powerful, private actors be put in place to ensure alignment with international human rights law and standards on freedom of expression and access to information.

References

  1. UNHRC, ‘The promotion, protection and enjoyment of human rights on the Internet’ (2012) (accessible at https://daccess-ods.un.org/TMP/9602589.01119232.html). See further UNHRC, ‘Report of the Special Rapporteur on the rights to freedom of peaceful assembly and of association’ (2019) (accessible at https://undocs.org/A/HRC/41/41).
  2. UNHRC, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ A/HRC/17/27 (2011) at para 44 (accessible at https://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf).
  3. UNHRC ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (accessible at https://www.ohchr.org/en/documents/thematic-reports/a76258-gender-justice-and-freedom-expression-report-special-rapporteur).
  4. Center for Democracy and Technology, ‘The importance of internet neutrality to protecting human rights online’ (2013) (accessible at https://cdt.org/wp-content/uploads/pdfs/internet-neutrality-human-rights.pdf).
  5. Carrillo, ‘Having Your Cake and Eating It Too? Zero-Rating, Net Neutrality, and International Law’ 19 Stanford Technology Law Review (2016) at 367 (accessible at https://digitallibrary.un.org/record/3937534?ln=en).
  6. See further Media Defence ‘Training Manual on Digital Rights and Freedom of Expression Online Litigating digital rights and online freedom of expression in East, West and Southern Africa’ at 24, (accessible at https://www.mediadefence.org/resources/mldi-training-manual-digital-rights-and-freedom-expression-online).
  7. UNHRC, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2017) (accessible at https://documents.un.org/doc/undoc/gen/g17/077/46/pdf/g1707746.pdf?token=DbBBkbPjwcY7Y1QKhE&fe=true).
  8. UNHRC, ‘Resolution on the promotion, protection and enjoyment of human rights on the Internet’ (2021) (accessible at https://www.article19.org/resources/un-human-rights-council-adopts-resolution-on-human-rights-on-the-internet/).
  9. See Center for Democracy and Technology, ‘The importance of internet neutrality to protecting human rights online’ (2013) (accessible at https://cdt.org/wp-content/uploads/pdfs/internet-neutrality-human-rights.pdf).
  10. See Carillo, accessible at https://digitallibrary.un.org/record/3937534?ln=en.
  11. Audibert and Murray, ‘A Principled Approach to Network Neutrality’ LSE Research Online (2016) at 120 (accessible at http://eprints.lse.ac.uk/67362/7/Murray_Principled approach_2016.pdf).
  12. Marsden ‘Zero Rating and Mobile Net Neutrality’ Belli and De Filippi (ed) Net Neutrality Compendium: Human Rights, Free Competition, and the Future of the Internet (2016) at 241.
  13. Marsden in Net Neutrality Compendium above n 11 at 248.
  14. Body of European Regulators for Electronic Communication, BEREC Report on COVID-19 crisis – lessons learned regarding communications networks and services for a resilient society (2021) (accessible at https://www.berec.europa.eu/sites/default/files/files/document_register_store/2021/6/BoR_(21)_88_Draft_BEREC_Report_on_COVID19_final.pdf).
  15. Bhandari, Improving internet connectivity during Covid-19, Digital Pathways at Oxford Paper Series no. 4 (2020) (accessible at https://pathwayscommission.bsg.ox.ac.uk/sites/default/files/2020-09/improving_internet_connectivity_during_covid-19_0.pdf).
  16. GSMA, ‘Education for all during COVID-19: Scaling access and impact of EdTech’ (2020) (accessible at https://www.gsma.com/mobilefordevelopment/blog/education-for-all-during-covid-19-scaling-access-and-impact-of-edtech/).
  17. Bhandari, above n 14 at 19.
  18. Carrillo above n 5 at 367. See further Chaudhry, ‘Spotlight on India’s Internet: Facebook’s Free Basics or Basic Failure’ University of Washington Henry M. Jackson School of International Studies (2016) (accessible at https://jsis.washington.edu/news/spotlight-indias-internet-facebooks-free-basics-basic-failure/).
  19. Marsden in Net Neutrality Compendium at 251.
  20. Telecom Regulatory Authority of India, Regulation no 2 of 2016 (2016) (accessible at https://web.archive.org/web/20160209062517/http:/www.trai.gov.in/WriteReadData/WhatsNew/Documents/Regulation_Data_Service.pdf).
  21. Telecom Regulatory Authority of India, Recommendations on Net Neutrality (2017) (accessible at https://www.trai.gov.in/sites/default/files/Recommendations_NN_2017_11_28.pdf).
  22. Telecom Regulatory Authority of India, Consultation Paper on Regulatory Mechanism for Over-The-Top (OTT) Communication Services (2023) (accessible at https://www.trai.gov.in/sites/default/files/CP_07072023.pdf).
  23. Access Now, ‘Open letter: no discriminatory fees or licencing; TRAI must uphold net neutrality’ (2023) (accessible at https://www.accessnow.org/press-release/open-letter-trai-india-net-neutrality/).
  24. See Pouzin, ‘Net Neutrality and Quality of Service’ in Net Neutrality Compendium above n 11 at 78. See further Access Now ‘Net Neutrality matters for human rights across the globe’ (2017) (accessible at https://www.accessnow.org/net-neutrality-matters-human-rights-across-globe/).
  25. See Washington Post, ‘The FCC just voted to repeal its net neutrality rules, in a sweeping act of deregulation’ (2017) (accessible at https://www.washingtonpost.com/news/the-switch/wp/2017/12/14/the-fcc-is-expected-to-repeal-its-net-neutrality-rules-today-in-a-sweeping-act-of-deregulation/). See further Electronic Frontier Foundation ‘Team Internet Is Far From Done: What’s Next For Net Neutrality and How You Can Help’ (2017) (accessible at https://www.eff.org/deeplinks/2017/12/team-internet-far-done-whats-next-net-neutrality-and-how-you-can-help).
  26. AccessNow ‘The world responds to the U.S. FCC vote against Net Neutrality’ (2017) (accessible at https://www.accessnow.org/press-release/world-responds-u-s-fcc-vote-net-neutrality/).
  27. Washington Post, ‘Appeals Court Ruling Upholds FCC’s Cancelling of Net Neutrality Rules’ (2019) (accessible at https://www.washingtonpost.com/technology/2019/10/01/appeals-court-upholds-trump-administrations-cancelling-net-neutrality-rules/).
  28. Engadget, ‘US Appeals Court Will Not Rule on Repealing Net Neutrality Laws’ (2020) (accessible at https://www.engadget.com/2020/02/07/net-neutrality-us-appeals-court/).
  29. Office of the US Presidency, ‘Fact Sheet: Executive Order on Promoting Competition in the American Economy’ (2021) (accessible at https://www.whitehouse.gov/briefing-room/statements-releases/2021/07/09/fact-sheet-executive-order-on-promoting-competition-in-the-american-economy/).
  30. Vox “Net neutrality is back, but it’s not what you think” 2023 (accessible at https://www.vox.com/technology/2023/9/28/23893138/fcc-net-neutrality-returns).
  31. McDiarmid and Shears, ‘The Importance of Internet Neutrality to Protecting Human Rights Online’ in Net Neutrality Compendium at 31-32.
  32. Id at 38.
  33. Id at 38-41.
  34. For a detailed outline of the limitation of freedom of expression see Module 2 on Restricting Access and Content at 4 – 5. See also Belli, ‘End-to-End, Net Neutrality and Human Rights’ in Net Neutrality Compendium at 12.
  35. Media Defence, accessible at https://www.mediadefence.org/resources/mldi-training-manual-digital-rights-and-freedom-expression-online.
  36. ARTICLE 19, ‘Internet intermediaries: Dilemma of Liability’, 2013, at 3 (accessible at https://www.article19.org/data/files/Intermediaries_ENGLISH.pdf). See further Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’ Yale Law School Information Society Project (2018) at 9 (accessible at https://law.yale.edu/sites/default/files/area/center/isp/documents/beyond_intermediary_liability_-_workshop_report.pdf).
  37. Id at 6.
  38. Riordan, ‘The Liability of Internet Intermediaries’ DPhil thesis, Oxford University (2013) at 1 (accessible at https://ora.ox.ac.uk/objects/uuid:a593f15c-583f-4acf-a743-62ff0eca7bfe/download_file?file_format=pdf&safe_filename=THESIS02&type_of_work=Thesis).
  39. MCMC press statement (2023) (accessible at https://www.mcmc.gov.my/en/media/press-releases/non-cooperation-to-remove-undesirable-contents-fro).
  40. ARTICLE 19 ‘Malaysia: Halt legal action against Meta over content moderation’ 2023 (accessible at https://www.article19.org/resources/malaysia-halt-legal-action-against-meta/).
  41. A 2014 UNESCO report on fostering freedom online and the role of internet intermediaries provides a comprehensive overview of the above regulatory objectives pursued by the states, which in turn have a direct impact on how, and to what extent, intermediaries are compelled to restrict freedom of expression online.
  42. Koren, Nahmia and Perel, ‘Is It Time to Abolish Safe Harbor? When Rhetoric Clouds Policy Goals’ Stanford Law & Policy Review, Forthcoming (2019) at 47 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3344213).
  43. See further Comninos, ‘Intermediary liability in South Africa’ (2012) (accessible at https://www.apc.org/sites/default/files/Intermediary_Liability_in_South_Africa-Comninos_06.12.12.pdf). See also Rens, ‘Failure of Due Process in ISP Liability and Takedown Procedures’ in Global Censorship, Shifting Modes, Persisting Paradigms (2015) (accessible at https://law.yale.edu/sites/default/files/area/center/isp/documents/a2k_global-censorship_2.pdf).
  44. Mail & Guardian, ‘The digital breadcrumbs behind the M&G’s censorship attack’ (2019) (accessible at https://mg.co.za/article/2019-10-04-00-the-digital-breadcrumbs-behind-the-mgs-censorship-attack/).
  45. Media Defence above n 6 at 28.
  46. UNHRC, ‘Disinformation and freedom of opinion and expression’ (2021) (accessible at https://www.ohchr.org/en/documents/thematic-reports/ahrc4725-disinformation-and-freedom-opinion-and-expression-report).
  47. For a detailed outline of the limitation of freedom of expression see Module 2 on Restricting Access and Content at 4 – 5. See further OSCE, ‘Media Freedom on the Internet: An OSCE Guidebook’ (2016) (accessible at https://www.osce.org/netfreedom-guidebook?download=true).
  48. For a more detailed discussion on the Bill see Walubengo and Mutemi, ‘Treatment of Kenya’s Internet Service Providers (ISPs) under the Kenya Copyright (Amendment) Bill, 2017’, The African Journal of Information and Communication (2019) (accessible at https://journals.co.za/docserver/fulltext/afjic_n23_a5.pdfexpires=1581473231&id=id&accname=guest&checksum=1AD5DE6F4FD5EA3A0CB45F94F3335E67).
  49. Ministry of Electronics and Information Technology ‘Draft Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018’ (2018) (accessible at https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf).
  50. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (2021) (accessible at https://prsindia.org/billtrack/the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021).
  51. Keller, ‘Build Your Own Intermediary Liability Law: A Kit for Policy Wonks of All Ages’ in Li, ‘New Controversies in Intermediary Liability Law Essay Collection Yale Law School’ Information Society Project (2019) at 20 (accessible at https://law.yale.edu/sites/default/files/area/center/isp/documents/new_controversies_in_intermediary_liability_law.pdf)
  52. Li, ‘Beyond Intermediary Liability: The Future of Information Platforms’ Yale Law School Information Society Project (2018).
  53. See Media Defence ‘Training Manual on Digital Rights and Freedom of Expression Online Litigating digital rights and online freedom of expression in East, West and Southern Africa’ at 35 (accessible at https://www.mediadefence.org/resources/mldi-training-manual-digital-rights-and-freedom-expression-online).
  54. The Indian Express ‘Plea in Delhi High Court: What is the ‘Right to be Forgotten?’ 2023 (accessible at https://indianexpress.com/article/explained/explained-law/right-to-be-forgotten-8466283/).
  55. Id.
  56. Cook, ‘The Right to be Forgotten: A Step in the Right Direction for Cyberspace Law and Policy’ (2015) 6 Journal of Law, Technology & the Internet 121 at 121-123 (accessible at https://scholarlycommons.law.case.edu/jolti/vol6/iss1/8/).
  57. Kulk and Borgesius, ‘Freedom of expression and ‘right to be forgotten’ cases in the Netherlands after Google Spain’ (2015) 2 European Data Protection Law Review 113 at 116 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2652171). See also ARTICLE 19, ‘The “Right to be Forgotten”: Remembering Freedom of Expression’ (2016) (accessible at https://www.article19.org/data/files/The_right_to_be_forgotten_A5_EHH_HYPERLINKS.pdf).
  58. Media Defence ‘Training Manual on Digital Rights and Freedom of Expression Online Litigating digital rights and online freedom of expression in East, West and Southern Africa’ at 24, (accessible at https://www.mediadefence.org/resources/mldi-training-manual-digital-rights-and-freedom-expression-online).
  59. For more on the importance of balancing these right see the Written Observations of ARTICLE 19 and Others (2017), Google LLC v Commission Nationale de l’Information et des Libertés (CNIL) (accessible at https://www.article19.org/wp-content/uploads/2017/12/Google-v-CNIL-A19-intervention-EN-11-12-17-FINAL-v2.pdf).
  60. Rustad and Kulevska, ‘Reconceptualizing the Right to Be Forgotten to Enable Transatlantic Data Flow’ (2015) 28 Harvard Journal of Law and Technology 349 at 373 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627383).
  61. Forde, ‘Implications of the Right to be Forgotten’ (2015) 17 Tulane Journal of Technology and Intellectual Property 83 at 113 -114 (accessible at https://journals.tulane.edu/TIP/article/view/2652). See further Lindsay ‘The ‘Right to be Forgotten’ by Search Engines under Data Privacy Law: A Legal Analysis of the Costeja Ruling’ (2014) 6 Journal of Media Law 159 at 173 – 174.
  62. Forde, ‘Implications of the Right to be Forgotten’ 17 Tulane Journal of Technology and Intellectual Property 83, (2015) at 113 -114 (accessible at https://journals.tulane.edu/TIP/article/view/2652). See further Lindsay ‘The ‘Right to be Forgotten’ by Search Engines under Data Privacy Law: A Legal Analysis of the Costeja Ruling’ 6 Journal of Media Law, (2016) 159 at 173 – 174.
  63. Kuczerawy and Ausloos, ‘From Notice-and-Takedown to Notice-and-Delist: Implementing Google Spain’ (2016) 14 Colorado Technology Law Journal 219 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2669471).
  64. APC, ‘Reorienting rules for rights: A summary of the report on online content regulation by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2018) (accessible at https://www.apc.org/en/pubs/reorienting-rules-rights-summary-report-online-content-regulation-special-rapporteur-promotion).
  65. Langvardt, ‘Regulating Online Content Moderation’ Georgetown Law Journal 106 (2018) 1354 at 1354-1359 (accessible at https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2018/07/Regulating-Online-Content-Moderation.pdf).
  66. APC, ‘Content Regulation in the Digital Age Submission to the United Nations Special Rapporteur on the Right to Freedom of Opinion and Expression’ (2018) (accessible at https://www.ohchr.org/Documents/Issues/Opinion/ContentRegulation/APC.pdf).
  67. Frosio, ‘From Horizontal to Vertical: an Intermediary Liability Earthquake in Europe’ Centre for International Intellectual Property Studies Research Paper (2017) at 12 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3009156).
  68. Id.
  69. Id.
  70. Stanford Law, ‘Monitoring Obligations’ (2017) (accessible at https://wilmap.law.stanford.edu/topics/monitoring-obligations).
  71. See discussion above on the right to be forgotten, particularly the discussion on Google LLC v Commission Nationale de l’Information et des Liberties (CNIL).
  72. Frosio, ‘The Death of ‘No Monitoring Obligations’ A Story of Untameable Monsters’ JIPITEC (2017) (accessible at https://www.jipitec.eu/issues/jipitec-8-3-2017/4621/JIPITEC_8_3_2017_199_Frosio).
  73. Fahy, ‘The Chilling Effect of Liability for Online Reader Comments’ European Human Rights Law Review (2017) (accessible at https://www.ivir.nl/publicaties/download/EHRLR_2017_4.pdf).
  74. Id at 3. See also Media Defence ‘European Court clarifies intermediary liability standard’ (2016) (accessible at https://www.mediadefence.org/news/european-court-clarifies-intermediary-liability-standard).
  75. For substantive commentary on the impact of these cases on intermediary liability see Maroni, ‘A Court’s Gotta Do, What a Court’s Gotta Do. An Analysis of the European Court of Human Rights and the Liability of Internet Intermediaries through Systems Theory’ EUI Working Paper (2019) (accessible at https://cadmus.eui.eu/bitstream/handle/1814/62005/RSCAS%202019_20.pdf?sequence=1&isAllowed=y).
  76. ARTICLE 19, ‘Social Media Councils: Consultation’ (2019) (accessible at https://www.article19.org/resources/social-media-councils-consultation/).
  77. See above n 73.
  78. See above n 74.
  79. See above n 75.
