
    Monitoring Obligations of Search Engines and Platforms

    Module 5: Trends in Censorship by Private Actors

    Overview of monitoring obligations of search engines and platforms

    The internet has been described as “the greatest tool in history for global access to information and expression”.(1) But it is also a powerful tool for disinformation and hate speech, which have, as captured in the Joint Letter from Special Rapporteurs and experts, “exacerbated societal and racial tensions, inciting attacks with deadly consequences around the world.” The increased spread of disinformation and the growing use of the internet for nefarious purposes have put non-state actors in a somewhat precarious position. The UN Human Rights Office of the High Commissioner notes that, along with the many opportunities associated with the internet, there are growing threats of unlawful activities online. The ease with which malicious content can spread online poses a dilemma for states and intermediaries: on the one hand, there is a need to mitigate online harms, but on the other, content must not be moderated in a manner that leads to censorship and violations of free speech.(2) Intermediaries now comply with state laws concerning content regulation and, in some instances, also act proactively to monitor content, either of their own volition or in order to escape liability.(3)

    The 2018 Report by the UNSR noted key concerns regarding content regulation:

    “States regularly require companies to restrict manifestly illegal content such as representations of child sexual abuse, direct and credible threats of harm and incitement to violence, presuming they also meet the conditions of legality and necessity. Some [s]tates go much further and rely on censorship and criminalization to shape the online regulatory environment.”

    Monitoring obligations for search engines and platforms are loosely understood as general obligations imposed on intermediaries to monitor all content and filter out unwanted content.(4) Intermediaries faced with these obligations are expected to develop content recognition technologies or other automatic infringement assessment systems, in essence building and deploying filtering systems.(5) Where strict monitoring obligations are in place, it is likely that monitoring will become the norm, exposing intermediaries to automatic and direct liability.(6) Monitoring obligations therefore raise concerns in respect of intermediary liability. It has been noted that:

    “Monitoring obligations drastically tilt the balance of the intermediary liability rules toward more restriction of speech, may hinder innovation and competition by increasing the costs of operating an online platform, and may exacerbate the broadly discussed problem of over-removal of lawful content from the Internet.”(7)

    Further to the above, there has been a trend, akin to that of the right to be forgotten, where states demand global removal of content that violates domestic law.(8) Notwithstanding the recent findings of the CJEU, these demands might continue, as predicted by the UNSR in the 2018 Report, to have the chilling effect of allowing censorship across borders.

    The imposition of monitoring obligations has primarily arisen in relation to copyright infringement. However, the practice is growing at an unprecedented rate, causing grave concern for free expression.(9) Judgments of the European Court of Human Rights (ECtHR) provide useful insight into the issues regarding online platforms and liability for users’ comments.

    Jurisprudential developments

    The Delfi v Estonia matter was the first of the prominent cases to address the issue of content moderation and online media liability. Delfi, an Estonian internet news portal, published an article that was critical of a ferry company. The article received 185 comments online, some of which targeted a board member of the company, L, and were considered threatening and/or offensive. L requested that the comments be immediately taken down and claimed approximately €32,000 in compensation for non-pecuniary damages. Delfi agreed to remove the comments but refused to pay the damages. L then brought a civil claim against Delfi in the Harju County Court. The County Court found that Delfi could not be considered the publisher of the comments and did not have an obligation to monitor them. L appealed to the Tallinn Court of Appeal, which concluded that the lower court had erred in its finding in relation to Delfi’s liability and remitted the matter back to the County Court for reconsideration. The matter eventually reached the Supreme Court, which found that there was a legal obligation to avoid causing damage to other persons and that Delfi should have prevented the clearly unlawful comments from being published. The Supreme Court noted that, after the comments had been published, Delfi failed to remove them on its own initiative, although it must have been aware of their unlawfulness. Delfi’s failure to act was found to be unlawful.

    Delfi applied to the First Section of the ECtHR, arguing that the imposition of liability for the comments violated its right to freedom of expression. The ECtHR was faced with the question of whether Delfi’s obligation, as established by the domestic judicial authorities, to ensure that comments posted on its internet portal did not infringe the personality rights of third persons was in accordance with the right to freedom of expression. In order to resolve this question, the ECtHR developed a four-stage test:

    • The context of the comments.
    • The measures applied by Delfi in order to prevent or remove defamatory comments.
    • The liability of the actual authors of the comments as an alternative to the applicant company’s liability.
    • The impacts of the restrictions imposed on Delfi in a democratic society.

    The ECtHR found that the restriction on Delfi’s right to freedom of expression was justified and proportionate, taking into consideration the following:

    • The insulting and threatening nature of the comments, which were posted in reaction to an article published by Delfi;
    • The insufficiency of the measures taken by Delfi to avoid damage being caused to other parties’ reputations and to ensure a realistic possibility that the authors of the comments would be held liable; and
    • The moderate sanction imposed on Delfi.

    Following this decision by the First Section, the matter was referred to the Grand Chamber of the ECtHR. In 2015, the Grand Chamber affirmed the judgment of the First Section, noting in its Delfi v Estonia judgment:

    “[W]hile the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained and constitute an effective remedy for violations of personality rights.”

    The Grand Chamber, in determining whether freedom of expression had been infringed, considered whether the restriction was lawful, pursued a legitimate aim and was necessary in a democratic society. Ultimately, the Grand Chamber concluded that the finding of liability against Delfi, as the publisher of the comments, was justified. The Grand Chamber found that “an active intermediary which provides a comments section cannot have absolute liability” and noted that “freedom of expression cannot be turned into an exercise in imposing duties.”

    While the Grand Chamber found that the imposition of liability on Delfi had been a justified and proportionate restriction on the news portal’s freedom of expression, the dissenting judges cautioned, in an appendix to their opinion, that:

    “We trust that this is not the beginning (or the reinforcement and speeding up) of another chapter of silencing and that it will not restrict the democracy-enhancing potential of the new media.  New technologies often overcome the most astute and stubborn politically or judicially imposed barriers.  But history offers discouraging examples of censorial regulation of intermediaries with lasting effects.  As a reminder, here we provide a short summary of a censorial attempt that targeted intermediaries.”

    Shortly after the Grand Chamber’s Delfi judgment, the Fourth Section of the ECtHR considered whether a non-profit, self-regulatory body of internet content providers (MTE) and an internet news portal (Index) were liable for offensive comments posted on their websites in Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary. In 2010, the two parties published an opinion criticising the business practices of two real estate websites. The opinion attracted comments that the operator of the websites considered false and offensive and which, it argued, infringed its right to a good reputation. MTE and Index were held liable by the Hungarian courts for the comments. MTE and Index approached the ECtHR, arguing that their right to freedom of expression had been violated.

    The ECtHR noted that interferences with the freedom of expression must be “prescribed by law,” have one or more legitimate aims, and be “necessary in a democratic society.” The ECtHR applied the same four-stage test as it did in Delfi but differed from its finding in Delfi, concluding that there had been a violation of freedom of expression. The ECtHR found that:

    • The comments triggered by the article could be regarded as going to a matter of public interest and, while they were vulgar, they were not necessarily offensive, noting that style constitutes part of the communication as the form of expression and is protected together with the content of the expression.
    • The conduct of MTE and Index in providing a platform for third parties to exercise their freedom of expression by posting comments is a journalistic activity of a particular nature. It was noted that it would be difficult to reconcile MTE and Index’s liability with existing case law that cautions against the punishment of a journalist for assisting in the dissemination of statements made by another person.
    • MTE and Index took certain general measures to prevent defamatory comments on their portals or to remove them.

    In finding that there had been a violation of the right to freedom of expression, the ECtHR concluded with the following:

    “However, in the case of Delfi, the Court found that if accompanied by effective procedures allowing for rapid response, the notice-and-take-down-system could function in many cases as an appropriate tool for balancing the rights and interests of all those involved.  The Court sees no reason to hold that such a system could not have provided a viable avenue to protect the commercial reputation of the plaintiff. It is true that, in cases where third-party user comments take the form of hate speech and direct threats to the physical integrity of individuals, the rights and interests of others and of the society as a whole might entitle Contracting States to impose liability on Internet news portals if they failed to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties.  However, the present case did not involve such utterances.”

    It has been noted that there are some inconsistencies in the ECtHR’s approach to online liability.(10) However, it does appear that the shift away from the Delfi reasoning was a shift in the right direction.(11) Ultimately, these cases have illustrated that even though freedom of expression is paramount, complete immunity is not always attainable, and there might be instances where intermediaries will be responsible for the moderation of content.(12)

    Efforts to address content moderation at the global level

    The UN Human Rights Office of the High Commissioner has noted:

    “One of the greatest threats to online free speech today is the murkiness of the rules . . .  States circumvent human rights obligations by going directly to the companies, asking them to take down content or accounts without going through legal process, while companies often impose rules they have developed without public input and enforced with little clarity. We need to change these dynamics so that individuals have a clear sense of what rules govern and how they are being applied.”

    Alongside the considerable rights implications of content moderation by intermediaries, the glaring lack of adequate rules, guidelines, procedures and remedies in current content moderation practices is cause for concern.(13) It is clear that a human rights framework ought to guide the principles for company content moderation.

    Guidance from the UNSR on ensuring compliance with human rights standards when online content is being moderated

    These guidelines and recommendations are based on the Guiding Principles on Business and Human Rights as well as established international law, norms, and practices. These can be used when engaging with state and non-state actors to ensure compliance with human rights standards when online content is being moderated. Below is an outline of some of the key recommendations:

    1. Human rights by default: Companies should incorporate directly into their terms of service and community standards relevant principles of human rights law that ensure content-related actions will be guided by the same standards of legality, necessity and legitimacy that bind state regulation of expression.
    2. Legality: Company rules routinely lack the clarity and specificity that would enable users to predict with reasonable certainty what content places them on the wrong side of the line.  Companies should supplement their efforts to explain their rules in more detail with aggregate data illustrating trends in rule enforcement, and examples of actual cases or extensive, detailed hypotheticals that illustrate the nuances of interpretation and application of specific rules.
    3. Necessity and proportionality: Companies should not only describe contentious and context-specific rules in more detail; they should also disclose data and examples that provide insight into the factors they assess in determining a violation, its severity and the action taken in response.
    4. Non-discrimination: Meaningful guarantees of non-discrimination require companies to transcend formalistic approaches that treat all protected characteristics as equally vulnerable to abuse, harassment and other forms of censorship.

    UNSR guidance on the processes for company moderation and related activities

    The UNSR’s guidelines and recommendations also provide guidance on the processes for company moderation and related activities:

    1. Prevention and mitigation: Companies should adopt and then publicly disclose specific policies that “direct all business units, including local subsidiaries, to resolve any legal ambiguity in favour of respect for freedom of expression, privacy, and other human rights”.  Companies should also ensure that requests are in writing, cite specific and valid legal bases for restrictions and are issued by a valid government authority in an appropriate format.
    2. Transparency: Best practices on how to provide such transparency should be developed.  Companies should also provide specific examples as often as possible and should preserve records of requests made.
    3. Due diligence: Companies should develop clear and specific criteria for identifying activities that trigger assessments and assessments should be ongoing and adaptive to changes in circumstances or operating context.
    4. Public input and engagement: Companies should engage adequately with users and civil society, particularly in the global south, to consider the human rights impact of their activities from diverse perspectives.
    5. Rule-making transparency: Companies should seek comment on their impact assessments from interested users and experts when introducing products and rule modifications.  They should also clearly communicate to the public the rules and processes that produced them.
    6. Automation and human evaluation: Company responsibilities to prevent and mitigate human rights impacts should take into account the significant limitations of automation and, at a minimum, technology developed to deal with considerations of scale should be rigorously audited and developed with broad user and civil society input.
    7. Notice and appeal: Companies could work with one another and civil society to explore scalable solutions such as company-specific or industry-wide ombudsman programmes and the promotion of remedies for violations.
    8. Remedy: Companies should institute robust remediation programmes, which may range from reinstatement and acknowledgment to settlements related to reputational or other harms.
    9. User autonomy: While content rules in closed groups should be consistent with baseline human rights standards, platforms should encourage such affinity-based groups given their value in protecting opinion, expanding space for vulnerable communities and allowing the testing of controversial or unpopular ideas.

    The UNSR goes on to provide specific recommendations, imparting the urgent need for “radical transparency, meaningful accountability and a commitment to remedy in order to protect the ability of individuals to use online platforms as forums for free expression, access to information and engagement in public life”.


    Footnotes

    1. APC, ‘Reorienting rules for rights: A summary of the report on online content regulation by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (2018) (accessible at https://www.apc.org/en/pubs/reorienting-rules-rights-summary-report-online-content-regulation-special-rapporteur-promotion).
    2. Langvardt, ‘Regulating Online Content Moderation’ Georgetown Law Journal 106 (2018) 1354 at 1354-1359 (accessible at https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2018/07/Regulating-Online-Content-Moderation.pdf).
    3. APC, ‘Content Regulation in the Digital Age Submission to the United Nations Special Rapporteur on the Right to Freedom of Opinion and Expression’ (2018) (accessible at https://www.ohchr.org/Documents/Issues/Opinion/ContentRegulation/APC.pdf).
    4. Frosio, ‘From Horizontal to Vertical: an Intermediary Liability Earthquake in Europe’ Centre for International Intellectual Property Studies Research Paper (2017) at 12 (accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3009156).
    5. Id.
    6. Id.
    7. Stanford Law, ‘Monitoring Obligations’ (2017) (accessible at https://wilmap.law.stanford.edu/topics/monitoring-obligations).
    8. See discussion above on the right to be forgotten, particularly the discussion on Google LLC v Commission Nationale de l’Informatique et des Libertés (CNIL).
    9. Frosio, ‘The Death of ‘No Monitoring Obligations’: A Story of Untameable Monsters’ JIPITEC (2017) (accessible at https://www.jipitec.eu/issues/jipitec-8-3-2017/4621/JIPITEC_8_3_2017_199_Frosio).
    10. Fahy, ‘The Chilling Effect of Liability for Online Reader Comments’ European Human Rights Law Review (2017) (accessible at https://www.ivir.nl/publicaties/download/EHRLR_2017_4.pdf).
    11. Id at 3. See also Media Defence, ‘European Court clarifies intermediary liability standard’ (2016) (accessible at https://www.mediadefence.org/news/european-court-clarifies-intermediary-liability-standard).
    12. For substantive commentary on the impact of these cases on intermediary liability, see Maroni, ‘A Court’s Gotta Do, What a Court’s Gotta Do. An Analysis of the European Court of Human Rights and the Liability of Internet Intermediaries through Systems Theory’ EUI Working Paper (2019) (accessible at https://cadmus.eui.eu/bitstream/handle/1814/62005/RSCAS%202019_20.pdf?sequence=1&isAllowed=y).
    13. ARTICLE 19, ‘Social Media Councils: Consultation’ (2019) (accessible at https://www.article19.org/resources/social-media-councils-consultation/).