ECTHR Approach to Intermediary Liability
Module 3: Content Restrictions and Intermediary Liability
Article 10(2) of the European Convention on Human Rights (the ‘Convention’) provides that freedom of expression may be subject to restrictions that are prescribed by law and necessary in a democratic society in the interests of “national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence or for maintaining the authority and impartiality of the judiciary.”(1)
Inevitably, the growth of the Internet and online communication platforms in recent years has had a profound effect on the interpretation of an individual’s right to freedom of expression. Content published online, including allegedly defamatory user-generated comments, is accessible globally, with harm capable of extending across states and often giving rise to complex international legal disputes.(2)
In the case of Delfi v Estonia, the ECtHR commented that “defamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated like never before, worldwide, in a matter of seconds, and sometimes remain persistently available online”.(3)
The ECtHR considered intermediary liability for the first time in 2015, in Delfi. The principles developed in Delfi for determining intermediary liability were subsequently applied in Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary (‘MTE’). In both cases the applicants were connected with Internet news portals, the first applicant in MTE being a self-regulatory body of Internet content providers.
In Delfi, the Grand Chamber considered the following factors as being relevant in the finding that the applicant was liable for third party comments on its website:
- The commercial nature of Delfi, and that it was one of the biggest media companies in Estonia with a wide readership.
- That it encouraged the posting of comments, and that this encouragement formed part of its business model, since reader engagement contributed to its overall revenue.
- That it had editorial control over comments once they had been posted.
- That it was a “professional publisher” that should be familiar with the relevant laws and could also have sought legal advice.
The Grand Chamber identified four elements that required analysis when determining liability for third party comments:
- The context of the comments.
- The measures applied by the applicant company to prevent or remove defamatory comments.
- The liability of the actual authors of the comments as an alternative to the intermediary’s liability.
- The consequences of the domestic proceedings for the applicant company.
The Grand Chamber was first concerned with:
“the ‘duties and responsibilities’ of Internet news portals … when they provide for economic purposes a platform for user-generated comments”, and it expressly stated that its findings did not extend to “other fora on the Internet where third-party comments can be disseminated, for example an Internet discussion forum or a bulletin board where users can freely set out their ideas on any topics without the discussion being channelled by any input from the forum’s manager; or a social media platform where the platform provider does not offer any content and where the content provider may be a private person running the website or a blog as a hobby”.(4)
This differentiation between news portals and members of the public who use a social media account is stated clearly, and in unqualified terms. The President of the Court has explained that this distinction is made not on the basis “that economic operators exercising free speech rights should, because of that status, enjoy lower free speech protections as a matter of principle, but only that the economic nature of their activities may often justify imposing on them duties and responsibilities which are of a more stringent nature than can be made applicable to non-profit entities”.(5)
The Grand Chamber’s clarification on this point alone would seem to exclude a user of a social media account from liability for failing to monitor and remove third party comments.
Second, the Grand Chamber placed particular weight on whether the identity of the authors of the third party comments could be established.(6)
It started out by asking whether “the liability of the actual authors of the comments could serve as a sensible alternative to the liability of the Internet news portal”.(7)
In noting that the parties disagreed as to the ‘feasibility’ of establishing the identity of the authors,(8) the Grand Chamber then held that the “uncertain effectiveness of measures allowing the identity of the authors of the comments to be established, coupled with the lack of instruments put in place by the applicant company for the same purpose with a view to making it possible for a victim of hate speech to bring a claim effectively against the authors of the comments” were relevant factors supporting its finding of no violation of Article 10.(9)
The Grand Chamber’s judgment implicitly recognised that where the authors of impugned third party comments are known or can be readily identified, and can therefore be subject to legal action directly, proceeding instead against the intermediary, especially where that intermediary is a social media user, can amount to a disproportionate interference with the intermediary’s right to freedom of expression, in violation of Article 10. This principled approach is consistent with the Court’s well-established case law on the important role of the Internet in facilitating the dissemination of information.(10)
Third, it was an important part of the government’s case in Delfi that the third party commenters had “lost control of their comments as soon as they had entered them and they could not change or delete them”.(11)
The Court agreed that this detail was a factor in determining liability, stating that because Delfi “exercised a substantial degree of control over the comments published on its portal, the Court does not consider that the imposition on the applicant company of an obligation to remove from its website, without delay after publication, comments that amounted to hate speech and incitements to violence, and were thus clearly unlawful on their face, amounted, in principle, to a disproportionate interference with its freedom of expression”.(12)
This can be contrasted with comments made on social media platforms such as Facebook, where a commenter retains control and can withdraw a comment after it has been posted, as happened in Sanchez v. France (discussed below), where one of the commenters later deleted the allegedly unlawful online speech.(13)
In MTE, the Court applied the principles developed in Delfi to determine liability for third party comments, carrying out a close analysis of the four elements outlined above.(14)
In that case the Court found a violation of Article 10. The key difference between MTE and Delfi lay in the nature of the third-party comments in issue.(15)
The Court in MTE noted that, unlike in Delfi, the comments did not amount to hate speech or incitement to violence. The domestic courts had held the applicants, a self-regulatory body of Internet content providers and a news portal, liable for harm caused to the reputation of a business by ‘false and offensive’ statements made by online users, noting that they should have expected that some ‘unfiltered comments’ might be in breach of the law. In finding a violation of Article 10, the Court held that a requirement that an online platform search for and take down unlawful user comments “amounts to requiring excessive and impracticable forethought capable of undermining freedom of the right to impart information on the Internet”.(16)
In Pihl v. Sweden the Court referenced MTE in noting that it had “previously found that liability for third party comments may have negative consequences on the comment-related environment of an internet portal and thus a chilling effect on freedom of expression via internet. This effect could be particularly detrimental for a non-commercial website.”(17)
Requiring intermediaries to assess the lawfulness of user comments sets a very high standard: even the most sophisticated intermediary would find it difficult to determine whether a comment qualifies as unlawful speech to an appropriate legal standard, and would in any event feel compelled to remove the comment almost immediately to avoid liability.(18)
This clearly creates a ‘chilling effect’.
Assessing whether material posted online is lawful or unlawful is complex, and would impose an excessively burdensome standard if applied, for example, to the user of a social media platform acting as an intermediary.(19) It can involve an examination of the appropriate balance to be struck between the right to respect for private life and the right to freedom of expression. It might involve questions relating to defamation, privacy rights, or breach of data protection, and their relationship to the criminal law. A proper assessment of lawfulness might require consideration of whether certain legal defences are available. A further layer of complexity stems from the fact that states within the Council of Europe classify certain offences differently; in some, for example, defamation is an offence under the criminal law.(20)

Where intermediaries do remove content without properly assessing its lawfulness, they are likely to do so without informing the author and in circumstances where the author has no prospect of appealing the decision to remove their content. Ultimately, a requirement that intermediaries determine whether online material is unlawful will invariably lead to lawful content being removed. Moderation is already a challenge even for social media companies, which are best placed to apply resources to the issue. Facebook, for example, has admitted that its moderators “make the wrong call in more than one out of every 10 cases”.
These issues arose most recently in Sanchez v. France.(21)
The applicant was a politician for the National Rally (a far-right party in France). While running for election to Parliament for the party in the Nîmes constituency, he posted a message about one of his political opponents, F.P., on the publicly accessible Facebook “wall” that he ran. The post itself was not inflammatory, and only his “friends” could comment on it. Two third parties, S.B. and L.R., added a number of comments under the post, referring to F.P.’s partner, Leila T., and expressing dismay at the presence of Muslims in Nîmes. Leila T. confronted S.B., whom she knew, and he deleted his comment later that day.
The next day, Leila T. lodged a criminal complaint against the applicant as well as the authors of the offending comments. The Nîmes Criminal Court found them all guilty of incitement to hatred or violence against a group or an individual on account of their origin or their membership or non-membership of a specific ethnic group, nation, race, or religion. The court concluded that, by creating a public Facebook page, Mr Sanchez had set up, on his own initiative, a service for communication with the public by electronic means for the purpose of exchanging opinions. By leaving the offending comments visible on his wall, he had failed to act promptly to stop their dissemination and was guilty as the principal offender. In its decision, the Nîmes Criminal Court noted that only ‘friends’ could comment on the applicant’s Facebook wall and that, as a political actor, he had to be all the more thorough in monitoring comments, since he was more likely to attract polemical content.
This decision was upheld by the Nîmes Court of Appeal, which held that the comments had clearly defined a group, Muslims, and associated them with crime and insecurity in the city in a provocative way. The Court of Appeal also noted that, by knowingly making his Facebook ‘wall’ public, the applicant had assumed responsibility for the offending content. Mr Sanchez’s appeal to the Court of Cassation on points of law was rejected. He then applied to the ECtHR, alleging that his criminal conviction for incitement to hatred violated Article 10.
The Chamber majority found that no violation had occurred, and the case was subsequently referred to the Grand Chamber.
The Grand Chamber, in examining whether the interference was necessary in a democratic society, noted that, in line with Feldek v. Slovakia,(22) there is little scope under Article 10 for restrictions on political speech,(23) which is a vital feature of a democratic society, and that the state’s margin of appreciation in this case was accordingly particularly narrow. However, the Court noted that “the freedom of political debate is not absolute in nature,”(24) especially when it comes to preventing forms of expression that can promote or propagate hatred or violence.
The Court relied on Erbakan v. Turkey(25) to reiterate the responsibility of politicians to avoid comments that might foster intolerance when speaking in public. It added that Article 10 does not protect declarations capable of arousing feelings of rejection or hostility towards a community,(26) and that this applies equally in the context of a political election.
Furthermore, the Court cited Sürek v. Turkey,(27) Le Pen v. France, Soulas and Others v. France,(28) and E.S. v. Austria(29) to highlight the broader margin of appreciation afforded to states in assessing the necessity of restrictions on freedom of expression where remarks incite violence against one or more individuals. It also observed that hate speech may take various forms: it does not always consist of plainly aggressive remarks, but can include implicit statements that are equally hateful, as held in Jersild v. Denmark,(30) Soulas, Ayoub and Others v. France,(31) and Smajić v. Bosnia and Herzegovina.(32)
Subsequently, the Court analysed the impact of hateful or discriminatory comments made on the internet and social media. It noted the many risks posed by such content and the speed with which hate speech can be disseminated. In striking a balance between the rights conferred by Article 10 and the harmful effects that hate speech on social media might have on the rights conferred by Article 8, the Court accepted that imposing liability for defamatory speech could serve as an effective remedy. In the case of liability for third-party comments on the Internet:
“the nature of the comment will have to be taken into consideration, in order to ascertain whether it amounted to hate speech or incitement to violence, together with the steps that were taken after a request for its removal by the person targeted in the impugned remarks.”(33)
The Court referred to the cases of Pihl v. Sweden,(34) Magyar Kétfarkú Kutya Párt v. Hungary,(35) and Index.hu Zrt v. Hungary.(36)
In order to analyse the necessity of the interference by the French government in the present case, the Court began by examining the context of the comments at issue. Given that the comments were directed at a specific group (i.e., Muslims), in an electoral context, on a politician’s Facebook “wall”, the Court found that the comments were clearly unlawful. The Court stated that liability should be shared, in different degrees, between all the actors involved, including Mr Sanchez, even though the comments were posted by third parties. Otherwise, exempting producers from all liability “might facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies and disinformation.”(37)
The Court continued by analysing the steps taken by Mr Sanchez in respect of the comments on his Facebook “wall”. It stated that account holders must act reasonably and cannot claim impunity in the use of their electronic resources. That obligation, the Court concluded, is heightened for politicians, who must be aware that they can reach wider audiences and whose burden of liability is greater than that of an ordinary citizen. The Court stressed that Mr Sanchez was aware of the controversial comments made on his Facebook “wall”, having published a post warning his contacts about them, but he nevertheless failed to delete the contested comments or to check their content.
The Court also dismissed the applicant’s submission that it was unreasonable to prosecute him rather than the authors of the comments. According to the Court, he failed to show that section 93-3 of Law no. 82-652 of 29 July 1982 had been applied arbitrarily, especially as he was not prosecuted instead of the authors but alongside them, under distinct and autonomous legal regimes.
Consequently, by thirteen votes to four, the Court found that the French government’s interference was “necessary in a democratic society”(38) within the meaning of Article 10 of the ECHR, as it was based on relevant and sufficient reasons for determining Mr Sanchez’s liability and upholding his criminal conviction.
Hyperlink Publication
Courts assessing cases concerning intermediary liability have had to consider some novel questions in recent years. Intermediary liability for the publication of a hyperlink was examined by the ECtHR in Magyar Jeti Zrt v Hungary.(39)
The domestic courts in Hungary found the applicant, a company, liable for defamation after it posted a hyperlink to a YouTube video that contained the impugned material.
The ECtHR had to consider whether the posting of a hyperlink amounted to the dissemination of defamatory statements. In its assessment, the Court noted that the domestic courts had failed to examine various important factors, including:
- whether the applicant company had endorsed the alleged defamatory material;
- whether the applicant company had repeated the material, without endorsing it;
- whether the applicant company had just posted the hyperlink without commenting on it;
- whether the applicant company knew, or could reasonably have known, that the material it linked to was or could be unlawful;
- whether the applicant company had acted in good faith and performed the due diligence expected of responsible journalism.
Taking all relevant factors into consideration, the Court noted that the domestic law’s approach of attributing liability to those hyperlinking to impugned content would have “negative consequences on the flow of information on the Internet, impelling article authors and publishers to refrain altogether from hyperlinking to material over whose changeable content they have no control. This may have, directly or indirectly, a chilling effect on freedom of expression on the Internet.”(40)
In the subsequent case of Kilin v Russia, the Court had to consider the conviction of an applicant prosecuted for public calls to violence through the sharing of third-party content via a social network website. In its assessment, the Court considered that the sharing of material via social media does not necessarily signify a particular attitude towards, or acknowledgment of, the content on the part of the user. The Court accepted that the applicant’s motivation in sharing the impugned content was to contribute to a debate of public interest, but noted that, on this occasion, the applicant had distorted the context by failing to provide any commentary. As such, the content could be “reasonably perceived as stirring up ethnic discord and violence”.(41)
In view of this, the Court concluded that the applicant’s prosecution was relevant and could be justified.