
‘False News’, Misinformation & Propaganda – Europe

Introduction

In today’s digital landscape, the proliferation of false news and misinformation has surged, particularly amplified by the rapid expansion of the internet and the pervasive reach of social media platforms. The manipulation and distortion of information have been prevalent throughout history, but the contemporary era has seen an unparalleled weaponisation of information in the new online environment, warranting an urgent response both domestically and throughout the region.

This module looks at false news, misinformation, and propaganda, shedding light on the urgency to combat these challenges effectively. It also explores possible response mechanisms, other than legal regulation, such as Media and Information Literacy (MIL) strategies and campaigns to counter misinformation without compromising the fundamental right to freedom of expression.

For the purposes of this module, the term “misinformation” is used broadly and, unless otherwise specified, includes reference to disinformation and malinformation. (For more on this, see this guide by First Draft)

What is ‘False News’?

Definition

In the digital age, the dissemination of information has evolved, giving rise to distinct yet interrelated phenomena: false news, disinformation, and misinformation, as well as malinformation.

“False news” refers to purported news items that are intentionally and verifiably false and seek to mislead readers.(1) False news mimics the format of credible news reports, harnessing attention-grabbing titles, images, and content designed to persuade readers into believing falsehoods. Usually, false news online is disseminated to amass “clicks,” “shares,” and engagement to bolster advertising revenue or further ideological agendas.(2)

The term has, in recent years, fallen out of favour due to the inaccurate implication that, despite being false, it nonetheless constitutes “news.”

Disinformation constitutes intentionally false or misleading content that is strategically propagated to deceive, manipulate, or achieve political or economic objectives.(3)

Lastly, misinformation entails false or misleading content shared inadvertently, lacking the malicious intent associated with disinformation.(4) Despite the absence of deliberate deceit, the unintended consequences of misinformation can still be harmful, contributing to public confusion and creating mistrust in reliable information sources.

While misinformation and disinformation are premised on the dissemination of false information, mal-information is based on reality, with the information being used intentionally to inflict harm on a person, social group, organisation, or country.(5)

The following table highlights the commonalities and differences among the three types of false information:

| Aspect | Misinformation | Disinformation | Mal-information |
| --- | --- | --- | --- |
| False information | Shared without intent to deceive | Deliberately spread to mislead | Truthfully represents but aims to deceive |
| Intent | No intention to deceive | Intentionally deceptive | Intends to deceive despite truthful content |
| Representation of reality | Misrepresents without deceptive intent | Misrepresents with deceptive intent | Truthfully represents but deceives through intent |
| Examples | Unintentional sharing of false information | Fake news, hoaxes, propaganda | Half-truths, spin, selective disclosure |
| Impact | Can still have harmful effects | Can have severe consequences | Can mislead without outright lying |
| Potential harm | Can influence opinions and trust | Damages trust, affects societal opinions | Impacts perceptions and decisions |

International efforts

Several initiatives at both the regional and international levels have sought to deal with the growing problem of misinformation and other forms of harmful information online in recent years.

Of particular note at the international level is the 2017 Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda (2017 Joint Declaration) issued by the relevant freedom of expression mandate-holders of the United Nations (UN), the African Commission on Human and Peoples’ Rights (ACHPR), the Organisation for Security and Co-operation in Europe (OSCE), and the Organisation of American States (OAS).(6) The 2017 Joint Declaration noted the growing prevalence of disinformation and propaganda, both online and offline, and the various harms to which they may contribute or of which they may be a primary cause.

Amidst this evolving digital landscape, the declaration emphasised the transformative role of the internet and digital technologies in enabling access to information and facilitating responses to disinformation while acknowledging the responsibilities of intermediaries in respecting human rights.(7)

Recommendations of the 2017 Joint Declaration

The 2017 Joint Declaration highlighted, however, that efforts to regulate these harms often have negative effects on freedom of expression and, thus, identified the following recommended standards:

 

  • General prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”, are incompatible with international standards for restrictions on freedom of expression, as set out in paragraph 1(a), and should be abolished.
  • Criminal defamation laws are unduly restrictive and should be abolished. Civil law rules on liability for false and defamatory statements are legitimate only if defendants are given a full opportunity and fail to prove the truth of those statements and also benefit from other defences, such as fair comment.
  • State actors should not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda).
  • State actors should, in accordance with their domestic and international legal obligations and their public duties, take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment.

 

The Joint Declaration called on state actors to ensure that they disseminate reliable and trustworthy information, and not to make, sponsor, encourage or further disseminate statements that they know (or reasonably should know) to be false or which demonstrate a reckless disregard for verifiable information.(8)

In 2023, the UN Educational, Scientific and Cultural Organisation (UNESCO) also published Guidelines for the governance of digital platforms: safeguarding freedom of expression and access to information through a multi-stakeholder approach, which “outline a set of duties, responsibilities and roles for States, digital platforms, intergovernmental organisations, civil society, media, academia, the technical community and other stakeholders” with a view to safeguarding freedom of expression and access to information.(9)

5 Principles for Governance Systems

The UNESCO Guidelines emphasise five principles that should underlie all governance systems that impact freedom of expression and access to information on digital platforms, based on an extensive consultation process that considered over 10,000 comments from 134 countries:

 

  • Principle 1: Platforms should conduct human rights due diligence;
  • Principle 2: Platforms must adhere to international human rights standards, including in platform design, content moderation, and content curation;
  • Principle 3: Platforms must be transparent;
  • Principle 4: Platforms must make information and tools available for users;
  • Principle 5: Platforms should be accountable to relevant stakeholders.

Misinformation, Disinformation and Mal-Information

The socio-technical context

In interrogating the root of this problem, it is clear that social media has played a substantial role in the widespread distribution of misleading messages. This can be attributed to the heightened impact of social media compared to traditional platforms due to their speed, broad reach, and personalised features.(10)

User-generated content capabilities enable individuals to craft false messages while social interactions online facilitate the dissemination of these messages quickly and widely.(11) Social media features that enable users to “share,” “repost” and “follow” also amplify the reach of false information within these platforms, with little formal fact-checking or verification of information.(12)

Other digital products such as algorithms, which now determine which information is seen and prioritised by audiences, and websites that publish and disseminate such information, also contribute to the challenge.(13)

Misinformation has the powerful potential to influence opinions and behaviours in various contexts such as politics and elections.(14) The crisis of sustainability within the traditional media sector, fuelled by the growing dominance of the big tech platforms and the rapid shift away from print news, has also contributed to a generally poor information ecosystem in which misinformation and disinformation are able to thrive.

This, alongside more insidious practices such as the intentional distribution of disinformation for economic or political gain, has created what UNESCO refers to as a “perfect storm.”(15)

UNESCO identifies three causes enabling the spread of misinformation:

  • Collapsing traditional business models: As a result of the rapid decline in advertising revenue and the failure of digital advertising to generate profit, traditional newsrooms are bleeding audiences, with media consumers moving to “peer-to-peer” news products offering on-demand access. Decreasing budgets lead to reduced quality control and less time for “checks and balances”, and they also promote “click-bait” journalism.(16) Importantly, there are no commonly agreed ethics and standards for peer-to-peer news.

  • Digital transformation of newsrooms and storytelling: As the information age develops, there is a discernible digital transformation in the news industry. This transformation requires journalists to prepare content for multiple platforms, limiting their ability to properly interrogate facts. Often, journalists apply a principle of “social-first publishing”, whereby their stories are posted directly to social media to meet audience demand in real time. This, in turn, promotes click-bait practices and the pursuit of “virality” as opposed to quality and accuracy.(17)

  • The creation of new news ecosystems: With increasing access to online audiences as a result of the advent of social media platforms, users can curate their own content streams and create their own “trust networks” or “echo chambers” within which inaccurate, false, malicious, and propagandistic content can spread. These new ecosystems allow misinformation to flourish, as users are more likely to share sensationalist stories and less likely to properly assess sources or facts. Importantly, once content is published, a user who becomes aware that it may constitute misinformation is largely unable to “pull back” or correct it.(18)

Rise in online false news in elections in Spain

In the lead-up to Spain’s regional and municipal elections in May 2023, false claims about mail ballots and election fraud circulated widely across social media platforms, echoing similar assertions made by former United States President Donald Trump prior to his 2020 election loss.(19) Debunked videos supposedly displaying election fraud spread on platforms including Facebook and Twitter.(20) Other videos circulated on Facebook and TikTok alleging electoral manipulation by the then-outgoing and currently re-elected Prime Minister’s party.(21)

 

Research uncovered numerous instances of election-related misinformation across platforms such as Twitter, Facebook, YouTube, and TikTok in Spain.(22) While content types vary, election denialism remains a prevalent theme around the world. Conspiracy groups have been found to orchestrate social media attacks resulting in distrust of independent media and creating barriers to users’ access to credible information.(23)

Journalism, political advertising, and elections

Journalism faces the threat of being overshadowed by the widespread dissemination of false information, which significantly diminishes the impact of the accurate news disseminated by journalists.(24) There is also the risk of manipulation, with actors aiming to corrupt journalists or push them beyond the ethical bounds of their profession.(25) Journalists, particularly those committed to uncovering inconvenient truths, often become targets of deliberate lies, rumours, and hoaxes designed to discredit their work. This is exacerbated by the instrumentalisation of false concerns by powerful entities, leading to the imposition of stringent laws that could suppress genuine news media.(26)

In the realm of political advertising and elections, the regulatory landscape lacks uniformity at the European Union (EU) level. Although the rights to freedom of expression and to free elections are interrelated, in certain circumstances they may come into conflict.(27) The European Court of Human Rights (ECtHR) has emphasised that the interaction between freedom of expression and the right to free elections can either be complementary or conflicting, depending on the specific circumstances.(28) In fact, the issue predates the era of social media, with the Court emphasising in the 1987 Mathieu-Mohin and Clerfayt v Belgium judgment that it is the responsibility of state authorities to facilitate the free expression of people’s opinions during elections.(29)

False information during elections

In Salov v Ukraine (2005), the ECtHR reviewed a scenario involving a newspaper disseminating false information about the alleged death of a presidential candidate.(30) Despite the factual inaccuracy, the ECtHR recognised that the information, which related to the elections, was capable of influencing the electorate’s decision to support a particular candidate.(31) Consequently, the ECtHR maintained that the same principles governing political discourse apply irrespective of the factual accuracy of the information, emphasising that the European Convention on Human Rights (ECHR) does not, as such, prohibit the discussion or dissemination of information even where it is strongly suspected to be untrue.(32)

A guide to anti-misinformation actions around the world

The Poynter Institute, an international resource on journalism, has compiled information about global efforts to regulate misinformation, including through laws and media literacy programmes, amongst other measures.(33) France passed a law outlawing election misinformation in 2018; Croatia is reportedly working on a draft bill against hate speech and misinformation; Belarus has passed amendments to its media laws that allow the prosecution of people who spread false information online; and Russia has passed an anti-misinformation bill that bans the spread of “unreliable socially-important information.”

 

The EU Disinfo Lab provides a similar resource targeted at EU states.

How To Combat Misinformation, Disinformation and Mal-Information

Of particular importance in the European context is the EU Digital Services Act, which entered into force in November 2022 and applies across the EU. The law targets major online intermediaries and platforms, requiring them to put in place systems to control the spread of misinformation, hate speech, and terrorist propaganda, at the risk of large penalties calculated as a proportion of global annual revenue or, ultimately, a ban. It also imposes requirements relating to transparency about the spread of certain types of content and the role of platforms’ services in that spread, as well as the conduct of an annual risk assessment.

In addition to legislation, the European Commission has introduced several alternative measures to combat disinformation:(34)

  • The Communication on “Tackling online disinformation: a European Approach” compiles tools to combat the propagation of disinformation and safeguard EU principles, while the 2022 Code of Practice on Disinformation aims to fulfil the objectives outlined in the Communication.

  • The European Democracy Action Plan outlines standards for the responsibilities and the liability of online platforms in combatting disinformation.

  • The European Digital Media Observatory (EDMO), an independent observatory, unites fact-checkers, academic researchers specialising in online disinformation, social media platforms, journalist-driven media, and media literacy experts.

  • The 2018 report of the European Commission High-Level Group of Experts on fake news and online disinformation encourages a multi-dimensional approach to tackling these issues along the lines of five pillars.

Additionally, two expert groups, namely the Committee of Experts on quality journalism in the digital age and the Committee of Experts on Human Rights Dimensions of automated data processing and different forms of artificial intelligence, have been appointed by the Council of Europe to explore in more detail how member states can promote “an independent, diverse and pluralistic media environment” that societies can both trust and actively participate in.(35)

Media and Information Literacy (MIL) strategies and campaigns

Given the risks inherent in legislation that regulates and criminalises speech, UNESCO proposes MIL strategies and campaigns as an alternative mechanism to detect misinformation and combat its spread, particularly online.(36)

Defining Media and Information Literacy

Media and Information Literacy (MIL) is an umbrella concept encompassing several inter-related literacies:

 

  • Human rights literacy which relates to the fundamental rights afforded to all persons, particularly the right to freedom of expression, and the promotion and protection of these fundamental rights.(37)
  • News literacy which refers to literacy about the news media, including journalistic standards and ethics.(38) This includes, for example, the specific ability to understand the “language and conventions of news as a genre and to recognise how these features can be exploited with malicious intent.”(39)
  • Advertising literacy which relates to understanding how advertising online works and how profits are driven in the online economy.(40)
  • Computer literacy which refers to basic IT usage and understanding how headlines, images, and, increasingly, videos can be manipulated to promote a particular narrative.(41)
  • Understanding the “attention economy” which relates to one of the causes of misinformation and the incentives to create click-bait headlines and misleading imagery to grab the attention of users and, in turn, drive online advertising revenue.(42)
  • Privacy and intercultural literacy which relate to developing standards on the right to privacy and a broader understanding of how communications interact with individual identity and social developments.(43)

The EU’s Digital Education Action Plan (2021-2027) also emphasises the importance of developing digital competencies and skills among learners, both in formal and non-formal education settings.(44) Additionally, the Digital Competence Framework for Citizens, formulated by the European Commission, outlines a comprehensive set of skills essential for all learners, spanning information and data literacy, digital content creation, online safety, and well-being.(45)

Media literacy programmes in countries such as Sweden aim to strengthen citizen resilience against disinformation and propaganda, highlighting the significance of media literacy in combating disinformation.(46)

Litigation where justifiable limitations exist

The International Covenant on Civil and Political Rights (ICCPR) provides in Article 20 that “[a]ny propaganda for war shall be prohibited by law” and that “[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

In addition, Article 4(a) of the International Convention on the Elimination of All Forms of Racial Discrimination (CERD) requires that the dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, must be declared an offence that is punishable by law.

Article 10(2) of the European Convention on Human Rights (ECHR) guarantees freedom of expression but acknowledges limitations in cases where expressions contribute to social harm. The provision states that:

[freedom of expression] may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence or for maintaining the authority and impartiality of the judiciary.

Efforts to regulate and prohibit misinformation and disinformation constitute restrictions on expression, which must, therefore, align with the general requirements of legitimate aim, necessity and proportionality, and serve the specific objectives outlined in human rights instruments. Where mis- or disinformation might amount to hate speech, terrorist content, or other forms of speech that can be legitimately prohibited, the relevant provisions under international and regional law will apply.

In instances where misinformation is so egregious that it meets the definitional elements of hate speech, litigation may be a useful and important tool in the protection and promotion of fundamental rights, including the right to equality and dignity.(47)

However, such litigation should be carefully assessed for unintended consequences, including the possibility of jurisprudence that may negatively impact freedom of expression. Depending on the content of the speech and the harm it causes, the publication of counter-narratives may constitute a useful complementary strategy to litigation.

Fact-checking and social media verification

Alongside MIL strategies and campaigns and litigating misinformation that constitutes hate speech, another effective tool to combat misinformation is fact-checking and social media verification. According to the Duke Reporters’ Lab, there are around 125 fact-checking projects debunking false news and misinformation in 37 European countries as of 2023.(48) In addition, the European Digital Media Observatory, which presents a map with the names and locations of all of Europe’s fact-checking organisations, demonstrates a considerable number of organisations dedicated to fact-checking information disseminated online.(49)

Fact-checking and verification processes are not new; they were first introduced by US weekly magazines such as Time in the 1920s.(50) However, they have had to adapt to the dynamic online environment and changing trends in the information ecosystem. In general, fact-checking efforts within newsrooms consist of:

  • Ex-ante fact-checking and verification: increasingly, due to shrinking newsroom budgets, ex-ante (or before the event) fact-checking is reserved for more prominent and established newsrooms and publications that employ dedicated fact-checkers.(51)

  • Ex-post fact-checking, verification and “debunking:” this method of fact-checking is becoming increasingly popular and focuses on information published after the fact. It concentrates “primarily (but not exclusively) on political ads, campaign speeches and political party manifestos” and seeks to make politicians and other public figures accountable for the truthfulness of their statements.(52) Debunking is a subset of fact-checking and requires a specific set of verification skills, increasingly in relation to user-generated content on social media platforms.

Fact-checking is central to strategies to combat misinformation and has grown exponentially in recent years due to the increasing spread of false news and misinformation and the need to debunk viral hoaxes.

Regulatory measures concerning journalism and media also play a pivotal role in effectively countering misinformation.(53) Media self-regulatory bodies use established rules on objectivity, honesty, accuracy, fairness, and rigour of information to deal with disinformation cases.(54) Examples from different countries, such as Germany, Latvia, Denmark, and Sweden, demonstrate how these jurisdictions deal with factual accuracy, ethical reporting, and correction of erroneous information in media publications.(55)

Propaganda

Unlike dis- and misinformation, propaganda is expressly prohibited under international law where it amounts to propaganda for war or to advocacy of hatred that constitutes incitement.(56) In these instances, multiple direct legal remedies, such as criminal prosecution and interdictory or injunctive relief, may be available. However, propaganda does not often meet these thresholds. Where it does not, MIL strategies and campaigns and fact-checking, coupled with the publication of counter-narratives or counter-disinformation, are effective remedies.(57)

EU strategy against propaganda

The EU’s strategy against propaganda involves three key components: identification, removal, and countering without engaging in counter-propaganda.(58)

  • Identification: The EU acts as a coordination platform between Member States, encouraging information sharing and the exchange of best practices. In 2015, Europol established a specialised unit, the EU Internet Referral Unit (EU IRU), to combat terrorist propaganda online by detecting and tracking such content.

  • Content removal: Regulation (EU) 2021/784 compels internet platforms operating in the EU to swiftly remove terrorist content upon receiving removal orders from national authorities, preventing its dissemination. Notably, this regulation applies to platforms regardless of where they are headquartered.

  • Countering propaganda: The EU emphasises training citizens to resist biased information and supports good-quality journalism and independent media. Platforms such as the Radicalisation Awareness Network (RAN) focus on producing alternative communications to counter extremist propaganda.

Conclusion

False news, comprising disinformation, misinformation, and mal-information, poses complex challenges in today’s digital realm. Addressing these requires multifaceted approaches. Media and Information Literacy (MIL) strategies, encompassing human rights, news, advertising, computer, and privacy literacy, serve as pivotal tools. Complementing these, fact-checking, social media verification, and counter-narratives aid in debunking false content. While legal measures exist, such as litigation in instances of hate speech, caution must be exercised to avoid unintended consequences and the stifling of the right to freedom of expression. By combining educational, technological, and legal strategies, combating false news becomes an ongoing endeavour vital to safeguarding the integrity of information dissemination.

  • 1. Media Defence, ‘False News, Misinformation & Propaganda’ (accessible at https://www.mediadefence.org/resource-hub/false-news-misinformation-and-propaganda/).
  • 2. Baptista and Gradim, ‘Understanding Fake News Consumption: A Review’ (2020) 9(10) Soc. Sci 5.
  • 3. European Regulators Group for Audiovisual Media Services, ‘Notions of Disinformation and Related Concepts’ (2021) at p. 30 (accessible at https://erga-online.eu/wp-content/uploads/2021/03/ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf) (‘ERGA’).
  • 4. Id.
  • 5. International Telecommunication Union, ‘Session 5: Disinformation, misinformation, malinformation and Infodemics: Ways to handle’ (accessible at https://www.itu.int/en/ITU-D/Regional-Presence/AsiaPacific/Pages/Events/2021/ASP Regional Dialogue on Digital Transformation/Session Pages/RD-Session-5.aspx).
  • 6. Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda (2017) (accessible at https://www.osce.org/fom/302796?download=true).
  • 7. Above n 6.
  • 8. Above n 6.
  • 9. UNESCO, ‘Guidelines for the Governance of Digital Platforms: Safeguarding freedom of expression and access to information through a multistakeholder approach,’ (2023) (accessible at https://unesdoc.unesco.org/ark:/48223/pf0000387339).
  • 10. Elinor Carmi and others, ‘Data citizenship: Rethinking data literacy in the age of disinformation, misinformation, and malinformation’ (2020) 9(2) Internet Policy Review 5-6 (accessible at https://policyreview.info/articles/analysis/data-citizenship-rethinking-data-literacy-age-disinformation-misinformation-and).
  • 11. Id.
  • 12. Elinor Carmi and others, ‘Data citizenship: Rethinking data literacy in the age of disinformation, misinformation, and malinformation’ (2020) 9(2) Internet Policy Review 5-6 (accessible at https://policyreview.info/articles/analysis/data-citizenship-rethinking-data-literacy-age-disinformation-misinformation-and).
  • 13. Ali Khan and others, ‘The anatomy of “fake news”: Studying false messages as digital objects” (2022) 37(2) Journal of Information Technology 125 (accessible at https://journals.sagepub.com/doi/10.1177/02683962211037693).
  • 14. Above n 1 at p. 18.
  • 15. Id.
  • 16. Above n 1 at p. 57.
  • 17. Above n 1 at pp. 57-8.
  • 18. Above n 1 pp. 59-61.
  • 19. AP, ‘Warning over online misinformation ahead of Spanish election’ (2023) Euronews (accessible at https://www.euronews.com/2023/07/19/warning-over-online-misinformation-ahead-of-spanish-election).
  • 20. Id.
  • 21. Above n 19.
  • 22. Above n 19.
  • 23. International Press Institute, ‘New report: How conspiracy groups in Spain worked to undermine the media literacy project of the Maldita.es foundation’ (2023) (accessible at https://ipi.media/new-report-how-conspiracy-groups-in-spain-worked-to-undermine-the-media-literacy-project-of-the-maldita-es-foundation/).
  • 24. UNESCO, ‘Journalism, “Fake News” & Disinformation: Handbook for Journalism Education and Training’ (2018) (accessible at https://unesdoc.unesco.org/ark:/48223/pf0000374458).
  • 25. Id.
  • 26. Above n 24.
  • 27. Paolo Cavaliere, ‘The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms’ (2022) 3 European Convention on Human Rights Law Review 513 (accessible at https://brill.com/downloadpdf/view/journals/eclr/3/4/article-p481_005.pdf).
  • 28. Id.
  • 29. Mathieu-Mohin and Clerfayt v Belgium (Application no. 9267/81) (1987) para. 54 (accessible at https://hudoc.echr.coe.int/eng#{“languageisocode”:[“ENG”],”appno”:[“9267/81″],”documentcollectionid2”:[“CHAMBER”],”itemid”:[“001-57536”]}).
  • 30. (Application no. 65518/01) (2005) para. 111 (accessible at https://hudoc.echr.coe.int/app/conversion/docx/?library=ECHR&id=001-23870&filename=SALOV v. UKRAINE.docx&logEvent=False).
  • 31. Id.
  • 32. Above n 30 para. 113.
  • 33. Daniel Funke and Daniela Flamini, ‘A guide to anti-misinformation actions around the world,’ Poynter (accessible at https://www.poynter.org/ifcn/anti-misinformation-actions/).
  • 34. European Commission, ‘Tackling online disinformation’ (accessible at https://digital-strategy.ec.europa.eu/en/policies/online-disinformation).
  • 35. Council of Europe, ‘Information Disorder’ (accessible at https://www.coe.int/en/web/freedom-expression/information-disorder#{“35128646”:[0]}).
  • 36. UNESCO, ‘Journalism, “Fake News” and Disinformation: Handbook for Journalism Education and Training’ (2018) (‘UNESCO Handbook’) (accessible at https://unesdoc.unesco.org/ark:/48223/pf0000265552).
  • 37. Id.
  • 38. Ibid p. 70.
  • 39. Id.
  • 40. Ibid p. 70.
  • 41. Id.
  • 42. Ibid p. 47.
  • 43. Ibid p. 70.
  • 44. European Commission, ‘Digital Education Action Plan (2021-2027)’ (accessible at https://education.ec.europa.eu/focus-topics/digital-education/action-plan).
  • 45. European Commission, ‘DigComp 2.2: The Digital Competence Framework for Citizens – With new examples of knowledge, skills and attitudes’ (2022) (accessible at https://publications.jrc.ec.europa.eu/repository/bitstream/JRC128415/JRC128415_01.pdf).
  • 46. European Committee of the Regions, ‘Developing a handbook on good practice in countering disinformation at local and regional level’ (2022) at p. 29 (accessible at https://cor.europa.eu/en/engage/studies/Documents/Developing a handbook on good practice in countering disinformation at local and regional level/Online-disinformation_full study.pdf).
  • 47. For a useful discussion on the balancing of rights see Judith Geldenhuys and Michelle Kelly-Louw, ‘Hate Speech and Racist Slurs in the South African Context: Where to Start?’ (2020) 23 PER 12 (accessible at https://www.scielo.org.za/pdf/pelj/v23n1/27.pdf).
  • 48. Duke Reporters’ Lab, ‘Browse Fact-Checking’ (2020) (accessible at https://reporterslab.org/fact-checking/).
  • 49. European Digital Media Observatory, ‘Map of Fact-checking Activities in Europe’ (accessible at https://edmo.eu/map-of-fact-checking-activities-in-europe/).
  • 50. UNESCO Handbook above n 36 at p. 81.
  • 51. Id.
  • 52. UNESCO Handbook above n 36 at p. 82.
  • 53. European Regulators Group for Audiovisual Media Services, ‘Notions of Disinformation and Related Concepts’ (2021) at p. 41, accessible at https://erga-online.eu/wp-content/uploads/2021/03/ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf.
  • 54. Id.
  • 55. Ibid at p. 42.
  • 56. Article 20 of the ICCPR, read with Article 4(a) of CERD.
  • 57. See, for example, the UK Government Communications Services, ‘RESIST: Counter-disinformation toolkit’ (accessible at https://www.fundacioncarolina.es/wp-content/uploads/2020/11/Toolkit-UK.pdf).
  • 58. Marie Robin, ‘European Policies in the fight to counter propaganda’ (2023) The Research and Studies Centre on Europe (accessible at https://www.robert-schuman.eu/en/european-issues/665-european-policies-in-the-fight-to-counter-propaganda).