How To Combat Misinformation, Disinformation and Mal-Information
Module 5: ‘False News’, Misinformation & Propaganda
Of particular importance in the European context is the EU Digital Services Act, which came into force in November 2022 and applies across the EU. The law targets major online intermediaries and platforms, requiring them to put in place systems to control the spread of misinformation, hate speech and terrorist propaganda, on pain of large penalties calculated as a proportion of global annual revenue, or a ban. It also imposes transparency requirements concerning the spread of certain types of content and the role platforms' services play in that spread, as well as an obligation to conduct an annual risk assessment.
In addition to legislation, the European Commission has introduced several alternative measures to combat disinformation:(1)
- The Communication on “Tackling online disinformation: a European Approach” compiles tools to combat the propagation of disinformation and safeguard EU principles, while the 2022 Code of Practice on Disinformation aims to fulfil the objectives outlined in the Communication.
- The Action Plan on Disinformation aims to enhance the EU’s capacity and collaboration in combatting disinformation.
- The European Democracy Action Plan outlines standards for the responsibilities and the liability of online platforms in combatting disinformation.
- The European Digital Media Observatory (EDMO), an independent observatory, unites fact-checkers, academic researchers specialising in online disinformation, social media platforms, journalist-driven media, and media literacy experts.
- The Strengthened Code of Practice on Disinformation, endorsed on 16 June 2022, brings together diverse stakeholders committed to a broad range of voluntary obligations to counter disinformation.
- The 2018 report of the European Commission High-level Group of Experts on fake news and online disinformation encourages a multi-dimensional approach to tackling these issues along the lines of five pillars.
Additionally, two expert groups, namely the Committee of Experts on quality journalism in the digital age and the Committee of Experts on Human Rights Dimensions of automated data processing and different forms of artificial intelligence have been appointed by the Council of Europe to explore in more detail how member states can promote a favourable environment for “an independent, diverse and pluralistic media environment in which societies can both trust and actively participate in.”(2)
Media and Information Literacy (MIL) strategies and campaigns
Given the risks inherent in legislation of regulating and criminalising speech, UNESCO proposes MIL strategies and campaigns as an alternative mechanism to detect misinformation and combat its spread, particularly online.(3)
Defining Media and Information Literacy
Media and Information Literacy (MIL) is an umbrella concept encompassing several inter-related literacies:
- Human rights literacy which relates to the fundamental rights afforded to all persons, particularly the right to freedom of expression, and the promotion and protection of these fundamental rights.(4)
- News literacy which refers to literacy about the news media, including journalistic standards and ethics.(5) This includes, for example, the specific ability to understand the “language and conventions of news as a genre and to recognise how these features can be exploited with malicious intent.”(6)
- Advertising literacy which relates to understanding how advertising online works and how profits are driven in the online economy.(7)
- Computer literacy which refers to basic IT usage and understanding how headlines, images, and, increasingly, videos can be manipulated to promote a particular narrative.(8)
- Understanding the “attention economy” which relates to one of the causes of misinformation and the incentives to create click-bait headlines and misleading imagery to grab the attention of users and, in turn, drive online advertising revenue.(9)
- Privacy and intercultural literacy which relate to developing standards on the right to privacy and a broader understanding of how communications interact with individual identity and social developments.(10)
The EU’s Digital Education Action Plan (2021-2027) also emphasises the importance of developing digital competencies and skills among learners, both in formal and non-formal education settings.(11) Additionally, the Digital Competence Framework for Citizens, formulated by the European Commission, outlines a comprehensive set of skills essential for all learners, spanning information and data literacy, digital content creation, online safety, and well-being.(12)
Media literacy programmes in countries such as Sweden aim to strengthen citizen resilience against disinformation and propaganda, highlighting the significance of media literacy in combating disinformation.(13)
Litigation where justifiable limitations exist
The International Covenant on Civil and Political Rights (ICCPR) provides in Article 20 that “[a]ny propaganda for war shall be prohibited by law” and that “[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”
In addition, Article 4(a) of the International Convention on the Elimination of All Forms of Racial Discrimination (CERD) requires that the dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, must be declared an offence that is punishable by law.
Article 10(2) of the European Convention on Human Rights (ECHR) guarantees freedom of expression but acknowledges limitations in cases where expressions contribute to social harm. The provision states that:
[freedom of expression] may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence or for maintaining the authority and impartiality of the judiciary.
Efforts to regulate and prohibit misinformation and disinformation constitute restrictions on expression, which must, therefore, align with the general requirements of legitimate aim, necessity and proportionality, and serve specific objectives outlined in human rights instruments. Where mis- or disinformation might amount to hate speech, terrorist content, or other forms of speech that can be legitimately prohibited, the relevant provisions under international and regional law will apply.
In instances where misinformation is so egregious that it meets the definitional elements of hate speech, litigation may be a useful and important tool in the protection and promotion of fundamental rights, including the right to equality and dignity.(14)
However, such litigation should be carefully assessed for unintended consequences and the possibility of jurisprudence that may negatively impact freedom of expression. Depending on the content of the speech and the harm that it causes, the publication of counter-narratives may constitute a useful complementary strategy to litigation.
Fact-checking and social media verification
Alongside MIL strategies and campaigns and litigating misinformation that constitutes hate speech, another effective tool to combat misinformation is fact-checking and social media verification. According to the Duke Reporters’ Lab, there are around 125 fact-checking projects debunking false news and misinformation in 37 European countries as of 2023.(15) In addition, the European Digital Media Observatory, which presents a map with the names and locations of all of Europe’s fact-checking organisations, demonstrates a considerable number of organisations dedicated to fact-checking information disseminated online.(16)
Fact-checking and verification processes are not new; they were first introduced by US weekly magazines such as Time in the 1920s.(17) However, they have had to adapt to the dynamic online environment and changing trends in the information ecosystem. In general, fact-checking efforts within newsrooms consist of:
- Ex-ante fact-checking and verification: increasingly, due to shrinking newsroom budgets, ex-ante (or before publication) fact-checking is reserved for more prominent and established newsrooms and publications that employ dedicated fact-checkers.(18)
- Ex-post fact-checking, verification and “debunking:” this method of fact-checking is becoming increasingly popular and focuses on information published after the fact. It concentrates “primarily (but not exclusively) on political ads, campaign speeches and political party manifestos” and seeks to make politicians and other public figures accountable for the truthfulness of their statements.(19) Debunking is a subset of fact-checking and requires a specific set of verification skills, increasingly in relation to user-generated content on social media platforms.
Fact-checking is central to strategies to combat misinformation and has grown exponentially in recent years due to the increasing spread of false news and misinformation and the need to debunk viral hoaxes.
Regulatory measures concerning journalism and media also play a pivotal role in effectively countering misinformation.(20) Media self-regulatory bodies use established rules on objectivity, honesty, accuracy, fairness, and rigour of information to deal with disinformation cases.(21) Examples from different countries, such as Germany, Latvia, Denmark, and Sweden, demonstrate how these jurisdictions deal with factual accuracy, ethical reporting, and correction of erroneous information in media publications.(22)