    Misinformation, Disinformation and Mal-Information

    Module 8: ‘False News’, Misinformation and Propaganda

    The problem statement

    Misinformation should not be confused with quality journalism and the circulation of trustworthy information which complies with professional standards and ethics.(1) Misinformation and its ilk are not new, but they have become increasingly powerful as they are fuelled by new technologies and rapid online dissemination. The consequence is that digitally-driven misinformation, in contexts of polarisation, risks eclipsing quality journalism and the truth.(2)

    Increasingly, strategies to combat misinformation should be social and educational in character in order to ensure that the right to freedom of expression is not violated by over-broad legislative provisions which criminalise or chill expression. The current misinformation ecosystem therefore requires a critical assessment of the reasons for the dissemination of misinformation and the establishment of media and information literacy (MIL) campaigns.(3) In effect, combatting misinformation should fall more within the realm of advocacy and education than that of litigation; the limited litigation in this space bears testament to this. However, this is likely to change as digital rights litigators engage in more strategic and test-case litigation seeking to mitigate misinformation while protecting and promoting freedom of expression.

    Defining false information(4)

    • Disinformation: information that is false, and the person who is disseminating it knows it is false. “It is a deliberate, intentional lie, and points to people being actively disinformed by malicious actors.”(5)
    • Misinformation: information that is false, but the person who is disseminating it believes that it is true.(6)
    • Mal-information: information that is based on reality but is used to inflict harm on a person, organisation or country.(7)

    Causes of misinformation

    To understand how to combat misinformation, it is useful to first understand how it spreads. With the advent of the information age and the internet, information spreads more rapidly, often with the click of a mouse.(8) Equally, the speed at which information is transmitted and the instant access to information which the internet provides have caused a rush to publish and to be the first to transmit information. This, alongside more insidious practices such as the intentional distribution of disinformation for economic or political gain, has created what the UN Educational, Scientific and Cultural Organisation (UNESCO) refers to as a “perfect storm”.(9)

    UNESCO identifies three causes enabling the spread of misinformation:

    • Collapsing traditional business models. As a result of the rapid decline in advertising revenue and the failure of digital advertising to generate profit, traditional newsrooms are bleeding audiences, with media consumers moving to “peer-to-peer” news products offering “on-demand” access. Decreasing budgets lead to reduced quality control and less time for “checks and balances”, and also promote “click-bait” journalism. Importantly, peer-to-peer news has no agreed-upon ethics and standards.
    • Digital transformation of newsrooms and storytelling. As the information age develops, there is a discernible digital transformation in the news industry. This transformation causes journalists to prepare content for multiple platforms, limiting their ability to properly interrogate facts. Often, journalists apply a principle of “social-first publishing” whereby their stories are posted directly to social media to meet audience demand in real-time. This, in turn, promotes click-bait practices and the pursuit of “virality” as opposed to quality and accuracy.(10)
    • The creation of new news ecosystems. With increasing access to online audiences as a result of the advent of social media platforms, users of these platforms can curate their own content streams and create their own “trust networks” or “echo chambers” within which inaccurate, false, malicious, and propagandistic content can spread. These new ecosystems allow misinformation to flourish, as users are more likely to share sensationalist stories and far less likely to properly assess sources or facts. Importantly, once a post has been published, a user who becomes aware that it may constitute misinformation is largely unable to “pull back” or correct it.(11)

    These causes continue to pose difficulties for newsrooms, journalists, and social media users as new news ecosystems, in particular, enable malicious practices and actors to flourish. However, as discussed, there is a fine line between seeking to combat the spread of misinformation online and violating the right to freedom of expression.

    WASHLITE v Fox News(12)

    On 2 April 2020, the Washington League for Increased Transparency and Ethics (WASHLITE) instituted proceedings against Fox News, a conservative American news network, claiming that “Fox’s repeated claims that the COVID-19 pandemic was/is a hoax is not only an unfair act, it is deceptive and therefore actionable under Washington’s Consumer Protection Act.”(13) WASHLITE sought a declaration to this effect and an injunction (interdict) prohibiting repeated statements on Fox News that COVID-19 is a hoax. The Washington Superior Court found that WASHLITE’s goal was “laudable” but that its arguments ran “afoul of the protections of the First Amendment”, which guarantees the right to freedom of expression. The case was dismissed.

    Content moderation by private actors

    As private technology platforms have grown their audiences around the world and become increasingly powerful, the decisions they make internally as to how to moderate the content appearing on their platforms have become increasingly consequential for the protection of freedom of expression and access to information in the digital age. How these platforms make decisions about removing or downgrading content they classify as mis- or disinformation requires transparency and accountability in order to ensure the protection of rights and the creation of an enabling information ecosystem. Even decisions about which content is shown to users and how (for example, the ranking and curating of feeds) have the potential to affect freedom of expression and access to information.

    Rarely do the community standards enforced by these companies accord with domestic legal provisions that regulate, for example, hate speech or propaganda. Research has also found that untargeted or disproportionate content moderation disproportionately impacts marginalised persons, mainly through disregarding their experiences on social media.(14)

    While it is important to ensure that states do not approach intermediaries such as social media platforms to attempt to remove online content outside the bounds of the law, it is increasingly apparent that there is a need for greater oversight over the decisions these companies make that affect fundamental rights.

    In this regard, the case of UEJF v. Twitter in France is instructive. As described by the Columbia Global Freedom of Expression Case Law Database:

    “The Paris Court of Appeal confirmed an order from the Paris Tribunal ordering Twitter to provide information on their measures to fight online hate speech. Six French organizations had approached the Court after their research indicated that Twitter only removed under 12% of tweets that were reported to them, and sought information on the resources Twitter dedicated to the fight against online racist, anti-Semitic, homophobic speech and incitement to gender-based violence and commission of crimes against humanity. The Paris Tribunal had ruled that Twitter provide this information, and despite Twitter’s argument in the Court of Appeal that they had no statutory obligation to disclose this information, the Court held that the organizations were entitled to the information to enable them to determine whether to file an application under French law that Twitter was not promptly and systematically removing hate speech from their platform.”(15)

    How to combat misinformation

    Effectively combatting misinformation remains a pressing contemporary issue, with various remedies posited by jurists, academics, and activists. Notably, Associate Justice of the Supreme Court of the United States, Anthony Kennedy, in his majority decision in United States v Alvarez(16) held that “[t]he remedy for speech that is false is speech that is true.  This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight‑out lie, the simple truth.”(17) MIL strategies and campaigns proposed by UNESCO seek to operationalise the position proposed by Justice Kennedy and provide a holistic approach to combating misinformation, without limiting the right to freedom of expression.

    Media and Information Literacy (MIL) strategies and campaigns

    As a point of departure, MIL strategies and campaigns are a process which enables the detection of misinformation and a means to combat its spread, particularly online.(18) MIL is an umbrella concept comprising several inter-related literacies:

    • Human rights literacy which relates to the fundamental rights afforded to all persons, particularly the right to freedom of expression, and the promotion and protection of these fundamental rights.
    • News literacy which refers to literacy about the news media, including journalistic standards and ethics.(19) This includes, for example, the specific ability to understand the “language and conventions of news as a genre and to recognise how these features can be exploited with malicious intent.”(20)
    • Advertising literacy which relates to understanding how online advertising works and how profits are driven in the online economy.(21)
    • Computer literacy which refers to basic IT usage and understanding the easy manner in which headlines, images, and, increasingly, videos can be manipulated to promote a particular narrative.(22)
    • Understanding the “attention economy” which relates to one of the causes of misinformation: the pressure on journalists and editors to use click-bait headlines and misleading imagery to grab the attention of users and, in turn, drive online advertising revenue.(23)
    • Privacy and intercultural literacy which relates to developing standards on the right to privacy and a broader understanding of how communications interact with individual identity and social developments.(24)

    MIL strategies and campaigns, such as the UN’s COVID-19 campaign detailed below, should underscore the importance of media and information literacy in general but should also include a degree of philosophical understanding. According to UNESCO, “[MIL strategies and campaigns should assist users] grasp that authentic news does not constitute the full ‘truth’ (which is something only approximated in human interactions with each other and with reality over time).”(25)

    Five ways in which the UN is fighting the COVID-19 ‘infodemic’

    The coronavirus (COVID-19) pandemic has generated significant amounts of misinformation, ranging from false claims about using disinfectants to combat the virus to claims that the virus can spread through radio waves and mobile networks. In order to counter this “infodemic”, the UN has taken five steps to combat misinformation:(26)

    1. Produce and disseminate facts and accurate information. As a point of departure, the UN identified that the World Health Organisation (WHO) is at the forefront of the battle against the pandemic, transmitting authoritative information based on science while also seeking to counter myths. Identifying sources such as the WHO that produce and disseminate facts is a central tenet of countering misinformation.
    2. Partner with platforms and suitable partners. Allied to the distribution of accurate information is finding the right partners. The UN and the WHO partnered with the International Telecommunication Union (ITU) and the UN Children’s Fund (UNICEF) to help persuade telecommunications companies worldwide to circulate factual text messages about the virus.
    3. Work with the media and journalists. UNESCO published two policy briefs assessing COVID-19 disinformation, which assist journalists working on the frontlines of the “infodemic” around the world to provide accurate, trustworthy and verifiable public health information.
    4. Mobilise civil society. Through the UN Department of Global Communications, key sources of information on opportunities to access, participate in and contribute to UN processes during COVID-19 were communicated to civil society organisations (CSOs) to ensure that all relevant stakeholders are kept informed.
    5. Speak out for rights. Michelle Bachelet, the former UN High Commissioner for Human Rights, joined a chorus of activists to speak out against restrictive measures imposed by states against independent media, as well as the arrest and intimidation of journalists, arguing that the free flow of information is vital in fighting COVID-19.

    Litigation where justifiable limitations exist

    The International Covenant on Civil and Political Rights (ICCPR) provides in article 20 that “[a]ny propaganda for war shall be prohibited by law” and that “[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”

    In addition, article 4(a) of the International Convention on the Elimination of All Forms of Racial Discrimination (CERD) requires that the dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, must be declared an offence that is punishable by law.

    Despite the importance of freedom of expression, not all speech is protected under international law, and some forms of speech are required to be prohibited by states. However, there is a need for clear and narrowly circumscribed definitions of what is meant by the term “hate speech”, or objective criteria that can be applied. Over-regulation of hate speech can violate the right to freedom of expression, while under-regulation may lead to intimidation, harassment or violence against minorities and protected groups.(27)

    In instances where misinformation is so egregious that it meets the definitional elements of hate speech, litigation may be a useful and important tool in the protection and promotion of fundamental rights, including the right to equality and dignity.(28) However, such litigation should consider the potential for unintended consequences and the possibility of jurisprudence which may negatively impact freedom of expression. Depending on the content of the speech and the harm that it causes, the publication of counter-narratives may constitute a useful complementary strategy to litigation.

    For more information on this topic, see module 6 of this series of Advanced Modules on Digital Rights and Freedom of Expression Online in sub-Saharan Africa.

    Fact-checking and social media verification

    Alongside MIL strategies and campaigns and litigating misinformation that constitutes hate speech, another effective tool to combat misinformation is fact-checking and social media verification. According to the Duke Reporters’ Lab, in 2022 there were nearly 400 fact-checking projects debunking misinformation in 105 countries around the world, up from about 186 organisations in 2016.(29)

    In general, fact-checking and verification processes, which were first introduced by US weekly magazines such as Time in the 1920s,(30) consist of:

    • Ex-ante fact-checking and verification. Increasingly, and due to shrinking newsroom budgets, ex-ante (or before-the-event) fact-checking is reserved for more prominent and established newsrooms and publications which employ dedicated fact-checkers.
    • Ex-post fact-checking, verification, and “debunking”. This method of fact-checking is increasingly popular and examines information after it has been published, enabling accountability for the veracity of information after publication. Debunking is a subset of fact-checking and requires a specific set of verification skills, increasingly in relation to user-generated content on social media platforms.

    Fact-checking is central to strategies to combat misinformation and has grown exponentially in recent years due to the increasing spread of false news and misinformation, and the need to debunk viral hoaxes.(31) Alongside MIL strategies and campaigns, fact-checking and social media verification are becoming increasingly important in the fight against false news and misinformation.

    REAL411 and PADRE

    REAL411 is an initiative launched in South Africa as a civil society-led strategy to combat disinformation. The online platform, which was supported by South Africa’s Independent Electoral Commission during the 2019 national elections and the 2021 local elections, allows users to report disinformation to the Digital Complaints Committee (DCC), which assists a complainant with referrals to one of the multiple statutory bodies in South Africa that may assist with a remedy. The DCC may also assist with the publication of counter-narratives. Aggrieved parties may appeal to the Appeals Committee should they be dissatisfied with an outcome. REAL411 has since expanded to address online hate speech, incitement, and harassment as well.

    In addition to REAL411, PADRE, or the Political Party Advert Repository, was an innovative civil-society initiative which collated political party advertisements and helped users distinguish between genuine and false political party advertising during South Africa’s 2019 national elections.


    1. UNESCO, ‘Journalism, “Fake News” and Disinformation: Handbook for Journalism Education and Training’ (2018) (UNESCO Handbook) at p. 18.
    2. Id.
    3. Id at p. 70.
    4. Id at pp. 44-45.
    5. Id at pp. 44-45.
    6. Id.
    7. Id.
    8. Id at p. 55.
    9. Id.
    10. Id at pp. 57-58.
    11. Id at pp. 59-61.
    12. Washington League for Increased Transparency and Ethics v Fox News, Plaintiffs’ Complaint for Declaratory and Injunctive Relief, 2 April 2020.
    13. Id.
    14. Eugenia Siapera, ‘AI Content Moderation, Racism and (de)Coloniality’, International Journal of Bullying Prevention (2021) p. 61.
    15. Columbia Global Freedom of Expression Database, ‘UEJF v. Twitter’ (2022).
    16. United States v Alvarez, 567 U.S. 709 (2012).
    17. Id at pp. 15-16.
    18. UNESCO Handbook above n 1 at p. 70.
    19. Id.
    20. Id.
    21. Id.
    22. Id.
    23. Id.
    24. Id.
    25. Id at p. 72.
    26. United Nations, ‘5 ways the UN is fighting the “infodemic” of misinformation’ (2020).
    27. For a useful discussion on the balancing of rights see J Geldenhuys and M Kelly-Louw, ‘Hate Speech and Racist Slurs in the South African Context: Where to Start?’ (Vol 23) [2020] PER 12.
    28. Id.
    29. Duke Reporters’ Lab, ‘Fact-checkers extend their global reach with 391 outlets, but growth has slowed’ (2022).
    30. UNESCO Handbook above n 1 at p. 81.
    31. For more resources on the legal defence of fact-checkers, see the Fact-Checkers Legal Support Initiative.