What is ‘False News’?
Module 8: ‘False News’, Misinformation and Propaganda
Although definitions of misinformation and disinformation are not universally agreed upon, especially in the online realm, we can glean insights from emerging interpretations of these concepts for comparative purposes:(1)
- Disinformation: information that is false, and the person who is disseminating it knows it is false. “It is a deliberate, intentional lie, and points to people being actively disinformed by malicious actors.”
- Misinformation: information that is false, but the person who is disseminating it believes that it is true.
- Mal-information: information that is based on reality but is used to inflict harm on a person, organisation or country.
Disinformation refers to content purporting to be news that is intentionally and verifiably false and that seeks to mislead readers. In June 2023, the United Nations Secretary-General published a policy brief on information integrity on digital platforms. The policy brief identifies fake news sites designed to look legitimate as one tactic of disinformation, and notes that cloned versions of news sites and articles make the true scale of false news difficult to track. In the public address accompanying the launch of the policy brief, the Secretary-General noted with concern the impact that the rapid growth of generative artificial intelligence and digital platforms has on spreading mis- and disinformation globally.(2) The Secretary-General further noted that digital platforms have done little to reduce the spread of hate speech and mis- and disinformation on their platforms. Among other things, the policy brief proposes that:
- Governments, technology companies, and other involved parties should avoid using, endorsing, or promoting disinformation and hate speech for any reason.
- Governments ought to ensure a free, sustainable, independent, and diverse media environment, providing robust safeguards for journalists.
- Digital platforms need to prioritize safety and privacy in the design of all products. They should consistently apply policies and allocate resources across different countries and languages.
- All those involved should swiftly implement measures to ensure that all applications of artificial intelligence are safe, secure, responsible, and ethical, complying with human rights obligations.
For the purposes of this module, the term “misinformation” is used broadly and, unless otherwise specified, includes disinformation and mal-information. The term ‘false news’ is avoided unless referring to legal provisions that regulate it, because the concept of ‘news’ should not be conflated with false information. Misinformation should not be confused with quality journalism and the circulation of trustworthy information which complies with professional standards and ethics.(3) Misinformation and its ilk are not new but have become increasingly powerful, fuelled by new technologies and rapid online dissemination. The consequence is that digitally‑driven misinformation, in contexts of polarisation, risks eclipsing quality journalism and the truth.
Prevalence of mis- and disinformation
The 2017 Joint Declaration on Freedom of Expression and ‘Fake News,’ Disinformation and Propaganda (2017 Joint Declaration) noted the growing prevalence of disinformation and propaganda, both online and offline, and the various harms to which they may contribute or of which they may be a primary cause. The quandary remains that the internet both facilitates the circulation of disinformation and propaganda and provides a useful tool for responding to them.
More recently, in October 2023, at the 77th Ordinary Session of the African Commission on Human and Peoples’ Rights, the LEXOTA disinformation tracker was launched to explore and track the role of government and law in curbing disinformation and their impact on freedom of expression.(4) The tracker operates in real time and monitors 44 of the 55 African countries, making it a useful tool for understanding how disinformation is regulated across the continent.
The human rights implications of mis- and disinformation
In March 2017, the Joint Declaration on Freedom of Expression and ‘Fake News,’ Disinformation and Propaganda (2017 Joint Declaration) was issued by the relevant freedom of expression mandate-holders of the United Nations (UN), the African Commission on Human and Peoples’ Rights (ACHPR), the Organisation for Security and Co-operation in Europe (OSCE), and the Organisation of American States (OAS). The subsequent 2023 Joint Declaration on Media Freedom and Democracy stressed that the media should:(5)
- Adhere to high standards of information provision that meet recognised professional and ethical standards;
- Refrain from and distance themselves from disinformation, discrimination, hate speech and propaganda. The media should never serve as a vehicle for propaganda for war and should promptly correct any incidental errors in their reporting; and
- Proactively work towards identifying and changing harmful stereotypes, and counteract disinformation, hate speech, discriminatory norms and attitudes, as well as negative prejudice, in their coverage and reporting.
Principle 22 of the 2019 ACHPR principles calls on states to repeal laws in their respective countries that criminalise the publication of false news.(6) This recommendation likely stems from concerns about the potential misuse of calls to curb mis- and disinformation, and attempts to strike a balance between combating misinformation and protecting individuals’ right to free expression.
Importantly, the 2017 Joint Declaration stressed that general prohibitions on the dissemination of information based on vague and ambiguous ideas, such as ‘false news,’ are incompatible with international standards for restrictions on freedom of expression. However, it went further to state that this did not justify the dissemination of knowingly or recklessly false statements by official or state actors. In this regard, the Joint Declaration called on state actors to take care to ensure that they disseminate reliable and trustworthy information, and not to make, sponsor, encourage or further disseminate statements that they know (or reasonably should know) to be false or which demonstrate a reckless disregard for verifiable information.
The 2017 Joint Declaration identified the following standards for disinformation and propaganda:
“Standards on disinformation and propaganda
(a) General prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”, are incompatible with international standards for restrictions on freedom of expression, as set out in paragraph 1(a), and should be abolished.
(b) Criminal defamation laws are unduly restrictive and should be abolished. Civil law rules on liability for false and defamatory statements are legitimate only if defendants are given a full opportunity and fail to prove the truth of those statements and also benefit from other defences, such as fair comment.
(c) State actors should not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda).
(d) State actors should, in accordance with their domestic and international legal obligations and their public duties, take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment.”
Causes of misinformation
To understand how to combat misinformation, it is useful to first understand its causes and how it spreads. With the advent of the information age and the internet, information spreads more rapidly, often with the click of a mouse.(7) Equally, the speed at which information is transmitted and the instant access to information that the internet provides have caused a rush to publish and to be the first to transmit information. This, alongside more insidious practices such as the intentional distribution of disinformation for economic or political gain, has created what the United Nations (UN) Educational, Scientific and Cultural Organisation (UNESCO) refers to as a “perfect storm”.(8)
In the UN Secretary-General’s report titled “Our Common Agenda”, it was noted that while the UN vehemently upholds the universal right to freedom of expression, it is crucial to foster a collective, evidence-based agreement within societies regarding the public value of facts, science, and knowledge.(9) Efforts to enable this include:
- Reestablishing the moral imperative against lying. Institutions can serve as a “reality check” for communities, mitigating disinformation, countering hate speech, and addressing online harassment, particularly against women and girls.
- Expediting efforts in generating and disseminating trustworthy, verified information.
The UN, with its pivotal role, can enhance these efforts, drawing inspiration from successful models such as the Intergovernmental Panel on Climate Change, the World Meteorological Organization Scientific Advisory Panel, or the Verified Initiative for COVID-19.
Additional measures involve:
- supporting independent media in the public interest
- regulating social media, fortifying freedom of information laws, and
- ensuring significant representation of science and expertise in decision-making through entities like science commissions.
A collaborative exploration of a global code of conduct promoting integrity in public information is proposed, involving states, media outlets, and regulatory bodies, facilitated by the United Nations. Given contemporary concerns about trust and distrust related to technology and the digital realm, there is a recognised need to better understand, regulate, and manage our digital commons as a global public good.
These causes continue to pose difficulties for newsrooms, journalists, and social media users as new news ecosystems, in particular, enable malicious practices and actors to flourish. However, as discussed, there is a fine line between seeking to combat the spread of misinformation online and violating the right to freedom of expression.
Content moderation by private actors
As private technology platforms have grown their audiences around the world and become increasingly powerful, the decisions they make internally as to how to moderate the content appearing on their platforms have become increasingly consequential for the protection of freedom of expression and access to information in the digital age. How these platforms make decisions about removing or downgrading content they classify as mis- or disinformation requires transparency and accountability in order to ensure the protection of rights and the creation of an enabling information ecosystem. Even decisions about which content is shown to users and how (for example, ranking and curating of feeds) have the potential to affect freedom of expression and access to information.
Rarely do the community standards enforced by these companies accord with domestic legal provisions that regulate, for example, hate speech or propaganda. Research has also found that untargeted or heavy-handed content moderation disproportionately impacts marginalised persons, mainly by disregarding their experiences on social media.(10)
While it is important to ensure that states do not approach intermediaries such as social media platforms to attempt to remove online content outside the bounds of the law, it is increasingly apparent that there is a need for greater oversight over the decisions these companies make that affect fundamental rights.
In this regard, the case of UEJF v. Twitter in France is instructive. As described by the Columbia Global Freedom of Expression Case Law Database:
“The Paris Court of Appeal confirmed an order from the Paris Tribunal ordering Twitter to provide information on their measures to fight online hate speech. Six French organizations had approached the Court after their research indicated that Twitter only removed under 12% of tweets that were reported to them and sought information on the resources Twitter dedicated to the fight against online racist, anti-Semitic, homophobic speech and incitement to gender-based violence and commission of crimes against humanity. The Paris Tribunal had ruled that Twitter provide this information, and despite Twitter’s argument in the Court of Appeal that they had no statutory obligation to disclose this information, the Court held that the organizations were entitled to the information to enable them to determine whether to file an application under French law that Twitter was not promptly and systematically removing hate speech from their platform.”(11)
Legal responses to mis- and disinformation
False news provisions are laws which prohibit and punish the dissemination of false or inaccurate statements. The criminalisation of false news has been struck down by courts in various countries.(12)
For example, in the matter of Chavunduka and Another v Minister of Home Affairs and Another,(13) the Zimbabwe Supreme Court dealt with the constitutionality of the criminal offence of publishing false news under Zimbabwean law. In 1999, following the publication of an article in The Standard titled “Senior army officers arrested”, the editor and a senior journalist were charged with contravening section 50(2)(a) of the Law and Order Maintenance Act, on the basis that they had published a false statement that was likely to cause fear, alarm, or despondency among the public or a section of the public. The editor and journalist challenged the constitutionality of this provision as being an unjustifiable limitation of the right to freedom of expression and the right to a fair trial.
Of particular relevance, in finding that the section was indeed unconstitutional, the Supreme Court stated that:
“Because s 50(2)(a) is concerned with likelihood rather than reality and since the passage of time between the dates of publication and trial is irrelevant, it is, to my mind, vague, being susceptible of too wide an interpretation. It places persons in doubt as to what can lawfully be done and what cannot. As a result, it exerts an unacceptable “chilling effect” on freedom of expression, since people will tend to steer clear of the potential zone of application to avoid censure, and liability to serve a maximum period of seven years’ imprisonment.
The expression “fear, alarm or despondency” is over-broad. Almost anything that is newsworthy is likely to cause to some degree at least, in a section of the public or in a single person, one or other of these subjective emotions. A report of a bus accident which mistakenly informs that fifty instead of forty-nine passengers were killed, might be considered to fall foul of s 50(2)(a).
The use of the word “false” is wide enough to embrace a statement, rumour or report which is merely incorrect or inaccurate, as well as a blatant lie; and actual knowledge of such condition is not an element of liability; negligence is criminalised. Failure by the person accused to show, on a balance of probabilities, that any or reasonable measures to verify the accuracy of the publication were taken, suffices to incur liability even if the statement, rumour or report that was published was simply inaccurate.”
Accordingly, the Supreme Court held that the criminalisation of false news, as contained in section 50(2)(a), was unconstitutional and a violation of the right to freedom of expression. Unfortunately, false news provisions have since found their way into other legislation in Zimbabwe and have been used to justify the arrest and silencing of critics and journalists.(14) Zimbabwe’s Data Protection Act, which, as of January 2024, has been enacted but is not yet in force, criminalises the spreading of false information online. The Act states that any person who unlawfully and intentionally makes available, broadcasts, or distributes data to any other person concerning an identified or identifiable person, knowing it to be false, with intent to cause psychological or economic harm, will be guilty of an offence. Civil society has raised concerns that this provision promotes self-censorship and unjustifiably infringes on freedom of expression.(15)
Courts in other countries have also grappled with these issues:
- In Botswana, a journalist was criminally charged in 2022 with the offence of making alarming publications.(16) The offence is contained in Botswana’s Penal Code and carries a penalty of up to two years’ imprisonment or a fine.
- In the case of Media Council of Tanzania v Attorney General, the East African Court of Justice unanimously ruled that several sections of Tanzania’s Media Services Act were in violation of the Treaty for the Establishment of the East African Community.(17) The court found that these provisions encroached upon the right to freedom of expression. The legal challenge was initiated by three non-governmental organisations in Tanzania that were troubled by the legislation’s use of criminal offences for defamation, false news, and other media-related conduct. They also raised concerns about restrictions on the publication of certain content and mandatory media accreditation. The Court determined that the Tanzanian government had not successfully demonstrated the legitimacy of the restrictions imposed by the law on the right to freedom of expression. It concluded that the contested provisions of the Act breached the treaty by infringing on the right to freedom of expression safeguarded by the African Charter on Human and Peoples’ Rights. As a remedy, the Court instructed Tanzania to bring the Media Services Act into alignment with the provisions of the Treaty.
- In 2014, the High Court of Zambia in Chipenzi v. The People likewise struck down a provision in the country’s Penal Code that prohibited the publication of false information likely to cause public fear, holding that it did not amount to a reasonable justification for limiting freedom of expression.(18)
- More recently, the ECOWAS Community Court of Justice delivered a landmark judgment in the case of Federation of African Journalists and Others v The Gambia,(19) in which it found that the rights of four Gambian journalists had been violated by the state authorities. It was submitted that security agents of The Gambia arbitrarily arrested, harassed, and detained the journalists under inhumane conditions, and forced them into exile for fear of persecution as a consequence of their work as journalists. The Court upheld the claim, finding that The Gambia had violated the journalists’ rights to freedom of expression, liberty, and freedom of movement, as well as the prohibition against torture. As such, it awarded six million Dalasi in compensation to the journalists. Importantly, The Gambia was ordered to immediately repeal or amend its laws on, amongst others, false news, in line with its obligations under international law.
- In a related case, in 2018 the Court of Cassation of Tunis in Tunisia in Attorney General v. N.F upheld the acquittal of a woman who had been charged with ‘publication of false news threatening public order’ for publishing statements alleging electoral fraud.(20) The Court held that because the woman had subsequently deleted the post, she could not be found to have criminal intent.
How to combat misinformation
Effectively combating misinformation remains a pressing contemporary issue, with various remedies posited by jurists, academics, and activists. Notably, Associate Justice of the Supreme Court of the United States Anthony Kennedy, in his majority opinion in United States v Alvarez,(21) held that “[t]he remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”
MIL strategies and campaigns proposed by organisations such as UNESCO seek to operationalise the position proposed by Justice Kennedy and provide a holistic approach to combating misinformation, without limiting the right to freedom of expression.
Media and Information Literacy (MIL) strategies and campaigns
As a point of departure, MIL strategies and campaigns are a process that enables the detection of misinformation and a means to combat its spread, particularly online.(22) MIL is an umbrella concept comprising several inter-related literacies:
- Human rights literacy which relates to the fundamental rights afforded to all persons, particularly the right to freedom of expression, and the promotion and protection of these fundamental rights.
- News literacy which refers to literacy about the news media, including journalistic standards and ethics. This includes, for example, the specific ability to understand the “language and conventions of news as a genre and to recognise how these features can be exploited with malicious intent.”
- Advertising literacy which relates to understanding how online advertising works and how profits are driven in the online economy.
- Computer literacy which refers to basic IT usage and understanding the easy manner in which headlines, images, and, increasingly, videos can be manipulated to promote a particular narrative.
- Understanding the “attention economy” which relates to one of the causes of misinformation: the pressure on journalists and editors to use click-bait headlines and misleading imagery to grab the attention of users and, in turn, drive online advertising revenue.
- Privacy and intercultural literacy which relates to developing standards on the right to privacy and a broader understanding of how communications interact with individual identity and social developments.
MIL strategies and campaigns should underscore the importance of media and information literacy in general but should also include a degree of philosophical reflection. According to UNESCO, MIL strategies and campaigns should assist users to “grasp that authentic news does not constitute the full ‘truth’ (which is something only approximated in human interactions with each other and with reality over time).”(23)
Litigation where justifiable limitations exist
The International Covenant on Civil and Political Rights (ICCPR) provides in article 20 that “[a]ny propaganda for war shall be prohibited by law” and that “[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”
In addition, article 4(a) of the International Convention on the Elimination of All Forms of Racial Discrimination (CERD) requires that the dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, must be declared an offence that is punishable by law.
Despite the importance of freedom of expression, not all speech is protected under international law, and some forms of speech are required to be prohibited by states. However, there is a need for clear and narrowly circumscribed definitions of what is meant by the term “hate speech”, or objective criteria that can be applied. Over-regulation of hate speech, as well as false statements, can violate the right to freedom of expression, while under-regulation may lead to intimidation, harassment or violence against minorities and protected groups.
In instances where misinformation is so egregious that it meets the definitional elements of hate speech, litigation may be a useful and important tool in the protection and promotion of fundamental rights, including the right to equality and dignity.(24) However, such litigation should consider the potential for unintended consequences and the possibility of jurisprudence which may negatively impact freedom of expression. Depending on the content of the speech and the harm that it causes, the publication of counter-narratives may constitute a useful complementary strategy to litigation.
For more information on this topic, see Module 6 of this series of Advanced Modules on Digital Rights and Freedom of Expression Online in sub-Saharan Africa.
Fact-checking and social media verification
Alongside MIL strategies and campaigns and litigating misinformation that constitutes hate speech, another effective tool to combat misinformation is fact-checking and social media verification. According to the Duke Reporters’ Lab, in 2022 there were nearly 400 fact-checking projects debunking misinformation in 105 countries around the world, up from about 186 organisations in 2016.(25)
In general, fact-checking and verification processes, which were first introduced by US weekly magazines such as Time in the 1920s,(26) consist of:
- Ex-ante fact-checking and verification. Increasingly, due to shrinking newsroom budgets, ex-ante (before the event) fact-checking is reserved for more prominent and established newsrooms and publications that employ dedicated fact-checkers.
- Ex-post fact-checking, verification, and “debunking”. This method of fact-checking is increasingly popular and focuses on holding publishers accountable for the veracity of information after publication. Debunking is a subset of fact-checking and requires a specific set of verification skills, increasingly in relation to user-generated content on social media platforms.
Fact-checking is central to strategies to combat misinformation and has grown exponentially in recent years due to the increasing spread of misinformation and the need to debunk viral hoaxes.(27) Its role in the fight against misinformation continues to grow, alongside efforts to build the independence, credibility, and scale of fact-checkers’ work.
Civic initiatives to combat disinformation
Real411 is an initiative launched in South Africa as a civil society-led strategy to combat disinformation. The online Real411 platform, which was supported by South Africa’s Independent Electoral Commission during the 2019 national elections and the 2021 local elections, allows users to report disinformation to the Digital Complaints Committee (DCC), which assists a complainant with referrals to one of the multiple statutory bodies in South Africa that may provide a remedy. The DCC may also assist with the publication of counter-narratives. Aggrieved parties may appeal to the Appeals Committee should they be dissatisfied with an outcome. Real411 has since expanded to address online hate speech, incitement, and harassment as well.
The South African Independent Electoral Commission (IEC) partnered with social media platforms to combat disinformation ahead of South Africa’s 2024 National and Provincial elections. The IEC together with Google, Meta, TikTok and non-governmental organisations signed a Framework of Cooperation to work together to combat disinformation and other digital harms.(28) The Framework sets out to:
- Establish cooperation during the election period in good faith;
- Foster collaboration that respects existing laws and does not require sharing confidential user data;
- Support the establishment of a Working Group between partners which promotes access to accurate information, conducts awareness campaigns on elections, and provides training to political parties, election candidates and other key election stakeholders on addressing disinformation;
- Allow online platforms to implement policies and processes such as content removal, advisory warnings, and delisting to address disinformation; and
- Enable signatories to cooperate with the IEC and Media Monitoring Africa’s initiatives including Real 411 and the Political Party Advert Repository (PADRE).