
    Forms of Criminalisation

    Module 3: Criminalisation of Online Speech

    In 2019, the African Commission on Human and Peoples’ Rights (ACHPR) recognised that the primary issues relating to freedom of expression include:

    • Co-regulation of the media.
    • Safety of journalists.
    • Restrictions related to cyber-crime laws.
    • Regulation of the internet.

    While there is an array of actions and forms of speech that have attracted criminal sanctions, this section focuses on hate speech, cybercrimes and disinformation. (1)

    Hate speech

    The reconciliation of values

    A 2019 report by the UN Special Rapporteur on Freedom of Expression (UNSR on FreeEx) found that:

    “Under international human rights law, the limitation of hate speech seems to demand a reconciliation of two sets of values: democratic society’s requirements to allow open debate and individual autonomy and development with the compelling obligation to prevent attacks on vulnerable communities and ensure the equal and non-discriminatory participation of all individuals in public life.  Governments often exploit the resulting uncertainty to threaten legitimate expression, such as political dissent and criticism or religious disagreement.  However, the freedom of expression, the rights to equality and life and the obligation of non-discrimination are mutually reinforcing; human rights law permits [s]tates and companies to focus on protecting and promoting the speech of all, especially those whose rights are often at risk, while also addressing the public and private discrimination that undermines the enjoyment of all rights.”

    The above recognition by the UNSR illustrates some of the complexities regarding the criminalisation of hate speech.  The escalation of prejudice and intolerance has led many governments to criminalise hate speech.  However, there are inherent difficulties with this because hate speech is a vague term that lacks a universal understanding, is open to abuse, and can result in restrictions on a wide range of lawful expression.

    Overview of international instruments dealing with hate speech

    Article 20(2) of the ICCPR obliges states to prohibit by law “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.”  The Rabat Plan of Action was introduced in 2012 to provide recommendations on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.  It outlines six factors to be considered when determining whether a speaker, through the advocacy of discriminatory hatred, intends to incite their audience to engage in violent or discriminatory action and is capable of having that effect.

    Rabat Plan of Action: Six-part threshold test for expressions considered as criminal offences

    Context: Context is of great importance when assessing whether particular statements are likely to incite discrimination, hostility or violence against the target group, and it may have a direct bearing on both intent and/or causation. Analysis of the context should place the speech act within the social and political context prevalent at the time the speech was made and disseminated.

    Speaker: The speaker’s position or status in the society should be considered, specifically the individual’s or organization’s standing in the context of the audience to whom the speech is directed.

    Intent: Article 20 of the International Covenant on Civil and Political Rights anticipates intent. Negligence and recklessness are not sufficient for an act to be an offence under article 20 of the Covenant, as this article provides for “advocacy” and “incitement” rather than the mere distribution or circulation of material. In this regard, it requires the activation of a triangular relationship between the object and subject of the speech act as well as the audience.

    Content and form: The content of the speech constitutes one of the key foci of the court’s deliberations and is a critical element of incitement. Content analysis may include the degree to which the speech was provocative and direct, as well as the form, style, nature of arguments deployed in the speech or the balance struck between arguments deployed.

    Extent of the speech act: Extent includes such elements as the reach of the speech act, its public nature, its magnitude and size of its audience. Other elements to consider include whether the speech is public, what means of dissemination are used, for example by a single leaflet or broadcast in the mainstream media or via the Internet, the frequency, the quantity and the extent of the communications, whether the audience had the means to act on the incitement, whether the statement (or work) is circulated in a restricted environment or widely accessible to the general public.

    Likelihood, including imminence: Incitement, by definition, is an inchoate crime. The action advocated through incitement speech does not have to be committed for said speech to amount to a crime. Nevertheless, some degree of risk of harm must be identified. It means that the courts will have to determine that there was a reasonable probability that the speech would succeed in inciting actual action against the target group, recognizing that such causation should be rather direct.

    Identifying hate speech

    ARTICLE 19 provides useful guidance to activists and lawyers in understanding how to identify hate speech, what the state’s role is, and when hate speech may be criminalised.  In this regard, it is important to distinguish between the advocacy of hatred that constitutes incitement, on the one hand, and hateful expression that may not constitute advocacy or incitement, on the other.  The former relates to article 20 of the ICCPR and article 4 of the ICERD.  States are obliged to prohibit advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, but notably, states are not obliged to criminalise such kinds of expression.  The latter does not meet the definitions under article 20 of the ICCPR or article 4 of the ICERD, and will require strict compliance with international law if it is to be criminalised.

    As explained by ARTICLE 19, there are different categories of hate speech:

    • Hate speech that must be prohibited in terms of international law includes:
      • Direct and public incitement to genocide.  This is prohibited by the Convention on the Prevention and Punishment of the Crime of Genocide and the Rome Statute of the International Criminal Court.
      • Any advocacy of discriminatory hatred that constitutes incitement to discrimination, hostility or violence.  This is prohibited in terms of article 20 of the ICCPR.
      • Propaganda and organisations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form.  This is prohibited in terms of article 4 of the ICERD.
    • Hate speech that may be prohibited, provided that the prohibition complies with the standards of article 19(3) of the ICCPR:
      • Legality: laws criminalising hate speech must be precise, public and transparent.
      • Legitimacy: the restriction must pursue a legitimate aim, namely respect for the rights or reputations of others, or the protection of national security, public order, public health or morals.
      • Necessity and proportionality: the criminalising legislation must protect a legitimate interest and be the least restrictive means to achieve the purported aim.
    • Hate speech that is lawful and that should be protected:
      • Inflammatory or offensive expression that does not meet the above thresholds.

    ARTICLE 19 Hate Speech Explained: A Toolkit

    ARTICLE 19 has published a toolkit that provides a guide to identifying hate speech and to countering it effectively while protecting the rights to freedom of expression and equality.  The toolkit responds to a growing demand for clear guidance on identifying ‘hate speech’ and on responding to the challenges it poses within a human rights framework.

    It is clear that cooperation from the state can be an effective means of safeguarding human rights.  However, states are not always fulfilling their duties.  Accordingly, lawyers, civil society organisations (CSOs), individuals and community members need to work together to ensure that states are acting in compliance with their international human rights obligations.  This can include strategic litigation, policy reform and advocacy, such as:

    • Ensuring that states are creating an enabling environment for the right to freedom of expression.  This can include ratifying international and regional human rights instruments, adopting domestic laws to protect freedom of expression and repealing any laws that unduly limit the right to freedom of expression.
    • Ensuring that states safeguard the rights of individuals who exercise their right to freedom of expression.  This requires ensuring that states make a concerted effort to end impunity for attacks on independent and critical voices.
    • Ensuring that domestic laws guarantee equality before the law and equal protection of the law. That includes protection against discrimination on all grounds recognised under international human rights law.
    • Ensuring that states establish or strengthen the role of independent equality institutions or expand the mandate of national human rights institutions.
    • Ensuring that states adopt a regulatory framework for diverse and pluralistic media, which promotes pluralism and equality.

    Cybercrime

    The term cybercrime has no single uniform or universally accepted definition, and there is an ongoing debate as to what the term entails.  Some of the explanations and definitions advanced include the term covering “a whole slew of criminal activity” including the theft of personal information, fraud, and the dissemination of ransomware. (2) Cybercrimes can also be the online extension of existing offline crimes such as harassment and sexual abuse, or producing, offering to make available, or making available, and distributing racist and xenophobic material.(3)  For ease of reference, cybercrimes may be categorised as follows:(4)

    Offences against the confidentiality, integrity and availability of computer data and systems
    • Illegal access (hacking)
      • Password breaking
      • Distributed denial-of-service (DDoS) attacks
      • Automated attacks and botnets
    • Illegal data acquisition (data espionage)
      • Scanning for unprotected ports
      • Circumventing protection measures
      • Social engineering
      • Phishing
    • Illegal interception
      • Intercepting communications to record the information exchanged
      • Setting up fraudulent access points
    • Data interference
      • Deleting, suppressing or altering computer data
      • Creation of malware and computer viruses

    Content-related offences
    • Sexual exploitation material
    • Child sexual abuse material
    • Commercial sexual exploitation of children
    • Racist and xenophobic speech, hate speech and promotion of violence
    • Disinformation and fake news

    Copyright and trademark-related offences
    • Reproduction of material
    • Exchange of copyright-protected material (songs and movies)
    • Certain file-sharing systems
    • Domain name related offences

    Computer-related offences
    • Computer-related fraud
    • Online auction fraud
    • Advance fee fraud
    • Identity theft
    • Cyberstalking, cyberharassment and cyberbullying

    Cybercrime and cybersecurity are two issues that cannot be separated in an interconnected digital environment.  Cybersecurity, or the management of cybercrime, refers to the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber-environment and organisation and user assets, such as computing devices, applications and telecommunication systems.(5)

    Overview of international instruments

    Currently, there are three prominent international instruments that engage with the topic of cybercrime:(6)

    • The 2001 Convention on Cybercrime (Budapest Convention) is the first international treaty that seeks to address internet and computer crimes.  Its main objective is to pursue a common criminal policy aimed at the protection of society against cybercrime by adopting appropriate legislation and fostering international co-operation.
    • The Additional Protocol to the Convention on Cybercrime concerns the criminalisation of acts of a racist and xenophobic nature committed through computer systems.  As an international legal instrument, the Protocol provides guidance and plays a key role in facilitating harmonisation across different legal regimes on the issue of specific forms of online speech.
    • The 2014 African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) is the second international treaty that deals with cybercrime.  The Malabo Convention, among other things, encourages states to take the necessary legislative and/or regulatory measures to establish criminal offences relating to cybercrimes.  The offences include:
      • Creating, downloading, disseminating or making available, in any form, writings, messages, photographs, drawings or any other representation of ideas or theories of a racist or xenophobic nature through a computer system.
      • Threatening, through a computer system, to commit a criminal offence against a person for the reason that they belong to a group distinguished by race, colour, descent, national or ethnic origin or religion, where such membership serves as a pretext for any of these factors, or against a group of persons distinguished by any of these characteristics.
      • Insulting, through a computer system, persons for the reason that they belong to a group distinguished by race, colour, descent, national or ethnic origin, religion or political opinion, if used as a pretext for any of these factors, or against a group of persons distinguished by any of these characteristics.

    Under the Malabo Convention, states are also urged to enact legislation criminalising acts related to child pornography.  The Malabo Convention does, importantly, identify acts that warrant criminalisation, such as child pornography and racist and xenophobic acts.  However, it raises some concerns when it comes to free speech in the online context.  For instance, the Malabo Convention uses vague language which may be open to abuse by states.  An example is the provision that criminalises the use of insulting language, which is problematic because insulting language covers a significant portion of the speech found on the internet.  This can lead to subjective prosecutions and, ultimately, to criminal convictions.  The Convention also raises concerns in that it expands the search and seizure powers of the state.

    The rise in cybercrime laws

    The UNODC has found that cybercrimes are of particular relevance when discussing the criminalisation of online speech because the laws enacted to regulate cybercrimes can result in the restriction of freedom of expression.  Access Now notes that one of the main concerns about the plethora of laws currently being enacted to regulate cybercrimes is that many of them lack clear definitions and are susceptible to being used to regulate online content and restrict freedom of expression.  This is a growing concern among human rights defenders, many of whom have been subjected to a wave of arrests and convictions in what amounts to an escalating assault on freedom of expression through cybercrime laws.

    Cybercrime laws in Nigeria

    While there may be legitimate aims in enacting these laws, there are serious concerns that many of them are vague and overbroad and are susceptible to being used to restrict freedom of expression.  Amnesty International has reported a growing trend of arrests, detention and torture of journalists and bloggers, as well as pointed attacks on major media houses.  Journalists and bloggers are reportedly being charged with cybercrimes under Nigeria’s Cybercrime Act, which criminalises a substantial range of online expression.

    This situation may be exacerbated if the proposed Protection from Internet Falsehoods and Manipulation Bill is passed into law.  The Bill aims to enable measures to be taken to detect, control and safeguard against coordinated inauthentic behaviour and other misuses of online accounts and bots, to enhance the disclosure of information regarding paid content directed towards a political end, and to sanction offenders.

    The Bill seeks to criminalise, among other things, prohibited statements of fact.  These include false statements of fact and statements that are likely to: be prejudicial to the country’s security, public health, public safety, public tranquillity or finances; prejudice Nigeria’s relations with other countries; influence the outcome of an election or referendum; incite feelings of enmity or hatred towards a person, or ill will between groups of persons; or diminish public confidence in the performance or exercise of any duty, function or power of the government.

    If the Bill is passed, it could further undermine freedom of expression in Nigeria, which is already under threat from the existing cybercrime legislation.  Further, the Bill gives the state wide-ranging powers, which may be susceptible to abuse.(7)

    In relation to the concerns regarding cybercrime legislation, a 2019 Report of the UNSR on FreeEx noted:

    “A surge in legislation and policies aimed at combating cybercrime has also opened the door to punishing and surveilling activists and protesters in many countries around the world. While the role that technology can play in promoting terrorism, inciting violence and manipulating elections is a genuine and serious global concern, such threats are often used as a pretext to push back against the new digital civil society.”

    In July 2019, the United Nations General Assembly presented a Draft Resolution on countering the use of information and communications technologies for criminal purposes.

    Concerns from CSOs

    CSOs were highly critical of the resolution and called for delegations to vote against it.  In an open letter to the UN General Assembly, the following concerns were raised:

    • The “use of information and communications technologies for criminal purposes” is not defined in the resolution.  The lack of specificity is not just a concern from an accuracy perspective; keeping the term undefined opens the door to criminalising ordinary online behaviour that is protected under international human rights law.
    • Criminalising ordinary online activities of individuals and organisations through the application of cybercrime laws constitutes a growing trend in many countries around the world.  While legislation aimed at addressing cybercrime can be necessary and reinforce democratic institutions, when misused, cybercrime laws can create a chilling effect and hinder people’s ability to use the internet to exercise their rights online and offline.
    • The resolution goes far beyond what the Budapest Convention allows for regarding cross-border access to data, including by limiting the ability of a signatory state to refuse to provide access to requested data.
    • Building on and improving existing instruments is more desirable and practical than diverting already scarce resources into the pursuit of a new international framework, which is likely to stretch over many years and unlikely to result in consensus.
    • The establishment of an ad hoc intergovernmental committee of experts to address the issue of cybercrime would exclude key stakeholders who bring valuable expertise and perspectives, both in terms of effectively countering the use of ICTs for criminal purposes and to ensure that such efforts do not undermine the use of ICTs for the enjoyment of human rights and social and economic development.

    These critiques from civil society can serve as useful guidelines when lawyers and activists are engaging with cybercrime laws domestically.

    Despite these concerns, the resolution was adopted and published in January 2020.  Through the resolution, an open-ended ad hoc intergovernmental committee of experts, representative of all regions, will be established to elaborate a comprehensive international convention on countering the use of ICTs for criminal purposes, taking into full consideration existing international instruments and efforts at the national, regional and international levels on combating the use of ICTs for criminal purposes.

    Lawyers and activists should monitor further developments in relation to this and, where possible, engage with relevant stakeholders in order to positively influence future developments and decisions.

    Fake news and disinformation

    Fake news, simply defined, refers to news items that are intentionally and verifiably false and which seek to mislead users.(8)  Disinformation includes statements which are known, or reasonably should be known, to be false and which seek to mislead the public and, in turn, interfere with and inhibit the public’s ability to seek, receive and impart information.(9)  In 2018, the High-Level Expert Group on Fake News and Online Disinformation understood disinformation to mean:

    “all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. It does not cover issues arising from the creation and dissemination online of illegal content.  Nor does it cover other forms of deliberate but not misleading distortions of facts, such as satire and parody.”

    The High-Level Expert Group noted two reasons for avoiding the use of the term “fake news”:

    • The term is inadequate to capture the complex problem of disinformation, which involves content that blends fabricated information with facts.
    • The term is misleading as it has been appropriated by some politicians and their supporters to dismiss coverage that they find disagreeable and has thus become a weapon with which powerful actors can interfere in the circulation of information and attack and undermine independent news media.

    The Cambridge Analytica scandal around the 2016 US presidential election brought to light the issue of “fake news” and the ease with which disinformation can be disseminated online.  In response to this and the growing trend of disinformation, a number of states have enacted legislation criminalising it.  Disinformation continues to increase in speed and magnitude and is causing demonstrable and significant public harm.  The 2017 Joint Declaration on Freedom of Expression and ‘Fake News’, Disinformation and Propaganda noted that countering these issues poses complex challenges that could result in censorship and the suppression of critical thinking.  The 2018 UNESCO handbook on journalism, fake news & disinformation notes:

    “Disinformation and propaganda challenge access to information and the overall public trust in media and government institutions”, but a considered approach is required for addressing it “because blunt forms of action, such as website blocking or specific removals, risk serious interference with freedom of expression”.

    Addressing fake news

    International bodies, states and organisations have grappled with various responses to the complexities of disinformation.  However, some of the resulting legislation does not strike an appropriate balance between combating fake news and protecting the right to freedom of expression.  Some examples include:

    • Malaysia: In 2018, the Malaysian government responded to disinformation by enacting the Anti-Fake News Act, which attaches criminal liability to persons who knowingly create, offer, publish, print, distribute, circulate or disseminate fake news.  The Act defined “fake news” as including “any news, information, data and reports, which is or are wholly or partly false, whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas.”  However, the existence of the Act was short-lived.  It was repealed by the Anti-Fake News (Repeal) Act 825 of 2020.
    • Cameroon: The Penal Code in Cameroon criminalises the sending out or propagating of false information.  Section 113 imposes a penalty of imprisonment of between three months and three years and a fine of between CFAF 100 000 (approximately USD 172) and CFAF 2 000 000 (approximately USD 3 400) on persons found guilty of this offence.  The Committee to Protect Journalists (CPJ) has noted with concern the arrest and detention of journalists under this provision, in particular a journalist who was sent to a maximum-security prison on charges of defamation and spreading false news.
    • Russia: In 2019, the State Duma (the lower house of the Russian Federal Assembly) passed amendments to the law on Information, Information Technologies and Protection of Information and to the Code of Administrative Offenses, both aimed at countering “fake news”.  ARTICLE 19 explains that these amended laws allow the authorities in Russia to block websites that they consider to be publishing disinformation.  Websites are also liable for insulting the Russian authorities.  The Moscow Times reported that “online news outlets and users that spread ‘fake news’ will face fines of up to 1.5 million rubles (USD 20 000) for repeat offenses.  Insulting state symbols and the authorities, including Vladimir Putin, will carry a fine of up to 300 000 rubles (USD 4 000) and 15 days in jail for repeat offences.”

    The criminalisation of the dissemination of fake news is likely to increase and, if done with sinister motives, may do significant violence to freedom of expression.  Such developments should be closely monitored and challenged where necessary.  Fortunately, criminalisation is not the only option for addressing the rise of disinformation.  International bodies, states and CSOs are continually presenting new and innovative ways to address disinformation.  Some notable contributions from international bodies include:

    • European Union: In 2018, the European Union published its Code of Practice on Disinformation.  The purpose of the Code is to identify the actions that signatories could put in place in order to address the challenges related to disinformation.  The Code discusses the need for safeguards against disinformation, the implementation of reasonable policies, effective measures to close discernible fake accounts, and the improvement of the scrutiny of advertisement placements.  The Code identifies best practices that signatories – such as Facebook, Google, Twitter and Mozilla – should apply when implementing the Code’s commitments.

    At a state level, there have also been promising developments.  In 2019, the US Library of Congress produced a report on Initiatives to Counter Fake News in Selected Countries.  Some positive initiatives include:

    • Argentina: The Commission for the Verification of Fake News was established.  The Commission is envisaged to form part of the National Election Chamber, to assist with overcoming issues of disinformation during elections.
    • Sweden: Bamse the Bear, a popular cartoon character in Sweden, has adopted a new role in teaching children about the dangers of fake news by illustrating what happens to the bear’s super-strength when false rumours are circulated about him.
    • Kenya: The United States Embassy in Kenya started a media literacy campaign known as “YALI Checks: Stop.Reflect.Verify” to counter the spread of false information in Kenya.  The campaign relies on an email series, an online quiz, blog posts, online chats, public outreach, educational videos, and an online pledge to engage with the Kenya chapter of the Young African Leaders Initiative (YALI) about disinformation.
    • Finland: Finland has been lauded for winning the war on disinformation due to its initiatives aimed at teaching residents, students, journalists and politicians how to counter false information.  The initiatives include courses at community colleges and the introduction of lessons in schools about disinformation.

    Suggested standards on addressing disinformation

    In the Joint Declaration on Freedom of Expression and ‘Fake News’, Disinformation and Propaganda, the following standards are suggested:

    • General prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, and should be abolished.
    • Criminal defamation laws are unduly restrictive and should be abolished.  Civil law rules on liability for false and defamatory statements are legitimate only if defendants are given a full opportunity and fail to prove the truth of those statements and also benefit from other defences, such as fair comment.
    • State actors should not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda).
    • State actors should, in accordance with their domestic and international legal obligations and their public duties, take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment.

    Determining limitations on freedom of expression

    Global Partners Digital, in an attempt to determine how to tackle disinformation in a way that respects human rights, proposes an information-gathering approach to determine if disinformation amounts to a justifiable limitation of freedom of expression.  Some of the suggested questions include:

    • Is the basis for any restrictions on what information individuals can search for, receive or impart set out in law?
    • Is there clarity over the precise scope of the law so that individuals will know what is and is not restricted?
    • Is speech restricted only where it is in pursuance of a legitimate aim?
    • Are there exceptions or defences where the individual reasonably believed the information to be true?
    • Are determinations made by an independent and impartial judicial authority?
    • Are responses or sanctions proportionate?
    • What is disinformation?
    • Are intermediaries liable for third party content?

    Fake news in the courts

    In the African context, the Court of Justice of the Economic Community of West African States (ECOWAS Court) and the East African Court of Justice (EACJ) have both delivered landmark rulings on cases relating to the criminalisation of fake news.

    In 2018, the ECOWAS Court decided the Federation of African Journalists and Others v The Republic of The Gambia matter, in which it considered offences of sedition, false news and criminal defamation in The Gambia’s Criminal Code.  Several journalists were arrested on charges of spreading false news.  They argued that their rights to freedom of expression had been violated and sought a declaration from the Court that certain provisions of The Gambia’s Criminal Code were inconsistent with regional and international law.  The ECOWAS Court found that the criminal laws of the Gambia imposed criminal sanctions that are disproportionate and not necessary in a democratic society where freedom of speech is a guaranteed right and ordered that the legislation be reviewed.  The Criminal Code was found to be broad and capable of casting an:

    “[E]xcessive burden upon the applicants in particular and all those who would exercise their right of free speech and violates the enshrined rights to freedom of speech and expression under Article 9 of the African Charter, Article 19 of the ICCPR and Article 19 of UDHR”.

    More recent developments in respect of the criminalisation of fake news came from the EACJ in the matter of Media Council of Tanzania and Others v Attorney-General of the United Republic of Tanzania.  In this case, the applicants challenged various provisions of the Tanzanian Media Services Act on the basis that they were an unjustified restriction on the right to freedom of expression.  The applicants argued that “the Act in its current form is an unjustified restriction on the freedom of expression which is a cornerstone of the principles of democracy, the rule of law, accountability, transparency and good governance which [Tanzania] has committed to abide by, through the Treaty.”  They argued that the Act violated freedom of expression by restricting the types of news or content that may be published without reasonable justification, criminalising the publication of false news and rumours, criminalising seditious statements, and vesting the Minister with absolute power to prohibit the importation of publications or to sanction media content.  The respondent argued that all the provisions were just and did not violate the right to freedom of expression and associated rights.

    The EACJ held that although the sections were set out in law, their contents were vague, unclear and imprecise.  It noted that the use of the word “undermine” in the impugned provision, which formed the basis of the offence, was too vague to give assurance to a journalist or other person who sought to regulate their conduct within the law.  The EACJ further noted that the words “impede”, “hate speech”, “unwanted invasion”, “infringe lawful commercial interests”, “hinder or cause substantial harm”, “significantly undermines” and “damage the information holder’s position” are too broad or vague.

    It further stated that it was persuaded by the applicants’ submissions that section 52(1) of the Act failed the test of clarity and certainty.  In this regard, it noted that the definition of sedition hinged on the possible and potentially subjective reactions of the audiences to whom the publication was made, making it impossible for a journalist or other individual to predict and thus plan their actions.  In conclusion, the EACJ found in favour of the applicants and declared, among other things, that all of the impugned provisions were in violation of articles 6(d) and 7(2) of the Treaty for the Establishment of the East African Community (EAC Treaty), and directed the United Republic of Tanzania to take such measures as are necessary to bring the Media Services Act into compliance with the Treaty.

    Both of these landmark judgments will have a far-reaching impact on other similar laws across the African region and will go a long way in ensuring that any responses to disinformation are based on international freedom of expression standards.

    Footnotes

    1. For more on specific types of speech-related offences, see Media Defence above n 3 at 48-61.
    2. Microsoft, ‘Cybercrime and freedom of speech – a counterproductive entanglement’ (2017) (accessible at https://www.microsoft.com/security/blog/2017/06/14/cybercrime-and-freedom-of-speech-a-counterproductive-entanglement/).
    3. See UNODC, ‘Module 2: General Types of Cyber Crime’, E4J University Module Series: Cybercrime (2019) (accessible at https://www.unodc.org/e4j/en/cybercrime/module-2/key-issues/intro.html) and UNODC, ‘Module 3: Legal Frameworks and Human Rights’, E4J University Module Series: Cybercrime (2019) (accessible at https://www.unodc.org/e4j/en/cybercrime/module-3/key-issues/international-human-rights-and-cybercrime-law.html).
    4. Id.  See further ITU, ‘Understanding cybercrime: Phenomena, challenges and legal response’ (2012) (accessible at http://www.itu.int/ITU-D/cyb/cybersecurity/docs/Cybercrime%20legislation%20EV6.pdf).
    5. ITU, ‘Definition of Cybersecurity’ (accessible at https://www.itu.int/en/ITU-T/studygroups/com17/Pages/cybersecurity.aspx).
    6. Global Action on Cybercrime Extended, ‘Comparative analysis of the Malabo Convention of the African Union and the Budapest Convention on Cybercrime’ (2016) (accessible at https://rm.coe.int/16806bf0f8).
    7. For further commentary on trends in Africa, see CIPESA, ‘Why are African Governments Criminalising Online Speech? Because They Fear Its Power’ (2018) (accessible at https://cipesa.org/2018/10/why-are-african-governments-criminalising-online-speech-because-they-fear-its-power/).
    8. Media Defence above n 3.
    9. Access Now, Civil Liberties Union for Europe and European Digital Rights, ‘Informing the disinformation debate’ (2018) (accessible at https://dq4n3btxmr8c9.cloudfront.net/files/2r7-0S/online_disinformation.pdf).