
The crisis of the twenty-six words that founded the internet

By Lucas Anastácio Mourão, partner at Flora, Matheus & Mangabeira Sociedade de Advogados.

The legal solutions devised to combat problems, as they arise with each new period and context, often seem definitive at the time of their conception. Yet as technologies and behaviours evolve over time, so must our legal responses change and adapt.

The advent of mass fake news through social networks – and its impacts both on private individuals (notably damage to reputation) and on democracy (such as the subversion of electoral processes through the manipulation of behaviour) – has rekindled the discussion about the roles and responsibilities of social network providers. What’s more, the regulation around these technology and media companies, particularly the extent of their responsibility for content produced by their users, needs to be reviewed. We’re beginning to call into question the founding principles of the internet, established in 1996 by Section 230 of the Communications Decency Act. Dubbed “the twenty-six words that founded the internet”, Section 230 was a regulatory paradigm that granted providers immunity from liability for the content produced by their users. This regulatory concept was eventually adopted internationally as the standard over the following decades.

In the 1990s, with the arrival of the internet as a far-reaching technology, companies emerged with various online services. These included virtual forum sites, comment sections and other tools for publishing content produced directly by the users themselves. Naturally, this scenario resulted in a discussion about the regulation of providers (or intermediaries) and content produced by third parties on these platforms, which generated legal disputes.

Among these disputes was the 1991 “Cubby vs. CompuServe” case, which involved defamation against a person on CompuServe’s forum. Even after being notified by the offended party, the person who published the allegedly defamatory content didn’t remove it. The understanding that prevailed at the time was that, since the company was a distributor of content produced by third parties and had no role in editing the publication identified as illegal, it would not be held responsible. There was also another relevant case: Stratton Oakmont, Inc. vs. Prodigy. Like Cubby vs. CompuServe, it entailed defamation committed on a virtual platform (administered by Prodigy). In this case, however, the company performed a certain level of content moderation: it posted guidelines for its users, enforced those guidelines, and removed content deemed offensive. The court therefore held Prodigy responsible, as a publisher, for the content created by its users.

In light of this scenario, and heavily influenced by the above cases, in 1996 the United States enacted Section 230 of the Communications Decency Act (CDA). Subsection (c)(1) of the legislation created, as a general rule, immunity for intermediary companies from liability for third-party content, allowing their users to create online content such as comments and publications. It provides the following:

 No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This provision became known as “the twenty-six words that founded the internet”. The standard is considered essential for the development of the sector, both from a technological and an economic point of view.

In other words, Section 230 of the CDA made clear that companies operating in the early days of the internet would be immune from liability for content produced by third parties, even when that content was published on their own digital platforms.

However, when the concept of intermediary immunity was conceived in 1996, social networks hadn’t yet built the economic, social and political power we see today. The immense problem of fake news and hate speech, in the form and at the scale we know now, simply didn’t exist. Nor did social networks have their current reach, which has allowed fake news to achieve its global spread and its current prominence in public debate.

There have since been technological advances, both in the development of new platforms for social interaction and in the global internet infrastructure itself. This has led to some social network providers forming an oligopoly, in that they’ve achieved a planetary reach, with billions of users, and transformed the way we communicate with, relate to, and inform each other. It’s safe to say, therefore, that these virtual platforms have become major public spaces in which we lead our everyday lives. We use them to exercise our freedom of expression, to consume information, develop business relationships, as well as to shop, find friends and search for jobs.

Given the ease with which we can communicate and meet on these platforms, it’s only natural that they came to be used for political, cultural and social disputes. And there’s nothing wrong with this – in fact, it’s arguably even desirable – considering that the free exchange of ideas is essential to healthy democracies. However, these technological advances have given rise to a phenomenon that today threatens democracies, calls the fairness of electoral processes into question, destroys reputations, and even causes deaths: fake news.

The objective of fake news is to illegitimately manipulate the feelings, thoughts and behaviour of groups, or even entire populations. We’ve seen a number of paradigmatic cases in recent years, such as the US presidential elections in 2016, the general elections in Brazil in 2018, and the Brexit referendum in the UK in 2016. This is in addition to the numerous cases of fake news produced on an industrial scale to destroy reputations (e.g. those of former federal deputy Jean Wyllys and councilwoman Marielle Franco in Brazil), the dissemination of conspiracy theories such as QAnon, and fake news about the COVID-19 pandemic.

It was in this context, with online content causing these complex and profound problems, that people first began to contest the immunity model. Until then it had seemed unquestionable, and was considered the regulatory foundation that guaranteed the existence and technological evolution of the worldwide computer network, and the human relationships established within it. The debate, which was thought to have ended in the 1990s, has returned. Today, after the global population has experienced the damage of fake news and hate speech through social media, we are again discussing whether, and to what extent, providers are responsible for the content produced by their users and for moderating this content.

In September 2020, the US Department of Justice (DoJ) issued recommendations to reduce the degree of immunity granted to social media providers. Amongst other reasons, the DoJ cited the “proliferation of illegal and harmful online content that leaves victims without any civil recourse.” In October of the same year, the US Senate, through its Committee on Commerce, held a public hearing with the CEOs of the main social media providers, such as Facebook, Twitter and Google. The purpose of the hearing was to question the terms, effects, adjustments, inadequacies, and possible historical lag of the twenty-six words that founded the internet.

Years earlier, in 2017, the German Netzwerkdurchsetzungsgesetz (NetzDG) came into force. Roughly translated as the ‘Network Enforcement Act’, one of its objectives is to shift more of the responsibility for content produced by users onto providers. This standard has, in fact, become an international reference point, stimulating the creation of, and proposals for, similar laws in a number of countries, such as France, the UK, Australia, India, Singapore, Venezuela, Russia, and Brazil.

It’s becoming apparent, therefore, that the regulatory consensus formed by Section 230 of the CDA in 1996 and exported to most nations is undergoing profound reformulation. This is due not only to the emergence of illegal practices committed within the scope of social networks, but also to the perception that providers have done the bare minimum to address the problem. Many countries are taking the debate further, proposing or creating new laws that, to a greater or lesser degree, review the level of responsibility that should be assigned to the so-called “intermediaries” or providers. It’s therefore possible to observe the emergence of a “post-CDA” regulatory framework that revisits the immunity enjoyed by providers for content produced by their users.

All of this leads to the following question: is there a model of accountability that not only protects individual and collective rights, such as freedom of expression, but that also addresses the problem of fake news and hate speech without hindering technological development? So far, the model inaugurated by Section 230 of the CDA has fallen short, so we may need to overhaul this previously venerated model. There needs to be an open discussion about what should take its place, and what adjustments need to be made.

Although there is no agreed model to be implemented, there are already procedures and technologies requiring more responsibility from media and technology companies. For example, providers may now need to notify all users who potentially consumed fake news, to alert them to its falsified nature and re-establish the truth. It would also be possible to demand that, once the illegality of certain content is recognised, providers should be required to carry out an active search to remove identical content (therefore already defined in court as illegal), regardless of new legal demands.

The biggest problem we currently face is that there is as yet no answer to what the proper model for apportioning responsibility might be. The undeniable fact, however, is that providers do less than they could and profit more than they should from fake news. Therefore, it is up to us, as a society – especially those holding public mandates – to address this problem and find solutions based on the pillars of freedom of expression and its ethical and legal limits.

 

If you’re interested in how the internet is shaping freedom of expression today, take a look at our article on internet shutdowns, our article on fake news, and our modules on digital rights, disinformation and restricting internet access on the Training Hub.
