Keynote statement by Irene Khan, UN Special Rapporteur on Freedom of Opinion and Expression, at the Asia Pacific Expert Meeting on Disinformation Regulation and the Free Flow of Information, hosted by the World Justice Project in partnership with the Malaysian Human Rights Commission (SUHAKAM) and LexisNexis on 30 September 2023 in Kuala Lumpur, Malaysia.

Digital technology and social media have enhanced our ability to access and share information and ideas, to debate and organize ourselves, and to build support networks. They have given space to many who are marginalized and denied a voice in the offline world. But along with the boost to freedom of expression has come a dark wave of disinformation, misinformation and hate speech. Generative AI is taking those threats to distressing new heights of deceit, with dangerous consequences.

After I took up my mandate on freedom of opinion and expression, my first report to the Human Rights Council in 2021 focused on disinformation. Soon after that, two consensus-based resolutions were adopted by the UN General Assembly (led by the Organisation of Islamic Cooperation) and by the Human Rights Council (led by a European state). Both have drawn on the findings and conclusions of my report. They are the first UN resolutions ever to be devoted exclusively to the issue of combating disinformation while upholding freedom of expression, and I believe a good start has been made. The United Nations has embarked on a broader process to agree on a Global Digital Compact and plans to set up a Commission to look into the regulation of AI, the objective being to make the Internet safe and accessible for all. In the United Nations, there is widespread understanding of the cross-border nature of the issues, but at the same time there is a realisation that reaching consensus is not going to be easy.

At the national level, responses have been varied, ranging across a spectrum of regulation and co-regulation to little or no regulation, or bad regulation. There is plenty of room for this initiative to contribute to building a consensus on how we can enjoy the benefits of technology within a human rights and rule of law framework.

The negative impact of disinformation is palpable.

Disinformation is a serious threat to vulnerable groups and individuals, to institutions, communities, human rights, democracy and sustainable development.

Refugees, migrants and minorities have been frequent targets of disinformation, including when they are most vulnerable, for example, during armed conflicts. The most notorious example is the “Tatmadaw true news information team” in Myanmar, which posted doctored and mislabelled photographs online relating to the Rohingya crisis that led to genocide in Rakhine State. Disinformation can turn into advocacy of hatred that constitutes incitement to violence, discrimination and hostility. It can have a deadly effect in unstable and conflict situations.

In two weeks’ time I will present a report to the UN General Assembly on gendered disinformation. My report shows that patriarchal and misogynistic norms and stereotypes which persist in the offline world are being transferred to online spaces, with highly damaging disinformation against women with public profiles, such as female politicians, journalists, human rights defenders and activists. The ultimate aim is to intimidate and drive them off the platforms and out of public life, undermining human rights, diversity and, ultimately, democracy.

“Red tagging” or smear campaigns falsely accusing some journalists and social activists of being affiliated with Communist groups have exposed them to threats of violence and attacks.

Disinformation is often used against independent journalists to discredit them, one of the best-known cases being that of Maria Ressa, the Nobel Laureate. In long-established democracies, political leaders publicly denounce media outlets and journalists as “enemies of the people”, eroding public trust in media and endangering the safety of journalists. In many countries journalists have been jailed for producing “fake news” when they have sought to criticize government policies.

Disinformation has targeted a wide range of human rights – from the right to vote to the right to health. Public trust in democratic processes has been undermined because of well-orchestrated disinformation campaigns attacking the integrity of elections in some countries. The COVID-19 pandemic was mired in disinformation and misinformation – from the origin of the virus to the effectiveness of vaccines and the efficiency of government responses. On climate change, scientific information has been discredited, and environmental activists have been attacked.

Not surprisingly, public trust in the integrity of information is at an all-time low. Seeing is no longer believing in the age of generative AI. People do not know what to believe, what is true and what is false. That heightens the polarization of societies that we see as a common feature in many countries around the world. 

It is imperative to act against disinformation and act urgently, but it is vital to ensure that the action is effective. That is challenging for many reasons, but let me highlight two issues:

  • Firstly, disinformation is an intrinsically political and contested concept.

There is no agreed international definition of disinformation, but it is widely understood as false or manipulated information disseminated with intent to cause harm. Falsity, malign intent to cause harm and coordinated amplification are the three key elements.

But what is false and what is true?

Truthful information can be labelled as “fake news”. For instance, the reports of Independent Experts are sometimes dismissed by Member States as “disinformation” when they criticize the human rights records of these States.  

The same information can be instrumentalized by two actors with diametrically opposite objectives. Someone who is regarded as a human rights defender can be considered as a terrorist or a traitor by another. Do we want the State to be the arbiter of truth under the circumstances?

What about parody, satire, opinions, beliefs, uncertain knowledge, evolving science: are they true or false? A binary lens of true and false does not help.

The right to freedom of expression applies to all kinds of information and ideas, including those that may shock, offend or disturb, and irrespective of true or false content. Under international law, we have the right to expound ill-founded opinions, even falsehoods. What we do not have is the right to harm others’ rights or reputations. International law permits the restriction of information that is harmful to the rights or reputation of others, or to national security, public order, public health or morals, but such restrictions must be necessary, proportionate – the least intrusive measure – and strictly and directly limited to addressing the harm they are intended to prevent.

  • Secondly, there is a complex web of actors and vectors who spread disinformation for multiple motives: political, ideological, commercial or criminal.

Digital technology has made the manipulation of information a huge money-making business for companies and private actors. Multiple actors – States, political parties, businesses and unscrupulous media outlets, supported by troll farms and public relations companies – have made it a highly sophisticated, lucrative business.

Furthermore, the false messages that these instigators create and spread are then picked up wittingly and unwittingly by traditional media, celebrities and ordinary users – through peer-to-peer and friend-to-friend networks and intricate online and offline channels.

When false content spread online with intent to harm is picked up by innocent third parties with no such intent and passed on to others, it complicates the idea of “intent to cause harm”. Intentionally or not, the harm occurs.

Fighting disinformation becomes particularly difficult when the protector becomes the predator. By that I mean situations in which the State itself is the source or sponsor of disinformation, either in its own country or abroad. Unfortunately, we see many examples of that around the world.

State-sponsored disinformation is extremely damaging in its impact. Not only does the State have significant resources at its disposal to create, support and spread disinformation, it also has the capacity to shut down alternative sources of information that challenge its false narratives.

Conceptual challenges of defining disinformation, malicious actors, and commercial and political interests all clearly play a role in disinformation, but so do some other important factors: a struggling media sector, challenged by digital transformation, competition from online platforms and pressure from governments; the absence of robust public information regimes; low levels of digital literacy among the general public; and growing sections of populations who are aggrieved and frustrated, who feel politically disenfranchised, left behind by globalization, market failures, decades of economic deprivation and social inequalities. Evidence indicates that these aggrieved groups and individuals are more susceptible to disinformation and political manipulation.

In the face of this range of challenges, the efforts by States and companies to address disinformation are woefully inadequate – sometimes counterproductive.

I have commented on many laws targeting disinformation in the Asian region. I have also commented on similar legislation from other parts of the world. I have received submissions from hundreds of NGOs and legal experts criticizing these laws, raising credible cases of concrete harm, harassment and wrongful prosecution of individuals under these laws.

I can say from my global experience, including from this region, that disproportionate measures such as shutting down or disrupting the Internet, or vague, overly broad “false news” or cyber security laws that restrict freedom of expression beyond what is lawful under international human rights standards do little to combat disinformation, misinformation or hate speech, and much to suppress media freedom, legitimate political dissent and the work of human rights defenders.

By discouraging the flow of diverse sources of information, such laws hamper fact-finding or factual counter-speech, feed rumours, foster fear and undermine trust in public institutions. By compelling social media platforms to police speech, they create a risk that companies will zealously over-remove material and undermine free speech.

States should focus their attention and resources on how best to strengthen public trust in the integrity of information and institutions, how to protect minorities, women, journalists and other groups at risk of disinformation, hate and violence, how to empower people and build social resilience against disinformation and how to hold the platforms accountable without undermining freedom of opinion and expression.

So, what can be done by States, companies and civil society? Let me highlight five key points.

First, disinformation is an attack on human rights, and strategies to fight it, therefore, must be grounded firmly in human rights and the rule of law, especially respect for freedom of expression.

All restrictions on freedom of expression – including those introduced to curb disinformation – must respect international standards of legality, necessity and proportionality, and be limited strictly to the legitimate aims set out in the International Covenant on Civil and Political Rights.

Efforts to eradicate online disinformation should not be used as a pretext by governments to restrict freedom of expression beyond what is permitted under international law.  Freedom of expression is a fundamental human right, essential for economic and social development as well as democracy.

I strongly caution against the prohibition or criminalization of disinformation. It is often counterproductive, and misused to silence critics, as we see in so many countries around the world. 

Criminal defamation is a relic of our colonial past. It has no place in modern, democratic societies.

Excessively harsh and disproportionate punishment can have a chilling effect on freedom of expression. Unfettered discretion must not be given to executive authorities without judicial oversight, given the possibility of abuse and arbitrary decision-making.

State regulation of the digital sector should be “smart”, not seeking to censor content but ensuring that company policies are in line with human rights standards, that companies are undertaking human rights due diligence and assessing and mitigating the negative human rights impact of their business model, policies, products and operations.  

Secondly, tackling disinformation requires a multi-faceted strategy.

Good laws based on human rights standards and the rule of law are a vital tool in the toolbox to tackle disinformation, but they are not sufficient on their own.

A multi-faceted strategy should include measures to ensure access to reliable public information, promote independent, free and diverse media, fact-checking, digital and media literacy, and community-based awareness programs.

Protecting and promoting independent, free, diverse and pluralistic media, and ensuring the safety of journalists are crucial. Independent media plays an important role in fact-checking, countering disinformation and State propaganda. 

Fact-checking organisations can also contribute to debunking false information but in order to engender public trust they must be independent of the State.

States themselves have an obligation to proactively provide verifiable, reliable information. Transparency and robust public information are critical for engendering public trust in information integrity.

Human rights commissions have a role to play in putting forward credible information and serving as an institution that builds public trust. To give one example of good practice, in Indonesia, “security guarantees” from the national human rights commission, alongside support from human rights organizations, have helped to counter disinformation about LGBTQ+ communities.                      

Third, social media companies must do more.

Companies should bring their business models, policies and activities into line with the UN Guiding Principles on Business and Human Rights, including by undertaking human rights due diligence and impact assessments of their business policies and practices.

Platforms need to move away from a “one-size-fits-all” approach and invest the resources to better understand local contexts. They need to identify the specific factors that increase the risks of disinformation in different contexts and act to minimize them. That also means looking at their business models, practices and operations.

Platforms also need to be more transparent about their own practices and about the government requests they receive to take down information; they should consider those requests in light of their human rights responsibilities and disclose them in their periodic transparency reports.

Fourth, users must be empowered to access diverse sources of information.

The Internet is not equally available or accessible to all. That is deepening existing inequalities, and creating new inequities along lines of gender, geography, ethnicity, income and digital literacy. Lack of meaningful access to the Internet reduces the ability of people to access diverse sources of information and makes them more vulnerable to disinformation. 

States and companies must reinforce their efforts to close digital divides, data gaps, and other barriers to individuals’ ability to exercise their right to information. 

Investing in digital, media and information literacy should be a top priority and should become part of National Education Plans and curricula. It is fundamental for empowering users and restoring public trust in our information society. Civil society and community based organizations should be given the resources to promote digital, media and information literacy and build awareness and support structures to develop social resilience against disinformation and misinformation and counter hate speech.

Fifth, strategies to fight disinformation should be multistakeholder in orientation. The Internet by its very nature is multistakeholder, involving companies, governments, State institutions, civil society and ordinary citizens. As providers, regulators and users, they are all relevant in fighting disinformation.

Efforts to fight disinformation should be the product of a deliberative process that includes consultations with all stakeholders, and I emphasize in particular civil society and affected communities.

Let me end by reiterating that the Internet is not a human rights-free zone. Rights held offline must be protected online. Responses to disinformation should be grounded firmly in international human rights standards and guidelines.

The right to freedom of opinion and expression is not part of the problem – it is the objective of fighting disinformation. It is the means with which to fight disinformation effectively. It is the key to the solution.