We need to escape the tunnel vision that comes with the limited triad of misinformation, disinformation, and malinformation.
MIS-, DIS-, AND MALINFORMATION
In casual use, the word misinformation can mean many different things. The phrase fake news became ubiquitous in 2016 but was quickly politicized to mean “anything one disagrees with.”
To avoid confusion, many who work on such problems prefer the umbrella term information disorder.
It’s hard to find any exact definition. We all sense the crisis of credibility in the information that AI algorithms are pushing over the chaos of our social media, news feeds, search results, recommendations, and chatbots. But formulating the problem is trickier than it appears at first blush.
Nearly all sources more or less follow Claire Wardle and Hossein Derakhshan, who coined and used the term information disorder without explicitly defining it. What they did instead was to describe what they saw as its three types:
1. Misinformation is when false information is shared, but no harm is meant.
2. Disinformation is when false information is knowingly shared to cause harm.
3. Malinformation is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.1
These three types of information disorder are often depicted using a Venn diagram drawn by Wardle and Derakhshan.2
[Figure: Wardle and Derakhshan’s Venn diagram, with two overlapping circles labeled “Falseness” and “Intent to harm.”
Misinformation (falseness alone): unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or satire taken seriously.
Disinformation (the overlap): fabricated or deliberately manipulated audio/visual content; intentionally created conspiracy theories or rumours.
Malinformation (intent to harm alone): deliberate publication of private information for personal or corporate rather than public interest, such as revenge porn; deliberate change of context, date, or time of genuine content.]
Wardle and Derakhshan’s triadic view of information disorder has spread widely and now appears in a variety of formats.
The following page spread collects examples of infographics, all parroting the same characterization of information disorder. Among them, the Bipartisan Policy Center’s notes, “It is important to differentiate between the three types, which are distinguished by truth and intent, in order to properly identify, interpret, and combat false or harmful information without undermining free speech and First Amendment rights.”3
Around the world, politicians, think tanks, nonprofits, regulatory agencies, and governments are issuing rallying cries and funding programs to tackle the misinformation, disinformation, and malinformation triad. The US National Institutes of Health, the United Nations, and the Council of Europe focus on the same three types of information disorder.
Let’s think one step further.
So far we can see that portrayals of information disorder focus heavily on detecting falsehood and malintent.
For example, the Aspen Institute frames its Commission on Information Disorder with the opening statement: “State and non-state actors are undermining trust and sowing discord in civil society and modern democratic institutions by spreading, or encouraging the sharing of, false information across traditional and non-traditional media platforms.”4
The emphasis is on bad actors with malicious intent and on false information that contradicts our belief system.
Let’s unpack this portrayal.
NEGINFORMATION
Bad actors and false information are actually two independent factors.
When we think about it, a Venn diagram is kind of an odd way to depict two independent factors.
A more natural way to depict them is a two-by-two grid.
We can use the horizontal axis to distinguish whether there’s misleading information being maliciously propagated by bad actors or misleading information being negligently propagated by decent ordinary folk.
And we can use the vertical axis to distinguish whether the misleading information is an outright falsehood or a partial truth that selectively omits crucial context.
Here’s how misinformation, disinformation, and malinformation would fall in that two-by-two grid: disinformation is an outright falsehood maliciously propagated by bad actors; misinformation is an outright falsehood negligently propagated by decent ordinary folk; and malinformation is a partial truth maliciously propagated by bad actors. That leaves one quadrant the triad never names: partial truths negligently propagated by decent ordinary folk, the neginformation of this section’s title.
Now picture a cropped photo of a couple of folks in gray firing bullets at a crowd. The scene comes in lots of variations: the crowd and the folks in gray can be criminals, police, terrorists, gangs, protestors, counterprotestors, political groups, or just brawlers.
Imagine that the crowd is a group you dislike and the folks in gray are from a group you sympathize with (if you haven’t already, as many of you inevitably have).
Try hard to visualize the full picture. What you visualize will depend on where you live, what your sociopolitical leanings are, what recent events have been on your mind, and so on.
Do you find yourself searching just a little harder for justification for the folks in gray firing bullets at the crowd?
Now imagine the reverse—that the folks in gray are from a group you dislike and the crowd is a group you sympathize with.
Do you find yourself searching just a little harder to vilify the folks in gray for firing bullets at the crowd?
Remember fundamental attribution error from chapter 8? It’s that cognitive bias kicking in unconsciously.
When AIs decide to propagate neginformation, they manipulate our unconscious by exploiting its Achilles’ heel.
The selective omission of crucial context opens the door for our many cognitive biases to kick in—in ways that can be highly predictable to AIs.
Confirmation biases let AIs drive us to interpret, process, and remember information in ways that confirm what we already believe:
• Selective perception—AIs can predict how our liking or disliking of the crowd or the gray-clad folks triggers expectations that will influence how we perceive the different context omissions an AI can choose from.
• Semmelweis reflex—If we already operate in a paradigm of liking or disliking the crowd or the gray-clad folks, then AIs can choose partial truths that omit new evidence and fit our likes or dislikes, so that we won’t reflexively reject those partial truths.
• Subjective validation—When our (dis)like for some group is bound up in our own self-esteem or identity (say, because we identify more closely with the crowd or because we believe the gray-clad folks are opposed to our own tribe), then AIs can choose partial truths they predict we’ll see as compatible with our identity.
Anchoring and belief perseverance make it hard for us to change our minds, which AIs can depend on:
• Conservatism bias—AIs can predict that if we’re first shown a misleading partial truth, then we won’t revise our beliefs sufficiently when we’re shown new evidence.
• Backfire effect—Even worse, AIs can predict that if we’re first shown a misleading partial truth, then we’ll react to contradictory evidence by digging in our heels even more deeply.
Compassion fade means that AIs can predict how we’ll be biased unconsciously to have more compassion for a handful of identifiable victims (the couple of gray-clad folks) than for many anonymous ones (the crowd).
Egocentric biases, as AIs can easily predict, drive us to hold too high an opinion of our own perspective:
• Illusion of validity—AIs can predict how we’ll overestimate how accurate our judgments are, especially when we can find a way to fit our own beliefs to the available partial information that AIs choose to show us.
• Overconfidence effect—AIs can predict how we’ll tend to maintain excessive confidence in our own answers. For