Defining (and enforcing) a clear line between information and misinformation is impossible, but that doesn’t mean misinformation doesn’t exist or that there is nothing to be done to combat it.
I found myself caught in an ‘interesting’ set of exchanges on Twitter a few weeks ago (I won’t link to it to spare you the tedium, but you could probably find it if you look hard enough). The nominal issue was whether deplatforming known bull******s was useful for stemming the spread of misinformation (specifically with respect to discussions around COVID). There is evidence that this does in fact work to some extent, but the Twitter thread quickly turned to the question of who decides what is misinformation in the first place, and then descended into a free-for-all where the very mention that misinformation existed, or that the scientific method provided a ratchet to detect it, was met with knee-jerk references to the Nazis and the Inquisition. So far, so usual, right?
While the specific thread was not particularly edifying, and I’ll grant that my tweets were not perfectly pitched for the specific audience, this is a very modern example of the classic Demarcation Problem (Stanford Encyclopedia of Philosophy) in the philosophy of science.
Science and Pseudo-Science
The demarcation problem is classically linked to the difficulty in deriving general principles that distinguish real science from pseudo-science. Everyone can name what (to them, and maybe many others) are examples of pseudo-science: astrology, homeopathy, cold fusion, etc., but coming up with characteristics that define ‘pseudo-science’ but that exclude ‘science’ is very hard (and maybe impossible).
For instance, the popularity of a scientific idea is not a useful metric, since many initially fringe ideas have subsequently become the consensus (perhaps all consensus ideas were at some point fringe?). More fruitfully, is the way in which pseudo-scientific ideas are argued for salient? Clearly, ideas that are built on logical fallacies, cherry picking, or rhetorical tricks shouldn’t be relied on, and these are frequent signs of misinformation. However, the existence and use of poor arguments does not preclude the existence of better ones. Rightly, we tend not to pay much attention to unfalsifiable theories, but not everything unfalsifiable is pseudo-science (string theory, for instance, though some might argue this!). Moreover, some theories’ predictions simply can’t (yet) be tested. Gravitational waves weren’t pseudo-science just because it took a century to verify their existence.
Conversely, many pseudo-sciences make many falsifiable statements (many, indeed, that have already been falsified). Popper’s claim that scientific claims are demarcated by falsifiability is thus hard to support. What about the skill of predictions? After all, this is the gold standard of scientific theories. A track record of successful predictions (not just hindcasts) would seem to be sufficient for a theory to be considered scientific, but it’s clearly not necessary. And so on…
Pseudo-science and misinformation
Pseudo-science is often thought of at the level of a theory or a body of work, not at the level of a single fact or argument. However, misinformation can be far more granular and doesn’t necessarily have to relate to a coherent view in any respect. Like pseudo-science, misinformation is often clear in specific examples (claims that vaccines implant micro-chips! or that they make you magnetic! or that Space Force is about to stage a coup!) but hard to define in general.
As above, misinformation can’t be defined simply as anything that isn’t part of a scientific consensus (too broad), or that isn’t falsifiable. Sure, it might be easy to say that misinformation is information that is demonstrably false, but that raises the question of how this should be demonstrated and who is to judge when it has been.
Going further, what about information that is simply misleading? As we’ve seen in the climate discourse, it’s easy to find red herrings that are true as far as they go, but that don’t go very far. Did you know that climate has changed before? Or that water vapor is the most important greenhouse gas? These claims are not false, but are often used in the service of a false premise (that human-caused climate change is either not happening or not important). Even here, there is a normative (subjective) judgement. Who is to say what is important? For what? Or for whom?
Cherry picking, where a specific, often noisy, metric is highlighted to counter the larger scale picture (see anything published by Steve Koonin) or a single outlier study is hyped without taking account of the caveats or the other work in the area (anything pushed by Bjorn Lomborg), is another example. These claims are intended to mislead, but it is often the implicit warrant of the argument that is the misinformation. And how can you reliably detect what is implicit in an argument for any particular audience? Thus misinformation is often context dependent.
However, the existence of edge cases like this doesn’t mean that one can never say that something is misinformation, in exactly the same way that just because science is uncertain about some things, it does not follow that everything is uncertain. Perhaps we should follow Justice Potter Stewart?
‘I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.’
Can the impacts of misinformation be minimized?
Despite the trouble that arises in trying to find hard and fast rules that delineate misinformation from information, it is still worth pushing back. [And contrary to the opinions of some, pushing back is just as clear an exercise of one’s free speech as is the misinformers’ pushing of their misinformation.]
There was a conference last week (#DisInfo2022) on the role that disinformation is playing in our political discourse, which featured a lot of discussion of what it is and what should be done about it.
Most pushback is, however, reactive. Someone somewhere puts out something stupid, and more informed people respond with facts, or context, or scorn. The pushback rarely gets the same attention as the push, and the exercise is generally futile except perhaps at the margin, or as a record that can be reviewed later. The misinformation peddler benefits from the attention and can parlay that into a reputation as someone who ‘owns the libs’ or is a ‘brave truthteller oppressed by the establishment’ or ‘a victim who is being unjustly persecuted’ – a veritable Galileo even!
Perhaps more useful is the idea of inoculation against misinformation (e.g. van der Linden et al (2017) or Cook et al (2017)). The idea is that if people know what kind of argument or tactic is used by misinformers, they’ll recognize it when it’s used and be able to dismiss bad arguments when they arise without additional help. I think in the end this is the way most bad arguments die – people develop a kind of ‘herd immunity’ to them, and the misinformers find that these bad arguments no longer generate the buzz they once did. But like viruses, the bad arguments will evolve as the misinformers try to find something that works, and sometimes they can come roaring back when everyone thought they were dead and buried. Thus maintaining the ‘herd immunity’ to misinformation is a constant battle. It never gets settled because it’s never really about the topic at hand; it’s almost always a proxy for a deeper clash of values.
However, the empirical evidence suggests that the most effective way of preventing misinformation from spreading is simply to reduce exposure to it. For instance, paying people to watch CNN instead of Fox News seems to work. Deplatforming repetitive disinformers does too. Here’s another case.
Social media deplatforming is often strongly criticised as being against the ‘spirit’ of free speech (as opposed to the actual First Amendment, which only constrains the US government). But should the free marketplace of ideas be a total free-for-all, where voices are drowned out by bot farms pouring sh*t onto everyone else’s stall? Creating and curating accessible spaces and environments that elevate information over misinformation seems to me to be an essential part of building an informed democracy (which is what we want, no?). This might not be completely compatible with platforms that are really optimized for engagement rather than discourse (this remains to be seen). But it is surely an impossible task if we don’t take the misinformers seriously.
- S. van der Linden, A. Leiserowitz, S. Rosenthal, and E. Maibach, "Inoculating the Public against Misinformation about Climate Change", Global Challenges, vol. 1, pp. 1600008, 2017. http://dx.doi.org/10.1002/gch2.201600008
- J. Cook, S. Lewandowsky, and U.K.H. Ecker, "Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence", PLOS ONE, vol. 12, pp. e0175799, 2017. http://dx.doi.org/10.1371/journal.pone.0175799