Prebunking or fact-checking? What matters is a comprehensive approach.
Dr. Joachim Rother
As diverse as disinformation strategies are, so are the methods to counter them. Yet a look around the world reveals that, in actual practice, there is hardly any real variety of methods in use. This needs to change.
Disinformation from Veles: A small town gains fame
2016. Shortly before the U.S. presidential elections. Donald Trump is a few months away from becoming U.S. President. Far away, in the small town of Veles in North Macedonia, some teenagers experiment with websites, filling them with random headlines copied from major media outlets, and realise: The articles are generating clicks. And quite a lot of them. The model catches on. The websites become more numerous and more professional; some now resemble legitimate news outlets. Five to ten articles per website are published every day, and although most of the pro-Trump articles make little sense or contain no truth, many of them spread like wildfire. From the middle of nowhere in Europe, public opinion in the U.S. is being influenced, and some of these teenagers in economically struggling North Macedonia suddenly earn money: Between August and November 2016, over 16,000 USD through Google AdSense payouts. Only when The Guardian and BuzzFeed publish investigations revealing that at least 100 websites registered in the small North Macedonian town are churning out disinformation about the U.S. elections does Google demonetise the websites. The advertising revenue dries up, and the operators lose interest.
Anyone who thinks of Russia, China, or Iran when it comes to disinformation campaigns will be surprised by the monetary motive behind the Veles example: The reasons for creating and spreading disinformation vary greatly. Whether the aim is targeted political influence or purely economic gain, countermeasures must consider the mechanisms and context of specific disinformation efforts to be effective.
The dilemma of choice: Which method is the right one?
The toolbox of countermeasures to mitigate disinformation is versatile. What is striking, however, is that most methods, including fact-checking and debunking, only address disinformation once it is already circulating and difficult to rein in.
Prebunking, on the other hand, attempts to prepare people for disinformation or specific misleading narratives before they even encounter them. The goal is to build resilience through sensitisation, thereby undermining the impact of disinformation. Google subsidiary Jigsaw demonstrates how such prevention can work technically with its video campaigns: Video snippets that address specific disinformation and warn against it are played as so-called pre-rolls before the actual content. The problem: Prebunking is labour-intensive, must be tailored to specific disinformation topics, and its effectiveness is limited. According to one study, the proportion of people who could recognise manipulative content after watching a prebunking video increased by an average of 5 percentage points.
In contrast to prebunking, debunking focuses on correcting disinformation once it has already been published. Unlike fact-checking, the strength of debunking lies in placing content and sources within a larger context and identifying patterns through which disinformation is spread in major topic areas such as climate or gender. Debunking is practised very successfully by numerous projects worldwide, such as AltNews (India), Mafindo (Indonesia), or Africa Check (South Africa). Typically, these corrections are published and disseminated in comprehensive counterstatements after extensive research. However, this also highlights the challenges of the method: Debunking is labour-intensive and time-consuming, and by the time the counterstatement is published, the original false information is usually several days old. This is problematic because studies show that false information on social media generates 90% of its engagement on the first day—far too quickly for debunking to keep up.
Fact-checks, by contrast, can have a much quicker impact, often taking only a few hours and requiring far less effort. In this method, statements or reports are verified for their truthfulness and evaluated through confirmation, correction, or rejection. Fact-checks promote accountability among public figures and encourage verifying the truthfulness of information before it is published or shared. They are conducted according to journalistic standards, such as those defined by the International Fact-Checking Network of the Poynter Institute or the European Fact-Checking Standards Network.
Since Donald Trump’s first candidacy in 2016, fact-checking has become the most widely used method globally in the fight against disinformation. In 2023, the Fact-Checking Census by the Duke Reporters’ Lab counted more than 400 active fact-checking institutions working in some 69 languages across more than 100 countries. Our own international research, based on desktop research, expert interviews, and workshops on five continents, likewise highlights the dominance of fact-checking. Of the more than 230 initiatives we recorded, over half are involved in fact-checking to some extent.
Despite these good examples from around the world, the effectiveness of fact-checking is the subject of great debate. Apart from the mental strain on fact-checkers, the sheer volume of disinformation is too great, the fact-checks themselves are too slow, and their measurable impact is too limited. And another problem plagues the method: Disinformation actors are hijacking the tool and simply publishing their own “fact-checks.” By exploiting the trust that fact-checking builds, such outlets can apparently spread political polarisation far more easily, as shown by the case of CheckYourFact.com, a right-wing conservative fact-checking outlet associated with former Fox News host Tucker Carlson.
Follow the Money!
So, are our options exhausted? Are we powerless against the flood of disinformation? There is hope, as a look at the international landscape of actors reveals gaps worth examining more closely. The aforementioned global mapping of anti-disinformation initiatives, with over 200 entries, lists only four organisations (Check My Ads Institute, Global Disinformation Index, Konspirátori, Sleeping Giants Brazil) that use demonetisation as a primary tool in the fight against disinformation.
This is noteworthy because demonetisation fundamentally differs from all the basic concepts mentioned so far, as it targets the incentive that often leads to the spread of disinformation in the first place: Economic interest. When social media accounts or websites are identified as sources of disinformation, platforms or hosts can cut off their funding by, for example, drying up advertising revenue through Google AdSense.
Why this can be a sensible measure is shown by a look at the numbers: The NGO Global Disinformation Index analysed 20,000 domains spreading disinformation in a study and found that ad tech companies had placed ads worth 235 million USD on these sites.
In a system where success is measured by clicks and page views, disinformation content can be monetised relatively easily, as a recent collaborative article by the Centre for Media Pluralism and Media Freedom (CMPF) and the European Digital Media Observatory (EDMO) highlighted. This type of content is quick and cheap to produce and is often prioritised by platforms because it generates high reach under the guise of free speech through emotionalisation, controversy, clickbait, or decontextualisation. While this is not illegal, platforms can decide to take action and penalise the respective accounts—such as by withholding their advertising revenue. However, this is not without problems, especially when advertising revenue is blocked without transparent reasons. As early as 2018, an article in the German daily Süddeutsche Zeitung concluded that “demonetisation […] would be the bogeyman among professional YouTubers,” as the lack of transparency in such measures could suddenly destroy entire livelihoods. Despite vehement demands from EU Commission Vice-President Věra Jourová that platforms enforce demonetisation, the major platforms remain reluctant to implement such measures consistently, not least because of ongoing criticism from their content creators.
Demonetisation takes many forms: Public pressure is needed
Some are not satisfied with this. Organisations such as the Global Disinformation Index (UK), Sleeping Giants (Brazil), or Konspirátori (Slovakia) evaluate websites or accounts with high reach for their trustworthiness and the reliability of the information they provide. If accounts or websites are suspected of spreading disinformation, this is made public, and advertisers or the platforms themselves are urged to stop placing ads there or to block the revenue accordingly. After all, most brands want to avoid being discredited by dubious advertising partners.
That demonetisation through public pressure can be a sharp sword is impressively demonstrated by the example of the teenagers from Veles mentioned at the beginning: When the money dries up, maintaining the channel is often no longer worthwhile. Unlike many other methods, demonetisation thus goes beyond merely treating the symptoms and, if successful, can tackle a key driver of disinformation at its root and bring it to a halt on the respective channel. However, scientific research on demonetisation is still in its early stages, and beyond anecdotal evidence, such as the successful campaign of Sleeping Giants USA against Breitbart News, no reliable statements can currently be made about the medium- or long-term effectiveness of the method.
There is no Swiss army knife against disinformation
As shown, the toolbox against disinformation does not offer a single tool that, even in a customised application, can counter the entire spectrum of disinformation. The good news is: It doesn’t have to. Because complementarity is key.
An information ecosystem that wants to successfully defend itself against disinformation in the long term must rely on a plurality of methods and a meaningful interplay of different mechanisms. More is not necessarily better, as the case of fact-checking shows. Rather, methods should be coordinated so that disinformation in its different phases—both before it is created (media literacy, prebunking) and after it is spread (debunking, fact-checking, or, to some extent, demonetisation)—finds it as difficult as possible to unleash its destructive effects.
Our international research shows that the options here are not yet exhausted. Demonetisation emerges from the international comparison as a strategy that, despite its potentially significant impact, has so far received relatively little attention and is therefore underutilised. Its approach seems capable of severely undermining the profitability of disinformation. Demonetisation addresses a gap in the current practice of most anti-disinformation strategies and should be applied far more widely in the future than it has been.