
In conversation with Bellingcat: “Algorithms radicalise people”


Virginia Kirst


In an interview with Upgrade Democracy, Bellingcat founder Eliot Higgins gives insights into how moral injuries lead to the emergence of online disinformation communities, explains why generative AI might contribute to a fracturing of reality, and comments on the importance of educating the young generation about these topics.


Eliot Higgins, as over half of the world’s population heads to the ballot box in 2024, experts and politicians are increasingly concerned about the impact of disinformation on these elections. How did we get to this point?

Many of the issues we face today stem from people feeling disempowered and seeking sources of empowerment online, far away from the mainstream media. Consequently, they end up in spaces dominated by non-professionals and people with agendas, giving rise to disinformation communities. They feel disengaged from society and politics and look for something online to fill that void.

You have been following these communities for over a decade.

I have been using the Internet since 1995 and have been heavily engaged with online communities spreading disinformation for the past 15 years. I believe these people don’t think of themselves as lying. Rather, they see themselves as truth-seekers, fighting against the bad side. This is evident across very different communities, like the alternative health community, the MH17 sceptics and the chemical weapon truthers.

From the outside, these groups seem rather disparate.

They are united by a deep distrust in traditional sources of authority, such as the government, the media, or medical professionals. This distrust often stems from past traumas, like a moral injury.

Can you provide an example of a moral injury?

For many chemical weapon truthers, the moral injury stems from the 2003 US-led invasion of Iraq. They viewed it as a massive betrayal by the government and the media, which shattered their trust. Consequently, they placed themselves in opposition. This led them to believe in conspiracies surrounding subsequent events, such as chemical weapon attacks in Syria. Because if you want to believe America is bad, there is a lot of history and plenty of books, websites, videos and tweets to reinforce this idea. In this view, anyone who opposes that, like Bellingcat for example, is working with the bad guy. And therefore, they’re also bad. It’s a very binary view of the world.

What is the moral injury that led to the emergence of anti-vaccine conspiracy theories?

That’s a slightly different dynamic. When COVID-19 vaccines appeared, people were concerned about their safety. However, searching for phrases like ‘Are vaccines safe?’ on Google can lead to a cascade of increasingly extreme content. By clicking on results that suggest vaccines are not safe, you signal interest, and the algorithm will recommend more of the same. In the beginning, the recommendations might be mild, like concerns about the additives in vaccines. But reading about this will expose you to the next level of content. Eventually, you end up reading about vaccines causing autism, and from there it’s not far to the level where Bill Gates has put microchips in vaccines.
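What Higgins describes here is a feedback loop: every click is read as a signal of interest, and the recommender answers with content a step further along the same axis. As a purely illustrative sketch of that dynamic (the extremity scale, scoring rule and click model below are invented for this example, not drawn from any real platform), it might look like this:

```python
import random

def recommend(level: float) -> float:
    """A naive engagement-driven recommender: suggest content close to
    what the user last clicked, nudged slightly toward the extreme end,
    since more charged content tends to attract more engagement.
    The 0-10 'extremity' scale is an invented illustration."""
    return min(10.0, level + random.uniform(0.0, 1.0))

def simulate(clicks: int, start: float = 0.0) -> float:
    """Follow a user who clicks every recommendation they are shown."""
    level = start
    for _ in range(clicks):
        level = recommend(level)
    return level

if __name__ == "__main__":
    # A user who starts at 'Are vaccines safe?' (level 0) and keeps
    # clicking drifts steadily toward the extreme end of the scale.
    print(f"Extremity after 20 clicks: {simulate(20):.1f}")
```

The point of the sketch is not the numbers but the mechanism: nothing in the loop requires bad intent, only a system that optimises for the next click.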

Luckily, not everybody who has doubts about the safety of vaccines ends up there.

The process is like a pyramid. People start at the bottom with the mild content and gradually progress to more extreme views, but not everyone reaches the level of believing that Bill Gates put microchips in vaccines. However, a percentage of people will. And on a global scale, that can amount to hundreds of thousands, or even millions. At this extreme level, I often find a deep distrust in authority that originated in past traumas. For instance, in the alternative health community, many people have had negative experiences with medical professionals. While their concerns are rooted in genuine anxieties, algorithmic radicalisation exacerbates these fears. They get driven insane by the Internet. And once they cross a certain threshold of distrust, they become unreachable, viewing anyone outside their community as the enemy.
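The scale argument behind the pyramid is simple arithmetic: even if only a small fraction of people advances from each level to the next, a global base leaves an extreme tier in the hundreds of thousands. A back-of-the-envelope sketch, where the base population and the per-level advancement rate are invented assumptions chosen only to illustrate the shape:

```python
# Back-of-the-envelope sketch of the "pyramid". The base population
# and the advancement rate are invented assumptions for illustration,
# not measured figures.
levels = [
    "mild concern",         # 'Are vaccines safe?'
    "additive worries",
    "autism claims",
    "microchip conspiracy",
]
population = 1_000_000_000  # assumed global base who ever search the question
advance_rate = 0.05         # assumed fraction moving up to each next level

for name in levels:
    print(f"{name:>22}: {population:>13,.0f} people")
    population *= advance_rate
```

With these (assumed) numbers, even a 5 percent progression rate per level still leaves roughly 125,000 people at the most extreme tier.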

Members of these communities often shift from one topic to the next. How does this happen?

There are many websites and communities whose main topic is a deep distrust in Western governments and the media. Members of these communities will align with opposing forces, like Russia. We’re seeing this with the farmer protests in the EU, where some of the participants were flying Russian flags. Their protest becomes an almost childish opposition to the EU and the US that is not really about politics. It’s about being in opposition to the bad guy, whoever you’ve decided that is.

How do we mitigate this thinking?

Education. We must equip young people with the skills to exist in an environment where they constantly engage with media. But it’s a different type of media than the one older generations are used to. Younger generations expect to be part of the conversation, not just receivers of information. If someone posts a video and they disagree, they post a video back. That is how TikTok works.

Which skills do they need to navigate that new media environment?

We need to make sure they aren’t drawn in by the people who will exploit them, and help them become positive members of these online communities, because they are going to be part of them anyway. At Bellingcat, we have been doing this with our workshops. Young people can do a lot of what we do; they just need a little guidance. If we fail to do this, many more people will be drawn into these disinformation communities with an oppositional approach to the world. That would be poisonous for democracies and an advantage for authoritarian governments.

Does generative AI make your work more difficult?

I’m not concerned about AI-generated imagery in terms of evidence because our whole process of investigation is building networks of information around incidents, not just trusting one picture. In terms of information, though, it’s a huge problem.

Why?

Because it creates a permission structure for people to deny real images. People can claim that real images that don’t fit their beliefs are AI-generated, feeding into the dynamic where disinformation communities deny things simply because they come from the perceived other side. This opens a space where anyone thinks they can choose their own reality. People will be able to exist in bubbles and deny that the rest of the world is real. And there will be so much information reinforcing their ideas that they won’t have to question them. The result is a fractured society where it’s no longer about left or right, political parties or what the newspapers write, because everyone is living in an online bubble, believing whatever they want. That is massively damaging to democracy.

Are you worried about this prospect?

I’m very engaged with these topics. But when I speak to other people, I realise that this is going to be a serious issue because many have never heard of generative AI or seen what it can do. That makes it much easier to deceive them. Therefore, we really need to educate everyone and make them part of the conversation.

Eliot Higgins is the founder of Bellingcat, an open-source intelligence organisation known worldwide for its investigations into conflicts, human rights abuses and disinformation campaigns.


Virginia Kirst

Freelance journalist

I work as a freelance journalist between Rome and Hamburg. My speciality is untangling Roman politics and showing what consequences it will have for Berlin, Bern, Brussels and Vienna. As a foreign correspondent, I write analyses, reports, interviews and features for newspapers, magazines and websites. I also report on current events on live television and am invited onto TV, radio and podcasts as an expert on Italy.
