Riots Reloaded: Major social platforms are still poorly equipped to counter disinformation campaigns ahead of elections
Charlotte Freihse
Disinformation campaigns targeting the electoral process and its legitimacy played a major role in the Brazilian riots on January 8, 2023, and the U.S. Capitol riots in Washington D.C. on January 6, 2021. In both cases, the responses of the social networks that had been flooded with claims of election fraud and denial long before the actual rioting left much to be desired. Considering that the three most populous democracies will head to the polls in 2024, the Brazilian riots should serve as a final wake-up call for platforms to strengthen safeguards against disinformation and to facilitate constructive discourse around elections.
Photos and videos of the attacks on the Brazilian Parliament, the Presidential Palace, and the Supreme Court flooded major social platforms such as Facebook, Instagram, TikTok, Twitter, and YouTube on January 8. The disturbing footage shows Jair Bolsonaro supporters forcibly gaining access to the country’s three most important democratic institutions, leaving a trail of destruction in their wake. Also disturbing: the visual similarities to an event that took place almost exactly two years earlier, the Capitol riots in Washington D.C. on January 6, 2021 – desperate security forces trying to regain control of the situation, national and Confederate flags everywhere, violence, and destruction. The events in Brasilia after Lula’s electoral win over former President Bolsonaro thus present an awful case of déjà vu.
The similarities jump off the page
Soon after the storming of the main buildings of Brazil’s government, comparisons to the rioting in Washington D.C. were drawn. No wonder – both the U.S. and Brazil are struggling with growing polarization within society. The fact that only two parties (and thus candidates) with a serious chance of winning ran in the elections did nothing to defuse sentiments. On the contrary: the hyperpolarization into two opposing political camps and the sheer absence of any grey middle ground between them already foreshadowed how challenging it would be to reunite society afterward. And there is more: both ahead of the elections and after the results, disinformation targeting the legitimacy of the electoral process was encouraged and amplified by then-Presidents Trump and Bolsonaro to undermine the outcome. Under the slogan of the “stolen election,” it was then taken up by other politicians, alternative media outlets, skeptical voters, and right-wing extremist groups, who disseminated these conspiracy ideologies, disinformation, and hate speech on social networks – and, more importantly, who formulated direct calls to action, mobilizing people for the attacks in the weeks leading up to the riots.
So, what about platform governance?
Taking stock of social platform actions does not make them look good in either case. In the U.S., Meta, Twitter, and Google banned political ads and labeled misinformation ahead of the elections. After the riots, Donald Trump’s accounts were banned from Meta’s platforms and Twitter (his Twitter account was restored after Musk’s takeover, and Meta announced in January 2023 that it would reinstate him as well). In Brazil, the Superior Electoral Court (TSE) reached an agreement with Google, Meta, TikTok, and Kwai nine months ahead of the elections, which, however, left open how the platforms would react to electoral disinformation campaigns, let alone to contested election results and related incitement to violence. It’s not that major platforms are sitting idly by; rather, they seem to struggle to develop a strategy that combines ongoing prevention efforts with the capability to react to warning signals and to activate crisis mechanisms during events. In short: platform operators were and are insufficiently prepared for events such as those in Washington D.C. and Brasilia. And in both cases, Meta, Google, Twitter, and TikTok could and should have known better. While platform usage differs across countries, all of these services were used to organize and rally, and all of them allowed a concerning amount of election-related disinformation and hate speech to spread unchecked. This raises pressing questions: How do we learn from other countries’ cases? And how do we develop concrete measures that go beyond naming the obvious, such as calling for more content moderation? Not only are cultural contexts specific and thus require context-specific measures; the digital ecosystem, too, unfolds specific dynamics depending on user numbers, habits, communication preferences, and the like.
Increased Social Media Monitoring and cross-platform analysis are needed
While very large platforms such as Twitter, YouTube, TikTok, and Instagram were used to accelerate disinformation in both countries, it is much harder to pin down the mobilization of rioters to a single network. Rioters used various platforms besides Telegram and WhatsApp: in the U.S., Parler and Gab (and, after the rioting, Truth Social and Gettr), and in Brazil particularly Kwai. These small, alternative platforms play a huge role in the radicalization of right-wing groups in both countries, and via cross-platform posting, radical narratives are increasingly transferred into mainstream discourse. Monitoring narratives on major social platforms alone is therefore not sufficient to fully understand actor groups and dynamics, and to predict and prevent such events in the future; what is needed is social media monitoring across a wide range of platforms. Moreover, it is not enough to understand the technical features of platforms – the data must also be analyzed in its specific cultural context. This means not only contextualization but also more and better moderation of non-English content.
Let’s not forget that 2024 will be a global super-election year: around 35% of the world’s population will elect their representatives and governments, including India with 1.4 billion people, the European Union with 447 million, and the United States with 331 million. From a platform governance perspective, this means a lot of work. Let’s hope that the platforms will assume their responsibility and not leave the majority of counter-disinformation work to civil society.