AI Deepfakes Fuel Disinformation War in Sudan’s Social Media

As Sudan faces an ongoing conflict between its national army and the paramilitary Rapid Support Forces (RSF), a new and troubling battleground has emerged: social media, where AI-generated deepfakes are routinely deployed to manipulate public opinion and sow confusion. While deepfake technology remains rudimentary, especially with voice-cloning models that struggle to accurately mimic Sudanese dialects, it has nonetheless become a powerful tool in the psychological warfare shaping the narrative of Sudan’s civil unrest.

Weaponized Information: Deepfakes in Sudan’s Conflict

In April 2024, a fake image of a burning building allegedly belonging to Al-Jazeera University in Wad Madani went viral on Facebook, with captions blaming the Sudanese Armed Forces (SAF) for bombing the structure. Even prominent political leaders shared the image, believing it to be real. The incident was not isolated; as the conflict drags on, both sides have used AI deepfakes to generate misleading visuals and audio recordings, aiming to turn the public against their opponents.

This digital misinformation wave has significant implications for Sudan, a country already struggling with instability and a fragile information ecosystem. In a place where credible news sources are limited, these deepfakes serve as potent instruments of deception, spreading fabricated stories that are often challenging for the public to verify.

Early Instances of AI-Generated Disinformation

The use of AI-generated media in Sudan’s civil conflict is not new. In August 2023, a video circulated featuring a fabricated speech attributed to the U.S. ambassador to Sudan, in which he appeared to declare intentions to reduce Islam’s influence in the country. The video, later debunked, was one of the first AI-manipulated videos aimed at inciting public reaction.

In another case in October 2023, a deepfake impersonating Sudan’s former leader, Omar al-Bashir, went viral on TikTok, gathering hundreds of thousands of views. As these examples show, deepfakes are being used not only to discredit public figures but also to inflame an already volatile political landscape.

Satirical Deepfakes and the Fine Line Between Humor and Harm

While some tech-savvy Sudanese have turned to AI deepfakes for satirical purposes, this type of content often walks a fine line. In September 2023, an AI-altered video of RSF leader Mohamed Hamdan Dagalo, or “Hemedti,” singing a pro-military song with one of his officers was widely shared. Although intended as satire, such content can blur the line between humor and harmful misinformation. Even reputable figures have shared misleading AI-generated content, showing how deeply such material can penetrate public discourse.


The Growing Influence of the “Liar’s Dividend”

The spread of deepfakes has given rise to a phenomenon known as the “liar’s dividend,” in which individuals dismiss genuine events as AI fabrications. In June 2023, Sudanese politician Mubarak Ardol claimed that an audio recording in which he criticized the military leadership had been fabricated with AI, asserting it was generated from voice samples of him available online. The mere existence of deepfake technology thus fosters an atmosphere of doubt in which people struggle to distinguish authentic from manipulated content.

Countermeasures: Fact-Checking and Tech Interventions

Several efforts are underway to combat deepfake-driven disinformation in Sudan. Beam Reports, a Sudan-based fact-checking organization, has been instrumental in identifying and debunking deepfake content since 2023. Verified by the International Fact-Checking Network, Beam Reports plays a crucial role in Sudan’s information landscape, where credible news sources are often scarce. In May, Beam Reports, in collaboration with UNESCO, stressed the importance of on-the-ground reporting to counter disinformation, noting that AI-generated deepfakes complicate an already challenging situation.

Tech activists and social media platforms are also fighting back. Sudanese activist Mohanad Elbalal, based in the UK, uses reverse image searches to verify suspect content. Platforms like YouTube have introduced policies against deepfakes intended to mislead, mandating transparency labels and warnings on synthetic content. However, enforcement remains inconsistent, especially in regions like Sudan, where content moderation often lacks local context.

The Future of Deepfakes in Conflict: A Warning for Sudan

As deepfake technology becomes more sophisticated, the potential for misuse in Sudan could increase dramatically. Mohamed Sabry, a Sudanese AI researcher at Dublin City University, warns that while current AI-generated deepfakes are often of low quality, this may change if creators invest in advanced tools. Poor quality often gives away these deepfakes, particularly in voice cloning, where Sudanese dialects are challenging for AI to mimic accurately. However, as the technology advances, deepfakes could become harder to detect, amplifying the risk of misinformation.

Shirin Anlen, a media technologist at Witness, points out the limitations in deepfake detection. Publicly available tools often lack transparency, are prone to false positives, and depend on high-quality training data that is scarce for Sudanese dialects. File compression and low resolution also impair detection accuracy, adding another layer of complexity to identifying deepfakes.

Conclusion

Sudan’s struggle with AI-generated disinformation is a wake-up call about the risks of unchecked technological misuse. The growing prevalence of deepfakes in the country’s information ecosystem poses threats to public trust and safety, as false narratives foster divisions and erode faith in reliable information. While initiatives like Beam Reports offer critical support, Sudan urgently needs stronger digital literacy programs and robust AI regulation to curb the misuse of these technologies.

For Sudan, the battle against deepfake misinformation is far from over. As more tools become available and AI manipulation becomes more seamless, safeguarding truth and transparency in digital spaces will require a concerted effort from government bodies, tech companies, and communities alike.
