Pentagon Wants to Use AI for Creating Deepfake Internet Users

The use of artificial intelligence in creating deepfake personas is rapidly becoming a focus for the U.S. Special Operations Command (SOCOM), raising ethical, privacy, and geopolitical concerns. According to a procurement document reviewed by The Intercept, SOCOM is actively seeking advanced technologies that can generate highly convincing fake online personas, so realistic that neither humans nor computers can detect that they are not real. These personas would be deployed across social media and other online platforms as part of clandestine military operations.

This article explores the emerging role of deepfakes in military operations, the technology behind creating undetectable personas, and the potential global consequences of using such methods.


The Technology Behind Creating Deepfake Personas:

SOCOM’s interest lies in technologies that can create entirely fabricated individuals with unique, human-like characteristics. This involves the use of AI-driven deep learning models, similar to the methods behind popular tools like Nvidia’s StyleGAN, which was used to generate synthetic faces for the website “This Person Does Not Exist.” The goal is to create online profiles that look convincingly real, complete with:

  • Multiple expressions and ID-quality photos
  • Selfie videos with matching fabricated backgrounds
  • Audio layers to complement the visuals

These deepfakes are designed to fool both human observers and automated detection systems used by social media platforms. SOCOM’s wish list goes beyond static images, seeking the ability to generate background and facial videos that create entire virtual environments.
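The reason such generators can evade automated detectors is the adversarial training loop itself: a generator is trained against a discriminator, so the generator improves precisely where detection succeeds. The idea behind tools like StyleGAN can be sketched in miniature with a toy 1-D example (a linear generator and a logistic discriminator over Gaussian data, not an image model; all names and parameters here are illustrative, not drawn from any SOCOM document):

```python
import numpy as np

# Minimal GAN-style adversarial loop, assuming a toy setup:
# "real data" is N(4, 1); the generator g(z) = a*z + b starts far away;
# the discriminator D(x) = sigmoid(w*x + c) tries to tell them apart.
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch, real_mu = 0.05, 64, 4.0

for step in range(2000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    x_real = rng.normal(real_mu, 1.0, batch)
    x_fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. learn to fool the detector
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples drift toward the real distribution
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"real mean: {real_mu:.2f}, generated mean: {fake_mean:.2f}")
```

The same dynamic at image scale is what SOCOM's wish list implies: any fixed detection system becomes a training signal for the next generation of fakes.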

This technology could be used in psychological operations (PsyOps), digital deception, and information campaigns, allowing for the manipulation of narratives on a large scale.

Psychological and Geopolitical Implications:

The potential for harm from undetectable deepfake personas is immense. This technology could not only mislead foreign governments and populations but could also erode trust in public information at home. While the Pentagon has been vocal about the dangers of foreign adversaries like Russia and China using deepfakes to spread disinformation, the U.S.’s own interest in developing this technology raises concerns about hypocrisy and unintended consequences.

Deepfakes have already been used in campaigns designed to influence public opinion. For example:

  1. U.S. Central Command was found operating fake social media accounts in 2022, using technology similar to that now sought by SOCOM.
  2. A campaign revealed in 2024, aimed at discrediting China's Covid vaccine, employed fake social media personas to undermine foreign confidence in it.

By using these tactics, the U.S. risks normalizing the use of deepfake technologies, leading other countries and non-state actors to adopt similar methods. The offensive use of AI-driven deception could destabilize global relations and create a world where it is nearly impossible to distinguish between truth and fabricated content.

Ethical Concerns and Domestic Trust:

One of the most pressing concerns surrounding SOCOM’s interest in deepfakes is the impact on domestic trust in government. SOCOM’s mission may involve deception for national security purposes, but using such deceptive technology risks undermining public confidence in all government communications. Daniel Byman, a professor at Georgetown University, warns that using AI-driven deepfakes could make U.S. citizens more suspicious of information from their own government.

This tension between national security objectives and ethical standards has led to growing concerns within the U.S. government itself. On one hand, national security officials emphasize the importance of maintaining public trust by consistently providing truthful information. On the other hand, military branches like SOCOM may see deepfakes as a powerful tool for psychological operations, particularly when targeting foreign adversaries.

Global Ramifications and the Risk of Proliferation:

As deepfake technology becomes more sophisticated, its use by the U.S. military could spur widespread adoption by other governments. Both Russia and China have already used deepfakes in online propaganda efforts, prompting international efforts to counter information manipulation. In January 2024, the U.S. State Department introduced a framework to combat foreign state-backed deepfake campaigns, labeling them a significant national security threat.

However, if the U.S. continues to develop and use deepfake personas for its own operations, it risks fueling an arms race in AI-driven deception. Countries and non-state actors may increasingly rely on deepfakes to conduct disinformation campaigns, leading to:

  • Erosion of trust in public information across the globe
  • Increased difficulty in verifying the authenticity of media, whether in journalism, government reports, or social media
  • A polarized geopolitical environment, where disinformation becomes a standard tool of warfare

In this context, deepfakes present a paradox. While governments like the U.S. warn against their use by foreign adversaries, they are simultaneously pursuing similar technologies for their own purposes.

Conclusion:

SOCOM’s interest in using deepfakes for military operations opens up a Pandora’s box of ethical, legal, and geopolitical challenges. While the ability to create undetectable personas for PsyOps may offer tactical advantages, the risks to global trust, domestic confidence, and international relations are profound.

The use of deepfake technology by the U.S. military could accelerate its proliferation, making it a common tool for governments worldwide. This, in turn, could lead to an environment where distinguishing truth from fiction becomes increasingly difficult, both in the geopolitical sphere and in everyday life. The U.S. government must carefully weigh the short-term benefits of deploying such technology against the long-term consequences of normalizing AI-driven deception.
