Pentagon Seeks AI Technology to Create Undetectable Deepfake Internet Users

Army robot putting on human mask (Bing AI Creator)

The U.S. military is seeking to develop advanced AI capable of generating fake online personas that are indistinguishable from real people, according to a procurement document recently reviewed by The Intercept.

The Intercept reports that the United States Special Operations Command (SOCOM) is seeking assistance from private companies to create highly convincing deepfake internet users as part of its clandestine military efforts. The request, outlined in a 76-page document from the Department of Defense’s Joint Special Operations Command (JSOC), details the advanced technologies sought for the country’s most elite military operations.

According to the document, JSOC is interested in technologies that can generate unique online personas that appear to be real people but do not exist in the real world. These fabricated personas should include multiple expressions, government-quality identification photos, facial and background imagery, video, and audio layers. The goal is to produce selfie videos featuring the faked individuals, complete with matching virtual backgrounds, in a way that is undetectable by social media algorithms.

This is not the first time the Pentagon has been caught using phony social media users to further its interests. In recent years, Meta and Twitter have removed propaganda networks operated by U.S. Central Command that relied on fake accounts with AI-generated profile pictures.

The use of deepfake technology, synthesized audiovisual data meant to be indistinguishable from genuine recordings, has been a topic of interest for SOCOM. Last year, the command expressed interest in using video deepfakes for influence operations, digital deception, communication disruption, and disinformation campaigns. The technology behind deepfakes involves AI and machine learning techniques that analyze vast databases of faces and bodies to recognize and recreate human features.

While the United States pursues these technologies, it simultaneously condemns their use by geopolitical foes. National security officials have long described the state-backed use of deepfakes by other countries as an urgent threat. Joint statements by the NSA, FBI, and CISA have warned of the growing challenge posed by synthetic media and deepfakes, describing the global proliferation of the technology as a “top risk” for 2023.

Experts argue that there are no legitimate use cases for this technology besides deception, and that the U.S. military’s interest in it is concerning. Heidy Khlaaf, chief AI scientist at the AI Now Institute, says that it will only embolden other militaries or adversaries to do the same, leading to a society where it becomes increasingly difficult to distinguish truth from fiction.

Read more at The Intercept here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
