In a recent security report, Mark Zuckerberg’s Meta identified Russia as the leading source of global coordinated inauthentic behavior (CIB), with at least 39 covert influence operations traced to the country. Heading into the 2024 election, it appears that Facebook and Instagram are once again pushing the threat of Russian interference, much like the Russiagate attacks against Donald Trump that began in 2016.

Business Insider reports that Meta’s newly released security report accuses Russia of making extensive use of artificial intelligence to bolster its efforts to influence international politics through social media. The report claims that Russia has been employing generative AI to create fake journalist personas and to publish distorted versions of authentic articles on fictitious news sites.

According to Meta, Russia’s current “deceptive campaign” is primarily focused on garnering support for its ongoing war in Ukraine, a shift from previous efforts that relied on exploiting divisive social and cultural issues within targeted countries. The report suggests that between now and the US elections in November, Russia-based operations are expected to promote supportive commentary about candidates who oppose aid to Ukraine while criticizing those who advocate for aiding its defense.

Meta anticipates that these efforts could manifest in various forms, such as blaming the US’s economic hardships on providing financial assistance to Ukraine, portraying Ukraine’s government as unreliable, or amplifying voices that express pro-Russia views on the war and its prospects.

The relationship between Russia and Meta has been strained since Russia’s invasion of Ukraine in 2022. In response to the invasion, Facebook took swift action, halting all advertising in Russia and blocking ads from Russian advertisers. Months later, Russia retaliated by designating Meta an extremist and terrorist organization.

Despite Russia’s adoption of AI-powered tactics, Meta remains confident in its ability to detect and remove deceptive posts and accounts. The company says it targets and removes deceptive content that relies heavily on AI or is produced by contractors running for-hire deception campaigns. Meta characterizes these operations as “low-quality, high-volume” efforts with lapses in operational security, making them less effective at avoiding detection.

The report emphasizes that “GenAI-powered tactics provide only incremental productivity and content-generation gains to the threat actors, and have not impeded our ability to disrupt their influence operations.” Meta further notes that these networks struggle to engage authentic audiences, and that real users continue to call them out as trolls.

Read more at Business Insider here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.