A recent study by Indiana University researchers reveals a surge in AI-generated spam bots on social media platforms, particularly exploiting OpenAI’s ChatGPT to promote scams and degrade online information quality.
Business Insider reports that researchers at Indiana University have uncovered a disturbing trend: the proliferation of AI-generated spam bots on social media platforms. The study found that more than 1,000 such bots were active on Twitter/X. These bots belong to larger networks, commonly referred to as "botnets," designed to evade current anti-spam measures. According to the researchers, the bot accounts attempt to persuade users to invest in fraudulent cryptocurrencies and may even steal from victims' existing crypto wallets.
While AI has been hailed for its potential to revolutionize various sectors, its darker side is becoming increasingly evident. “New AI tools further lower the cost to generate false but credible content at scale, defeating the already weak moderation defenses of social-media platforms,” said Filippo Menczer, a computer-science professor involved in the study. The bots not only promote fraudulent investments but also contribute to the degradation of the quality of online information.
Regulators are finding it increasingly difficult to keep up with the rapid advancements in AI technology. Current AI content detectors, such as ZeroGPT and the OpenAI AI Text Classifier, have proven unreliable. Wei Xu, a computer science professor at the Georgia Institute of Technology, has cautioned that without proper regulation, bad actors will continue to stay ahead of those attempting to stop malicious AI-generated content, since they enjoy stronger incentives and lower costs.
Breitbart News previously reported on problems with AI detectors, including baseless accusations against foreign students:
Turnitin ended up labeling more than 90 percent of the paper as AI-generated, so Hahn set up a meeting to question the student about their paper.
“This student, immediately, without prior notice that this was an AI concern, they showed me drafts, PDFs with highlighter over them,” Hahn recalled of his meeting with the student.
The professor, therefore, was convinced that Turnitin’s AI-catching tool had made a mistake.
The study raises concerns about the future integrity of online information. “The advancement of AI tools will distort the idea of online information permanently,” said Kai-Cheng Yang, a computational social science researcher involved in the study. As AI-generated content becomes more sophisticated, distinguishing between genuine and fake information will become increasingly challenging for the average internet user.
Read more at Business Insider here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan