Marketers across the Microsoft-owned jobs platform LinkedIn are reportedly using AI-generated human faces and automated messages to trick potential hires into thinking they’re speaking to a real person.
NPR reports that marketers are using AI-generated human faces to pose as real people on LinkedIn in order to trick users. NPR tells the story of a woman named Renée DiResta who received a message from a user named Keenan Ramsey attempting to sell a software solution.
DiResta planned to ignore the message, but the sender’s profile picture grabbed her attention. DiResta had studied disinformation campaigns online and had heard that some used AI-generated human faces in order to appear more realistic. She noticed that in Ramsey’s profile picture the woman was wearing only one earring and that parts of her hair seemed to appear and disappear. “The face jumped out at me as being fake,” said DiResta.
The message led DiResta and her colleague Josh Goldstein of the Stanford Internet Observatory to uncover more than 1,000 LinkedIn profiles with AI-generated avatars. Many of the profiles were used to generate sales for companies, with accounts like Keenan’s reaching out to users like DiResta to offer paid services.
Users who engaged with the fake accounts would be connected to a real salesperson who would then attempt to close the deal. More than 70 businesses were listed as employers on the fake profiles, and several told NPR that they had hired marketing firms to boost their sales but had not authorized the use of AI-generated images.
Hany Farid, an expert in digital media forensics at the University of California, Berkeley, co-authored a study on AI-generated faces with Sophie J. Nightingale of Lancaster University which found that many of these images are “indistinguishable” from real faces. Participants guessed correctly whether a face was computer generated only about 50 percent of the time, no better than chance.
Farid stated: “If you ask the average person on the internet, ‘Is this a real person or synthetically generated?’ they are essentially at chance.” The study further found that people consider AI-generated faces slightly more trustworthy than real ones. “That face tends to look trustworthy, because it’s familiar, right? It looks like somebody we know,” he said.
User safety on the Microsoft-owned platform LinkedIn has been a concern for some time. Breitbart News reported in June of 2021 that a second LinkedIn data breach exposed the data of 700 million users, and that the database was made available for sale on the dark web. The user information reportedly included phone numbers, physical addresses, geolocation data, and inferred salaries.
Read more at NPR here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com