In a recent report, the FBI says that it has received multiple complaints of people using stolen information and AI-generated deepfake videos and voices to apply to remote tech jobs.
Gizmodo reports that the FBI's Internet Crime Complaint Center (IC3) warned on Tuesday that it has been receiving complaints of people using stolen personal information and deepfake videos and voices to apply to remote tech jobs. The FBI stated that a growing number of companies have reported job applicants using video, images, or recordings manipulated to look and sound like somebody else. Deepfake videos are generated by AI programs and are often capable of fooling a casual observer; they have most notoriously been created using the likenesses of celebrities.
The report states that many of the job listings were for IT, programming, database, and software positions that would provide access to sensitive customer information as well as financial and proprietary company data. This suggests the imposters are seeking not only a fraudulent paycheck but also access to sensitive information.
It is currently unclear how many of these fake job applications succeeded, meaning some imposters may have already infiltrated companies. The fake applicants reportedly used voice-spoofing techniques during online interviews, and their lip movements did not match the audio on video calls. Many were caught out when they coughed or sneezed and the sound was not reflected in the video feed.
In May, the FBI and several other federal agencies warned companies that individuals working for the North Korean government were applying for remote positions in IT and other tech jobs.
Read more at Gizmodo here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com