FBI says people are using deepfakes to apply for remote tech jobs

We’ve seen deepfakes come close to changing the course of history: back in March, fake footage of Ukrainian President Zelensky emerged, telling the Ukrainian army to lay down its arms amid the Russian invasion. Fortunately, the fake was sloppy, and the army didn’t buy it.

And now, if you’ve wondered what happens when the post-COVID world, which birthed so many remote job opportunities for digital nomads, merges with AI, the FBI’s Internet Crime Complaint Center (IC3) has the answer for you.

It turns out that people are now using deepfakes to impersonate others during job interviews for remote positions. And that’s not even the worst part.

The FBI revealed in a public announcement on June 28 that it has detected an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII). Deepfakes are videos, images, or audio recordings manipulated to misrepresent someone as doing or saying something they never did. The reported positions include information technology, computer programming, database, and software-related jobs. Some of these roles come with access to customer PII, financial data, corporate IT databases, and proprietary information, which could lead to serious consequences for the individuals or companies in question.

The people who chose to use deepfakes during interviews apparently didn’t realize that the actions and lip movements on camera don’t entirely match the audio. The FBI also reported that coughs, sneezes, and similar sounds were not synchronized with the footage displayed during the interviews.
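To make the giveaway concrete, here is a minimal sketch of how interview tooling *might* flag that kind of audio/video desynchronization. Everything here is an assumption for illustration: the per-frame "mouth openness" and audio loudness series are invented inputs (a real system would extract them with face-landmark and audio-analysis libraries), and the 0.5 threshold is arbitrary. The FBI announcement describes the symptom, not any detection method.

```python
# Hypothetical sketch: flag a clip when lip movement barely tracks the
# audio track, the mismatch the FBI says gave deepfake applicants away.
# Input series are invented for illustration, not from a real detector.

def pearson(a, b):
    """Pearson correlation between two equal-length numeric series."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

def looks_desynchronized(mouth_openness, audio_loudness, threshold=0.5):
    """True when per-frame mouth movement barely correlates with loudness."""
    return pearson(mouth_openness, audio_loudness) < threshold

# Synced speech: the mouth opens when the audio gets loud.
synced_flagged = looks_desynchronized([0.1, 0.8, 0.9, 0.2, 0.7],
                                      [0.2, 0.9, 0.8, 0.1, 0.6])
# Deepfake-like clip: loud audio while the mouth stays nearly still.
faked_flagged = looks_desynchronized([0.1, 0.1, 0.2, 0.1, 0.2],
                                     [0.2, 0.9, 0.8, 0.1, 0.6])
```

Here `synced_flagged` comes out `False` and `faked_flagged` comes out `True`: the synced clip correlates strongly, while the mismatched one falls below the threshold.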

Are deepfakes the enemy?

Back in 2020, a study published in Crime Science ranked fake audio and video content, in other words, deepfakes, as the most dangerous AI crime threat. The study suggested that humans have a strong tendency to believe their own eyes and ears, as expected, which lends audio and video content great credibility. In the long run, it could become all too easy to discredit a public figure or extract funds by impersonating others, and the spread of such fakes could breed distrust of audio and video content altogether, resulting in societal harm.

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime,” said first author Dr. Matthew Caldwell.

Damn technology, sometimes you’re scary. But don’t worry, it’s not you, it’s us.