(PresidentialHill.com)- The Federal Bureau of Investigation (FBI) said in a recent alert that it has received numerous complaints about individuals applying for remote tech jobs using stolen personal information and AI-generated deepfake video and audio.
According to Gizmodo, the FBI's Internet Crime Complaint Center (IC3) published the warning on Tuesday, reporting cases of individuals applying for remote computing jobs using stolen personal information along with deepfake videos and voices.
According to the FBI, a growing number of businesses have reported applicants using videos, photos, or recordings altered to make them look and sound like somebody else. Deepfake videos are created by artificial intelligence systems and can often fool even the most observant viewers; the best-known examples have used the likenesses of celebrities.
According to the report, many of the job postings were for positions in information technology (IT), programming, database management, and software development, all of which would give the hire access to confidential user data as well as financial and proprietary company information. This suggests the impostors are after more than a fraudulent salary: they also want access to sensitive information.
It is still unknown how many of these phony applications succeeded, which means some impostors may already be working inside such firms. The FBI said the bogus candidates used voice-spoofing techniques during online interviews, and as a result their lip movements did not match what was being said on video calls. Several were exposed when they coughed or sneezed during the interview and the action never appeared on the video stream.
In May, the FBI and several other federal agencies warned businesses that individuals working for the government of North Korea were applying for remote positions in information technology and other technical fields.
In 2018, US researchers discovered that early deepfake faces didn't blink properly: because most training photographs show people with their eyes open, the generating algorithms rarely learned to produce blinks. It looked like the perfect solution to the detection problem, but deepfake creators soon fixed the flaw. Detection is a cat-and-mouse game in which each newly exposed weakness gets patched.
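The article doesn't describe the researchers' exact method, but one common way to measure blinking from facial landmarks is the eye aspect ratio (EAR), which drops toward zero when an eye closes. The sketch below is a minimal illustration, assuming six (x, y) landmarks per eye (as produced by common face-landmark detectors) and illustrative threshold values; it is not the published detection pipeline.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six landmarks
    ordered around the eye contour. Near ~0.3 for an open eye,
    it falls sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of
    at least `min_frames` consecutive frames below `threshold`.
    (Threshold and run length here are illustrative, not tuned.)"""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # series may end mid-blink
        blinks += 1
    return blinks
```

A video that produces almost no blinks over a long EAR series would have been flagged as suspicious by this kind of check, which is exactly the cue later deepfake generators learned to defeat.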