Appetite for deepfake scams is expanding among users of underground forums, leading to concerns the technology could be used as part of extortion-based ransomware attacks.
Deepfakes are AI-generated videos and images that transplant the face of one individual – traditionally a celebrity or politician – into a scene in which they were not originally present.
In recent years, deepfakes have been used primarily in the dissemination of fake news and the creation of hoax pornography – and have become increasingly convincing.
According to a report from security firm Trend Micro, deepfake technology could soon be used to blackmail members of the public or workforce into divulging sensitive information or paying significant ransom fees.
As part of a wider investigation into trends in underground cybercriminal forums and marketplaces, Trend Micro found that interest is growing among forum members in the ability to monetize deepfake technology.
According to the firm, underground forum users often discuss how AI could be used for “eWhoring” (or sextortion) and for circumventing Face ID authentication, especially on dating websites.
While sextortion attacks traditionally rely on social engineering techniques to manipulate the victim into paying a cryptocurrency ransom, Trend Micro fears the increasing sophistication of deepfakes could make reputation scams of this kind all the more potent.
“A real image or video would be unnecessary. Virtually blackmailing individuals is more efficient because cybercriminals wouldn’t need to socially engineer someone into a compromising position,” explains the report.
“The attacker starts with an incriminating Deepfake video, created from videos of the victim’s face and samples of their voice collected from social media accounts. To further pressure the victim, the attacker could start a countdown clock and include a link to a fake video… If the victim does not pay before the deadline, all contacts in their address books will receive the link.”
Based on its analysis of underground communities, Trend Micro believes the use of deepfakes for extortion-based ransomware is set to take off in the near future.
While attacks of this kind have not yet been identified in the wild, a range of demographics is thought to be at risk – from political candidates and senior executives to celebrities and even ordinary teenagers.