Dive Brief:
- Artificial intelligence has made it easy for scammers to pose as job candidates, opening the door for cyberattacks such as ransomware that can result in huge financial losses for targeted businesses, according to fraud detection company Pindrop.
- Scammers can use AI to generate polished resumes and LinkedIn profiles, Christine Aldrich, chief people officer for Pindrop, said in a recent blog post. They can also leverage deepfake technology, a form of AI, to mimic another person’s face and voice during a virtual job interview.
- “Once inside a company, deepfake employees can hold systems hostage, locking critical files and demanding ransom payments,” Aldrich wrote. “This could result in millions in losses, not just from the ransom but also from system downtime, recovery efforts, and legal fees.”
Dive Insight:
Today, criminals can quickly and easily make deepfakes thanks to recent advancements in AI technology, according to a KPMG article.
“It’s already possible to go online and learn how to make a convincing deepfake, based on a mere three seconds of recorded audio of someone’s voice — using off-the-shelf, publicly available software,” the article said. “On top of this, there is an emergence of ‘deepfake-as-a-service’ as a lucrative market on the dark web.”
Attack surfaces are expanding, the article said, in part because of hybrid work arrangements in which many people connect to their organizations remotely from homes, coffee shops, airports, gyms and other locations.
In September, finance software provider Medius published a study finding that just over half (53%) of businesses in the U.S. and U.K. had been targets of a financial scam powered by deepfake technology, with 43% falling victim to such attacks. Over 80% of finance professionals polled by Medius viewed such scams as an “existential” threat to their organization’s financial security.
Last year, British engineering group Arup was in the spotlight after reports that scammers siphoned $25 million from the company by using deepfake technology to pose as the organization’s CFO. Following a video conference with the fake CFO and other AI-generated employees, an Arup staff member made a series of transfers to five Hong Kong bank accounts before discovering the fraud.
Luxury sports car manufacturer Ferrari was unsuccessfully targeted in a deepfake attempt last year. As part of the scam, the fraudster tried to dupe a company executive into signing off on a transaction, first through WhatsApp messages that appeared to come from CEO Benedetto Vigna and then in a phone call mimicking Vigna’s voice. The attempt was foiled when the suspicious executive asked a question that only the real Vigna would be able to answer: the title of a book Vigna had recommended days earlier. Unable to answer, the fraudster abruptly ended the call.
Pindrop says a fraudster tried to dupe the company in November after it posted a job opening for a software engineer. The fake candidate appeared well-qualified on paper, but several red flags emerged during the video interview, including facial expressions slightly out of sync with his words, a telltale sign of deepfake video manipulation, according to Aldrich’s blog post.
Pindrop received more than 800 applications for the role. After a deeper analysis of 300 candidate profiles, the company found that more than one-third were fraudulent, Aldrich said.
“In reality, no organization is immune — especially those operating in remote-first or globally distributed environments,” she said. “Fraudsters actively exploit hiring vulnerabilities in engineering, IT, finance, and beyond, seeking access to sensitive systems, proprietary data, and financial assets.”