Deepfake fraud has evolved into an industrial-scale operation, with AI tools now making it possible for virtually anyone to create sophisticated scams targeting individuals and organizations worldwide, according to a new analysis from AI experts.

The Guardian reports that the creation of personalized scams using AI deepfake technology has shifted from a niche threat to an inexpensive, easily deployable operation accessible at scale, according to an analysis by the AI Incident Database. The technology is no longer limited to sophisticated criminal enterprises but is available to virtually any fraudster with internet access.

The database has catalogued more than a dozen recent “impersonation for profit” incidents, including deepfake videos of Western Australia Premier Roger Cook appearing to promote an investment scheme, fabricated videos of doctors endorsing skin cream, and deepfakes of Swedish journalists and the president of Cyprus used for fraud.

The financial impact has been substantial. Last year, a finance officer at a Singaporean multinational paid out nearly $500,000 to scammers during what he believed was a legitimate video call with company leadership. In the UK, consumers lost an estimated £9.4 billion to fraud in the nine months leading up to November 2025.

Simon Mylius, an MIT researcher working on a project linked to the database, emphasized the dramatic shift. “Capabilities have suddenly reached that level where fake content can be produced by pretty much anybody,” Mylius said, noting that “frauds, scams and targeted manipulation” have been the largest category of reported incidents in eleven of the past twelve months. “It’s become very accessible to a point where there is really effectively no barrier to entry,” he added.

Harvard researcher Fred Heiding echoed these concerns. “The scale is changing,” Heiding said. “It’s becoming so cheap, almost anyone can use it now. The models are getting really good – they’re becoming much faster than most experts think.”

Even AI security companies are being targeted. In early January, Jason Rebholz, CEO of AI security startup Evoke, posted a job on LinkedIn and was soon exchanging emails with someone who appeared to be a talented engineer. Despite noticing red flags — emails going to spam, resume quirks — he proceeded with an interview. When the candidate’s video feed finally appeared, problems were obvious.

“The background was extremely fake,” Rebholz explained. “It was really struggling to deal with [the area] around the edges of the individual. Like part of his body was coming in and out … And then when I’m looking at his face, it’s just very soft around the edges.”

Rebholz completed the interview, then sent the recording to a deepfake detection firm, which confirmed the video was AI-generated. The scammer may have been seeking an engineering salary or access to trade secrets.

“It’s like, if we’re getting targeted with this, everyone’s getting targeted with it,” Rebholz observed.

According to Heiding, the situation will likely worsen before it improves. Deepfake voice cloning is already highly sophisticated, making it easy to impersonate, say, a distressed family member calling for help. Video technology, while already concerning, continues to advance rapidly.

Heiding warned about long-term consequences: “That’ll be the big pain point here, the complete lack of trust in digital institutions, and institutions and material in general.”

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

