AI fakery turbo-charging fraud, cyber attacks

AI-guided impersonation attacks have become both more numerous and more sophisticated as the technology improves, and will likely not abate anytime soon. This is according to a series of polls that point to the rise of deepfakes and other tactics to craft extremely specific and personalized social engineering attacks. 


For example, recent data from cybersecurity solutions provider Ironscales has found that technology leaders are reporting a sharp rise in AI-driven fakery centered around subverting trust and faking identities. Overall, Ironscales found that 88% of organizations had at least one such security incident over the past 12 months. Of those, 11.7% saw six or more. 

Those working in finance functions were seen as most at risk of being targeted by such an attack. The poll found 50% of respondents rated them as a priority target with high concern about readiness, followed by IT personnel (46.9%) and HR employees (38.3%). 


While the incidents span many different kinds of attacks, none were seen as abating; the only difference was how quickly they were growing. 

At the top was using AI to create highly personalized and targeted attacks against employees, with 39.1% of respondents saying such incidents have increased either significantly or moderately. Similarly, 23.6% said that imitating vendors has increased significantly or moderately; 32.7% said the use of deepfake audio to trick people into taking malicious actions has grown significantly or moderately; and 26.1% said the use of deepfake videos to join meetings has increased significantly or moderately. Good old-fashioned text also remains relevant, with 31.2% reporting a significant or moderate rise in false or misleading reports about their company on social media platforms. 

Leaders also cited other trust-undermining attacks like credential theft, hosting malicious files on sharing platforms, installing infostealers on endpoints, and even attackers getting themselves hired in order to gain access to privileged data. 

Years ago, such attacks were often crude, full of outrageous claims written in bad grammar and stuffed with typos. A 2012 paper from Microsoft argued these scams effectively filter out all but the most gullible people, since anyone who responds to such a poorly worded spam email is more likely to fall for the scam than an astute observer who can tell it is sketchy. 

No more. Today's attacks, guided by generative AI, focus on imitating trusted sources, like an email from Google asking you to confirm your identity. The Ironscales poll found it is becoming increasingly difficult to determine what is and is not real now that AI can easily generate authentic-looking communications. A large proportion of respondents agreed or strongly agreed that it is getting harder to distinguish truth from fiction on social media (60.9%), detect phishing emails (59.4%), detect fraudulent requests via phone call (57.1%), know if a job applicant is who they say they are (57%), or keep threat actors out of online meetings (54.4%).

Deepfakes were of particular concern among leaders. When asked about the specific types of social engineering and impersonation attacks they were especially worried about over the next 12 months, deepfakes were the highest rated answer at 19.5%. This was followed by simply "impersonation" (18.8%), which is often aided by deepfakes, then phishing (13.3%), which may or may not involve deepfakes, then "offensive AI" (10.9%), followed by straight up "deepfakes" (8.6%) and, finally, business email compromise (7%). 

This lines up with an analysis from Cybernews which found that, of 346 "AI incidents" recorded last year, the majority — 179 — involved deepfakes, whether that be voice, video, image or some combination. If one drills down to fraud specifically, the data showed that 81% of cases were driven by deepfake technology. 

This also echoes the World Economic Forum's 2026 Global Cybersecurity Outlook, a report that draws its data from 873 C-suite executives, academics, civil society and public-sector cybersecurity leaders. It found that fraud, much of it enabled by AI, has become a major concern for leaders: 73% said they or someone else in their personal or professional networks had been affected by "cyber-enabled fraud" over the past 12 months. The most common incident, at 62%, was phishing and phishing-related attacks, followed by payment or invoice fraud at 37%, identity theft at 32%, insider or employee-led fraud at 20%, romance or impersonation scams at 17%, and investment or crypto frauds at 17%. 

Similar to the Ironscales data, leaders believe the threat is growing. The poll found 87% of leaders saying the risk of AI-related vulnerabilities in general has increased over the past year, as has the risk of cyber-enabled fraud and phishing (which often uses AI), with 77% reporting an increase. As one might imagine, the chief threat named by CEOs went from ransomware attacks in 2025 to cyber-enabled fraud and phishing in 2026. 

CISOs, on the other hand, named ransomware attacks as the chief threat in both years. They have good reason to be wary: a recent Global Threats review from NordVPN found ransomware attacks surged 45% in 2025, with December alone setting a two-year record of 1,004 incidents.

Still, according to the WEF, leaders are not taking these risks lying down. Within just one year there has been a dramatic increase in awareness of AI threats. For example, while in 2025 only 37% of respondents said they have a process in place to assess the security of AI tools before deploying them, in this most recent poll that proportion has grown to 64%. 

The majority (77%) are also deploying AI-enabled tools to enhance their cybersecurity. Most common, at 52%, are tools for phishing and email threat detection, followed by AI to detect and respond to intrusions or anomalies (46%) and to automate security operations (43%). 

"There are reasons for optimism," the report concluded. "Organizations that embed resilience into leadership agendas, proactively manage supply chain and AI risks, and engage their broader ecosystems are better positioned to withstand shocks and adapt to uncertainty. The shift toward intelligence-driven collaboration, scenario-based testing and regulatory harmonization signals a maturing approach to collective defense." 
