AI-enabled fraud has become so sophisticated that more than one-third of senior internal audit leaders aren't even sure whether their organizations have been the targets of any attempts, while almost one-fifth are aware of at least one instance, according to a new poll.
It's not that they're unaware of the risks. The poll found 51% are at least somewhat familiar with the concept of AI-enabled fraud, and 34% said they are either very or extremely familiar with it. The degree to which respondents believe this is a risk scales with AI familiarity: when asked to rate the risk of AI-enabled fraud to their own organizations on a scale of 1 to 5, those with little to no familiarity with AI placed their risk score between 2.8 and 2.9, those somewhat familiar with AI pegged the risk at just shy of 3.2, those very familiar said 3.3, and those extremely familiar said 3.4.

The specific risks auditors are watching for most are AI-powered phishing attempts, cited by 88% of respondents; followed by fabricated invoices or financial documents at 65%; automated social engineering at 58%; deepfake audio or video impersonations at 45%; and the use of AI to develop or insert harmful code at 41%. Auditors rated the remaining risks much lower.
That relative ranking may not serve them well, however, as it illustrates the awareness gap the report discussed. For example, while only 27% of respondents cited synthetic identity fraud as a major risk factor, it is the fastest-growing financial crime in the U.S., claiming an estimated $5 billion a year.
"This suggests that some AI-enabled fraud risks remain underrecognized, specifically those more difficult to detect," said the report.
Poll respondents were generally candid in assessing their own ability to respond to AI-enabled fraud, as most said they are simply not prepared to handle such incidents. Only 2% felt very prepared and 34% felt moderately prepared; in contrast, 46% said they are only minimally prepared and 16% said they are not prepared at all.
They would like to be more prepared, but many pointed to barriers such as a lack of the right technology or tools (57%), insufficient staff with relevant skills (55%), budget constraints (46%), competing priorities (43%) and a lack of time (43%) to dedicate to AI-specific risk management efforts.
The poll found that auditors who see AI as the problem also see it as a solution. Professionals are already using the technology in a variety of ways, some of them extensively. The top use cases were reporting and audit planning, both at 35% for extensive use, followed by risk assessment at 25% and field work at 19%. Only 7% used AI extensively for follow-up. Still, AI use is expected to intensify: 83% of internal auditors said they plan to increase their use of AI over the next year, with only 12% saying it would remain about the same and 5% not sure.
"This underscores the need for internal auditors to stay well-informed about AI's potential misuse, both within internal audit and across the organization," said the report. "Developing a more nuanced understanding of this rapidly advancing technology's capabilities and limitations, along with the skills needed to engage with it, can help internal auditors recognize the risks associated with improper or malicious use. This future-focused knowledge is critical for strengthening internal audit's ability to assess controls, advise management, and anticipate emerging AI-enabled fraud risks."
The data comes at a time when AI-enabled fraud is spiking, a trend that lines up with other recent analyses, including a 2026 report from the World Economic Forum.