34% of internal auditors unsure if they were targets of AI-enabled fraud

AI-enabled fraud has become so sophisticated that more than one-third of senior internal audit leaders aren't even sure whether their organizations have been the targets of any attempts, while almost one-fifth are aware of at least one instance.


A new poll from the Institute of Internal Auditors asked professionals whether their organizations had been targeted by AI-enabled fraud attempts and found 34% saying they weren't really sure, while 18% were aware of at least one instance. Nearly half (48%) said they have not been targeted by such attempts (though one must wonder whether they would really know for sure). The report said the results highlight an awareness gap that could have material implications for organizations.

It's not that they're unaware of the risks. The poll found 51% are at least somewhat familiar with the concept of AI-enabled fraud, and 34% said they are either very or extremely familiar with it. The degree to which respondents believe this is a risk scales with AI familiarity: when asked to rate the risk of AI-enabled fraud for their own organizations on a scale of 1 to 5, those with little to no familiarity with AI placed their risk score around 2.8 to 2.9, those somewhat familiar with AI pegged the risk at just shy of 3.2, those very familiar said 3.3, and those extremely familiar said 3.4.


The main risks auditors are specifically watching for are AI-powered phishing attempts, cited by 88% of respondents; followed by fabricated invoices or financial documents at 65%; automated social engineering at 58%; deepfake audio or video impersonations at 45%; and using AI to develop or insert harmful code at 41%. Auditors rated the remaining risks much lower.

This may not be to their benefit, however, as it illustrates the awareness gap the report discussed: while only 27% of respondents cited synthetic identity fraud as a major risk factor, it is the fastest-growing financial crime in the U.S., claiming an estimated $5 billion a year.

"This suggests that some AI-enabled fraud risks remain underrecognized, specifically those more difficult to detect," said the report. 

Poll respondents were generally honest in assessing their own ability to respond in the event of AI-enabled fraud, as most said they are simply not prepared to handle such incidents. Only 2% felt very prepared and 34% moderately prepared, while 46% felt only minimally prepared and 16% not prepared at all.

They would like to be more prepared, but many pointed to barriers such as lack of the right technology or tools (57%), insufficient staff with relevant skills (55%), budget constraints (46%), competing priorities (43%) and lack of time (43%) to dedicate to AI-specific risk management efforts.

The poll found that auditors who see AI as the problem also see it as a solution. Professionals are already using the technology in a variety of ways, some of them extensively. The top use cases were reporting and audit planning, both at 35% for extensive use, followed by risk assessment at 25% and field work at 19%. Only 7% used AI extensively for follow-up. Still, AI use is expected to intensify: 83% of internal auditors said they plan to increase their use of AI over the next year, with only 12% saying it would remain about the same and 5% not sure. 

"This underscores the need for internal auditors to stay well-informed about AI's potential misuse, both within internal audit and across the organization," said the report. "Developing a more nuanced understanding of this rapidly advancing technology's capabilities and limitations, along with the skills needed to engage with it, can help internal auditors recognize the risks associated with improper or malicious use. This future-focused knowledge is critical for strengthening internal audit's ability to assess controls, advise management, and anticipate emerging AI-enabled fraud risks."

The data comes at a time when AI-enabled fraud is spiking (see previous story). Ironscales, a cybersecurity company, recently found that technology leaders are reporting a sharp rise in AI-driven fakery centered on subverting trust and faking identities. Overall, Ironscales found that 88% of organizations had at least one such security incident over the past 12 months. Of those, 11.7% saw six or more. Those working in finance functions were seen as most at risk of being targeted by such attacks: 50% of respondents rated them as a priority target with high concern about readiness, followed by IT personnel (46.9%) and HR employees (38.3%).

This lines up with an analysis from Cybernews, which found that, of 346 "AI incidents" recorded last year, the majority — 179 — involved deepfakes, whether voice, video, image or some combination. Drilling down to fraud specifically, the data showed that 81% of cases were driven by deepfake technology.

This also echoes the World Economic Forum's recent 2026 Global Cybersecurity Outlook, which draws its data from 873 C-suite executives, academics, civil society and public-sector cybersecurity leaders. It found that fraud, much of it enabled by AI, has become a major concern for leaders: 73% said they or someone in their personal or professional networks had been affected by "cyber-enabled fraud" over the past 12 months. The most common incident, at 62%, was phishing and phishing-related attacks, followed by payment or invoice fraud at 37%, identity theft at 32%, insider or employee-led fraud at 20%, romance or impersonation scams at 17%, and investment or crypto frauds at 17%.
